| added | created | id | source | version |
|---|---|---|---|---|
| 2019-01-02T04:06:56.624Z | 2015-06-17T00:00:00.000 | 108862214 | pes2o/s2orc | v3-fos-license |

metadata:
{
  "extfieldsofstudy": ["Physics"],
  "oa_license": "CCBY",
  "oa_status": "GOLD",
  "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=57226",
  "pdf_hash": "ab56a2809ebf1b1ea437607f72c995004f1bd5d5",
  "pdf_src": "ScienceParseMerged",
  "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:508",
  "s2fieldsofstudy": ["Physics", "Computer Science"],
  "sha1": "ab56a2809ebf1b1ea437607f72c995004f1bd5d5",
  "year": 2015
}
Damped Harmonic Oscillator with Arduino
The benefits of using experiments in physics classes are widely discussed in the literature, but sometimes experimental setups are not available. In this paper we present different ways of using experiments in physics classes based on the Arduino board, since it involves low-cost materials and, in many cases, can be built by the students themselves. In this work we addressed the well-known damped harmonic oscillator and performed the data acquisition through the Arduino board, an LDR (Light Dependent Resistor), an infrared photodiode sensor and a computer. The setup of the proposed experiment and the technical details related to its assembly are discussed clearly so that they can be reproduced by anyone interested in the subject. We found a significant difference between the results obtained with the LDR and with the photodiode. The latter gave better results and reproduced a regular decay in the amplitude of the oscillator even when the experiment was performed in a highly illuminated room. The Arduino board, alongside the referred peripherals, has shown great potential for building low-cost experimental setups to be used in physics classes, both for expository and hands-on approaches.
Introduction
The use of didactic laboratories in Physics education is widely recommended because it is seen as a fundamental tool for the comprehension of physical phenomena and the underlying theoretical concepts. However, the equipment required for data acquisition is often expensive and its maintenance must be done by specialized technicians, which prevents its wide use in schools or even in colleges. For this reason, experiments based on low-cost materials have become very popular and contribute to improving the qualitative understanding of physical phenomena and to increasing the interaction of students with the process of constructing scientific apparatus. The very act of building these materials carries many benefits, such as greater understanding of theoretical concepts, practice with trial and error, analysis of which pieces yield better results, and participation of the student as a protagonist in the development of knowledge.
Despite the benefits of experiments made with low-cost materials Ref. [1], the difficulty of obtaining accurate measurements with them limits their use. In an attempt to address this lack of accuracy, new technologies in Physics education have been implemented in several ways, for example: graphics software that allows for the visualization of certain phenomena; photosensors and microphones for direct measurements on the computer; applets which explore the versatility of changing physical parameters without the risk of breaking an apparatus; and electronic microcontrollers to which a large number of sensors can be attached.
All these examples reflect a current need of teachers to interact in a more dynamic way with their students and thus make their participation more active. What influences the choice of a new technology is the relationship between cost, prior knowledge and feasibility of accurate data acquisition. Among these technologies, the most suited to this profile are electronic microcontrollers. The only difficulty in their large-scale use is the need for prior knowledge of electronics and programming, which many teachers and students do not have. This requirement can make the use of microcontrollers unfeasible and keep them accessible to only a few. The Arduino board was created in an attempt to broaden this use by making the implementation of its programs openly accessible to the public. This community has grown considerably, and a large number of programs and implementations are now available to researchers from various fields, including Physics Ref. [2]-[4].
In this work we show the use of the Arduino platform in a physics experiment: the damped harmonic oscillator. The main focus of this implementation is the comparison between the LDR and photodiode sensors, reporting the accuracy and the differences in data acquisition. In Section 2, we briefly discuss the Arduino platform, showing its versatility. In Section 3 we present the results of the proposed experiments. In Section 4 we present our conclusions and perspectives. The codes of the programs used are available in the Appendices.
The Arduino Microcontroller Platform
The Arduino microcontroller was initially developed around 2005 as a platform for design students in Italy to learn how to develop interactive art exhibits. Arduino is a very popular and easy-to-use programmable board for creating your own projects. Consisting of a simple hardware platform and a free source code editor, it is designed to be easy to use without being an expert programmer. The typical Arduino board provides four basic functional elements: an Atmel ATmega328P AVR microcontroller, a simple 5 V power supply, a USB-to-serial converter for loading new programs onto the board, and I/O headers for connecting sensors, actuators and expansion boards.
Arduino produces several different boards, each with different capabilities. In addition, part of being open-source hardware means that others can modify and produce derivatives of Arduino boards that provide even more form factors and functionalities. Some examples of Arduino boards are the Arduino Uno, LilyPad, Arduino Mega, Arduino Leonardo and Arduino Nano.
The standard way to program an Arduino is via an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment, compiled in the Arduino integrated development environment (IDE). Alternatively, National Instruments' LabVIEW programming environment can use the Arduino as a slave data acquisition device. Further, Arduino boards can communicate with a computer via USB using a virtual serial port. Nearly any programming language is able to communicate through a serial port, so interfaces in Matlab, Mathematica, Python and Perl are also available.
Experimental Setup
The harmonic oscillator is widely found in the literature Ref. [5], being a well-known experiment commonly used in laboratory classes.
In most cases a sonar captures the movement of the object being analyzed and thus provides the corresponding graph. Our experimental setup follows in the footsteps of Ref. [6], in which the experiment is based on a creative and simple assembly that allows us to reproduce it and then compare it with our results, as can be seen in Figure 1.
The setup consists of a mirror attached to a ruler, which in turn is rigidly fixed at one of its ends. The Arduino board is connected to an LDR (Light Dependent Resistor) sensor that captures the light emitted by a single LED and reflected from the mirror. The ruler is manually set in motion and, since it is fixed at one of its ends, the motion is strongly damped. We had to modify the original assembly by replacing the attached mirror with a white sheet of paper, because the light reflected by the mirror saturated the LDR sensor.
The LDR sensor, or photoresistor, is a light-controlled variable resistor. The resistance of a photoresistor decreases with increasing incident light intensity, which means the resistance of any photoresistor may vary widely depending on the room light conditions. This characteristic makes them unsuitable for applications in environments with high luminosity, such as a classroom, when precise measurements are required. The voltage is sent to an analog input of the Arduino, and a simple program reads and stores the data in a file. It is important to mention that we acquire the voltage difference and not the object position as a function of time.
In summary, our pipeline consists of gathering the light from the LED (after it has been reflected by the ruler), converting it into a voltage using either the LDR or the photodiode, and writing both the voltage and the corresponding time to a file using the Arduino board connected to a computer. One is then able to plot the voltage versus time, which should decay like the amplitude of the oscillator. The piece of code used in the data acquisition can be found in the Appendix.
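As a rough sketch of the computer-side half of this pipeline, the short script below reads comma-separated time/voltage lines sent by the Arduino over its USB virtual serial port and appends them to a text file. It is not the code from the Appendix; the port name, baud rate, and the "milliseconds,ADC reading" line format printed by the board are assumptions made only for the example.

```python
# Minimal serial logger for an Arduino data acquisition of this kind.
# Assumptions (not from the paper): the board prints lines like "12345,678"
# (milliseconds, raw 10-bit ADC reading) at 9600 baud on /dev/ttyACM0.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # adjust for your system (e.g. "COM3" on Windows)
BAUD = 9600
OUTFILE = "oscillator_data.csv"

with serial.Serial(PORT, BAUD, timeout=2) as ser, open(OUTFILE, "w") as out:
    out.write("time_ms,adc\n")
    for _ in range(2000):                      # roughly a few tens of seconds of data
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                           # timeout or empty line
        try:
            t_ms, adc = line.split(",")
            out.write(f"{int(t_ms)},{int(adc)}\n")
        except ValueError:
            pass                               # skip malformed or partial lines
```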
Results and Discussion
The behaviour of the voltage registered using the LDR, obtained for different room luminosities, is shown as a function of time in Figure 2. As we can see, the damped oscillation is not fully observed in the high-luminosity condition, because we cannot see the exponential decay, or even a regular decay, of the characteristic amplitude. The reason is that photoresistors also exhibit a certain degree of latency between exposure to light and the subsequent decrease in resistance, and this lapse of time is crucial in experiments with fast movement. In a luminosity-controlled room, with weak photon emission, a regular decay was observed, as expected (Figure 2, lower panel). This comparison has shown a limitation of the LDR sensor and is one of the features one has to keep in mind during lab activities, especially when a large number of external parameters are involved, in order not to compromise the final result.
In order to obtain a better result in high-luminosity rooms, an infrared photodiode was used instead of an LDR. A photodiode is a semiconductor device that converts light into current, which is generated when photons are absorbed in the photodiode. The benefits of this approach can be seen in Figure 3, in which a clearly regular decay in the registered voltage is observed for different luminosity conditions; in other words, independently of the light intensity the photodiode behaviour was the same, as expected, since it is only sensitive to the IR part of the electromagnetic spectrum. The benefits of using photodiodes instead of LDR sensors in practical applications are thus evident, but it is also important to note that the difference in the results may be explored to enhance the students' comprehension of the subject.
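The regular decay reported here can also be checked quantitatively by fitting the logged voltage to the standard damped-oscillator form V(t) = V0 + A e^(−t/τ) cos(ωt + φ). The sketch below does this with SciPy; the file name, the ADC-to-volt conversion and the initial guesses are assumptions carried over from the hypothetical logger above, not values from the paper.

```python
# Fit the logged voltage to a damped cosine to extract the decay time tau.
# The CSV layout ("time_ms,adc") matches the hypothetical logger above.
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, v0, a, tau, omega, phi):
    """Standard damped harmonic oscillator signal."""
    return v0 + a * np.exp(-t / tau) * np.cos(omega * t + phi)

data = np.loadtxt("oscillator_data.csv", delimiter=",", skiprows=1)
t = (data[:, 0] - data[0, 0]) / 1000.0          # seconds from start
v = data[:, 1] * 5.0 / 1023.0                   # 10-bit ADC reading -> volts

# Rough initial guesses: mean level, half the peak-to-peak swing,
# a ~1 s decay time and a ~10 rad/s oscillation frequency.
p0 = [v.mean(), 0.5 * (v.max() - v.min()), 1.0, 10.0, 0.0]
popt, _ = curve_fit(damped_cosine, t, v, p0=p0)
print(f"decay time tau = {popt[2]:.3f} s, angular frequency = {popt[3]:.2f} rad/s")
```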
Concluding Remarks
The Arduino board offers a wide range of possibilities for low-cost experiments in physics teaching. In this work we used a sensor to convert light into voltage and the Arduino board to send the acquired data to a computer. For the particular case of the damped oscillator, the use of an infrared photodiode gave better results than an LDR and can be used even in highly illuminated rooms, such as a typical classroom. The decay in the registered voltage depicts the expected decay in amplitude and is consistent with the motion of a damped oscillator.
To summarize and clarify our comparison, Figure 4 shows the voltage difference obtained with the LDR and photodiode sensors in different luminosity conditions, where once again the LDR behaviour is acceptable only in a low-luminosity environment. The experimental setup presented in this work is easy to assemble and can therefore be reproduced in the classroom either by teachers or students, making it a good tool for physics teaching. The piece of code needed for the experiment is available in the Appendix, providing a straightforward application of the experiment and also leaving room for changes and improvements. Furthermore, this setup is a good solution for institutions with a low budget, since the full setup costs $32 and, given the possibilities it provides, is not expensive, especially when compared to commercial experimental setups.
In future projects we will explore the advantages of photodiode sensors in more accurate assemblies in order to expand their usability to precise measurements and not only to classroom demonstrations.
Figure 1. Photo of the experimental setup (left). A mirror is attached to a ruler, which in turn is rigidly fixed at one of its ends. The Arduino platform is placed below the ruler, where measurements are done using either the LDR or the photodiode sensor. Photo of the Arduino board used in the assembly (center). Schematic circuit diagram (right).
Figure 2. Plot of the damped harmonic oscillator experiment using an LDR sensor in a high-luminosity room (upper panel) and in a low-luminosity room (lower panel). The solid line in the inset is a guide to the eye for the amplitude decay.
Figure 3. Plot of the damped harmonic oscillator experiment using a photodiode sensor in a high-luminosity room (upper panel) and in a low-luminosity room (lower panel). The solid line in the inset is a guide to the eye for the amplitude decay.
Figure 4. Plots of the damped harmonic oscillator experiment showing the voltage difference as a function of time for both the LDR sensor and the photodiode sensor under different luminosity conditions.
| added | created | id | source | version |
|---|---|---|---|---|
| 2022-03-24T06:22:56.122Z | 2022-03-22T00:00:00.000 | 247616317 | pes2o/s2orc | v3-fos-license |

metadata:
{
  "extfieldsofstudy": ["Medicine"],
  "oa_license": "CCBY",
  "oa_status": null,
  "oa_url": null,
  "pdf_hash": "79a1c83542559c33c5d727a105568f7aec77cd86",
  "pdf_src": "PubMedCentral",
  "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:511",
  "s2fieldsofstudy": ["Physics"],
  "sha1": "36dd3170ba018f77d70089c5f2b1474779d7ca55",
  "year": 2022
}
Precise stellarator quasi-symmetry can be achieved with electromagnetic coils
Magnetic fields with quasi-symmetry are known to provide good confinement of charged particles and plasmas, but the extent to which quasi-symmetry can be achieved in practice has remained an open question. Recent work [M. Landreman and E. Paul, Phys. Rev. Lett. 128, 035001, 2022] reports the discovery of toroidal magnetic fields that are quasi-symmetric to orders-of-magnitude higher precision than previously known fields. We show that these fields can be accurately produced using electromagnetic coils of only moderate engineering complexity, that is, coils that have low curvature and that are sufficiently separated from each other. Our results demonstrate that these new quasi-symmetric fields are relevant for applications requiring the confinement of energetic charged particles for long time scales, such as nuclear fusion. The coils’ length plays an important role for how well the quasi-symmetric fields can be approximated. For the longest coil set considered and a mean field strength of 1 T, the departure from quasi-symmetry is of the order of Earth’s magnetic field. Additionally, we find that magnetic surfaces extend far outside the plasma boundary used by Landreman and Paul, providing confinement far from the core. Simulations confirm that the magnetic fields generated by the new coils confine particles with high kinetic energy substantially longer than previously known coil configurations. In particular, when scaled to a reactor, the best found configuration loses only 0.04% of energetic particles born at midradius when following guiding center trajectories for 200 ms.
Controlled nuclear fusion is a promising candidate to satisfy rising electricity needs while avoiding carbon emission into the atmosphere. To generate electricity from fusion, a plasma needs to be maintained at extremely high temperatures over long time scales, which requires excellent particle confinement. This is typically achieved using powerful magnets. Tokamaks rely on a toroidally axisymmetric system of magnetic coils to achieve good confinement, at the cost of requiring a plasma current to generate a significant fraction of the magnetic field. This plasma current is challenging to drive in steady-state operation (1), and can be the source of disruptive instabilities (2,3). In contrast, the nonaxisymmetric coil systems of stellarators can generate a confining magnetic field in the absence of plasma currents, relieving many of the challenges for continuous, disruption-free operation. However, the lack of axisymmetry implies that neither nested magnetic flux surfaces nor particle confinement is guaranteed (4). This motivates the need for a generalization of axisymmetry to the stellarator context, called quasi-symmetry.
A magnetic field is said to satisfy quasi-symmetry if there exists an invariant direction for the field strength B = |B| in a certain coordinate system (5). This condition leads to the conservation of canonical angular momentum, which, in turn, implies the remarkable property that such fields are guaranteed to confine charged particles without requiring plasma currents. However, it remains an open question whether three-dimensional magnetic fields that are perfectly quasi-symmetric over a volumetric region exist. Recently, using numerical optimization, Landreman and Paul (6) found vacuum magnetic fields that satisfy the quasi-symmetry property in toroidal geometries to a very high precision. Magnetic fields that are quasi-symmetric to such a high degree have not been discovered before. Simulations confirm excellent confinement properties even for particles with a large kinetic energy. Naturally, the question arises whether these magnetic fields can be accurately produced by a set of practical electromagnets, which would be a first step toward a new generation of fusion experiments. In this report, we show that such magnets, in fact, exist. Specifically, we find coils producing fields whose deviation from perfect quasi-symmetry is more than four orders of magnitude smaller than the mean field strength. For a 1-T mean field, comparable to many stellarator experiments, this error can be on the order of Earth's magnetic field. Simulations confirm particle confinement comparable to the ideal fields discovered in ref. 6. This shifts the discovery of these highly quasi-symmetric fields from one of theoretical interest to one of practical relevance for fusion experiments and other confinement applications (7). We note that the rectangular cross-section of the coils is for visualization only; in all computations for this manuscript, the coils are each approximated by a single wire of infinitesimal thickness. We conducted numerical experiments in which we approximated finite-thickness coils using multiple filaments, and obtained very similar results, not shown here.
Results and Discussion
We solve a constrained optimization problem to obtain coils that approximate the two quasi-axisymmetric fields of ref. 6, which we refer to as "QA-LP" and "QA+Well-LP," reflecting the absence or presence of a magnetic well. The design space consists of four distinct modular coils, which results in 16 coils in total after applying symmetries. The length of the coils impacts the quality of the magnetic field approximation strongly, and hence we compute coil sets of different coil length Lmax and compare their performance; for example, "QA+Well[20]" refers to the coil set which approximates the "QA+Well-LP" configuration and for which the four modular coils have combined length 20 m. For both "QA-LP" and "QA+Well-LP," as we allow longer coils, the field induced by the coils becomes a better approximation of the target field. Values of Lmax are relative to an average major radius of 1 m. Fig. 1 shows coils obtained by choosing Lmax = 18 m and Lmax = 24 m in the approximation to QA-LP, the relative normal magnetic field B · n/|B| on the surface S, and a Poincaré plot for Lmax = 24 m. Here B is computed using the Biot-Savart law, and, if the normal magnetic field is exactly zero, then the field matches that discovered in ref. 6 everywhere in the volume contained by the surface. For the shortest coils, we observe the largest normal magnetic field on the surface, reaching values of up to 3.3 × 10⁻³. Its oscillatory nature is caused by the discrete nature of the electromagnetic coils and their close proximity to the surface. Longer coils enable a larger distance between the surface and the magnets, thereby reducing these discrete effects, and more accurately reproducing the target magnetic field, with a relative normal magnetic field of, at most, 1.6 × 10⁻⁴ for the QA[24] configuration.
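A minimal numerical version of this error measure follows directly from the Biot-Savart law: discretize each filament coil into straight segments, sum their contributions at a point on the target surface, and take the normal component divided by the field magnitude. The circular test coil, current, surface point and unit normal below are placeholders for illustration, not the actual QA-LP geometry or the SIMSOPT implementation.

```python
# Biot-Savart field of a closed filament coil and the relative normal field
# B·n/|B| used as the error measure on the target surface.
import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart(points, coil, current):
    """Field at `points` (N,3) from a closed polyline `coil` (M,3) carrying `current`."""
    b = np.zeros_like(points, dtype=float)
    for i in range(len(coil)):
        r0, r1 = coil[i], coil[(i + 1) % len(coil)]   # segment endpoints (closed loop)
        dl = r1 - r0
        mid = 0.5 * (r0 + r1)
        r = points - mid                               # vector from segment to field point
        norm = np.linalg.norm(r, axis=1, keepdims=True)
        b += MU0 * current / (4 * np.pi) * np.cross(dl, r) / norm**3
    return b

# Placeholder geometry: a circular coil of radius 1 m and a single surface point.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
coil = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
point = np.array([[0.3, 0.0, 0.2]])
normal = np.array([[0.0, 0.0, 1.0]])                  # assumed unit normal at that point

B = biot_savart(point, coil, current=1e5)
rel_normal_field = (np.sum(B * normal, axis=1) / np.linalg.norm(B, axis=1))[0]
print(f"B·n/|B| = {rel_normal_field:.3e}")
```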
The Poincaré plot demonstrates the existence of nested magnetic surfaces in the entire volume. This feature is necessary (but not sufficient) for confinement, since charged particle trajectories are tangent to these surfaces in the limit of low energy.
Quasi-axisymmetric fields are characterized by the following property: when parametrized using Boozer coordinates φ, θ, the field strength on each magnetic surface only depends on the angle θ (5,8) (see Fig. 2C). When this property is satisfied exactly, even highly energetic collisionless particles are confined over long time scales, and collisional transport is minimized. We quantify this statement by performing a Fourier transformation of |B| and then studying the magnitude of those Fourier coefficients that break quasi-symmetry; that is, we write |B(s, θ, φ)| = Σ_{m,n} B_{m,n}(s) cos(mθ − nφ) and then consider terms with n ≠ 0. Here s indexes each surface by the normalized toroidal magnetic flux it encloses, and we plot the largest symmetry-breaking Fourier mode on each surface in Fig. 2A. As the coil length is increased, the symmetry-breaking error is reduced significantly and approaches that of the fields discovered in ref. 6. In fact, we are able to achieve errors smaller than Earth's magnetic field for the longest set of coils when the mean field strength is 1 T. For comparison, gray curves show the symmetry-breaking amplitudes for the eight previous quasi-symmetric configurations in figure 1 of ref. 6.
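This symmetry-breaking measure can be reproduced numerically by projecting |B| on a uniform grid of Boozer angles onto the cos(mθ − nφ) basis and keeping the n ≠ 0 coefficients, as in the sketch below. The synthetic field (a quasi-axisymmetric part plus a small n = 2 ripple) is a stand-in for real Boozer-coordinate data, which is not computed here.

```python
# Largest symmetry-breaking Fourier amplitude max_{n != 0} |B_{m,n}| on one surface,
# assuming stellarator symmetry so that |B| = sum_{m,n} B_{m,n} cos(m*theta - n*phi).
import numpy as np

ntheta, nphi = 64, 64
theta = 2 * np.pi * np.arange(ntheta) / ntheta
phi = 2 * np.pi * np.arange(nphi) / nphi
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Synthetic stand-in for |B| on one surface: a quasi-axisymmetric part (n = 0)
# plus a small symmetry-breaking n = 2 ripple.
B = 1.0 + 0.1 * np.cos(TH) + 2e-4 * np.cos(2 * TH - 2 * PH)

def cosine_coefficient(field, m, n):
    """Project the field onto cos(m*theta - n*phi) using discrete orthogonality."""
    basis = np.cos(m * TH - n * PH)
    return 2.0 * np.sum(field * basis) / (ntheta * nphi)

worst = 0.0
for m in range(0, 8):
    for n in range(-8, 9):
        if n == 0:
            continue                      # n = 0 modes are quasi-axisymmetric
        worst = max(worst, abs(cosine_coefficient(B, m, n)))
print(f"largest symmetry-breaking amplitude: {worst:.2e}")   # ~2e-4 for this test field
```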
Finally, scaling the configurations to the mean field and minor radius of the ARIES-CS reactor (9), we compute collisionless guiding center trajectories for alpha particles, born as by-products of a deuterium-tritium fusion reaction with a kinetic energy of 3.5 MeV, initialized on the surface with normalized toroidal flux s = 0.25 (half radius). Particle losses are shown in Fig. 2B. For Lmax = 24 m, the performance is nearly indistinguishable from the target equilibrium, and, for the QA+Well[24] configuration, less than 0.04% of particles are lost after 0.2 s, a typical time for the alphas to thermalize with the main plasma. This coil set is shown in Fig. 2D. Performance is only slightly worse for Lmax = 22 m, but is poor for the coils with Lmax = 18 m. For comparison, gray curves show calculations for the nine previous stellarator configurations of figure 6A in the SI Appendix of ref. 6, similarly scaled. We note that the thermal collisional transport magnitude is <6 × 10⁻⁸ for QA[24]. These values are orders of magnitude below the values for other optimized stellarators [e.g., ∼10⁻³ for Wendelstein 7-X (4)], and are so small that collisional fluxes would be negligible compared to turbulent transport. This is the standard situation for tokamaks, but is unusual and desirable for stellarators, since collisional and turbulent losses are additive (4).
In conclusion, we have shown that the magnetic fields of ref. 6 can be produced very accurately using coils, making these fields practically relevant for stellarators. As a result, exceptionally good confinement of particle trajectories and remarkably small thermal collisional transport can be achieved. While longer coils are required for optimal performance, these coils are not particularly complex as measured in terms of curvature and coil-to-coil separation.
Materials and Methods
The electromagnetic coils were optimized using the SIMSOPT software (11). Similar to the approach in ref. 12, each coil is modeled as a closed, smooth curve in ℝ³ and represented using a Fourier series, truncated at order 16. Given a magnetic surface S (obtained from ref. 6), the objective that we minimize is f_B = (1/2) ∫_S (B · n / |B|)² dS, where B is the field induced by the coils. If f_B = 0, then the induced field is exactly equal to the target field up to a scaling factor.
Finding coils that minimize this objective is an ill-posed problem, so we require additional regularization. In practice, it is desirable to have coils that are not too long, avoid high curvature, and are well separated. In this work, we enforce constraints on the curvature (κmax ≤ 5 m⁻¹), the mean squared curvature (κmsc ≤ 5 m⁻²), and the distance between coils (dmin ≥ 0.1 m), and we vary the constraint on the total length of the magnetic coils (Lmax ∈ {18, 20, 22, 24} m). The units quoted above assume a major radius scaled to 1 m. We refer to SI Appendix for more detail on the exact implementation of these constraints.
The optimization objective has 399 parameters for the coils and is highly nonconvex. The minimization uses the L-BFGS-B algorithm with analytic gradients. To remedy the possibility of the optimizer being stuck in a local minimum, we start the optimization from eight different initial coil sets and choose the best minimizer as measured by the objective. The code is highly optimized and parallelized and is publicly available at https://github.com/florianwechsung/CoilsForPreciseQS. Using eight cores of an Intel Xeon Platinum 8268 CPU, solving an individual optimization problem takes ∼30 min.

Data Availability. Code and optimization results have been deposited in GitHub (https://github.com/florianwechsung/CoilsForPreciseQS) and Zenodo (10.5281/zenodo.5975323) (13).
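The overall pattern of the Materials and Methods, minimizing a field-error objective over coil shape parameters while keeping length and curvature within limits, can be illustrated with a penalty formulation and SciPy's L-BFGS-B on a toy problem. This is a generic sketch of that pattern, not the SIMSOPT setup; the planar coil parametrization, the stand-in field-error term, the length budget and the penalty weight are all invented for the example.

```python
# Toy version of the constrained coil optimization: minimize a field-error
# surrogate over Fourier coefficients of one planar coil, with a quadratic
# penalty for exceeding a length budget. (Generic sketch, not SIMSOPT.)
import numpy as np
from scipy.optimize import minimize

ORDER = 4                       # Fourier truncation order of the coil shape
L_MAX = 7.0                     # assumed length budget in metres
PENALTY = 1e2                   # weight of the length-violation penalty

def coil_points(x, nquad=128):
    """Planar coil r(t) = (R(t) cos t, R(t) sin t, 0) with R(t) = 1 + sum_k x_k cos((k+1) t)."""
    t = 2 * np.pi * np.arange(nquad) / nquad
    R = 1.0 + sum(x[k] * np.cos((k + 1) * t) for k in range(ORDER))
    return np.stack([R * np.cos(t), R * np.sin(t), np.zeros_like(t)], axis=1)

def coil_length(x):
    pts = coil_points(x)
    return np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))

def field_error(x):
    """Stand-in for f_B: deviation of the coil from an assumed target circle of radius 1.2."""
    pts = coil_points(x)
    return np.mean((np.linalg.norm(pts[:, :2], axis=1) - 1.2) ** 2)

def objective(x):
    excess = max(0.0, coil_length(x) - L_MAX)
    return field_error(x) + PENALTY * excess**2

x0 = np.zeros(ORDER)
res = minimize(objective, x0, method="L-BFGS-B")   # finite-difference gradients here
print(f"field error {field_error(res.x):.2e}, length {coil_length(res.x):.2f} m")
```

In this toy problem the target radius makes the unconstrained optimum slightly longer than the length budget, so the penalty term becomes active, mimicking the trade-off between field accuracy and coil length discussed above.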
| added | created | id | source | version |
|---|---|---|---|---|
| 2023-09-30T15:15:15.071Z | 2023-09-27T00:00:00.000 | 263236366 | pes2o/s2orc | |

metadata:
{
  "extfieldsofstudy": ["Medicine"],
  "oa_license": "CCBY",
  "oa_status": "GOLD",
  "oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2023.1242426/pdf?isPublishedV2=False",
  "pdf_hash": "0c29e1e20a2894a0ee0f7344c90dabe82d0d7b85",
  "pdf_src": "PubMedCentral",
  "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:512",
  "s2fieldsofstudy": ["Medicine", "Biology"],
  "sha1": "878a0b1a4873fc19764efbc0bc67974d9777e6dd",
  "year": 2023
}
Role of Caveolae family-related proteins in the development of breast cancer
Breast cancer has become the most significant malignant tumor threatening women's lives. Caveolae are concave pits formed by invagination of the plasma membrane that participate in many biological functions of the cell membrane, such as endocytosis, cell membrane assembly and signal transduction. In recent years, Caveolae family-related proteins have been found to be closely related to the occurrence and development of breast cancer. The Caveolae family-related proteins include the Caveolins (Cav) and the Cavins. The Cav proteins include Cav-1, Cav-2 and Cav-3, among which Cav-1 has attracted the most attention as both a tumor suppressor and a promoting factor affecting the proliferation, apoptosis, migration, invasion and metastasis of breast cancer cells. Cav-2 also has dual functions of inhibiting and promoting cancer and can be expressed in combination with Cav-1 or play a regulatory role alone. Cav-3 has been less studied in breast cancer, and the loss of its expression can form an antitumor microenvironment. The Cavins include Cavin-1, Cavin-2, Cavin-3 and Cavin-4. Cavin-1 inhibits Cav-1-induced cell membrane tubule formation, and its specific role in breast cancer remains controversial. Cavin-2 acts as a breast cancer suppressor, inhibiting breast cancer progression by blocking the transforming growth factor beta (TGF-β) signaling pathway. Cavin-3 plays an anticancer role in breast cancer, but its specific mechanism of action is still unclear. The relationship between Cavin-4 and breast cancer is unclear. In this paper, the role of Caveolae family-related proteins in the occurrence and development of breast cancer and their related mechanisms are discussed in detail to provide evidence supporting the further study of Caveolae family-related proteins as potential targets for the diagnosis and treatment of breast cancer.
Introduction
Breast cancer (BC) is one of the most common malignant tumors in women worldwide. In recent years, the incidence of breast cancer has increased significantly, with a trend toward younger age at onset, which not only causes serious harm to women's health and quality of life but also substantially increases the resulting social and economic burden (Heer et al., 2020). At present, despite remarkable progress in targeted therapy, hormone therapy, surgery, chemotherapy and radiotherapy, breast cancer is still the main cause of cancer death in women (Waks et al., 2019; Sung et al., 2021). The occurrence and development of breast cancer is a multistep process, and the expression of Caveolae family-related proteins is closely related to the proliferation, apoptosis, migration, invasion, metastasis and drug resistance of breast cancer cells, playing a dual role in inhibiting and promoting cancer.
A Caveola is formed by the inward depression of the plasma membrane of the cell. It is a special cystic structure rich in cholesterol that forms a "flask"-like structure with a diameter of 60-80 nm on the cell membrane, also known as a cave-like invagination of the cell membrane. Its main function is to regulate certain signaling molecules. As a signal transduction hub, it is involved in transmembrane material transport and various intercellular interactions (Filippini and D'Alessio, 2020). It had long been believed that the function of Caveolae is mainly to participate in transmembrane material transport. Recent studies have found that Caveolae are also hubs of cell signaling molecule enrichment and signal transduction and play a direct regulatory role in the activity of many key signaling molecules. The integrity of Caveolae is closely related to tumor cell function (Parton, 2018; Singh and Lamaze, 2020). Caveolae, as cytoplasmic membrane structures, are mainly composed of lipids and proteins. The lipid components mainly include cholesterol, glycosphingolipids (GSLs) and sphingomyelin (SPH), which constitute the lipid core of Caveolae. If the cholesterol content of Caveolae is reduced too much, the number of Caveolae invaginations is reduced. Similarly, pharmacologically blocking cholesterol transport to the cell surface indicates that cholesterol plays an important role in maintaining the structure of Caveolae (Fernandes and Oliveira-Brett, 2020).
Caveolae family-related proteins, also known as Caveolins and Cavins, are the main components of Caveolae and play important roles in various physiological and pathological processes, such as cell endocytosis, maintenance of lipid homeostasis, signal transduction and the occurrence and development of tumors (Parton et al., 2020). Mammalian Caveolins (Cavs) include Cav-1, Cav-2, and Cav-3. Cavins include Cavin-1 (polymerase I and transcript release factor (PTRF)), Cavin-2 (serum deprivation response protein (SDPR)), Cavin-3 (SDR-related gene product that binds to C-kinase (SRBC)) and Cavin-4 (muscle-restricted coiled-coil protein (MURC)) (Parton et al., 2018). Because of differences in the structure and function of these proteins, they have different biological effects in the body. However, their specific mechanisms of action and exact functions are still unclear, and there are still many controversies. In this paper, the role of Caveolae family-related proteins in breast cancer is summarized as follows.
2 Relationship between Caveolins and breast cancer
Cav-1 and breast cancer
Cav-1, a member of the Caveolae family-related proteins, was discovered and reported by Palade in 1953 when he observed mouse capillary endothelial cells (Palade, 1953) and was identified and cloned by Rothberg et al. (Rothberg et al., 1992) in 1992. Cav-1, an integral membrane protein composed of 178 amino acid residues with a molecular weight of 22 kDa, is one of the main scaffold proteins of Caveolae (Figure 1). Its expression is missing in breast cancer, lung cancer, ovarian cancer and osteosarcoma, which may be related to abnormal methylation of its promoter (Parat and Riggins, 2012). Sagara et al. (Sagara et al., 2004) found that Cav-1 can regulate the distribution and translocation of ER in normal breast epithelium and is closely related to progesterone receptors to maintain normal breast development. At present, there are two views on the correlation between Cav-1 and breast cancer: 1) Cav-1 acts as a suppressor gene of breast cancer, inhibiting the malignant process of breast cancer; 2) Cav-1 expression promotes the occurrence and development of breast cancer. Some researchers support the first view because low or absent levels of Cav-1 mRNA and protein can be seen in tumor tissue samples from human primary breast cancer patients, oncogene-transformed cells, transgenic mouse breast cancer models, and some human or mouse transformed epithelial cell lines (Razani et al., 2000). Later, some scholars designed experiments to force the re-expression of Cav-1 in transformed breast cancer cells, which not only eliminated the cancerous potential of these cells but also inhibited aggressiveness (Zhang et al., 2000; Fiucci et al., 2002). Other researchers support the second view, and their evidence is that the Cav-1 gene locus is located at the aggregation point of suppressor genes in many human malignancies of epithelial origin, including breast cancer (Zen et al., 1995; Zenklusen et al., 1995). However, until now, there has been no unified understanding of the mechanism of the association between Cav-1 and breast cancer.

FIGURE 1. The structure of Caveolin-1. Caveolae are cholesterol-enriched, rigid membrane microdomains that are composed of scaffold proteins named Caveolins. The most important constituent protein is Caveolin-1.

FIGURE 2. Cav-1 suppresses or promotes tumor signaling pathways, thereby affecting the development of breast cancer.
Effect of Cav-1 on the proliferation of breast cancer cells
At present, the findings of most studies suggest that Cav-1 can inhibit the proliferation of breast cancer and act as a tumor suppressor in the occurrence and development of breast cancer (Qian et al., 2019). Research results have shown that, compared with normal breast tissue and adjacent tissues, the expression of Cav-1 protein in breast cancer tissue is low, and inhibiting the expression of Cav-1 by gene knockout in normal breast epithelial cells can significantly upregulate the expression of related tumor growth factors (Shi et al., 2016). For example, stromal cell derived factor-1 (SDF-1), epidermal growth factor (EGF), and fibroblast specific protein-1 (FSP-1) promote the growth and carcinogenesis of mammary epithelial cells (Shi et al., 2016).
In addition, Cav-1 can also affect the proliferation of breast cancer by inhibiting aerobic glycolysis in breast cancer (the Warburg effect) (Jiao et al., 2019). Aerobic glycolysis is the most common form of energy reprogramming in tumors. Even in the presence of sufficient oxygen, tumor cells still rapidly supply energy through glycolysis to meet their abnormally high proliferation and metabolic energy needs. Mechanistically, Cav-1 reduces the entry of p65/p50 into the nucleus by inhibiting the phosphorylation of NF-κB, thus inhibiting the transcriptional activation of C-myc, which, as a classical aerobic glycolysis-related protein, mediates the orderly progression of aerobic glycolysis. Therefore, high expression of Cav-1 can lead to a decrease in intracellular glucose and in the production of lactic acid and to low expression of lactate dehydrogenase A (LDH-A) and 3-phosphoinositide-dependent protein kinase-1 (PDK1), key enzymes in glycolysis, thus inhibiting aerobic glycolysis and hence the proliferation of breast cancer cells. In recent years, the role of Cav-1 in negatively regulating the proliferation of breast cancer by inhibiting autophagy has been widely studied. Cav-1 inhibits the production of autophagic lysosomes by blocking the fusion of lysosomes and autophagosomes, thus blocking autophagic flux and inhibiting the proliferation of breast cancer cells (Shi et al., 2015).
The inactivation of the Cav-1 gene leads to increased estrogen receptor alpha (ERα) expression, which can be seen in both human breast cancer epithelial cells and mouse primary breast cancer epithelial cells, and can promote the growth of stimulated three-dimensional epithelioid structures (Li et al., 2006). Earlier studies have shown that downregulation of Cav-1 can stimulate the expression of ERα, which also indicates that Cav-1 may play an important regulatory role in the maintenance of normal cell growth in breast tissue (Zhang et al., 2005). Therefore, mutation or low expression of Cav-1 is undoubtedly a factor promoting the progression of ERα(+) breast cancer and is a risk factor for a poor breast cancer prognosis.
Other studies have shown that downregulation of Cav-1 in breast cancer MCF-7 cells can increase the expression of large-conductance Ca²⁺-activated potassium channels (BKCa) in the cell membrane and promote cell proliferation (Du et al., 2014). Cav-1 can also bind the Caveolin binding motif in the kinase domain of the epidermal growth factor receptor (EGFR) and act as a pro-cancer factor to activate EGFR-mediated mitosis initiation (Liang et al., 2018). Studies have shown that abnormal expression of Cav-1 can enhance EGFR signaling and increase the malignant potential of MCF-7 breast cancer cells, while overexpression of Cav-1 can enhance the growth-inhibitory effect of EGFR tyrosine kinase inhibitors (Agelaki et al., 2009).
Cav-1 negatively regulates cell proliferation by inhibiting the expression of cyclin D1. Overexpression of Cav-1 can induce upregulation of the cell cycle factors p21, p27 and cyclin B1 and downregulation of cyclin D2, which leads to G2/M phase arrest and inhibition of the proliferation of breast cancer MDA-MB-231 and MCF-7 cells (Kang et al., 2016). In addition, under normal circumstances, Cav-1 can inhibit the cyclin D1 promoter and downregulate the expression of cyclin D1, thereby blocking cells in the G0/G1 phase and regulating the cell cycle. Moreover, in tumor tissues of ERα(+) breast cancer patients, cyclin D1 overexpression was also observed in samples with Cav-1 mutations, whereas ERα was not expressed in tissues negative for cyclin D1 immunostaining, in which Cav-1 gene expression was normal (Li et al., 2006). Therefore, the inactivation of Cav-1 is likely to occur in the early stage of breast transformation, inducing increased sensitivity to estrogen and upregulation of ERα, and simultaneously inducing upregulation of cyclin D1 expression. In other words, the amplification or overexpression of cyclin D1 and ERα in breast cancer may occur through the same pathway, with both induced by mutations in Cav-1.
Studies have shown that the interstitium of breast cancer cells is usually in a high-oxygen microenvironment, and the transport efficiency of Cav-1 into cells is low (Monti et al., 2017). In the hypoxic TME, bisphenol A (BPA) in cells can induce the competitive binding of the G protein-coupled estrogen receptor (GPER) to Cav-1, resulting in the release of heat shock protein 90 (HSP90), activation of hypoxia-inducible factor-1α (HIF-1α) and vascular endothelial growth factor (VEGF), and promotion of cell proliferation (Xu et al., 2017).
It is concluded that Cav-1 can affect cell proliferation in breast cancer by inducing changes in receptor activity and membrane surface ion channels, regulating the cell cycle, and regulating the tumor microenvironment (TME) and intercellular interactions.
Effect of Cav-1 on apoptosis of breast cancer cells
At present, the role of Cav-1 in breast cancer apoptosis is still controversial, and different studies report that it can either promote or inhibit apoptosis (Wang et al., 2014a; Chen et al., 2019). Regarding the promotion of apoptosis by Cav-1 in breast cancer cells, studies have found that docetaxel (DTX) can upregulate Cav-1 in MCF-7 and MDA-MB-231 breast cancer cells, thereby regulating apoptosis pathway-related proteins, Bcl-2 phosphorylation, p53 and Bax expression, and PARP cleavage (Kang et al., 2016). In addition, Shi et al. (2016) found that the expression of the p53-induced glycolysis and apoptosis regulator (TIGAR) was upregulated in BT474 cells, and apoptosis was thus inhibited. Subsequently, Shi et al. (2015) found that Cav-1 deletion and lipid raft breakdown could increase V-ATPase activity, activate autophagosome-lysosome fusion, increase autophagy levels and inhibit apoptosis. However, Wang et al. (2014b) found that in breast ductal cancer cells (BT474), Cav-1 can activate the extracellular signal-regulated kinase ERK1/2 signaling pathway, upregulate the expression of cyclin D1 and β-catenin-related factors, reduce G0/G1 phase arrest and increase the proportion of S-phase cells. At the same time, the expression of the autophagy-related proteins Beclin-1, light chain 3-II and Atg12/5 was enhanced, which promoted the formation of autophagosomes and inhibited apoptosis, and Cav-1 knockdown inhibited cell migration and invasion. Badana et al. (2018) also proposed that methyl-β-cyclodextrin (MβCD) can downregulate the expression of Cav-1 and the Wnt receptor LRP6, synergistically affect the expression of survivin, Bcl-2 and Bax, induce the breakdown of lipid rafts, and mediate apoptosis.
At present, the effect of Cav-1 on the apoptosis of breast cancer cells appears to depend mainly on apoptosis-related proteins and autophagy. However, there is no consensus on whether Cav-1 promotes or inhibits autophagy, which may be related to the stage at which Cav-1 influences autophagy.
Effect of Cav-1 on the invasion and metastasis of breast cancer
Invasion and metastasis are among the important characteristics of the malignant progression of breast cancer and represent the main challenge in its clinical prevention and treatment. Studies have shown that the expression of Cav-1 protein in metastatic breast cancer MDA-MB-231 cells is significantly higher than that in nonmetastatic breast cancer MCF-7 cells, and there is also a positive correlation between metastatic ability and Cav-1 protein expression in breast cancer tissues (Alevizos et al., 2014). Current research on breast cancer metastasis mainly focuses on epithelial-mesenchymal transition (EMT), extracellular matrix (ECM) changes, cytoskeleton reconstruction, and angiogenesis (Lu and Kang, 2019). Interestingly, Cav-1 plays an important role in these processes and can affect the migration and invasion of breast cancer cells by regulating the expression of EMT-related markers, matrix metalloproteinases (MMPs) and Rho GTPases. It can also affect the metastasis of breast cancer by regulating the expression of metastasis-related proteins and cell apoptosis.
It has been reported that decreased expression of Cav-1 affects the expression of EMT-related genes, increasing the expression of E-cadherin while decreasing the expression of EMT-related transcription factors such as Vimentin, Snail and Slug, which can inhibit the occurrence of EMT. In addition, during high glucose-induced EMT, it has been observed that inhibition of the estrogen receptor signaling pathway leads to an upregulation of both the mRNA and protein expression of Cav-1. This subsequently promotes the upregulation of Slug expression, thereby enhancing the invasive and migratory capabilities of breast cancer cells (Zielinska et al., 2018). This evidence suggests that Cav-1 may promote the invasion and migration of breast cancer by promoting the EMT process.
In addition, studies have shown that downregulation of Cav-1 in BT474 cells can also downregulate the expression of MMP-2, MMP-9 and MMP-1 and inhibit cell migration and invasion. In metastasis-associated macrophages, the deletion of Cav-1 can enhance VEGF-A/VEGFR1 activity, induce the downstream expression of MMP-9 and colony-stimulating factor-1 (CSF-1), and jointly promote angiogenesis and metastatic tumor growth. In vitro cell experiments showed that Scleromitrion diffusum can effectively inhibit the metastasis of breast cancer cells, possibly by inhibiting the expression of Cav-1 protein, thereby reducing matrix metalloproteinases (MMPs) and inhibiting their ability to invade and migrate (Yang et al., 2019). Zheng et al. (2018) also found that docosahexaenoic acid (DHA) from Antarctic krill could enhance the interaction between CD95 (Fas/APO-1) and Cav-1 in MCF-7 cells, inhibit the FAK/SRC/PI3K/AKT signaling pathway, and downregulate the expression of MMP-2, thereby inhibiting invasion and metastasis.
Rho family proteins are a group of guanosine triphosphate (GTP)-binding proteins with molecular weights ranging from 20 to 25 kDa that have GTPase activity and are also known as small G proteins; they are highly activated in a variety of malignant tumors and participate in the regulation of tumor cell morphology, extracellular matrix adhesion and cytoskeletal remodeling. Cav-1 plays an important regulatory role in tumor invasion and migration (Matsuoka and Yashiro, 2014). In inflammatory breast cancer (IBC) cells, high expression of Cav-1 activates the Akt1 signaling pathway and phosphorylates the RhoC GTPase, thereby promoting the adhesion and migration of breast cancer cells and enhancing cell invasion (Joglekar et al., 2015). In addition, phosphorylation of Cav-1 at tyrosine 14 can activate Rac1, another Rho family protein, by upregulating the expression of Ras-related protein 5A (Rab5). Rac1 can bind to various intracellular molecules as a molecular bridge and activate various signaling pathways; for example, the PI3K/Akt/mTOR signaling pathway promotes cytoskeletal remodeling and cell invasion and migration (Diaz et al., 2014).
In addition, Cav-1 can also act on integrins, another important class of molecules, to promote the invasion and migration of breast cancer cells (Kozyulina et al., 2015). Integrins are Ca²⁺- and Mg²⁺-dependent heterophilic cell adhesion molecules located on the cell surface that mediate the mutual recognition and adhesion between cells and between cells and the extracellular matrix. The invasion and metastasis of cells require stable adhesion between the cell pseudopods and the ECM, providing a traction fulcrum for cells to migrate forward. Therefore, integrins, as transmembrane connectors between the extracellular matrix and the intracellular actin skeleton, bind their extracellular domains to extracellular ligands (such as fibronectin and laminin), resulting in changes in the configuration of the intracellular domains and in the interactions between the intracellular domains and neighboring proteins, thus activating a series of signaling cascades and connecting with the cytoskeleton. This provides anchor sites for cell migration (De Franceschi et al., 2015). The metastasis-promoting protein NEDD9 can regulate the intracellular transport of integrins and their localized expression at the cell membrane through Cav-1. Vesicles expressing Cav-1 can carry ligand-bound integrins into the cells, promote the dissociation of ligands and integrins and release the integrins, thus playing an important role in cell adhesion and migration. When NEDD9 is absent, although the expression of integrin does not change, the adhesion of integrin on the surface of the cell membrane is significantly affected. Overexpression of Cav-1 phosphorylated at tyrosine 14 can restore integrin activity and promote cell invasion (Kozyulina et al., 2015). However, Lv et al. (2016) found that macrophage migration inhibitory factor (MIF) can induce the phosphorylation of Cav-1 and promote the transfer of high mobility group protein B1 (HMGB1) from the cytoplasm to the ECM, thus activating TLR4 signaling and promoting breast cancer metastasis. Studies have shown that after MDA-MB-231 cells escape the extracellular matrix (ECM) and enter the hemodynamic environment, low fluid shear stress can induce the upregulation of Cav-1 expression, promote the inactivation of the protease caspase-8, and enhance the cells' resistance to anoikis (Li et al., 2019). Breast cancer metastasis involves multiple steps, such as cell adhesion, movement, local invasion and migration, and circulating tumor cells (CTCs) have the ability to resist anoikis (programmed apoptosis upon loss of attachment), so patients with metastatic breast cancer often have a poor prognosis. Gene expression profile analysis showed that the expression level of Cav-1 in bone marrow metastatic breast cancer cells was significantly higher than that in CTCs (Magbanua et al., 2018). Wang et al. (2018) found that in MDA-MB-231 cells, overexpression of Cav-1 can activate the PI3K/AKT and MEK/ERK survival signaling pathways and the ITGB1-FAK signaling pathway and improve cell resistance to anoikis.
Therefore, Cav-1 may play an important role in the invasion and metastasis of breast cancer by regulating epithelial-mesenchymal transformation, extracellular matrix changes, Rho family proteins and integrin endocytosis.
Cav-1 inhibits the formation of breast cancer stem cells
Tumor stem cells are cells within tumors that have the ability to self-renew and generate heterogeneous tumor cells. Breast cancer stem cells (BCSCs) have been found to play a key role in the proliferation, metastasis and recurrence of breast cancer (Yoon et al., 2019; Wang et al., 2020). Recently, an increasing number of studies have begun to focus on the physiological role of Cav-1 in BCSC formation. In cloned spheres derived from breast cancer cells, the expression of Cav-1 is significantly downregulated, and inhibition of Cav-1 can upregulate indicators related to tumor stemness (CD44/CD24), promote the self-renewal ability of breast cancer stem cells, and thus promote malignant tumor behaviors of breast cancer, such as EMT, invasion and metastasis (Yoon et al., 2019). In addition, Cav-1 inhibits the self-renewal capacity and aerobic glycolysis of breast cancer stem cells through C-myc-mediated tumor metabolic reprogramming (Shi et al., 2015). Mechanistically, inhibition of Cav-1 can reduce the binding of the E3 ligase VHL to C-myc, alleviate the ubiquitination-mediated degradation of C-myc protein, and promote the accumulation of C-myc, thus leading to the orderly progression of C-myc-mediated aerobic glycolysis and providing energy support for the self-renewal ability of breast cancer stem cells. So far, these studies indicate that Cav-1 plays an inhibitory role in BCSC formation.
Cav-1 mediates endocytosis of breast cancer therapeutics
Cav-1-mediated selective endocytosis is the selective transport of extracellular substances, via invagination vesicles formed by the cell membrane, to membranous regions within the cell. Vesicles containing Cav-1 can therefore carry different specific proteins, trigger the endocytosis of the pits and transport them to specific organelles. This selective endocytosis plays an important role in cell movement and migration and has a profound impact on the metabolism of therapeutic drugs (Chung et al., 2015; Chatterjee et al., 2017; Chung et al., 2018).
In the treatment of breast cancer, the uptake of certain drugs, such as albumin-bound paclitaxel and trastuzumab emtansine (T-DM1), depends on Cav-1-mediated selective endocytosis (Chatterjee et al., 2017; Chung et al., 2018). As a new generation of paclitaxel preparations, albumin-bound paclitaxel is a first-line drug for triple-negative breast cancer chemotherapy and occupies an extremely important position in the chemotherapy of breast cancer and other malignant tumors. As a carrier of paclitaxel, albumin can bind to Cav-1 and be transported into the cell through endocytic vesicles, and paclitaxel then enters the cell along with the albumin to exert its antitumor activity (Chatterjee et al., 2017). Therefore, breast cancer with high expression of Cav-1 protein is often associated with a better response to albumin-bound paclitaxel (Borsoi et al., 2017; Ricci et al., 2019). Cav-1-mediated endocytosis is also an important mechanism of T-DM1 uptake. As a second-line drug for the targeted therapy of advanced HER-2(+) breast cancer, T-DM1 is an antibody-drug conjugate consisting of trastuzumab and the anti-microtubule agent mertansine (DM1). Studies have shown that Caveolae-mediated endocytosis is an important mechanism of T-DM1 resistance in HER-2(+) breast cancer. Cav-1 is highly expressed in T-DM1-resistant HER-2(+) breast cancer cells, and T-DM1 fuses with lysosomes intracellularly through Caveolae-mediated endocytosis, while the acidic environment in the lysosomal cavity weakens the efficacy of T-DM1 (Chung et al., 2018; Sung et al., 2018). Therefore, Cav-1 may be an important target for the treatment of T-DM1 resistance. Salis et al. (2014a) proposed that fluvastatin can enhance cytotoxicity in MCF-7 cells by downregulating the expression of Cav-1 and serum- and glucocorticoid-regulated kinase 1 (SGK1). Metformin, on the other hand, enhances cytotoxicity by upregulating Cav-1 and downregulating SGK1 (Salis et al., 2014b). Overexpression of Cav-1 can increase the expression of breast cancer resistance protein (BCRP), and downregulation of Cav-1 can decrease the activity of the ATP-binding cassette subfamily G member 2 (ABCG2) transporter and improve the chemotherapy sensitivity of drug-resistant breast cancer cells. Cav-1 can participate in the regulation of T-DM1 resistance by mediating cellular endocytosis (Sung et al., 2018), promoting the internalization of T-DM1 into cells and enhancing its toxicity and sensitivity (Chung et al., 2018). Zheng et al. (2019) proposed that overexpression of Cav-1 in MCF-7 and MDA-MB-231 cells could affect chemosensitization by inhibiting eNOS/NO/ONOO⁻ pathway activity and oxidative damage. In addition, overexpression of Cav-1 in breast cancer can promote EGFR nuclear translocation, activate DNA-dependent protein kinase (DNA-PK), induce DNA repair and enhance radiation resistance, suggesting that Cav-1 is associated with breast cancer radiotherapy (Zou et al., 2017).
Cav-1 as an indicator of clinical prognosis
In invasive breast cancer, the expression of Cav-1 in stromal cells is associated with tumor size, histological grade, and lymph node metastasis, so Cav-1 may be a clinical diagnostic indicator of tumor prognosis. Yeong et al. (2018) performed histological classification and staging of 71 patients with invasive breast cancer and evaluated vascular metastasis, lymph node metastasis, inflammatory changes in breast cancer tissue, lymph node infiltration, estrogen receptor (ER) expression, p53 mutation, HER-2 expression, the Ki-67 proliferation index, and in situ recurrence and metastasis in the patients' tissue samples. Prognostic survival time was measured, and the protein expression of Cav-1 in stromal cells and tumor cells was detected. High expression of Cav-1 in stromal cells was associated with a good prognosis: breast cancer patients with high Cav-1 expression in stromal cells had higher overall survival and disease-free survival than patients with no or low expression of Cav-1. This evidence suggests that the expression level of Cav-1 in stromal cells is positively correlated with the prognosis of breast cancer patients, but the expression level of Cav-1 in tumors is not indicative of prognostic survival. In addition, the expression of Cav-1 in stromal cells was negatively correlated with ER expression in breast cancer and positively correlated with tumor metastatic traits. The expression of Cav-1 in breast cancer with lymph node metastasis is significantly higher than that in breast ductal carcinoma in situ without lymph node metastasis, indicating that high expression of Cav-1 is related to the invasion and metastasis of breast cancer (Eliyatkin et al., 2018).
Although the presence of Cav-1 predicts a high risk of breast cancer metastasis, Cav-1 is still associated with a favorable prognosis in terms of overall survival, which may be related to the spatiotemporally specific expression of Cav-1 and its competing tumor-suppressive and tumor-promoting effects. In addition, high expression of Cav-1 in breast cancer is indicative of breast cancer sensitivity to albumin-paclitaxel. In metastatic breast cancer, immunohistochemical analysis indicated that patients with high expression of Cav-1 had a significantly higher pathological response rate to albumin-paclitaxel than those with low expression of Cav-1, which provides a reference for future clinical drug guidance.
The relationship between Cav-2 and Cav-3 and breast cancer
Cav-2 is colocalized and closely coexpressed with Cav-1, with which it shares 38% sequence homology. Cav-2 regulates a variety of signaling pathways and can either directly bind to Cav-1 to form hetero-oligomers or act independently of Cav-1. Studies have found that the expression of Cav-2 is upregulated in MCF-7 cells (Hnasko and Lisanti, 2003), while Shatseva et al. (2011) reported that miR-199a-3p can promote cell proliferation by inhibiting the expression of Cav-2, suggesting that Cav-2 plays a role in cancer inhibition. However, Huang et al. (2007) found that the expression of Cav-2 was downregulated after breast cancer cells were treated with dasatinib. Savage et al. (2008) also proposed that breast cancer patients with high expression of Cav-2 had a poor prognosis, and Cav-2 was negatively correlated with ER expression. Loss of Cav-2 expression can inhibit ERα phosphorylation induced by 17β-estradiol (E2), inhibit ERα transcriptional activity and the activation of related signaling pathways, and thus inhibit cell proliferation. A study on IBC cells showed that Cav-2 and Cav-1 were highly expressed in IBC and were closely related to RhoC-GTP due to reduced promoter methylation (Van den Eynden et al., 2006). In addition, it has been reported that Cav-2 is correlated with Her-2 expression and triple-negative breast cancer (TNBC) (Elsheikh et al., 2008).
The Cav-3-encoding gene is located on human chromosome 3p25, and its distribution is more limited than that of Cav-1 and Cav-2. Cav-3 is mainly expressed in vascular smooth muscle, myocardium and skeletal muscle (He et al., 2022), as well as in myoepithelial cells and glial cells in the mammary gland. The P140L mutation in the Cav-3 coding region can weaken p38, AKT and endoplasmic reticulum stress signals (Stoppani et al., 2011). There are still few studies on the role of Cav-3 in breast cancer. Studies have shown that with the progression of breast cancer, the positive rate of Cav-3 in epithelial tissues decreases, while the positive rate of Cav-2 in interstitial tissues increases (Koo et al., 2011). In addition, loss of Cav-3 expression can create a microenvironment that counteracts breast tumor formation. In in vivo experiments, mice lacking Cav-3 showed significantly enhanced resistance to mammary tumor formation, with inhibited mammary tumor growth and significantly reduced lung metastasis (Sotgia et al., 2009). Therefore, Cav-2, Cav-3 and Cav-1 can all act as related regulatory factors in breast cancer progression and play dual roles in cancer inhibition and promotion.
3 Cavins and breast cancer
Relationship between Cavin-1 and breast cancer
Cavin-1 is encoded by a gene located on human chromosome 17q21.2 and plays an important role in maintaining the structure and function of Caveolae. It can be attracted to the cell membrane by Cav-1 and Cav-3, bind to phosphatidylserine, cholesterol and oligomerized Caveolins, and participate in the regulation of cell membrane curvature (Liu, 2020). Cavin-1 can be released from Caveolae during insulin signaling or membrane stretch to alter transcription and protein synthesis. Cavin-1 expression is significantly downregulated in breast cancer cell lines and breast cancer tissues and is closely related to its promoter methylation. Verma et al. (2010) found that Cavin-1 was related to Cav-1: in breast cancer SK-BR-3 cells, overexpression of Cavin-1 inhibited the formation of cell membrane tubules induced by Cav-1. Other studies have shown that receptor tyrosine kinase-like orphan receptor 1 (ROR1) maintains the downstream pro-survival signaling pathway in lung adenocarcinoma by promoting the interaction of Cavin-1 and Cav-1 (Yamaguchi et al., 2016). In addition, Cavin-1 and Cav-1 can bind to the insulin-like growth factor-I receptor (IGF-IR) and regulate its internalization (Hamoudane et al., 2013). Breast cancer cells contain a large amount of IGF-IR. However, the specific role of Cavin-1 and Cav-1 in regulating IGF-IR in breast cancer is still controversial and needs further study.
Relationship between Cavin-2 and breast cancer
The gene encoding Cavin-2 is located on human chromosome 2q32.3; the protein acts as a substrate phosphorylated by protein kinase C (PKC), which affects its cell localization and substrate specificity. The product of the Cavin-2-encoding gene was originally purified from platelets as a phospholipid-binding protein (PSP68), and its expression is enhanced in serum-starved cells; it is therefore called serum deprivation response factor (Wang et al., 2019). Studies have shown that Cavin-2 expression is deficient in tumors such as breast, prostate and kidney cancer (Nassar and Parat, 2020). Cavin-2 was found to be significantly positively correlated with the disease-free survival (DFS) and distant metastasis-free survival (DMFS) of breast cancer patients, and its loss of expression may be related to promoter methylation, while overexpression of Cavin-2 inhibited cell migration and reduced the formation rate of lung metastatic tumors in NOD/SCID mice.
At the same time, the expression of the antiapoptotic protein Bcl-xL was inhibited, and apoptosis was promoted, suggesting that Cavin-2 could be used as a tumor metastasis inhibitor (Ozturk et al., 2016). Tian et al. (2016) found that overexpression of Cavin-2 inhibited the proliferation and invasion ability of MDA-MB-231 cells, while loss of expression could activate the transforming growth factor (TGF-β) signaling pathway and induce an EMT-like phenotype in cells, suggesting that Cavin-2 could inhibit breast cancer progression by blocking the TGF-β signaling pathway. In addition, four-and-a-half LIM domain protein 1 (Fhl1) can induce Cavin-2 expression in Src protein kinase-transformed cells, independent of mitogen-activated protein kinase (MAPK) activity, and the expression of Fhl1 and Cavin-2 is significantly downregulated in breast cancer, suggesting that Cavin-2 plays a tumor suppressor role in breast cancer (Li et al., 2008).
Relationship between Cavin-3 and Cavin-4 and breast cancer
Cavin-3 was originally defined as a phosphatidylserine-binding protein, similar to Cavin-2, which also acts as a substrate of PKC and is induced in the serum deprivation response, and its coding gene is located in the p15.5-p15.4 region of chromosome 11 and near the D11S1323 locus. Loss of heterozygosity (LOH) in this region is often observed in sporadic breast cancer and other types of tumors. Cavin-3 is highly expressed in normal breast and lung epithelial cells, while it is absent in breast cancer, lung cancer and gastric cancer, which may be related to its promoter methylation, suggesting that Cavin-3 may play an anticancer role in breast cancer. In addition, Cavin-3 can interact with breast cancer susceptibility gene 1 (BRCA1), and its loss of expression can affect BRCA1-mediated tumor inhibition (Xu et al., 2001).
Although some studies have shown that Cavin-3 plays a role in cancer inhibition in breast cancer, the specific molecular mechanism is not fully understood. Cavin-4, an evolutionarily conserved muscle-specific component of the Cavin complex related to the muscle-membrane Caveolae complex, is encoded by a gene located at q31.1 of human chromosome 9; it is also expressed at a low level in other types of cells, such as embryonic fibroblasts, and can directly interact with Cavin-2 and Cavin-3. Studies have shown that overexpression of Cavin-4 in skeletal muscle can promote myogenesis and lead to conduction disorders, while overexpression of Cavin-4 in myocardial tissue can induce cardiac dysfunction, suggesting that Cavin-4 can be used as a new potential candidate gene for muscle-associated Caveolae lesions (Bastiani et al., 2009). However, the relationship between Cavin-4 and breast cancer remains unclear.
Discussion and conclusion
The role of Caveolae family-related proteins in the occurrence and development of breast cancer and their specific molecular mechanisms have received extensive attention, and these proteins are known to play important roles in the proliferation, metastasis, treatment and clinical prognostic guidance of breast cancer. Caveolae play an important role in various physiological and pathological processes. The formation and function of Caveolae depend on the expression and interaction of Caveolins and Cavins. However, the role of Cavins in the development of breast cancer is still controversial.
It has been reported that genetic alterations of the Cav-1 gene may modify the risk of breast cancer (Fard and Nafisi, 2018; Deb et al., 2014), and that Cav-1 acts as both a tumor suppressor and an oncogene, playing a key role in breast cancer tumorigenesis (Deb et al., 2014). Currently, it is believed that Cav-1 can be associated with multiple proteins and signal transduction pathways and has various regulatory effects on tumor formation, proliferation, invasion and metastasis. Caveolae act as a platform for interactions between many receptors and related signal transduction proteins, allowing Cav-1 to play an important role in regulating the balance between tumor signaling pathways (Figure 2).
Cav-1 is the focus of current research, and other Caveolae family-related proteins are less studied. Depending on the molecular classification and stage of breast cancer, Cav-2 can also play a dual role as a tumor suppressor or a cancer-promoting factor. Existing studies suggest that Cav-3 is expressed in muscle tissue and plays a dual role in tumor development, but its role in breast cancer remains to be further explored. Cavin-1, Cavin-2 and Cavin-3 are not expressed in breast cancer cells and tissues, and the specific mechanism of action of Cavin-4 in breast cancer is not very clear. The relationship between Cavin-4 and breast cancer remains to be explored in the future.
The occurrence and formation of tumors in breast cancer are a continuous process. Although we summarized the role of Caveolae family-related proteins in various processes of breast cancer progression in this review (Table 1), the influence of Caveolae family-related proteins on breast cancer development is continuous and involves multiple processes. Although the specific mechanisms of the interactions between the Caveolae family-related proteins and various signaling molecules in different stages of tumor development and progression are still unclear, we believe that a summary of the interactions between the Caveolae family-related proteins and breast cancer can lay a foundation for further research and promote the Caveolae family-related proteins as a potential target for breast cancer diagnosis and treatment, providing a new therapeutic idea for clinical research.
|
v3-fos-license
|
2017-10-10T20:04:39.107Z
|
2012-01-23T00:00:00.000
|
30760899
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://sljog.sljol.info/articles/10.4038/sljog.v33i2.4011/galley/3267/download/",
"pdf_hash": "f32f03cf5dfc73b9189b9e7f25404b288fa8af6c",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:513",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f32f03cf5dfc73b9189b9e7f25404b288fa8af6c",
"year": 2012
}
|
pes2o/s2orc
|
Should we be screening for ovarian cancer?
Ovarian cancer carries the worst prognosis of all gynaecological cancers, with an overall five-year survival rate of around 30-40 percent. For women diagnosed with early-stage disease the five-year survival rate is over 80 percent 1 . The majority of ovarian cancers present at an advanced stage due to vague symptoms. Advanced-stage disease carries a poor five-year survival rate of around 15 percent. Ovarian cancer accounts for 4 percent of all cancers in women. In Sri Lanka the incidence is around 12 per 100,000 and has shown an upward trend during the last two decades.
Point of View
Ovarian cancer carries the worst prognosis of all gynaecological cancers, with an overall five-year survival rate of around 30-40 percent. For women diagnosed with early-stage disease the five-year survival rate is over 80 percent 1 . The majority of ovarian cancers present at an advanced stage due to vague symptoms. Advanced-stage disease carries a poor five-year survival rate of around 15 percent. Ovarian cancer accounts for 4 percent of all cancers in women. In Sri Lanka the incidence is around 12 per 100,000 and has shown an upward trend during the last two decades.
Value of early diagnosis
Most improvements in survival have been in women presenting with stage I and stage II disease. This suggests that early detection of ovarian cancer could greatly improve the prognosis. However, due to the lack of specific symptomatology, early diagnosis is often difficult. Dispelling the myth that ovarian cancer is a 'silent killer', studies have shown that symptoms are present in more than 90 percent of women with early disease and may be present up to 15 months before diagnosis 2,3 . These symptoms include persistent pelvic and abdominal pain, abdominal distention and bloating. However, the natural history of ovarian cancer is not well understood and there is, at present, no evidence that ovarian cancer screening can reduce mortality.
Potential screening tests
There is no single effective screening test for ovarian cancers. The main strategy for screening includes both biochemical markers and transvaginal or pelvic ultrasound.
Transvaginal ultrasound has a high sensitivity but a low specificity for malignant lesions 4 . This results in many women requiring further investigations and undergoing potentially unnecessary surgery.
Serum cancer antigen 125 (CA 125) has poor sensitivity. The antigen is expressed in only 80 percent of ovarian tumours and in only 50-60 percent of stage I disease 5 . Furthermore, it is nonspecific in that it may be increased in the presence of other cancers as well as benign conditions including fibroids, diverticulitis, endometriosis and menstruation.
Randomized controlled trials of multimodal screening (MMS)
There are two ongoing studies designed to test the efficacy of MMS: the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) in the UK 6 and the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial in the USA 7 . The UKCTOCS has randomly assigned more than 200,000 women, in the ratio 2:1:1, to no screening, annual CA 125 followed by TVS as a second-line test if risk is indicated by the CA 125 results, or annual TVS alone.
The PLCO has recruited 74,000 postmenopausal women and is comparing a controlled group with a screened group undergoing primary screening with both CA 125 and TVS for three years, and then CA125 alone for a further two years.The results of both trials are expected in 2015.
Interim trial results
In the UKCTOCS, a recent report of the prevalence screen showed that, of the 98,308 women screened in either group, 942 had surgery 6 . Of these, 834 (47 in the MMS group and 787 in the USS group) had benign growths or normal ovaries. Of these, 2.9 percent had a major complication of surgery. Overall, 8.7 women in the USS group underwent surgery for every one woman in the MMS group. In the MMS group, of 97 surgical evaluations, 42 ovarian cancers were detected. Of these, 34 were primary invasive epithelial ovarian and tubal cancers. In the USS group, of the 845 women undergoing surgery, 45 ovarian cancers resulted. In the MMS group, the overall sensitivity, specificity and positive predictive values for all primary invasive epithelial ovarian and tubal cancers were 89.5, 99.8 and 43.3 percent, compared with 75.0, 98.2 and 2.8 percent for USS. The authors conclude that the results show that the 'screening strategies are feasible'.
In the PLCO study, of the 28,816 women tested, 570 surgical interventions resulted, with a total of 29 neoplasms detected, of which 19 were invasive epithelial cancers 7 .
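The screening dilemma behind these figures is largely arithmetic: even a very specific test produces many false-positive results when the underlying prevalence is low, so the positive predictive value (PPV) falls sharply. The short Python sketch below is purely illustrative and is not part of either trial's analysis; the prevalence values are assumptions chosen only to show how PPV depends on them, whereas the PPVs reported above follow from the prevalence actually observed and the surgical verification in the screened cohorts.

```python
# Illustrative sketch only: how the positive predictive value (PPV) of a screening
# test follows from its sensitivity, specificity and an assumed disease prevalence.
# The prevalence values below are assumptions for demonstration, not trial results.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Sensitivity and specificity quoted above for the UKCTOCS prevalence screen.
tests = {"MMS": (0.895, 0.998), "USS": (0.750, 0.982)}

for name, (sens, spec) in tests.items():
    for prevalence in (0.0005, 0.001, 0.005):  # assumed proportions of women screened
        ppv = positive_predictive_value(sens, spec, prevalence)
        print(f"{name}: assumed prevalence {prevalence:.2%} -> PPV {ppv:.1%}")
```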
Conclusion
The results to date demonstrate the dilemma with all screening, in that early detection needs to be balanced against potentially treating many women unnecessarily. While the results are encouraging, particularly for MMS, it is not until the effect of ovarian cancer screening on mortality is known in 2015 that a population screening programme can be considered. The UKCTOCS will also provide comprehensive data on cost, acceptance, physical and psychological morbidity.
Until such time, improving awareness of the symptoms of ovarian cancer will be crucial in ensuring that women with ovarian cancer are diagnosed at an early stage. Although symptoms are non-specific, GPs and women need to be aware that there is increasing evidence that women with ovarian cancer do experience symptoms more frequently, severely and persistently than those without the disease 8,9 . Responding to these symptoms without delay is vital if we are to counter a disease that at present has a very poor prognosis.
|
v3-fos-license
|
2020-10-28T05:09:03.085Z
|
2020-10-26T00:00:00.000
|
225076604
|
{
"extfieldsofstudy": [
"Medicine",
"Business"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11135-020-01056-9.pdf",
"pdf_hash": "3289c2ad0ed94e19416f4a65227e686c88cf8089",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:516",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"sha1": "3289c2ad0ed94e19416f4a65227e686c88cf8089",
"year": 2020
}
|
pes2o/s2orc
|
Economic development trends in the EU tourism industry. Towards the digitalization process and sustainability
From an economic viewpoint, tourism is heralded as bringing income to local communities. From an ecological standpoint, tourism poses a threat to environments. Sustainable tourism should leave a minimal negative impact on the places visited and preferably have a positive impact on society. The digitization of the tourism economy is conducive to increasing the efficiency of enterprises' operations, but it also has a positive impact on consumers. The main objective of the study is to seek an answer to the question of whether there is a relationship between the development of the tourism industry and GDP growth. Based on it, there are two specific questions: what is the relationship between the level of development of digitization (e-commerce) and the development of the tourism industry, and what is the relationship between the development of the tourism industry and sustainability factors? The originality of our research results, among other things, from the use of three groups of variables in the analysis (the ICT group, the SDG group and the E&T group). Our research explores the factors affecting the tourism industry and the relations between the digitization of the tourism economy, sustainability and economic growth.
Introduction
Digital transformation, as the integration of digital technology into the tourism industry (which includes all businesses that directly provide goods or services to facilitate business, pleasure and leisure activities away from the home environment), results in fundamental changes in the way the world does business, communicates and develops on national and international levels. Customer habits are also changing along with breakthroughs in technology (Hojeghan and Esfangareh 2011) ( Fereidouni and Kawa 2019). Digitization offers many new opportunities that can be exploited by providers in the tourism industry. At the same time, competition is intensifying and companies have to keep pace with digitization in order to remain on the same level.
The development of ICT technology over the last decades has dramatically affected the tourism sector, insofar as the accelerated connection of technologies and tourism in recent years has led to necessary changes in the understanding of the nature of tourism, and requires continuous research and analysis of how digitization affects the economic growth of enterprises in the tourism industry. Research most often targets information technology in tourism and has been almost exclusively focused on the benefits and the applications of technology (Del and Baggio 2015) (Huang et al. 2017) ( Fereidouni and Kawa 2019), and much less frequently on the drawbacks (Gretzel 2011). There are also studies in the literature on the economic effects of digitization. A question which remains unanswered, and which scientists are still looking for answers to, is: "Can digitization be viewed as the motor of transformation for the tourism industry in the age of the internet economy?" (Bauer et al. 2008).
Studies show that digitization offers promising potential in the tourism industry. All business processes occurring in the creation of development in the tourism industry are affected (Ralph and Searby 2004) (Ighalo 2014). In addition to the digital transformation of processes, digitization offers opportunities for new business models in the tourism industry (Souto 2015).
In the face of crises, changes on the market, and especially when it comes to specific environmental factors (e.g. the Greek crisis), the realities of the tourism industry are changing. In most cases, they also exhibit sharply redefined business models, which are highly disruptive to traditional paradigms (Cuesta et al. 2015); (Rayna and Striukova 2016). The recent global financial crisis has demonstrated how important the stability of the economy and financial system is in the modern world. The course of the crisis and its negative consequences in both the regulatory and real spheres have led to the verification (reevaluation) of many, seemingly solid, views on the functioning of economies. The EU financial perspective for 2014-2020 notably takes into account environmental factors, in particular compliance with the idea of sustainability. The tourism industry, which is one of the world's fastest-growing industries, is now trying to move towards sustainable and responsible practices. Perhaps we should write that this industry was the fastest growing, because it is known that the COVID-19 pandemic caused rapid changes in its development ( Welford et al. 1999) (Kişi 2019). The impact of COVID-19 on the economy is significant, not least in the tourism industry. Factors such as the COVID-19 pandemic, the disruption of ecological balance due to global warming, the loss of social values, and the failure to preserve natural, historical, social, and cultural assets make sustainable tourism a necessity (Kişi 2019).
In view of the fact that sustainability generally involves several separate issues such as the protection of ecological systems, intergenerational equity and the efficiency of resource use (Heal 1998), the valuation of environmental assets and the recognition of constraints implied by the dynamics of environmental systems (Jones and Dowling 2004) (Matthes 2007a, b) (Ziolo et al. 2019), then it also implies the need to look at externalities and their impact on tourism. The natural environment implies the development of a tourist economy. The basis for determining the types of externalities is the consideration of axioms which define sustainability. It is about considering a number of sustainability factors that will determine the development of the tourism economy 1 .
In the report "Our Common Future", four domains of sustainability are indicated: economy, ecology, politics, and culture (WCED 1987). It should be remembered that sustainable development is a guarantee of a good quality of life and is a way of organizing the social and economic life of a human being (Paul and Liam 2016). However, decisions made by public authorities in the area of striving for sustainable development imply the need to take sustainability factors into account in terms of the policy of enterprises in the tourism sector (WTO 2020).
The study contributes to existing research, covers the gap in the existing literature, and provides a complex theoretical framework for defining and understanding the problem of economic growth in the tourism sector and its role in the contemporary economy from the perspective of achieving sustainable development goals (United Nations 2015).
The paper aims to contribute to the body of knowledge of factors affecting the tourism industry, especially providing a new general theory pertaining to the influence of sustainability and digitalization on the tourism industry. We also want to broaden the abovementioned question as follows: Can digitization, taking into account the factors of sustainability, be viewed as the motor of transformation for the tourism industry in the age of the internet economy?
The paper is organized as follows: an introduction has been presented in Sect. 1. Section 2 discusses the literature review. Section 3 presents the data, the variable description and methodological framework. Finally, Sect. 4 provides the empirical results and conclusions.
Literature review
In view of the fact that sustainability generally involves several separate issues such as the protection of ecological systems, intergenerational equity and the efficiency of resource use (Heal 1998), the valuation of environmental assets and the recognition of constraints implied by use of the environment (Matthes 2007a, b) (Ziolo et al. 2019), then it also implies the need to look at issues related to tourism development, the role of digitization and its impact on economic growth.
Research interest in factors affecting the tourism industry, especially providing a new general theory pertaining to the influence of sustainability and digitalization on the tourism industry, has increased recently.
The tourism sector has undergone a remarkable change due to advances in digitization processes and transformation as a result of these advances. Digitization in tourism has implications for farming processes and has impacted the economic efficiency of the tourism industry. Research has indicated the presence of factors that influence both consumer and directly economic growth. In the course of our research, we will attempt to find a number of indications regarding the economic benefits associated with the evolution and use of digitization in tourism, in terms of opportunities and access to supply and information. However, there is a lack of research on the impact of digitization on the development of sustainable tourism. In addition to directional studies, numerous researchers point to a number of factors affecting the tourism industry, in particular economic growth factors. The literature on the subject shows trends in the evolution of the impact of the different factors on the tourism economy. Studies are evolving, as in turn are approaches to this topic, deepening existing research and exploring new trends. Pursuant to the literature review, there are many different relationships and possibilities inherent in analyzing the factors affecting the tourism industry. The directional evolution of the research is summarized in Table 1.
Although digitalization is a rapidly developing sphere of national interest-especially when it comes to the tourism economy-and has both advantages and disadvantages, scientists differ when it comes to the direction of their views. The emergence of new technologies is first to indicate a change in the economic systems, not to mention their reputation of providing tourist services as the drivers of economic development. The evolution of research trends in the tourism economy has shown that there are changes in the search for factors affecting economic growth in the tourism economy. Along with the development of society, progressive industrialization, environmental degradation and independent factors that rapidly affect the tourism economy (e.g. factors causing rapid changes such as COVID-19), there is a real need to study the impact of various factors on economic growth in tourism in changeable social and economic conditions. The fact that there is a real need (taking into account the general trend of sustainable development and a sense of social responsibility) for the development of sustainable tourism (Welford et al. 1999) (Haseeb et al. 2019) should be taken into account; however, the significant impact of digitization on economic growth in this industry is observed. Two main directions of research on factors influencing the development of the tourism industry are indicated (digitalization and research on economic growth- Table 1). There are no binding arrangements regarding the links between sustainability and digitization and GDP research in the tourism economy. The existing research gap is the combination of digitization, sustainable development and corporate social responsibility. Therefore, in our opinion, there is a need to determine which digital and environmental factors have a significant impact on economic growth in the tourism sector.
Through digitization, many processes in tourism companies have become more effective, and thus more cost-efficient. This results in a large potential sales volume because the use of the Internet makes the transition and distribution of information quicker, better, and cheaper regardless of geographical and time limitations.
Consumers have faster (more direct) access to offers, knowledge and conditions, as well as protection of their interests. They can become familiar with the specifics of a place and assess whether it meets their sustainability requirements (Haseeb et al. 2019). Digitization allows one to assess ESG risk factors (Ziolo et al. 2019) and incorporate them into the decision-making process. Externalities may be positive (benefits) or negative (costs) for enterprises. We can also consider them in the form of the provision of services and the effects of consumption. Although the discussion on externalities has been around for a long time, the concept is still controversial. From the point of view of sustainable development, the external effects will be associated with three basic pillars: the environmental pillar, the social pillar and the economic pillar (Zhao et al. 2018) (Ziolo et al. 2019). Corporate social responsibility (CSR) is a concept with constantly increasing importance for tourism businesses and their stakeholders. The environmental, social and governance (ESG) dimensions of CSR performance may contribute to the economic performance of tourism businesses. Environmental, social and governance issues are important for stakeholders and for customers. Tourism businesses use CSR as a strategic tool to create favorable stakeholder and customer perceptions. CSR can ensure that customer perceptions are not influenced negatively by activities which they might deem unsustainable (Palazzo and Richter 2005; Yoon et al. 2006; Sila and Cek 2017). The literature on the subject indicates that other stakeholders are also demanding more and more CSR information (O'Dwyer et al. 2005). In the literature review, CSR performance is measured by ESG performance scores (Richardson 2009; Cuesta and Valor 2013; Sila and Cek 2017). It is also indicated that information on ESG factors may be disclosed in the CSR reports of tourism enterprises.
[Table 1. Research directions on the effects of digitization on the tourism economy (with particular emphasis on factor analysis). Source: own study. The recoverable entries cover: research on digitization in the tourism economy showing that, despite flexibility of operation, efficiency improves with the rapid introduction of changes and the tendency towards a low-cost structure is strengthened; the impact of new technologies on business models, where rapid redefinition of business models can be highly disruptive to traditional paradigms (Cuesta et al. 2015) (Rayna and Striukova 2016); research on the impact of digitization on the productivity of the tourism economy, in particular the question of a possible productivity paradox in the digital economy and an apparent productivity decline in industrialized countries; research on factors such as GHG emissions, energy and water use, combined with improving the quality of tourism jobs, preserving natural and cultural resources, limiting negative impacts at tourist destinations (including the use of natural resources and waste production), promoting the wellbeing of the local community, reducing the seasonality of demand and limiting the environmental impact of tourism-related transport; and research on the thesis that reducing human intervention and making everything connected increases efficiency.]
This information may sometimes be biased, called "greenwashing" (Galbreath 2013;Sila and Cek 2017), where companies exaggerate the level of their CSR practices to create a more positive corporate image to their stakeholders and especially customers. The sphere of ESG and CSR is significantly linked by the digitization process, which enables the collection, provision of information and shaping customer attitudes and decisions of other stakeholders. Figure 1 presents the idea of connections between digitization and sustainable development, taking into account the consumers of the tourist industry.
When assessing the contribution of tourism, it can be stated that this sector plays a key role in the implementation of sustainable development goals (WTTC, WTO and the Earth Council 1995) (UN General Assembly 2015). The power of digital information for customers of tourism enterprises, associated with the digital transformation of business models, is creating a new ecosystem and a new way of doing business. The "blue ocean theory" is once again being proved, because the speed of technology is creating a higher level of pressure for movement and for the transformation of companies to find new "Blue Oceans". This means that tourism companies are transforming their ways of doing business, not only in terms of internal agility and efficiency processes, but also when it comes to external interactions with their existing and prospective customers (Dellarocas 2003) (Zimmermann et al. 2016) (Ribeiro and Florentino 2016).
[Fig. 1 The concept of connections between digitization, sustainable development and changes in the efficiency of the tourism economy. Source: own elaboration]
A growing number of studies indicate expectations regarding the improvement in the social, environmental and economic results of enterprises which use sustainability ideas in their business models. This evidence, and the fact that we may observe a systematic increase in the costs of social and environmental damage as a result of negative externalities, indicates the need for strong pressure to create sustainable value, especially in the tourism economy. From this point of view, there is a great deal of space for sustainability as a new element of economic growth in the tourism economy. The specific objectives of the study are:
• to seek an answer to the question of whether the tourist economy has an impact on sustainable economic growth,
• to investigate whether the digitization of the tourism economy affects the stability of revenues and the functioning of business tourism,
• to determine whether (and which) digital and environmental factors have a significant impact on economic growth in the tourism sector.
Statistical materials
Considering that the basic goal of our research was to answer the question of whether digitization, taking sustainability factors into account, can be seen as a motor for the transformation of the tourism industry in the age of the internet economy, we were obliged to select a representative set of variables to study. The empirical analyses presented in this paper take the countries of the European Union into account and are based on three groups of data (Table 2) related to: economic and tourism factors (E&T), ICT factors (ICT) and Sustainable Development Goals (SDGs). The first part of the study begins with a critical analysis of the field literature, highlighting both quantitative and qualitative studies on the impact of digitalization on tourism and economic growth. In the literature analysis, we looked at the relationship between digitalization and its impact on the business performance of tourism industry enterprises, as well as the relationship between the tourism economy and sustainability. The critical analysis of the field literature has led us to determine variables that affect the development of the tourism industry and GDP growth, and the level of development of digitization and the development of the tourism industry. We include these variables in Table 2. Furthermore, we make use of an empirical analysis of the E&T group, the ICT group and the SDG group. Empirical analysis was based on data from European statistics on Eurostat and the European Commission.
The study covers the period of 2011-2018, for which we were able to gather the latest statistical data. Not all data showing the impact of the global economic crisis is available, but data have been collected for certain indicators since 2005. We do not have statistical data to show the impact of COVID-19 on the tourism economy (only general studies were available) (WTO 2020). These statistics will be published later.
To monitor the progress towards the Agenda 2030 goals, the European Commission uses 100 different indicators, some of which are not available for all EU countries. This applies, among others, to the indicators describing EU countries with access to the sea in the case of countries that do not have such access.
[Table fragment (SDG indicators): ensure the availability and sustainable management of water and sanitation for all - inland water bathing sites with excellent water quality; promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels - general government total expenditure on law courts; climate action - greenhouse gas emissions.]
We analyzed all available indicators describing sustainable development and chose typical indicators affecting the tourism industry using an expert study (focus study). These indicators (especially bathing sites with excellent water quality, peaceful and inclusive societies without conflicts or incidents, accountable and inclusive institutions at all levels and climate action) influence the tourist attractiveness of a given country, while respecting the natural environment. However, it should be emphasized that the pursuit of the assumed SDG goals may be tantamount to higher costs; on the other hand, it makes it possible to meet the expectations of the group of tourists who care about respect for the environment. Because we do not examine individual ESG risk factors, they are not grouped into individual factors that reflect environmental, social and economic risks. Bearing in mind the potential benefits of digitalization for the business environment on the one hand, and the evidence presented by other scholars (Table 1, Table 2 and the critical analysis of the field literature) on the other hand, we proceeded with major research assumptions directing our approach, thus defining three research hypotheses: H1: There is a positive relationship between the development of the tourism industry and GDP growth. This means that, in order to achieve a higher level of GDP, a state should plan budgets based on a balanced financial policy. This policy should take into account both the feasibility of revenues as well as the sources and methods of spending the funds (expenditure policy). Therefore, it is important to analyze expenditure on digitization.
H2: There is a positive relationship between the level of development of digitization (e-commerce) and the development of the tourism industry.
H3:
There is a positive relationship between the development of the tourism industry and sustainability factors.
Description of statistical methods
The analyzed features were presented in graphical form as time series for which basic descriptive statistics were calculated. The upward/downward trend was measured with Kendall's coefficient for monotonic trend. In this approach, the features are random and the time indicator is deterministic. Finally, bootstrap replicates of the statistic were computed for the time series; the phase-scrambling approach is described by Davison and Hinkley (1999). In our method, 1000 bootstrap replicates of the time series are obtained by taking blocks of length l = 8. The results are given in the form of the T-statistic, the Kendall coefficient and the p value for the independence test.
The correlation between features was given as the Kendall coefficient without bootstrap procedures. The T-statistic and p value for the independence test are also provided. The calculations were made in the R language.
Kendall's Nonparametric Test for Monotonic Trend is a special type of independence test based on Kendall's statistic. The confidence interval for the slope is computed using a modification of the Theil/Sen method.
In order to select the tools for the study, a test of the normality of the variables' distributions was first performed. Due to the rejection of the hypothesis of a normal distribution of the studied variables, non-parametric tools such as Kendall's Nonparametric Test were used.
Due to the small number of observations, the conclusions of the trend analysis should be approached with caution. The results show only a tendency, not a clear trend strength.
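As a concrete illustration of the procedure described above, the sketch below computes Kendall's tau against a deterministic time index, a Theil-Sen slope, and bootstrap replicates of the trend coefficient in Python. It is not the authors' original R code: the series values are hypothetical, a simple moving-block bootstrap is shown instead of the phase-scrambling variant cited from Davison and Hinkley, and, because only eight annual observations are available, the example call uses a block shorter than l = 8 so that the resampling is not degenerate.

```python
# Sketch of the trend analysis described above (hypothetical data, not the original R code).
import numpy as np
from scipy.stats import kendalltau, theilslopes

def kendall_trend(series):
    """Kendall's tau of the series against a deterministic time index, plus its p value."""
    time_index = np.arange(len(series))
    tau, p_value = kendalltau(time_index, series)
    return tau, p_value

def block_bootstrap_tau(series, block_length, n_replicates=1000, seed=0):
    """Moving-block bootstrap replicates of the Kendall trend coefficient."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    n = len(series)
    last_start = max(n - block_length, 0)
    replicates = []
    for _ in range(n_replicates):
        pieces = []
        while sum(len(p) for p in pieces) < n:
            start = rng.integers(0, last_start + 1)
            pieces.append(series[start:start + block_length])
        resampled = np.concatenate(pieces)[:n]
        replicates.append(kendall_trend(resampled)[0])
    return np.array(replicates)

# Hypothetical annual values for 2011-2018, e.g. an index of Turnover AFSA.
turnover_afsa = [100, 103, 101, 106, 110, 112, 115, 119]

tau, p = kendall_trend(turnover_afsa)
slope, intercept, low, high = theilslopes(turnover_afsa, np.arange(len(turnover_afsa)))
boot = block_bootstrap_tau(turnover_afsa, block_length=4)  # shorter block: only 8 points
print(f"Kendall tau = {tau:.2f} (p = {p:.3f}), Theil-Sen slope = {slope:.2f}")
print(f"bootstrap tau: mean = {boot.mean():.2f}, "
      f"95% interval = ({np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f})")
```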
Annual data were collected from the Eurostat database for the period of 2011-2018. This is due to the fact that, in those years, there was maximum data availability in the field of variables. The variables came from the main tables of the Eurostat database (Data explorer), which is an interface designed for the reading of multi-dimensional tables. All data refer to NACE Rev. 2, the nomenclature of economic activities in the European Union (EU). NACE Rev. 2 is to be used, in general, for statistics referring to economic activities performed from 1 January 2008 onwards (Eurostat 2008). Data for the tourism industry, in a statistical context, refer to the activity of visitors taking a trip to a destination outside their usual environment, for less than a year. It can be for any main purpose, including business or leisure.
We used the following variables for analysis:
• GDP - Gross Domestic Product is an indicator for a nation's economic situation. It reflects the total value of all goods and services produced minus the value of goods and services used for intermediate consumption in their production. Expressing GDP in PPS (purchasing power standards) eliminates differences in price levels between countries, and calculations on a per-head basis allow for the comparison of economies which are significantly different in terms of absolute size.
• Turnover AFSA - Turnover for accommodation and food service activities comprises the totals invoiced by the observation unit during the reference period, which corresponds to market sales of goods or services supplied to third parties (excluding VAT and other similar deductible taxes directly linked to turnover as well as all duties and taxes on the goods or services invoiced by the unit).
• Access AFSA - Percentage of enterprises which undertake accommodation and food service activities with internet access.
• Web AFSA - Percentage of enterprises which undertake accommodation and food service activities with websites.
• Web order AFSA - Percentage of enterprises which undertake accommodation and food service activities whose websites provide online ordering or reservation or booking, e.g. shopping carts.
• Selling AFSA - Percentage of enterprises which undertake accommodation and food service activities that use online sales (at least 1% of turnover).
• Inland water - The indicator measures the number and proportion of coastal and inland bathing sites with excellent water quality. The indicator assessment is based on microbiological parameters (intestinal enterococci and Escherichia coli). The Bathing Water Directive requires Member States to identify and assess the quality of all inland and marine bathing waters and to classify these waters as 'poor', 'sufficient', 'good' or 'excellent'.
• Law courts - The indicator measures the total general government expenditure on law courts according to the classification of the functions of government (COFOG). This includes expenditure on administration, operation or support of civil and criminal law courts and the judicial system, including the enforcement of fines and legal settlements imposed by the courts and operation of parole and probation systems; legal representation and advice on behalf of government or on behalf of others provided by government in cash or in services.
Research results
The first part of the conducted analyses pertained to trends based on Kendall's trend coefficient. The results - the relationship between the development of the tourism industry and GDP growth in the EU - are presented in Fig. 2. As we can observe, the GDP and Turnover AFSA trend indicators in the EU as a whole are similar (18.21% and 16.27%). The behavior of these variables during the analyzed period (2011-2018) is very similar. The development of the tourism industry and economic growth in the EU exhibit very similar trend behavior. We can say that there is a causal relationship between the development of the tourism industry and GDP growth in the EU.
Using the same tools, we can analyze ICT and sustainability development variables in the context of the development of the tourism industry (Turnover AFSA). In the ICT group we singled out four variables: Internet Access AFSA, Web AFSA, Web order AFSA and Selling AFSA. They are presented in Figs. 2, 3, 4, 5, 6. The most similar Kendall's trend indicator values occur in the case of online sales in the tourism industry (Selling AFSA) and turnover in this part of the economy (Turnover AFSA) (17.08% and 24.01%).
[Figure: Kendall trend coefficient - EU Selling AFSA and EU Turnover AFSA. Source: own elaboration]
A trend less similar to turnover in the tourism industry may be observed in Web order AFSA ( − 1%), Web AFSA (0.76%), and Access AFSA (6.32%). Positive correlation can be seen in the field of ICT variables (excluding Web order). A smaller impact on the level of revenues in the AFSA sector can be clearly seen, in particular in the Web order field, which can be explained by the saturation of ICT tools in the field of online ordering and sales. The use of ICT tools in AFSA is becoming standard.
The third part of the analysis is connected with sustainability variables: Inland water and Law courts, which are presented in Figs. 7 and 8.
Both of them are only partly correlated with the development of the tourism industry as represented by Turnover AFSA. Kendall's trend coefficient indicators are 11.34% and 8.63% respectively, while the Turnover AFSA trend indicator is 24.01%. Positive correlation clearly indicates the impact of environmental quality and sustainable development conditions on the level of sales growth in the AFSA sector, especially in terms of security.
The next part of the research analyzed the level of correlation between variables. The correlation statistics for the EU as a whole are presented in Table 3.
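For readers who want to reproduce a table of this kind, the sketch below shows how pairwise Kendall tau-b coefficients can be assembled with pandas. The series values are hypothetical and the column names merely mirror the variables used in the paper, so the resulting numbers are illustrative rather than those of Table 3.

```python
# Illustrative only: building a pairwise Kendall tau-b correlation table from annual series.
import pandas as pd

# Hypothetical EU-level values for 2011-2018 (invented for demonstration).
data = pd.DataFrame({
    "GDP":           [100, 102, 103, 105, 108, 111, 114, 117],
    "Turnover_AFSA": [100, 101, 103, 106, 109, 113, 116, 120],
    "Selling_AFSA":  [10, 11, 11, 12, 13, 15, 16, 17],
    "Inland_water":  [85, 86, 84, 85, 83, 84, 82, 83],
    "Law_courts":    [50, 51, 51, 52, 53, 53, 54, 55],
}, index=range(2011, 2019))

# pandas computes the tau-b variant, which handles ties in the annual values.
kendall_matrix = data.corr(method="kendall")
print(kendall_matrix.round(2))

# Correlation of each variable with tourism turnover, as discussed in the text.
print(kendall_matrix["Turnover_AFSA"].drop("Turnover_AFSA").round(2))
```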
The basic correlation between GDP and Turnover AFSA in the EU is 1. There is a very strong positive correlation between GDP and the tourism industry (Turnover AFSA). The correlation between the development of the tourism industry and sustainability variables is ambiguous; the Inland water variable and Turnover AFSA are negatively correlated. It is interesting to note that all ICT variables are positively correlated with Turnover AFSA (0.59-0.88). Based on this, we can say that ICT is probably a more important factor in the development of the tourism industry than sustainability variables.
By analyzing GDP and Turnover AFSA in a group of 39 countries, we can find that 15 of them have a Kendall tau-b correlation of 1. Another 11 countries have this correlation at a level higher than 0.9. Given that there is a lack of data for four countries, we can say that almost 75% of the countries analyzed exhibit a very high correlation between GDP and the development of the tourism industry (Turnover AFSA).
In this context, it is interesting to observe that there is one country (Greece) with negative correlation ( − 0.14). It may be worth analyzing why this highly tourism-oriented country has a negative correlation between GDP and the development of the tourism industry as represented by the Turnover AFSA variable. A very low level of correlation (albeit positive) is also observed in Norway (0.14).
Another part of the analysis relates to the problem of correlation between the development of the tourism industry (Turnover AFSA) and ICT development represented by Web order AFSA. EU countries as a group have a level of correlation of these variables of 0.59. There are countries with high positive correlation: Iceland (1), Denmark (0.89), Hungary (0.70), Sweden (0.69). There are also countries with negative correlation: Austria ( − 1), Cyprus ( − 0.40), Czechia ( − 0.37), Greece ( − 0.37). A third group is comprised of countries with close to zero correlation: Estonia (0), Italy (0.03), Malta (0.03), Norway (0.07), and Germany (0.18). Generally, there are eight countries with negative correlation, nine countries with no data, 20 with positive correlation and 2 with zero correlation. We can assess such findings as indicative of positive correlation between Turnover AFSA and ICT, represented by Web order AFSA.
The final stage of the analysis is connected to sustainable development, represented by Inland water, and the development of the tourism industry, represented by Turnover AFSA. There is a lack of data for 20 countries. 12 countries have positive correlation and six negative. At the EU level, slightly negative correlation ( − 0.28) may be observed.
Countries with higher positive correlation are: Bulgaria (0.84), Spain (0.78) and Croatia (0.71). Higher levels of negative correlation may be observed in Sweden ( − 0.89), Finland ( − 0.87) and Denmark ( − 0.78). The assessment of this phenomenon in the analyzed group of countries is somewhat ambiguous.
Conclusions and discussion
Existing studies confirm the link between the tourism economy and economic growth. However, research on these matters is still lacking when it comes to a sustainable approach. This paper set out to fill this gap in the literature. Our research has shown that countries in which economic growth is lower (so-called poorer countries) seek to improve the situation by using digitization and emphasizing sustainable development. They attempt to promote the goals of SDGs because, by implementing the idea of sustainability, they see the possibility of using digitization to improve their market position. Thus, they affect sustainable economic development.
Our research has confirmed that there is a relationship between the tourism economy and economic growth and sustainable economic mainstreaming. We can say that there is a causal relationship between the development of the tourism industry and GDP growth in the EU. It is interesting to observe that there is one country with negative correlation: Greece. It may be worth analyzing why this highly tourism-oriented country has a negative correlation between GDP and the development of the tourism industry. Also interesting is the very low level of positive correlation observed in Norway. The correlation between the development of the tourism industry and sustainability variables is ambiguous. Answering the question of whether the digitization of the tourism economy affects the stability of revenues and the functioning of the economy, we can say that ICT is probably a more important factor in the development of the tourism industry than sustainability variables. However, as research shows, ICT tools have had a more significant impact in previous years. At present, it can be concluded that ICT tools have become standard and are no longer such an important factor influencing the development of tourism and increased revenue.
Detailed figures show interesting relationships. It is worth noting that all ICT variables are positively correlated with Turnover AFSA.
Although sustainable development is recognized as an important element affecting the economy and EU countries pay special attention to sustainability, this factor is not taken into account in all of the countries surveyed (as our research confirmed). In this respect we can observe that the Inland water variable and Turnover AFSA are slightly negatively correlated. On the other hand, the Law courts variable and Turnover AFSA are positively related. The ambiguity in determining the impact of the tourist economy on sustainable economic growth may be associated with the effects of the financial crisis, as well as the strong impact of the SDG goals through access to EU funds.
Although, as confirmed by the results of the conducted research, not all analyzed countries show a strong commitment to linking digitization with the goals of sustainable development, it is still necessary to conduct a common policy supporting sustainable development. Through digitization, society should see how the goals of sustainable development are realized. We make this postulate in connection with the results of our research.
However, without conducting research in this area, it is difficult to obtain an unambiguous answer. The fact is that the SDGs have changed over time, especially in 2020, when, despite the strong impact of digitization on service industries, a significant crisis could be observed in the tourism industry in the wake of COVID-19. At the same time, the SDGs have changed and were redirected to fighting the effects of COVID-19.
Our research attempts to explain the link between sustainability and digitalization in the tourism industry and to contribute to the body of knowledge of factors affecting the tourism industry. The hypothesis (H1) assuming that there is a positive relationship between the development of the tourism industry and GDP growth has been verified positively. This relationship was presented in Fig. 2 and Table 3. The hypothesis (H2) assuming that there is a positive relationship between the level of development of digitization (e-commerce) and the development of the tourism industry, and the hypothesis (H3) assuming that there is a positive relationship between the development of the tourism industry and sustainability factors, have been verified. However, our research shows both negative and positive relationships. For H2, positive relationships are visible for the Scandinavian countries, Iceland and Hungary. Negative relationships occur for Austria ( − 1) and Cyprus ( − 0.40). Hypothesis H3 has been positively verified in most of the surveyed countries: 12 of them have positive trend correlation and six negative. Countries with higher positive correlation are Bulgaria (0.84), Spain (0.78) and Croatia (0.71). These countries should be considered countries where the tourism economy is a significantly developing industry. However, although the Scandinavian countries have highly developed economies, they are less attractive to tourists. In addition, the Scandinavian countries and Iceland are at the forefront in terms of sustainability (Table 4).
The originality of the research consists of the inclusion of three groups of variables in the analysis (namely the ICT group, the SDG group and the E&T group). This allows the research to contribute to the body of knowledge of factors affecting the tourism industry, in particular providing a new general theory of the influence of sustainability and digitalization on the tourism industry (Table 5).
Our recommendations and suggestions are as follows:
1. Governments should take the necessary steps to raise public awareness about the relationships between the tourism economy and sustainable economic growth.
2. Tourism economy enterprises should broaden their knowledge about the preferences of their customers in the field of sustainability factors. It is about determining whether customers are guided by the criteria of sustainability in making their own decisions regarding the choice of destinations and the environment and conditions of rest.
3. Digitization is not limited to supporting the sustainability factor. Governments and tourism businesses should consider how to establish permanent links between digital and environmental factors to achieve a significant impact on economic growth in the tourism sector.
4. Governments must monitor the tourism economy in terms of the application of green guidelines to build green products dedicated to entities using digitized services.
5. The tourism industry should become familiar with our research and learn which digital and environmental factors have a significant impact on economic growth in the tourism sector.
6. New ICT tools should be developed which incorporate sustainability. This task is addressed both to governments that create intervention programs or support policies and to tourism industry enterprises.
Due to the accessibility and comparability of data over time and the specific nature of the phenomenon studied, the authors struggled with a number of limitations during the study. In particular, the selection of variables for the study generated problems, as data on sustainable development dedicated to the tourism economy are very limited (Table 6). Within future in-depth research, the authors intend to expand the context of the effect of COVID-19, so as to examine whether ESG risk in the tourism industry is associated with digitization and economic growth. As we have already shown, the impact of the tourism economy on sustainable economic growth is ambiguous, which is why we wish to discern the causes of this ambiguity.
|
v3-fos-license
|
2024-01-26T16:22:34.630Z
|
2024-01-24T00:00:00.000
|
267220363
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4915/16/2/176/pdf?version=1706107225",
"pdf_hash": "afd72ee21e27b1ca296565115a004e43f0f48b3f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:518",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "5832c40bff36825aa70c3b5d64d1179d14443813",
"year": 2024
}
|
pes2o/s2orc
|
Assessing the Risk of Dengue Virus Local Transmission: Study on Vector Competence of Italian Aedes albopictus
The frequency of locally transmitted dengue virus (DENV) infections has increased in Europe in recent years, facilitated by the invasive mosquito species Aedes albopictus, which is well established in a large area of Europe. In Italy, the first indigenous dengue outbreak was reported in August 2020 with 11 locally acquired cases in the Veneto region (northeast Italy), caused by a DENV-1 viral strain closely related to a previously described strain circulating in Singapore and China. In this study, we evaluated the vector competence of two Italian populations of Ae. albopictus compared to an Ae. aegypti lab colony. We performed experimental infections using a DENV-1 strain that is phylogenetically close to the strain responsible for the 2020 Italian autochthonous outbreak. Our results showed that local Ae. albopictus is susceptible to infection and is able to transmit the virus, confirming the relevant risk of possible outbreaks starting from an imported case.
Introduction
Dengue fever is a mosquito-borne tropical disease caused by four distinct but closely related serotypes of Dengue virus (DENV; family Flaviviridae, genus Flavivirus) that are transmitted by Aedes mosquitoes [1]. The species Aedes aegypti, widely spread in endemic areas, is considered the main vector of DENV. Although considered a secondary vector of DENV, Ae. albopictus is however associated with virus transmission in several areas of the world, including Europe [2]. The first Italian indigenous DENV outbreak was reported in Montecchio Maggiore (Vicenza province, Veneto region) in August 2020 with 11 cases secondary to an imported case from Indonesia [3]. To evaluate the role of Ae. albopictus in DENV transmission, we analyzed the vector competence, through experimental infections, of two Italian populations of Ae. albopictus. Potential vertical (transovarial) transmission of DENV was also evaluated. Since it was not possible to isolate the DENV-1 strain responsible for the 2020 Italian outbreak, either from mosquito pools or from human sera, for the experimental infections we used a phylogenetically highly related DENV-1 strain circulating in Singapore.
Virus and Mosquito Populations
DENV-1 isolate SG (EHI)D1/30889Y14 (accession number MG097876) was selected based on its high genetic homology of 98.27% with the DENV-1 strain of the 2020 Italian autochthonous outbreak (namely VI/Italy/2020, accession number MZ291446; Supplementary File). The dengue strain was kindly provided by the Environmental Health Institute, National Environment Agency, Singapore for the purposes of the study. The virus was grown in VERO cells and titrated via plaque assay [4]. Two geographically different Ae. albopictus populations were collected from Rome (Lazio region, Central Italy) and Montecchio Maggiore (Veneto region, northeast Italy), the latter being the town where the 2020 DENV-1 outbreak occurred. A long-established Ae. aegypti laboratory colony (collected in Reynosa, Mexico, in 1998) was used as the reference. The eggs of the two Ae. albopictus populations were collected by using ovitraps and were reared in the insectarium of the Istituto Superiore di Sanità in Rome. Adults were maintained for a few generations (F3-F5) before the test in climatic chambers under the following conditions: 26 ± 1 °C temperature; 70% relative humidity (RH); and a 14:10 h light/dark photoperiod. To check that the two Ae. albopictus populations were virus free, 5 pools of 20 specimens (males and females) per pool for each F0 offspring generation were analyzed for DENV via real-time PCR (qRT-PCR) [5].
Genetic Similarity Analysis
The sequence of the DENV-1 strain of the 2020 Italian autochthonous outbreak (VI/Italy/2020; accession number MZ291446) [3] was compared to DENV-1 sequences available in a sequence repository by using the BLASTN tool in order to identify the most related viral strains. We found more than 200 isolates showing high homology with MZ291446 (>98% homology). Few of these had been isolated in culture. MG097876, circulating in Singapore, where Ae. albopictus has been described, was selected because it was highly related (98.27%) and had been isolated in cells.
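For illustration only: a minimal Python sketch of the percent-identity metric underlying the homology figures quoted above, assuming two already aligned sequences of equal length. The example fragments are hypothetical placeholders; the actual comparison was performed with BLASTN against a sequence repository.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity between two aligned sequences of equal length.
    Columns that are gaps in both sequences are ignored; mismatches and single gaps count against identity."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have equal length")
    columns = [(a, b) for a, b in zip(aligned_a, aligned_b) if not (a == "-" and b == "-")]
    matches = sum(1 for a, b in columns if a == b and a != "-")
    return 100.0 * matches / len(columns)

# Hypothetical toy fragments, not the actual DENV-1 genomes:
print(round(percent_identity("ATGGCTAACCGT", "ATGGCTAGCCGT"), 2))  # 91.67 (11 of 12 columns identical)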
Experimental Infection
The experimental infections were performed in a biosafety level 3 laboratory with 5-10-day-old female mosquitoes that were allowed to feed for 60 min using a membrane feeding apparatus containing a blood-DENV mixture. The virus was diluted in rabbit blood to a final virus concentration of 5 log10 plaque-forming units (PFU)/mL. The infectious blood was maintained at 37 °C through a warm water circulation system. Unfed and partially fed mosquitoes were excluded from the study; only completely engorged females were transferred to a climate chamber (set to the same environmental conditions as previously described) and were provided with a 10% sucrose solution. They were monitored for 28 days. At 0, 7, 14, 21, and 28 days post-infectious blood meal, 10-23 mosquitoes of each species and population were individually processed. To determine the vector competence, the whole body (head, thorax, and abdomen), legs plus wings, and saliva of the mosquitoes were screened for DENV RNA to estimate the infection, dissemination, and transmission rates (IR, DR, and TR), respectively [5]. Mosquito saliva was collected by inserting the entire proboscis into a single quartz capillary filled with 1 µL of Vaseline oil. Vaseline enables the clear identification of saliva droplets, helping to rule out the possibility that a negative result for the virus in a saliva sample is due to a lack of saliva production. One microliter of 1% pilocarpine, a saliva stimulant that is an analogue of acetylcholine [6], prepared in phosphate-buffered saline (PBS) at 0.1% Tween 80, was applied on the mosquito thorax. After 30 min, the medium containing the saliva was expelled under pressure from the capillary into a 1.5 mL tube containing 500 µL of mosquito diluent consisting of PBS, 20% heat-inactivated FBS, and a 1% penicillin/streptomycin/amphotericin B mix (Invitrogen Corp., Carlsbad, CA, USA; GIBCO Brl, Rockville, MD, USA). Virus titers were quantified by using crossing point values obtained from a qRT-PCR [7] and comparing them with a standard curve obtained from 10-fold serial dilutions of a virus stock of known concentration [8,9]. Potential vertical transmission of DENV was also analyzed. For this, mosquitoes were allowed to lay eggs (first gonotrophic cycle-FGC) after the infectious blood meal. Larvae from the FGC were reared up to adulthood in the climatic chamber, and adults were tested for DENV RNA via qRT-PCR analysis on pools (5 pools for Ae. albopictus from Rome, 3 pools for Ae. albopictus from Montecchio Maggiore, and 4 pools for Ae. aegypti), consisting of 5 mosquitoes/pool.
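For illustration only: a minimal Python sketch of how crossing-point (Cq) values can be converted to virus titers through a log-linear standard curve built from 10-fold serial dilutions, as described above. The Cq values, dilution titers, and the NumPy-based fit are assumptions for the sketch and are not the assay data from [7-9].

import numpy as np

# Hypothetical standard curve: Cq values measured for 10-fold dilutions of a
# virus stock of known titer (log10 PFU/mL); real values come from the qRT-PCR assay.
log10_titer_std = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
cq_std = np.array([18.2, 21.6, 25.1, 28.4, 31.9])

# Least-squares fit: log10(titer) = slope * Cq + intercept
slope, intercept = np.polyfit(cq_std, log10_titer_std, 1)

def titer_from_cq(cq: float) -> float:
    """Estimate a sample titer (PFU/mL) from its crossing-point value via the standard curve."""
    return 10 ** (slope * cq + intercept)

print(round(titer_from_cq(27.0)))  # a sample with Cq = 27 maps to a few hundred PFU/mL on this toy curve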
Data Analysis and Statistics
Statistical significance tests were performed using a parametric Student's t test. All statistical analyses were performed using GraphPad Prism 5 software (GraphPad Software, San Diego, CA, USA). For all analyses, a p-value ≤ 0.05 was considered significant.
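For illustration only: a short Python sketch reproducing the same kind of two-sample Student's t test with SciPy (the authors used GraphPad Prism 5). The titer values below are hypothetical and only show how the p ≤ 0.05 criterion would be applied.

from scipy import stats

# Hypothetical viral titers (PFU/mL) for two mosquito groups at the same time point.
titers_albopictus = [520.0, 780.0, 610.0, 450.0, 830.0]
titers_aegypti = [1100.0, 1450.0, 980.0, 1320.0, 1250.0]

# Parametric two-sample Student's t test (equal variances assumed by default).
t_stat, p_value = stats.ttest_ind(titers_albopictus, titers_aegypti)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value <= 0.05}")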
Results
Both Ae. albopictus field populations collected in Montecchio Maggiore and Rome were DENV free, as all tested pools were negative. All tested Aedes populations showed susceptibility to DENV-1 infection, allowing the virus to replicate and spread to the salivary glands. The values of IR, DR, TR, and the mean viral titers are shown in Figure 1 and Table 1, respectively. To confirm the ingestion of infectious viral particles, the engorged mosquitoes were analyzed immediately after the infectious blood meal. The results showed a viral titer of approximately 1 log10 PFU/mL in the tested specimens. The viral titers detected in the bodies of Ae. albopictus increased gradually in all mosquito specimens, reaching the highest mean values of 7.6 × 10² PFU/mL 14 days post-infection (dpi) and 7.4 × 10² PFU/mL 28 dpi in Rome and Montecchio Maggiore, respectively. In Ae. aegypti the highest mean titer was 1.4 × 10³ PFU/mL, achieved 21 dpi (Table 1). Viral titers were higher in Ae. aegypti compared to Ae. albopictus populations, in particular at 21 dpi. The analysis indicated a progressive increase in IR over time for both Ae. albopictus populations, reaching the maximum value 14 dpi (40% Rome, 30% Montecchio Maggiore); subsequently, IR values decreased at 21 and 28 dpi. Conversely, Ae. aegypti showed a steady increase in IR throughout the observation period. DENV-1 was detected in legs plus wings from 14 dpi in all tested populations. All three mosquito groups showed high DR values (range 67-100%). DENV-1 was detected in saliva 14 dpi in both Ae. albopictus populations, highlighting a shorter extrinsic incubation period (EIP) compared to Ae. aegypti, in which DENV-1 was first detected in saliva 21 dpi. However, it was not possible to detect DENV-1 in the saliva after 21 dpi in Ae. albopictus from Rome and 28 dpi in Ae. albopictus from Montecchio Maggiore. In contrast, DENV-1 was detected in the saliva of Ae. aegypti until 28 dpi. Nevertheless, DENV-1 titers were relatively low in saliva (0.1 × 10¹-0.5 × 10¹ PFU/mL) in all analyzed populations (Table 1). Cumulative values of IR, DR and TR calculated from 7 to 28 dpi are reported in Table 2. The Aedes albopictus population from Rome exhibited a higher IR (30%) compared to the other populations (18% for Ae. albopictus of Montecchio Maggiore and 22% for Ae. aegypti). However, the results highlighted a higher mean value of TR for Ae. aegypti (27%) compared to both Ae. albopictus populations (14% for Rome and 23% for Montecchio Maggiore). There was no evidence of vertical transmission in the offspring of the three Aedes populations.
Discussion
Dengue fever is a mosquito-borne tropical disease caused by DENV that has become a major public health problem in recent years, causing approximately 390 million infections globally every year [10]. Several DENV strains have been implicated in autochthonous transmission in Europe since 2010. In recent years, DENV-1 and -2 have been the most prevalent serotypes in infections among European travelers [11]. Although Ae. albopictus is considered a secondary vector of DENV, its widespread distribution and high density in temperate areas represent a real risk for local outbreaks originating from imported cases. Until November 2023, 126 autochthonous/non-travel-associated dengue cases had been reported in Europe, in Italy (82), France (41), and Spain (3) [12,13]. In the summer of 2022, 65 autochthonous cases of DENV transmitted by Ae. albopictus were reported in France [14]. The unexpectedly high number of indigenous cases was, however, associated with nine distinct virus introduction and transmission events. This demonstrates the high risk of autochthonous transmission from imported cases but also the relatively small number of secondary cases associated with each individual index case. The first dengue outbreak in Italy occurred in August-September 2020. The outbreak was geographically limited to a small town in the Veneto region and caused very few indigenous human cases. Aedes albopictus was the mosquito vector incriminated, as it is present and abundant in the area. This event could have been affected by various concomitant biotic and abiotic factors, such as the lower vectorial competence of the Ae. albopictus population compared to the global primary vector, Ae. aegypti, or the sudden change in atmospheric conditions with heavy rains and a sudden drop in temperatures which, together with vector control operations, led to a drastic decrease in the mosquito density. Another important determinant was the concomitant SARS-CoV-2 pandemic, during which the health authorities imposed a lockdown, effectively limiting people's movements. This is confirmed by the fact that the people infected with DENV-1 during the outbreak were all relatives and neighbors [3]. It is known that the pathogenetic variation in virus strains, geographic distribution, vector abundance, and climatic factors influence vector susceptibility to DENV [15]. Studies on the role of possible mutations responsible for the better adaptation of arboviruses to Ae. albopictus have recently been conducted. Bellone et al. demonstrated that consecutive in vivo passages in Ae. albopictus resulted in the emergence of specific DENV-1 strains exhibiting increased infectivity for this vector both in vivo and in cultured mosquito cells. These alterations were facilitated by numerous adaptive mutations in the virus genome [2]. A comparative examination of the CHIKVs responsible for the Italian epidemics in 2007 and 2017 revealed that only the 2007 strain possessed the adaptive mutation E1 A226V for Ae. albopictus. These results highlight the significance of genomic studies in elucidating the potential role of mutations in determining the adaptive capacity of a virus to different vectors [16]. Moreover, arboviruses adapt to secondary vectors, as exemplified by the adaptation of Chikungunya virus to Ae. albopictus [17]. Therefore, a similar scenario cannot be ruled out for DENV. Our findings demonstrated that the Asiatic DENV-1 strain closely related to the 2020 Italian outbreak strain equally infected both Ae. aegypti and two local populations of Ae.
albopictus. Interestingly, the virus reached the salivary glands of Ae. albopictus earlier than those of Ae. aegypti. However, the virus survived longer in Ae. aegypti than in both populations of Ae. albopictus, suggesting a longer transmission potential of DENV-1 in Ae. aegypti. In our study, strains of Ae. albopictus collected in different areas of the country were infected with the same titer of DENV-1 under the same environmental and experimental conditions to evaluate their vector competence. Of note, in agreement with our results, the DENV-1 strain used in the present study belonged to a lineage established during the DENV-1 outbreak in Singapore in 2013-14 and demonstrated efficient infection of an Ae. albopictus-derived cell line [18]. The IR, DR, and TR rates in our study were not significantly different between the Ae. albopictus and Ae. aegypti populations. In addition, the viral titers detected in the Ae. albopictus and Ae. aegypti populations were comparable. This result highlights the potentially important role of Ae. albopictus in the transmission of DENV-1 in non-endemic areas. In fact, the shorter EIP observed in Ae. albopictus compared to Ae. aegypti is noteworthy with regard to its potential to transmit DENV-1 effectively. Dengue outbreaks in France and Italy in 2023 confirm the increasing risk of DENV transmission in Europe, potentially by Ae. albopictus, originating from imported cases. Data on the presence, abundance, seasonal fluctuations, and vectorial competence of invasive mosquito species circulating in a non-endemic country are pivotal for assessing the risk of arboviral transmission chains and, together with strengthened surveillance systems, for implementing prevention and control measures to mitigate the adverse impacts on human health.
Table 1.
Infection rate (IR): number of positive bodies/number of tested fed females; dissemination rate (DR): number of positive legs plus wings/number of positive bodies; transmission rate (TR): number of positive saliva/number of positive bodies.
Table 2.
Infection rate (IR): number of positive bodies/number of tested fed females; dissemination rate (DR): number of positive legs plus wings/number of positive bodies; transmission rate (TR): number of positive saliva/number of positive bodies.Vector competence index (VCI): maximum value 1.0.
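For illustration only: a minimal Python sketch of the IR, DR and TR ratios as defined in the captions of Tables 1 and 2, applied to hypothetical counts. The vector competence index (VCI) is not computed here because its formula is not reproduced in the caption.

def rates(n_fed_females: int, n_pos_bodies: int, n_pos_legs_wings: int, n_pos_saliva: int):
    """Infection, dissemination and transmission rates as defined in Tables 1 and 2."""
    ir = n_pos_bodies / n_fed_females                                # infection rate
    dr = n_pos_legs_wings / n_pos_bodies if n_pos_bodies else 0.0    # dissemination rate
    tr = n_pos_saliva / n_pos_bodies if n_pos_bodies else 0.0        # transmission rate
    return ir, dr, tr

# Hypothetical counts for one population and time point:
ir, dr, tr = rates(n_fed_females=20, n_pos_bodies=6, n_pos_legs_wings=5, n_pos_saliva=2)
print(f"IR = {ir:.0%}, DR = {dr:.0%}, TR = {tr:.0%}")  # IR = 30%, DR = 83%, TR = 33%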
|
v3-fos-license
|
2018-12-08T14:49:10.772Z
|
2015-06-10T00:00:00.000
|
55358098
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://sljpsyc.sljol.info/articles/10.4038/sljpsyc.v6i1.8058/galley/5943/download/",
"pdf_hash": "c37e86a65d00005c5f637c18471f132d8323a391",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:519",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "c37e86a65d00005c5f637c18471f132d8323a391",
"year": 2015
}
|
pes2o/s2orc
|
Challenges and opportunities in stigma for psychiatrists: an analysis of effective coping mechanisms to reduce stigma associated with mental illnesses
Stigma leads to discrimination of the patient, family, health care providers and services. A workshop-type qualitative analysis was conducted with a panel of 40 psychiatrists to attempt to apply evidence based anti-stigma strategies to five given hypothetical case vignettes. Various combinations of protest, education and challenge strategies were selected as effective by the panel. The analysis also revealed a number of stigmatising beliefs related to psychiatry as a profession and behaviour of patients. Psychiatrists themselves need to change such beliefs as part of reducing stigma related to mental illnesses. Publication details: Rajasuriya, M., Fernando, S. M. & Gunawardhana, U. (2015). Challenges and opportunities in stigma for psychiatrists: an analysis of effective coping mechanisms to reduce stigma associated with mental illnesses. Sri Lanka Journal of Psychiatry, 6 (1), 26-28.
Introduction
Stigma is a mark of shame and is associated with mental illnesses, poverty, HIV and many other life conditions. It has been defined as a negative attribute that reduces the value of the person with whom it is associated (1). Stigma in mental illnesses leads to discrimination in education, employment and leisure activities (2,3).
Stigma extends its effects towards medical professionals who treat persons with mental illness, in addition to its effects on persons with a mental illness, their families, and places where they receive care. As psychiatrists we face stigma in everyday practice, and we may be responsible for propagating stigma as well as mitigating it (4).
There are three recognised methods of reducing stigma: protest, education and challenge, out of which most evidence is available for education and challenge (5,6). However, each strategy used to reduce stigma needs to be tailored to the particular situation. A mixture of coping strategies may be most effective for most situations. Psychiatrists are frequently at the receiving end of discrimination, which requires them to be skilled in applying these coping strategies in their daily professional and practical life (4).
Aims and methods
Applying evidence-based effective coping strategies to counter stigma and discrimination in a given practical situation needs careful and skillful judgement. We felt that inviting a group of psychiatrists to analyse a few hypothetical but realistic situations where stigma is expressed might lead to better practical recommendations on how to cope in such situations. The participants (n = 40) were psychiatrists with different levels of experience, ranging from doctors who had recently entered psychiatry training to retired veteran psychiatrists. The moderator (first author) divided participants into five groups of eight members each by allocating each participant a number from one to five according to the seating order. On observation, the five groups looked broadly similar in composition with regard to age, gender and experience in psychiatry.
The moderator allocated case vignettes, one per group, which were applicable to daily practice in psychiatry and addressed various aspects of stigma and discrimination. This was followed by a focus group discussion on how to manage the situation. Each group shared their findings with the other groups. Discussions within and between groups were structured to recognise aspects of stigma and discrimination related to the contents of the case vignettes, to distinguish contributory beliefs and affect states (i.e. attitudes), and to identify the best coping strategies. The discussions were carefully documented, and a framework approach was used to analyse the qualitative data.
Vignette 1
The ward sister of the psychiatry inpatient unit of a provincial hospital made a request to the medical superintendent (MS) for four new beds for the unit. The MS agreed to give old beds discarded from another ward.
Discussion on vignette 1
Aspects of stigma/ discrimination: The place that provides psychiatric services, namely the psychiatry inpatient unit in this case, was given lower priority.
Contributory attitudes: Psychiatry and psychiatric services were deemed to be inferior to other specialities and medical services.
Best coping strategy or combination: Protest is the best coping strategy in this case, demanding transparency and good governance. The policy of distribution of resources among the wards should follow an agreed priority list based on needs rather than prejudices. It is important for mental health professionals to liaise with professionals in other medical specialities, caregiver groups and patient groups to promote this equality.
Vignette 2
A middle-aged woman enters the psychiatrist's room and, before her son (the patient) comes in, quickly informs the doctor that he does funny things, such as buttoning shirts wrongly, each time he misses his tablets. She pleads with the doctor to admit him to hospital, despite the absence of acute symptoms, as a family wedding is coming up.
Discussion on vignette 2
Aspects of stigma/ discrimination: There is stigmatisation within the family, with high expressed emotion and low tolerance of the patient's behaviour. The patient himself may have internalised this stigma.
Contributory attitudes: Every odd action or behaviour of a patient with a mental illness must be due to the effects of that illness.
Best coping strategy or combination: Psychoeducation for the family is of critical importance. However, the panel felt that not obliging carers by conducting ''proxy'' consultations without the patient present (except when clinically indicated, such as in risk assessment) was even more effective. The patient's view of this situation should be sought and respected. If he is found to hold the same attitudes about himself, i.e. internalised stigma, this needs to be addressed as well.
Vignette 3
As the consultant psychiatrist enters the consultant lounge of the hospital, a colleague, with a smile on his face, asks her: ''How are the pissas (lunatics) getting on? Have they kept everyone amused?''
Discussion on vignette 3
Aspects of stigma/ discrimination: The stigma is directed towards the psychiatric profession and patients, highlighting the stereotyping of persons with mental illness (that they are stupid comedians).
Contributory attitudes:
Persons with mental illnesses display stupid, comic, nonsensical behaviour, effectively rendering them clowns. Psychiatrists are the 'doctors' who control these 'clowns'.
Best coping strategy or combination: Angry protest or timid acceptance may be harmful. It was suggested that this comment could be used to highlight the unprofessional behaviour and contributory attitudes of the other consultant in a subtle but assertive way.
One suggestion was to reply, ''They are ok. How are your patients getting on?'' Challenging this stereotype is also possible by saying assertively that ''I do not like that comment because persons with mental illness are not comedians''. One participant, a trainee in psychiatry, shared the experience of a friend asking ''Why in the world did you select psychiatry?'' The participant had said in response: ''Why in the world did you select gynaecology and obstetrics?''
Vignette 4
A 31-year-old man, recently diagnosed with schizophrenia during a one-week hospital admission, returns to work after a further two weeks of medical leave at home to find out that he has been sacked. The management maintained that nobody with a 'problem in the head' (sheershabadha in Sinhala) should be employed in their organisation.
Discussion on vignette 4
Aspects of stigma/ discrimination: The employer decides to take away employment from a person with a mental illness, violating human rights.
Contributory attitudes:
The employer apparently stereotypes persons with mental illness, perceiving them as irresponsible, unpredictable or dangerous individuals. Best coping strategy or combination: In this situation the treating clinician will have an additional role to play. She/he may involve the family and possibly the social worker in seeking help to educate the employer and reverse this decision. The employer would need to be educated on why it would be unfair to deprive a recovered patient of opportunities. Examples of leading, successful persons who have a mental illness may be used to emphasize this point. Another course of action would be to use the legal system to challenge this decision.
Vignette 5
Brief transcript of the daily comedy drama of a popular Sinhala radio show: A new young doctor wants to marry a female patient of a psychiatric hospital. Meanwhile, this patient saves another person from drowning. The director of the hospital recommends her release, allowing the marriage to take place, as her heroic act proved she was 'normal'. Then the young doctor notes that the person saved by the female patient has apparently committed suicide by hanging himself with his sarong, for which she gives an explanation: ''He was soaked when I saved him, so I hung him to dry.''
Discussion on vignette 5
Aspects of stigma/ discrimination: The pivotal point of this joke is that ''the psychiatric patient'' is stupid to the level shown.
Contributory attitudes:
The creators of this segment do not appear to have any knowledge about mental illnesses. There is a diverse range of mental illnesses, not just one illness. None of these mental illnesses has stupidity as a diagnostic criterion.
Best coping strategy or combination: The ignorance, whether deliberate or not, of the media about the true nature of mental illnesses needs to be addressed by protest and education. Another suggestion that emerged was that, because the words ''crazy'' and ''mad'' are used to indicate stupidity, such terms should be avoided when describing mental illnesses.
As an immediate practical step, protest with an attempt to educate, such as writing a letter to the relevant radio station, was also discussed. This scenario highlights the role of health professionals in educating media personnel with a view to improving their attitudes towards mental illness, thereby minimising the stigmatising portrayal of mental illness in the media.
Limitations
A limitation of this research is the fact that the interventions described above were not objectively evaluated. However, the in-depth qualitative exploration of this theme among specialists working in the field is a novel aspect and a strength of the study.
Conclusions and recommendations
At the end of the discussion it appeared that a number of potential steps could be taken by psychiatrists to mitigate stigma and discrimination. Some of them are acts of commission, such as writing letters of protest, and some are acts of omission, such as not obliging third parties' requests to see doctors without the patient except when clinically indicated. A majority of these interventions necessitate a change of attitudes, but not an excessive investment of time or other indispensable resources.
This research was completed as part of a pre-congress workshop for the Annual Academic Sessions of the Sri Lanka College of Psychiatrists.
Declaration of interest
Key words: stigma, beliefs, behaviour, discrimination, mental illness, psychiatrist. SL J Psychiatry 2015; 6(1): 26-28. M Rajasuriya, National Hospital of Sri Lanka; SM Fernando, Illawarra Shoalhaven Local Health District and Graduate School of Medicine, University of Wollongong, NSW, Australia; U Gunawardhana, Retired Consultant Psychiatrist. Corresponding author: M Rajasuriya. Email: mahesh.rajasuriya@gmail.com
|
v3-fos-license
|
2019-02-14T14:02:49.705Z
|
2012-07-25T00:00:00.000
|
62706387
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.adv-sci-res.net/8/135/2012/asr-8-135-2012.pdf",
"pdf_hash": "04c22a9367f50520ada1cf679562c3e8dbb55da6",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:520",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "9d069bf8116128c40fbd0307cc0a879fb33ece04",
"year": 2012
}
|
pes2o/s2orc
|
Statistical processing of forecasts for hydrological ensemble prediction: a comparative study of different bias correction strategies
The aim of this paper is to investigate the use of statistical correction techniques in hydrological ensemble prediction. Ensemble weather forecasts (precipitation and temperature) are used as forcing variables to a hydrologic forecasting model for the production of ensemble streamflow forecasts. The impact of different bias correction strategies on the quality of the forecasts is examined. The performance of the system is evaluated when statistical processing is applied: to precipitation and temperature forecasts only (pre-processing from the hydrological model point of view), to flow forecasts (post-processing) and to both. The pre-processing technique combines precipitation ensemble predictions with an analog forecasting approach, while the post-processing is based on past errors of the hydrological model when simulating streamflows. Forecasts from 11 catchments in France are evaluated. Results illustrate the importance of taking into account hydrological uncertainties to improve the quality of operational streamflow forecasts.
Introduction
Probabilistic information is of special importance for users vulnerable to climatic and hydrological hazards at different scales (agriculture and irrigation, navigation, public safety, energy companies, etc.). In hydrology, a common approach to produce probabilistic information is the use of ensemble-based streamflow forecasting systems (see review by Cloke and Pappenberger, 2009). The key advantage of these systems is that they can provide future scenarios of streamflow evolution in time, with information on the uncertainty of the predictions, which can be potentially more useful at longer forecast lead times, notably in terms of increasing preparedness for severe flood events and reducing losses (Bartholmes et al., 2009; Boucher et al., 2012; Verkade and Werner, 2011). However, model output predictions may lack precision and reliability for several reasons, such as the imperfect numerical representation of physical processes or the insufficient accounting of all sources of uncertainty involved in the system being modelled (e.g. Thirel et al., 2008; Jaun and Ahrens, 2009; Randrianasolo et al., 2010; Velazquez et al., 2011). To improve the quality of probabilistic forecasts and provide reliable estimates of uncertainty, statistical processing of forecasts is recommended (Schaake et al., 2010). The aim is to remove forecast biases and to improve ensemble dispersion. Several techniques have been proposed in meteorology and hydrology, mainly based on empirical dressing techniques, Bayesian methods or regression analysis (e.g. Krzysztofowicz, 1999; Raftery et al., 2005; Fortin et al., 2006; Hashino et al., 2007; Olsson and Lindström, 2008; Brown and Seo, 2010; Zhao et al., 2011).
In hydrologic forecasting systems, statistical correction techniques can be applied to the forecast input of the hydrological model (meteorological variables like precipitation and temperature), to the forecast output of the hydrological model (streamflows) or to both. As shown in Fig. 1, from the hydrological model point of view, a forecasting system can comprise pre-processing approaches (statistical correction applied prior to the hydrologic modelling) and post-processing approaches (statistical correction applied to flow predictions). In all cases, calibration against observations and extensive testing over different hydrologic conditions are usually required to develop an operationally robust system. In order to optimize the implementation of post-processing techniques in real-time operational forecasting systems, a better understanding is needed of the propagation of uncertainty from weather forecasts through hydrologic models, and of the impact of non-linear hydrological transformations and hydrological updating on the ensemble streamflow forecasts.
The aim of this paper is to investigate the use of statistical correction techniques in hydrological ensemble forecasting. We focus on the evaluation of different correction strategies (pre-processing, post-processing or both) and on their impact on the quality of operational streamflow forecasts. The context of the study and the modelling framework, including data and model used, are presented in Sect. 2; methodology and verification measures are described in Sect. 3; Sect. 4 presents the results, and in Sect. 5, conclusions are drawn.
Study context and modelling framework
The study is based on a modelling framework set up at the French electricity company (EDF) for the forecast of streamflows in France. EDF has produced hydrological forecasts for the past 60 yr (Lugiez and Guillot, 1960). Their operational interest includes flood forecasting (for human safety and dam security), short-term forecasting of water inflows to reservoirs, long-term prediction and reservoir management. For the last decade, EDF has invested in ensemble-based hydrological forecasting, comprising the acquisition of real-time meteorological forecast data and the set-up of an appropriate hydrological modelling framework. Moreover, special attention has been paid to the performance of experiments on the use and communication of uncertainty in decision-making. The EDF forecasting chain follows the schematic description in Fig. 1. In this study, meteorological forecasts come from the 50 perturbed members of the ensemble prediction system produced by the European Centre for Medium-Range Weather Forecasts (ECMWF-EPS). Operationally, EDF also uses the ECMWF-EPS control forecast, as well as deterministic forecasts produced by Météo-France. Forcing data (temperature and precipitation) are spatially aggregated at the catchment scale and evaluated by forecasters before being used as input to the hydrological model. The hydrological model is the MORDOR model. It is a lumped soil-moisture-accounting type rainfall-runoff model developed at EDF (Garçon, 1999). MORDOR has four reservoirs representing the physical processes in a river basin and a snow module that accounts for snow storage and melting in the catchment. The model version used in this study has 11 free parameters that were calibrated against observed data.
This study focuses on 11 catchments located in France, with areas ranging from 220 to 3600 km² (Fig. 2). Meteorological input fields are available at a horizontal resolution of 0.5 × 0.5 degrees in latitude/longitude, ca. 50 km over France. The number of grid points falling within each catchment (at different percentages of coverage) varies from 2 to 42, with a median value of 15 grid points inside a catchment. Although some catchments are much smaller than the available meteorological grid scale, they were kept in the study and the sensitivity of the results to the catchment size was evaluated. At each catchment, forecasts are run at the daily time step and for a maximum forecast horizon of 7 days. Verification is performed over a 48-month forecast evaluation period (2005-2008). Forecasts are evaluated against observed daily areal precipitation and daily discharge data available at the outlet of the catchments.
Statistical correction strategies
Four main scenarios corresponding to different strategies for the processing of forecasts within the hydrologic modelling framework are tested (a computational sketch of the quantile selection in scenario 2 and of the error dressing in scenario 3 is given after this list): -Scenario 1 -Raw forecasts: no pre- or post-processing of forecasts is performed and raw model outputs are evaluated; -Scenario 2 -Pre-processing of meteorological ensemble forecasts: meteorological forecasts of temperature and precipitation are corrected using a technique developed at EDF, which is based on the search for analog situations previously archived in a database. At a given forecast day, for each perturbed member of the ECMWF-EPS system, the 50 most analog situations (according to the forecast fields of geopotential height at 700 and 1000 hPa) are retrieved from the database (for details on the use of analog techniques, see Zorita and von Storch, 1999 or Obled et al., 2002). A total of 2500 scenarios (50 ECMWF-EPS × 50 analogs) is obtained. Each scenario consists of a pair of an analog situation (precipitation and temperature) and its corresponding ECMWF-EPS forecast. A corrected forecast scenario is calculated for each lead time t as indicated in Eq. (1). This combination of ECMWF-EPS and analog-based forecasts aims at performing an ensemble dressing of ECMWF forecasts (to improve reliability and to remove biases). The values of the parameters α and k can vary according to the studied catchment and lead time. In this study, they were considered constant and equal to 0.3 and 1.2, respectively. The 2500 scenarios obtained were sorted in ascending order and 50 scenarios (equidistant quantiles) were selected to be used as input to the hydrological model.
-Scenario 3 -Post-processing of hydrological ensemble forecasts only: the statistical correction technique used here takes into account only the errors from the hydrological model. It is thus independent of the meteorological forecasts (raw or pre-processed) used during the forecast evaluation period. Based on the past performance of the model, empirical errors are evaluated by taking the logarithm of the ratio between the observed and the simulated streamflows. Simulations are obtained using observed precipitation and temperature available in the period 1970-2000 as input to the hydrological model. Subsamples of the errors are defined according to 20 classes of streamflow values (corresponding to a discretization of the cumulative distribution function at steps of 8 % between the 10 % and 90 % quantiles and 2 % for the tails of the distribution) and for each lead time (Mathevet, 2010). During forecasting, to "dress" each hydrological ensemble member, error values are drawn from the subsamples, according to the category to which the forecast discharge belongs, and added to the raw forecast value.
-Scenario 4 -Pre-processing of meteorological forecasts and post-processing of hydrological ensemble forecasts: the last scenario tested is the combination of the two correction approaches described above: the pre-processing of meteorological forecasts (scenario 2) and the post-processing of hydrological forecasts (scenario 3).
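For illustration only: a minimal Python sketch, under stated assumptions, of two mechanical steps described above. For scenario 2, the combination of ECMWF-EPS members with analogs in Eq. (1) (with parameters α and k) is not reproduced in the text, so a placeholder sample stands in for the 2500 combined scenarios and only the reduction to 50 equidistant quantiles is shown. For scenario 3, the log-ratio errors log(obs/sim) are applied multiplicatively, i.e. added in log space; the paper's exact implementation and its class boundaries may differ, and all numerical values here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# --- Scenario 2: reduce the 2500 combined scenarios to 50 equidistant quantiles ---
def select_equidistant_quantiles(scenarios, n_out=50):
    """Sort the combined scenarios and keep n_out equidistant quantiles."""
    probs = (np.arange(1, n_out + 1) - 0.5) / n_out          # equidistant plotting positions
    return np.quantile(np.sort(np.asarray(scenarios)), probs)

combined = rng.gamma(2.0, 3.0, size=2500)                    # placeholder for the 50 x 50 member-analog combinations
precip_members = select_equidistant_quantiles(combined, 50)  # 50 values fed to the hydrological model

# --- Scenario 3: dress a raw streamflow forecast with past empirical errors ---
def dress_member(raw_flow, error_subsamples, class_edges, n_draws=1):
    """Draw log(obs/sim) errors from the subsample matching the flow class of the forecast."""
    k = int(np.searchsorted(class_edges, raw_flow))          # flow class of this ensemble member
    errors = rng.choice(error_subsamples[k], size=n_draws)
    return raw_flow * np.exp(errors)                         # equivalent to adding the errors in log space

class_edges = [5.0, 20.0]                                            # hypothetical boundaries for 3 flow classes
error_subsamples = [rng.normal(0.0, 0.3, 200) for _ in range(3)]     # hypothetical archives of past log-errors
print(dress_member(raw_flow=12.0, error_subsamples=error_subsamples, class_edges=class_edges, n_draws=3))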
Forecast evaluation methods
Forecasts were evaluated against observations for a 48-month period from 2005 to 2008. Various measures are available in the literature to evaluate probabilistic forecasts and some have already been applied for the evaluation of hydrological forecasts (Wilks, 2011;Casati et al., 2008;Laio and Tamea, 2007). The scores used here are briefly presented below (see the references for details): -Normalized RMSE: the root-mean-square error of the ensemble mean is normalized by the mean value of the observations during the forecast evaluation period to allow comparison among catchments of different sizes. Although the RMSE is not a score adapted to ensemble or probabilistic forecasts, we included it here as it is a score commonly used in hydrology.
-Brier Score (BS): one of the most common accuracy measures for forecast verification, the BS is essentially the mean squared error between the predicted probabilities for a set of events and their outcomes (= 1 if the event occurs and = 0 if it does not occur). The score takes values between 0 and 1; the lower the score, the higher the accuracy. In this study, we focus on the evaluation of severe events given by predictions exceeding the 80 % quantile of the empirical distribution of observed values.
-Rank Probability Score (RPS): the RPS is an extension of the BS to the many-event situation, computed, however, with respect to the cumulative probabilities in the forecast and observations vectors. The score takes values between 0 and 1; a "perfect forecast" receives RPS = 0. Here, we used 10 forecast categories. -Skill Scores (BSS or RPSS): the BS and the RPS are compared to a reference, which in this study corresponds to the raw forecasts (scenario 1). A skill score of 0 indicates a forecast with skill similar to the reference, while a forecast which is less (more) skilful than the reference will result in negative (positive) skill score values.
-Probability Integral Transform (PIT) histogram: the PIT histogram is a continuous analog of the rank histogram (Wilks, 2011), frequently used to verify the consistency of the forecasts, i.e. whether the ensemble members of a forecast and the corresponding observations are samples from the same population (Wilks, 2011). If the ensemble consistency condition is satisfied, the relative frequencies given by the ensembles should estimate the actual (observed) probability. In this case, the PIT histogram shows as a uniform histogram, giving an indication of reliable forecasts. Under-dispersed forecasts will give U-shaped PIT histograms, while over-dispersed forecasts show relative frequencies concentrated in the middle ranks (arch-shaped). Asymmetrical histograms are an indication of over- or under-forecasting bias. A computational sketch of these verification scores is given after this list.
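For illustration only: a minimal Python sketch, using made-up ensemble forecasts and observations, of the verification measures listed above (normalized RMSE of the ensemble mean, Brier Score for exceedance of the 80 % quantile of observations, a category-based Rank Probability Score normalized to [0, 1], the corresponding skill score against a reference, and the PIT values whose histogram is inspected for uniformity). It is a simplified reading of the standard definitions (Wilks, 2011), not the verification code used in the study, and the reference Brier Score is an arbitrary placeholder.

import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 10.0, size=365)                                 # hypothetical daily observed flows
ens = obs[:, None] * rng.lognormal(0.0, 0.3, size=(365, 50))         # hypothetical 50-member forecasts

# Normalized RMSE of the ensemble mean
nrmse = np.sqrt(np.mean((ens.mean(axis=1) - obs) ** 2)) / obs.mean()

# Brier Score for the event "flow exceeds the 80 % quantile of observations"
threshold = np.quantile(obs, 0.8)
p_event = (ens > threshold).mean(axis=1)                             # forecast probability of the event
o_event = (obs > threshold).astype(float)                            # 1 if the event occurred, else 0
bs = np.mean((p_event - o_event) ** 2)

# Rank Probability Score with 10 categories defined by deciles of the observations,
# normalized by the number of category boundaries so that it lies in [0, 1]
edges = np.quantile(obs, np.linspace(0.1, 0.9, 9))
f_cum = np.stack([(ens <= e).mean(axis=1) for e in edges], axis=1)   # forecast cumulative probabilities
o_cum = np.stack([(obs <= e).astype(float) for e in edges], axis=1)  # observed cumulative (step) probabilities
rps = np.mean(np.sum((f_cum - o_cum) ** 2, axis=1)) / len(edges)

# Skill score against a reference (in the study, the raw-forecast scenario plays this role)
bs_reference = 0.20                                                  # hypothetical reference Brier Score
bss = 1.0 - bs / bs_reference

# PIT values: fraction of ensemble members below each observation; a flat histogram indicates reliability
pit = (ens < obs[:, None]).mean(axis=1)
print(round(nrmse, 3), round(bs, 3), round(rps, 3), round(bss, 3), np.histogram(pit, bins=10)[0])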
PIT histograms are evaluated for each catchment. While Fig. 3 illustrates the results for precipitation (the main meteorological input to the hydrological model), the other figures focus on the evaluation of streamflows. Figure 3 shows the normalized RMSE values obtained from the evaluation of daily areal precipitation forecasts against observed precipitation data. The quality of raw forecasts (scenario 1) is compared to the quality of statistically processed forecasts (according to scenario 2), for lead times of 3, 5 and 7 days. The statistical correction applied significantly reduces the forecast errors and improves forecast precision, especially for short lead times. The results from the other scores (not shown here) indicate the same tendency, confirming the efficiency of the applied statistical correction technique to improve the precision and the reliability of the meteorological input to the hydrological model.
Concerning the impact of statistical correction on the quality of streamflow forecasts, Figs. 4 and 5 show, respectively, the Brier Skill Scores for flow forecasts exceeding the 80 % percentile of observed flows and the Rank Probability Skill Scores for the four scenarios of correction strategies studied.
Both scores show that the use of a pre-processing technique (scenario 2) generally improves the quality of ensemble streamflow forecasts in the studied catchments: BSS and RPSS values are higher than 0, showing an improvement in forecast skill with respect to the raw forecasts (scenario 1). This positive impact of pre-processing meteorological forecasts on the quality of hydrological forecasts is greater at shorter lead times.
Furthermore, results for scenarios 3 and 4 illustrate the added value of also implementing a post-processing correction approach for the hydrological model outputs. For the prediction of events of high flows (Fig. 4), the implementation of pre- and post-processing techniques together (scenario 4) leads to the highest score values (better forecast quality). When considering the RPSS values (Fig. 5), differences between scenario 3 and scenario 4 are more significant for shorter lead times. For longer lead times, skill scores achieved for scenario 3, where raw meteorological forecasts are used and only hydrological outputs are post-processed, are basically equivalent to those achieved when using scenario 4, where both pre-processing and post-processing are performed. The analysis of BSS and RPSS as a function of catchment size (not shown) did not indicate a clear sensitivity of the results to the catchment area. Only at longer lead times did hydrological forecasts based only on pre-processed meteorological forecasts show negative values of skill scores for the largest catchments, i.e. forecasts less skilful compared to the reference (raw forecasts). Further analysis, with a larger sample of catchments, would be necessary to better detect any general tendency.
The PIT histograms in Fig. 6 illustrate the impact of statistical correction strategies on the reliability of streamflow forecasts for two catchments representative of the studied sample and for a forecast lead time of 7 days. For both catchments, scenario 1 with no bias correction strategy (raw forecasts) displays biased, under-dispersive streamflow ensemble forecasts. The use of scenario 2 (statistical correction applied only to meteorological forecasts) does not improve the PIT histograms of streamflow forecasts. Since the PIT histograms of statistically corrected precipitation (not shown here) do not display significant under-dispersion problems, the examples shown in Fig. 6 illustrate the impact of the rainfall-runoff transformation on the spread of the ensemble streamflow forecasts. It is possible that the added value of pre-processing meteorological input forcings in these cases has been obscured by the mixed evaluation of high and low (more frequent) streamflow periods. Biases in the modelling of the recession part of hydrographs can be of a different nature from biases in the modelling of high flows. It would be interesting to separate the evaluation of hydrological forecasts by considering flood and recession periods separately. This is part of an ongoing study and is beyond the scope of this paper. Furthermore, Fig. 6 also presents the impact of applying the post-processing approaches described in scenarios 3 and 4. For these scenarios a significant improvement of forecast reliability is observed for both catchments studied.
Conclusions
This paper investigates the use of statistical bias correction techniques in hydrological ensemble forecasting. Our main focus is on evaluating the impact of different strategies of statistical bias correction on the quality of operational streamflow forecasts. From the hydrological model point of view, forecasters can use pre-processing approaches (statistical corrections applied prior to the hydrologic modelling, i.e. on the meteorological forcing), post-processing approaches (statistical corrections applied only to the output of the hydrological model, i.e. streamflow predictions) or both. We compared performance measures obtained for 11 catchments in France during a 48-month evaluation period (2005-2008) according to four scenarios of statistical bias correction: raw forecasts, only pre-processed meteorological forecasts, only post-processed hydrological forecasts, and statistical processing applied to both meteorological and hydrological forecasts.
Results show that even though correcting the meteorological uncertainties is of high importance to obtain precise and reliable inputs to the hydrological model, the errors linked to hydrological modelling remain a key component of the total predictive uncertainty of hydrological ensemble forecasts. Statistical corrections made to precipitation forecasts can lose their effect when propagated through the hydrological model. As a result, efforts to also implement a post-hydrological-model correction may be necessary. In this paper we showed that even a relatively simple empirical post-processing approach can be useful to achieve reliable hydrological forecasts for operational needs. Future work should include the application of other statistical correction techniques and the use of other hydrological models and performance measures on a larger set of catchments.
|
v3-fos-license
|
2020-05-21T09:10:07.376Z
|
2020-05-01T00:00:00.000
|
218834344
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2218-273X/10/5/785/pdf",
"pdf_hash": "4b6f2bec249a15986eed7bc122fc5f68a986e978",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:523",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "cfee24ab163b9492873871bd8f16fa90f03ecfef",
"year": 2020
}
|
pes2o/s2orc
|
Green Synthesis of MnO Nanoparticles Using Abutilon indicum Leaf Extract for Biological, Photocatalytic, and Adsorption Activities
We report the synthesis of MnO nanoparticles (AI-MnO NAPs) using biological molecules of Abutilon indicum leaf extract. Further, they were evaluated for antibacterial and cytotoxicity activity against different pathogenic microbes (Escherichia coli, Bordetella bronchiseptica, Staphylococcus aureus, and Bacillus subtilis) and HeLa cancerous cells. Synthesized NAPs were also investigated for photocatalytic dye degradation potential against methylene blue (MB), and adsorption activity against Cr(VI) was also determined. Results from scanning electron microscopy (SEM), X-ray powder diffraction (XRD), energy-dispersive X-ray spectroscopy (EDX), and Fourier-transform infrared spectroscopy (FTIR) confirmed the successful synthesis of NAPs with spherical morphology and crystalline nature. Biological activity results demonstrated that synthesized AI-MnO NAPs exhibited significant antibacterial and cytotoxicity propensities against pathogenic microbes and cancerous cells, respectively, compared with the plant extract. Moreover, synthesized AI-MnO NAPs demonstrated biological activity results comparable to standard drugs. These excellent biological activity results are attributed to the existence of the plant's biological molecules on their surfaces and the small particle size (synergetic effect). Synthesized NAPs displayed better MB-photocatalyzing properties under sunlight than under an ultraviolet lamp. The Cr(VI) adsorption result showed that synthesized NAPs efficiently adsorbed more Cr(VI) at higher acidic pH than at basic pH. Hence, the current findings suggest that Abutilon indicum is a valuable source for tailoring the potential of NAPs toward various enhanced biological, photocatalytic, and adsorption activities. Consequently, the plant's biological molecule-mediated synthesized AI-MnO NAPs could be excellent contenders for future therapeutic applications.
Introduction
Bacterial infections are still a major cause of fatalities around the globe. The rapid emergence of resistance to multiple drugs in different bacteria has made the situation even more problematic. It has been estimated that about 50% of hospitalized patients around the globe are infected by multiple drug-resistant bacteria every year [1]. Therefore, it is necessary to develop new alternatives to combat such drug-resistant bacteria. In addition to microbial diseases, cancer is also considered a major cause of fatalities in humans. It is estimated that from 1975 to 2000, the number of cancer patients doubled [2]. Many advances have taken place in the area of molecular and cellular biology for improving the treatment of cancerous cells by chemotherapy. However, chemotherapy treatments also
Collection of Plant Material
Fresh Abutilon indicum leaves were collected and identified by Dr. Zaheer-u-din Khan (Department of Botany, GC University, Pakistan). The voucher specimen was deposited in the herbarium under number GC. Herb. Bot. 68. The plant was dried for about two to three weeks in a shady area. These leaves were then ground into a powder and used for extraction.
Plant Extract Preparation and Synthesis of AI-MnO NAPs
The preparation of plant extract and the synthesis of AI-MnO NAPs were carried out following the procedure reported by [33] with slight modifications. Accordingly, 20 g of A. indicum leaf powder was added to a 500-mL beaker, and 150 mL each of methanol and deionized water (1:1 ratio) were added for extraction (Figure 1). The mixture was placed on a magnetic hot plate, stirred for about 30 min at 55 °C, and allowed to settle overnight. Then, it was filtered with filter paper to obtain the plant extract. After that, 100 mL of 0.1 M MnSO4·H2O was taken in a 500-mL beaker, and 100 mL of plant extract was added to it. A 0.1 M NaOH solution was added dropwise to the beaker with constant stirring for about 1 h at pH 8.0 and 50 °C (Figure 1). Afterwards, the precipitates were filtered and washed with hexane or ethanol to remove impurities. The obtained precipitates were then dried in an oven at 90 °C for 1 h and placed in a muffle furnace for calcination at 150 °C for 2 h. They were then used for further characterization.
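As a rough illustration of the reagent quantity implied by "100 mL of 0.1 M MnSO4·H2O", the short Python sketch below converts molarity and volume into a solute mass; the molar mass used (≈169 g/mol for MnSO4·H2O) is an assumed literature value, not a figure from this study.

```python
# Back-of-the-envelope check (not from the paper) of the reagent quantity
# implied by "100 mL of 0.1 M MnSO4*H2O": moles = molarity x volume,
# mass = moles x molar mass.

MOLAR_MASS_MNSO4_H2O = 169.02  # g/mol, assumed literature value

def mass_for_solution(molarity_mol_per_l: float, volume_ml: float,
                      molar_mass_g_per_mol: float) -> float:
    """Mass of solute (g) needed for the given molarity and volume."""
    moles = molarity_mol_per_l * (volume_ml / 1000.0)
    return moles * molar_mass_g_per_mol

if __name__ == "__main__":
    mass = mass_for_solution(0.1, 100.0, MOLAR_MASS_MNSO4_H2O)
    print(f"MnSO4*H2O required: {mass:.2f} g")  # ~1.69 g
```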
Characterization
The green-synthesized AI-MnO NAPs were then characterized by employing different spectroscopic techniques. X-ray diffraction crystallography (XRD) was used for determining the crystalline structure of the fabricated NAPs. Energy-dispersive X-ray spectroscopy (EDX) analysis was performed to verify the chemical composition of the green-synthesized AI-MnO NAPs. Morphology determination of the synthesized NAPs was carried out by using a scanning electron microscope (SEM). Fourier transform infrared (FTIR) analysis of the synthesized NAPs was executed to determine whether biological molecules of the plant's leaf extract are involved. The photocatalytic and adsorption activities were performed against methylene blue (MB) dye and Cr(VI), respectively, using a UV-visible spectrophotometer.
Antibacterial Activity
Antibacterial activity of the synthesized AI-MnO NAPs was evaluated on bacterial species, which include Escherichia coli, Bordetella bronchiseptica, Staphylococcus aureus, and Bacillus subtilis, by using the disk-diffusion agar method [34]. Leflox (standard antibacterial drug) and DMSO at 0.001 g/mL were used as a positive and negative control, respectively. Different concentrations of the green-synthesized AI-MnO NAPs were prepared in DMSO (10 µg/mL, 20 µg/mL, 30 µg/mL, and 40 µg/mL) and placed in a sonicator at 36 °C for 15 min to ensure uniform dispersion. A total of 30 mL of molten nutrient agar was added to sterile Petri dishes, and 3 mL of inoculum was inoculated in it. At four peripheral positions, holes were made and filled with the reference standards and sample dilutions. After 24 h of incubation at 37 °C, a clear zone was formed around the disks. The diameter of the inhibition zones was measured using a Vernier caliper.
Cytotoxicity Activity
HeLa cancer cell lines were used to assess the cytotoxicity activity of synthesized AI-MnO NAPs using the MTT colorimetric assay [35]. The HeLa cancer cells were kept in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with streptomycin (100 µg/mL), penicillin (100 U/mL), and 10% FBS (fetal bovine serum) in a humidified atmosphere comprising CO2 (5%) and air (95%) at 37 °C. The HeLa cancer cells were cultured in 100 µL of DMEM in a 96-microtiter plate for 24 h at 37 °C in 5% CO2 to reach a cell confluency of up to 5 × 10^4 cells/well. The cultured cancer cells were further incubated for 24 h at 37 °C with different concentrations (1, 5, 10, 15, 30, 60, and 120 µg/mL) of green-synthesized AI-MnO NAPs. Cells treated with the standard drug (doxorubicin) were termed the positive control and cells without any treatment the negative control. Afterwards, cancer cells were centrifuged to remove the supernatant and subsequently washed using phosphate-buffered saline (PBS) solution. A total of 10 µL of MTT labeling agent at a concentration of 0.5 mg/mL was then added to each well; the 96-microtiter plate was again incubated for 4 h at 37 °C in a humidified atmosphere comprising CO2 (5%) and air (95%); and subsequently, 100 µL of DMSO was added to each well to solubilize the undissolved crystals of formazan, then kept in a shaker for about 10 to 15 min. The absorption maxima of the formazan in each well were determined using a Varian Eclipse spectrophotometer at 570 nm with reference at 655 nm. The percentage of cell viability can be calculated using the given formula [35]: Percentage of cell viability = (OD value of treated cells)/(OD value of negative control) × 100.
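For readers who wish to apply the viability formula quoted above, the following minimal Python sketch implements it; the OD values in the example are placeholders, not data from this study.

```python
# Minimal sketch of the cell-viability calculation quoted above; the OD
# values below are placeholders, not data from the study.

def percent_cell_viability(od_treated: float, od_negative_control: float) -> float:
    """Percentage of cell viability = OD(treated) / OD(negative control) x 100."""
    if od_negative_control <= 0:
        raise ValueError("negative-control OD must be positive")
    return od_treated / od_negative_control * 100.0

if __name__ == "__main__":
    print(f"{percent_cell_viability(0.42, 0.95):.1f} % viable cells")
```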
Biocompatibility Analysis
The biocompatibility of the green-synthesized AI-MnO NAPs was determined following the standard protocol reported by Khan et al. [36].
Photocatalytic Activity Against Methylene Blue
The photocatalytic degradation activity of green-synthesized AI-MnO NAPs was determined using methylene blue dye as a model system [3]. A total of 0.05 g of green-synthesized AI-MnO NAPs was added to 100 mL of methylene blue dye solution (0.1%, w/v), and this suspension was further kept in the dark for 1 h to reach adsorption-desorption equilibrium. Methylene blue solution without any treatment served as the control. The resultant suspensions were then irradiated with the UV lamp (254-370 nm, Osram ultra-vitalux, 300 W, Munich, Germany) and solar light (80-90 Klux, LT300, Extech, Leeds, UK) without any external pressure or change in pH. The distance between the suspension surface and the UV lamp was 12 cm. The temperature of the samples under the sunlight and UV lamp was 30 °C. During the whole irradiation process, the suspension was stirred continuously, employing a magnetic stirrer for uniform mixing. A 5-mL aliquot was then sampled out for the initial concentration. About 5 mL was further sampled at regular intervals of time (every 30 min for 3 h) and centrifuged to separate the photocatalyst (AI-MnO NAPs). The suspension was then studied using a UV spectrophotometer at the specific absorption peak (665 nm) of methylene blue to determine the remaining dye contents. The percentage degradation of the dye with a UV lamp and solar light was calculated using the following formula: Degradation (%) = ((C0 − Ct)/C0) × 100, where C0 is the initial dye concentration and Ct is the dye concentration at time t (min).
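Assuming the standard form of the degradation percentage reconstructed above from the stated definitions, a minimal Python sketch of the calculation could look like this; the absorbance values in the example are illustrative only.

```python
# Sketch of the dye-degradation percentage, Degradation (%) = (C0 - Ct) / C0 x 100;
# absorbance at 665 nm is taken as a proxy for concentration (Beer-Lambert),
# and the values below are illustrative, not measured data.

def percent_degradation(c0: float, ct: float) -> float:
    """Percentage of methylene blue removed at time t."""
    if c0 <= 0:
        raise ValueError("initial concentration must be positive")
    return (c0 - ct) / c0 * 100.0

if __name__ == "__main__":
    # e.g. absorbance dropping from 1.80 to 0.05 over the irradiation period
    print(f"{percent_degradation(1.80, 0.05):.1f} % degradation")
```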
Three successive runs in sunlight irradiation were conducted to examine the recovery and stability of AI-MnO NAPs photocatalyst. Then, after each run, AI-MnO NAPs photocatalyst was removed, washed with deionized water and ethanol, dried, and the same NAPs reused. After that, their degradation efficiency was investigated.
Cr(VI) Adsorption Capacity of Synthesized AI-MnO NAPs
The green-synthesized AI-MnO NAPs were investigated for adsorption activity against Cr(VI) ions in a glass flask under magnetic stirring with constant speed [37]. In brief, 20 mg of potassium dichromate as Cr(VI) source was dissolved in 100 mL of deionized water. Different amounts (0.01, 0.02, 0.03, 0.04, and 0.05 g) of adsorbent (AI-MnO NAPs) were then mixed separately with 100 mL of Cr(VI) solution at pH 4.50, 7, and 9 at 30 °C. A 5-mL aliquot was sampled after every 15 min and centrifuged for removing adsorbent from solution. The remaining Cr(VI) ions were determined using a UV spectrophotometer at their specific peaks. The adsorption capacity R (mg/g) was calculated using the following formula: R = (C0 − Ct) × v/m, where C0 is the initial concentration of Cr(VI), Ct is the concentration of Cr(VI) at time t, m is the adsorbent mass, and v is the solution volume.

XRD Analysis

The XRD pattern of the green-synthesized AI-MnO NAPs confirmed their crystalline nature [38]. The highest peak intensity was shown at the Miller index (101) at an angle of 37°. The high peak intensity and sharpness further corroborated that the synthesized NAPs are highly crystalline. Moreover, no peak associated with impurity was detected in the XRD pattern, which showed that the synthesized AI-MnO NAPs are pure.
SEM Studies
SEM was used for identifying the morphology and size of NAPs. Figure 2b shows the SEM image of the green-synthesized AI-MnO NAPs. The SEM image of the green-synthesized AI-MnO NAPs revealed that they have spherical morphology. The SEM image further showed that the synthesized AI-MnO NAPs were uniformly distributed. An average size of 80 ± 0.5 nm was observed for the synthesized AI-MnO NAPs investigated by particle size distribution analysis. Figure 3a indicates that the particle size distribution of AI-MnO NAPs is right-skewed.
EDX Analysis
The EDX analysis was carried out to examine the formation and chemical composition of the green-synthesized AI-MnO NAPs. Figure 2c,d displays the EDX pattern and EDX mapping, respectively, and these results validated the successful green synthesis of the NAPs using biological molecules of leaf extract of A. indicum. The presence of the element manganese (Mn) in the synthesized NAPs was validated by its EDX peaks at 0.63 keV and 5.91 keV in the EDX spectrum [39]. In addition to the Mn EDX peaks, the peaks corresponding to carbon (C) and oxygen (O) also clearly appeared in the EDX spectrum, which corroborated the adsorption of biological molecules from the plant extract on the NAPs' surface. The EDX results further affirmed that the NAPs were free from impurities. Hence, it is apparent from the EDX results that the NAPs of our interest have been successfully synthesized using plant leaf extract. EDX pattern results of green-synthesized AI-MnO NAPs were in agreement with the previously reported literature [39].
UV-Visible and FTIR Analysis
The UV-visible and FTIR analyses of plant extract and green-synthesized AI-MnO NAPs were further carried out to determine the absorption maxima and different functional groups of biological molecules of A. indicum leaf extract, respectively, that are involved in reduction and capping.
The UV-visible analysis demonstrated that plant leaf extract displayed absorption maxima in the UV region (200-390 nm) which corroborated that the leaf extract is a rich source of polyphenols and flavonoids as these biological phytomolecules absorb UV light due to the presence of hydroxyl (OH) moieties [40,41]. Green-synthesized AI-MnO NAPs presented a broad absorption band with two characteristic absorption peaks at 380 nm (biological phytomolecules) and 460 nm (Mn-O) (Figure 3b). The slight redshift in the absorption maxima of Mn-O was observed compared to reported works [42,43]. This might be attributed to the fact that the donation of non-bonding electrons from phytomolecules to the vacant d-orbital of Mn facilitated the electron transition, which shifted the absorption maxima to the higher wavelength [44].
The FTIR analysis demonstrated that plant leaf extract has biological phytomolecules with different molecular functionalities, such as O-H, C-H, CO2NH3, C=O, C=C, N-H, and C-O (Figure 3c), and these FTIR signals were found in close agreement with the previous literature [40,41]. On the other hand, green-synthesized AI-MnO NAPs not only presented the characteristic FTIR signal for Mn-O at about 580 cm−1, but also showed other FTIR signals for O-H, C=O, N-H, and C-O. The FTIR signal of Mn-O was consistent with the previous reports [42,43]. Hence, these observations demonstrated that green-synthesized AI-MnO NAPs are entirely capped with the biologically active phytomolecules of plant leaf extract having such functional groups.
Synthesis Mechanism
Literature demonstrated that leaf extract of A. indicum possesses numerous biologically active phytochemical compounds such as lignin, volatile oils, fixed oils, carbohydrates, steroids, terpenoids, tannins, saponins, flavonoids, phenolics, alkaloids, proteins, anthraquinones, cardiac glycosides, etc. [40,41]. The presence of these phytomolecules in plant leaf extract and green-synthesized NAPs was further evident from the UV-visible and FTIR results. Therefore, because of the existence of these phytomolecules in leaf extract, they might be acting as reducing and capping agents during the synthesis of MnO NAPs. During synthesis, phytomolecules (flavonoids, phenolics, carbohydrates, etc.) reduced the Mn+ into its zero-valent species Mn0 by providing electrons through a redox reaction [45]. Afterwards, other phytomolecules such as alkaloids, proteins, surfactants, etc., simultaneously capped the Mn0 zero-valent species to stabilize them (Figure 1). A similar mechanism of NAPs synthesis using plant leaf extract was also reported by [45].
Antibacterial Activity
The antibacterial potential of green-synthesized AI-MnO NAPs was assessed following the disk-diffusion method against gram-positive and gram-negative bacteria in comparison to plant extract and standard antibacterial drug (Leflox). The results demonstrated that the highest antibacterial propensity in terms of zone of inhibition (ZOIs) was presented of course by the antibacterial drug in all its tested concentrations (10 µg/mL, 20 µg/mL, 30 µg/mL, 40 µg/mL) (Figure 4a). Meanwhile, the green-synthesized AI-MnO NAPs also displayed higher and comparable antibacterial activity to the standard antibiotic drug (Figure 4a,b), and the least antibacterial effect was observed with the plant extract. However, it is interesting to note that plant leaf extract itself was found to be biologically active against all the tested pathogenic bacterial strains. Moreover, the green-synthesized AI-MnO NAPs and plant extract were also found to exhibit concentration-dependent bactericidal activity against all the tested bacteria. The concentration of 10 µg/mL of the synthesized AI-MnO NAPs and plant leaf extract demonstrated the antibacterial activity in terms of ZOIs (5 ± 0.01 mm, 6 ± 0.03 mm, 7 ± 0.02 mm, and 9 ± 0.06 mm) and (8 ± 0.02 mm, 9 ± 0.04 mm, 10 ± 0.07 mm, and 12 ± 0.03 mm) against S. aureus, B. subtilis, E. coli, and B. bronchiseptica, respectively. With the increasing concentration of synthesized NAPs and plant extract from 10 µg/mL to 40 µg/mL, an increase in antibacterial activity in terms of ZOIs was observed against all the tested gram-positive bacteria and gram-negative bacterial strains. The same concentration-dependent antibacterial activity was reported in the previous literature [46,47]. However, it has been observed that the highest zone of inhibition was observed in gram-negative bacteria (B. bronchiseptica and E. coli) followed by gram-positive bacteria (B. subtilis and S. aureus) [46]. The good antibacterial results of the synthesized NAPs might be attributed to the NAPs' physical characteristics (size, morphology, surface area) and their functionalization with the biologically active phytomolecules of plant leaf extract, because plant extract itself also exhibited good bactericidal performance [40,41]. Our green-synthesized AI-MnO NAPs were found to have good antibacterial performance in comparison to those of Arasu et al. which were synthesized by using aqueous extract of Acorus calamus rhizome [48].
Antibacterial results demonstrate that gram-negative bacteria are found to be more susceptible, and their growth was inhibited more strongly upon treatment with the green-synthesized AI-MnO NAPs than gram-positive bacteria. The same trend of growth inhibition was observed with the treatment of plant extract and standard drug. This is attributed to the differences found in the structure and composition of their cell walls [13,46,47]. The literature shows that a thicker layer of peptidoglycan is present in the gram-positive bacterial cell wall along with covalently attached teichuronic and teichoic acid; while it is a thinner layer with an extra outer covering layer of lipopolysaccharides (periplasm) in gram-negative bacteria (Figure 5) [49]. Therefore, gram-negative bacteria having a thin peptidoglycan layer are more vulnerable to NAPs than gram-positive bacterial strains. Further, the presence of negatively charged lipopolysaccharide biomolecule coatings is also the causative factor for the gram-negative bacteria being more sensitive to NAPs because they have a significant affinity for NAPs with positive surface charge. As a result, the NAPs are more effective against gram-negative than gram-positive bacteria [49].

The exact antibacterial mode of action of MnO NAPs against gram-positive and gram-negative bacteria is still unclear. However, much literature has proposed the possible mechanism of action of NAPs. Accordingly, MnO NAPs come into the vicinity of the negatively charged cell membrane of bacterial strains because of electrostatic attraction between them [46,48]. Finally, they bind to the cell membrane, which leads the cell membrane from a well-ordered to a disordered state. As a result of this alteration, the cell membrane loses its permeability and causes the leakage of bacterial cell electrolytes, which further destroys the structure and functioning capacity of mesosomes [13]. The NAPs further react with the thiol (-SH) functionality present in the cytosol, inhibit protein synthesis, and suppress the enzyme's activity [50].

Such types of interference of NAPs with the different cellular organelles disrupt the intracellular cell-signaling process and decrease ATP synthesis in the cell powerhouse (mitochondria), which further enhances reactive oxygen species production by destroying the cellular antioxidant defense system and finally leads to cell demise [50,51].
Cytotoxicity Activity Against HeLa Cells
The plant extract and green-synthesized AI-MnO NAPs were assessed for their cytotoxicity activity using the MTT assay against the HeLa cancer cell line in vitro in comparison to the standard anticancer drug doxorubicin. After incubation of HeLa cells with the samples and drug, a remarkable drop in cell viability was observed with increasing concentrations (1 µg/mL, 5 µg/mL, 10 µg/mL, 15 µg/mL, 30 µg/mL, 60 µg/mL, and 120 µg/mL) of the tested samples (Figure 6). The maximum cytotoxicity was shown at the 120 µg/mL concentration by both the standard drug and AI-MnO NAPs, at which more than 50% of the HeLa cancer cells died. Dose-dependent cytotoxicity activity was observed with all the samples (plant extract, AI-MnO NAPs) and drugs, similar to the antibacterial activity. Moreover, synthesized AI-MnO NAPs interestingly demonstrated a cytotoxic effect on HeLa cancer cells comparable to that of the standard drug. These substantial cytotoxic results were attributed to the NAPs' physical characteristics (size, morphology, surface area) and their functionalization with the biologically active phytomolecules (polyphenols, flavonoids, terpenoids, alkaloids, etc.) of plant leaf extract, because the plant extract itself displayed good cytotoxicity [13,16]. Many studies revealed that Abutilon indicum leaf extract possesses biologically active phytomolecular functionalities [40,41]. The same results were reported for the cytotoxicity of other NAPs, in which higher toxicity was observed with the green-synthesized NAPs as compared with those manufactured by using other methods [10][11][12][13][14]. Hence, the synthetic process, size, shape, surface area, and functionalization with biologically active phytomolecules significantly affect the cytotoxicity of synthesized NAPs [17].
Biocompatibility Study
The biocompatibility of the green-synthesized AI-MnO NAPs was evaluated in vitro on red blood cells (RBCs) via a hemolytic activity assay. The hemolytic activity result is presented in Figure 7. The result demonstrates that Triton X-100 (positive control) has the highest toxic effect while PBS (negative control) exhibited no toxicity to RBCs. On the other hand, the green-synthesized AI-MnO NAPs at the concentration of 100 µg/mL displayed the least toxic effect on RBCs. A slight increase in the percentage of RBC lysis was observed with increasing concentration from 200 to 500 µg/mL. According to ISO 10993-5, for a material to be classified as toxic or nontoxic, cell viability must fall in the following bands: >80% non-cytotoxic, 60-80% weakly cytotoxic, 40-60% moderately cytotoxic, and <40% strongly cytotoxic [52]. Hence, it is interesting to note that the percentage of RBC lysis remained less than 5% at the tested concentrations, which indicates that the green-synthesized AI-MnO NAPs are biocompatible. Our hemolytic activity results were in close agreement with the previous work reported by Khan et al. [53]. They reported the less-toxic nature of manganese oxide NAPs at the concentration of 666.44 µg/mL and considered these NAPs reliable and suitable for their biological applications [53].

Figure 7. The biocompatibility of green-synthesized AI-MnO NAPs (F-value = 120,651.238, p < 0.05).
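A small, hypothetical helper that encodes the ISO 10993-5 viability bands quoted above might look as follows; the threshold values come directly from the text, while the function name and example input are ours.

```python
# Hypothetical helper encoding the ISO 10993-5 viability bands quoted above
# (>80% non-cytotoxic, 60-80% weak, 40-60% moderate, <40% strong).

def cytotoxicity_class(percent_viability: float) -> str:
    if percent_viability > 80:
        return "non-cytotoxic"
    if percent_viability > 60:
        return "weakly cytotoxic"
    if percent_viability > 40:
        return "moderately cytotoxic"
    return "strongly cytotoxic"

if __name__ == "__main__":
    # RBC lysis < 5% at the tested concentrations implies > 95% viability
    print(cytotoxicity_class(95.0))  # -> non-cytotoxic
```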
Photocatalytic Activity against Methylene Blue (MB) Dye
The photocatalytic disintegration ability of green-synthesized AI-MnO NAPs was determined against MB dye under the irradiation of a UV lamp and sunlight spectrum. The photocatalytic activity result of AI-MnO NAPs is presented in Figure 8. First, the MB dye was evaluated for its self-disintegration without adding synthesized photocatalyst (AI-MnO NAPs) under both spectra, and no self-degradation was found. Photocatalysis results show that green-synthesized AI-MnO NAPs significantly photocatalyzed the MB dye under the sunlight spectrum within 120 min. However, they demonstrated the least photocatalytic activity in the UV-lamp irradiation spectrum. A sufficient amount of MB dye remained unphotocatalyzed even after 180 min (Figure 8a,b).
The UV-visible spectrum shows four absorption peaks for MB dye in the range of 200-800 nm wavelength (Figure 8a,b). Absorption peaks in the visible region (612 nm and 664 nm) are attributed to the nitrogen-sulfur conjugated system (chromophore); while in the UV region (246 nm and 292 nm) they are from the phenothiazine structure in MB dye [3]. Upon degradation, the intensity of these four absorption peaks was gradually reduced, which indicated the oxidative degradation of the nitrogen-sulfur conjugated system with ring-opening of phenothiazine [54]. The absorbance of MB dye solution under solar spectrum irradiation with AI-MnO NAPs approached zero after 120 min compared with UV-lamp irradiation, and the blue color of the dye was changed to colorless, indicating its complete decomposition into other products (CO2, H2O, CH3OH, etc.). Figure 8c demonstrates that the highest disintegration percentage of MB took place during the first 30 min and ultimately reached 100% in 120 min under the sunlight spectrum, while, even after 180 min, only 31% was achieved under a UV lamp. Similar results were reported by Arasu et al. and Sheikhshoaie et al. in the degradation of MB with MnO and Mn3O4 photocatalyst under the sunlight spectrum, respectively [48,54].
The rate of disintegration of the synthetic dyes relies heavily on physical features such as a small bandgap, a highly crystalline nature, shape, large pore size, and the specific surface area of the photocatalyst [54,55]. In our study, green-synthesized AI-MnO NAPs demonstrated a moderate rate of photocatalytic degradation of MB. This might be because of the smaller pore size and smaller specific surface area of the AI-MnO NAPs. Further, for confirmation of this hypothesis, we calculated the specific surface area theoretically using the following equation: S = 6 × 10^3/(Dp × ρ), where S is the specific surface area (m^2/g), Dp is the particle size (nm), and ρ is the density of MnO NAPs (5.37 g/cm^3) [56]. The calculated specific surface area of the synthesized AI-MnO NAPs was 13.966 m^2/g, which was small and supported our hypothesis.
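The theoretical estimate quoted above can be checked numerically; the short Python sketch below assumes Dp in nm and ρ in g/cm^3 so that S comes out in m^2/g, and reproduces the 13.966 m^2/g value reported in the text.

```python
# Reproduces the theoretical specific-surface-area estimate quoted above,
# S = 6 x 10^3 / (Dp x rho), with Dp in nm and rho in g/cm^3 giving S in m^2/g.

def specific_surface_area(dp_nm: float, density_g_cm3: float) -> float:
    """Specific surface area (m^2/g) of spherical particles of diameter Dp."""
    return 6.0e3 / (dp_nm * density_g_cm3)

if __name__ == "__main__":
    s = specific_surface_area(80.0, 5.37)
    print(f"S = {s:.3f} m^2/g")  # ~13.966 m^2/g, matching the value in the text
```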
In photocatalysis reactions, the recyclability and stability of the photocatalyst have significant importance [3,55]. The recyclability and stability of the green-synthesized photocatalyst were assessed. The green-synthesized photocatalyst was run for three consecutive cycles (one cycle equal to 180 min) under the same set of experimental conditions. The results demonstrated 99%, 98%, and 88% MB dye degradation efficiency with the recycled photocatalyst after each successive cycle of 180 min (Figure 7). This exhibited that the degradation efficiency of the AI-MnO NAPs in the second cycle remained statistically equivalent (p > 0.05) to the first cycle [57]. However, the degradation efficiency at the third cycle was observed as statistically inequivalent (p < 0.05) to the first and second cycles.
Based on our findings, the possible mode of action of AI-MnO NAPs (photocatalyst) to disintegrate the MB dye into other degraded products is as follows [54,55]: in sunlight spectrum irradiation, electrons are generated from AI-MnO NAPs, creating a positive charge (holes) on the NAPs' surface. The generated electrons react with the existing oxygen molecules and produce superoxide radicals (•O2−) [54,55]. On the other hand, the holes (h+) produced on AI-MnO NAPs react with the present water or hydroxyl ions (H2O/OH−) and produce hydroxyl radicals (•OH). These generated free radical species (•O2− and •OH) and holes (h+) are highly reactive; they readily react with the MB dye and disintegrate it into less harmful degraded products (CO2, H2O, CH3OH, NH3, SO2, etc.) (Figure 9) [3,54,55].
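The charge-carrier steps described in this paragraph can be summarised schematically as follows (a LaTeX rendering of the prose above, not an additional result):

```latex
% Schematic summary of the photocatalytic steps described in the text
\begin{align*}
\text{MnO} + h\nu &\rightarrow e^{-} + h^{+} \\
e^{-} + \mathrm{O_2} &\rightarrow {}^{\bullet}\mathrm{O_2^{-}} \\
h^{+} + \mathrm{H_2O}/\mathrm{OH^{-}} &\rightarrow {}^{\bullet}\mathrm{OH} \\
{}^{\bullet}\mathrm{O_2^{-}},\ {}^{\bullet}\mathrm{OH},\ h^{+} + \text{MB} &\rightarrow \mathrm{CO_2} + \mathrm{H_2O} + \text{other degraded products}
\end{align*}
```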
Effect of Adsorbent Concentration on Cr(VI) Adsorption

The concentration of adsorbent has a significant role in the adsorption of Cr(VI), as a direct relation exists between adsorbent concentration and chromium adsorption. Therefore, we evaluated different concentrations (0.01-0.05 g) of green-synthesized AI-MnO NAPs for Cr(VI) adsorption. The adsorption capacity results demonstrated that the maximum adsorption (15.25 mg/g) of Cr(VI) was observed at the concentration of 0.05 g of AI-MnO NAPs, while the least (6.38 mg/g) Cr(VI) adsorption was detected at the concentration of 0.01 g of AI-MnO NAPs. The results show the concentration-dependent adsorption of Cr(VI) (Figure 10a). Similar concentration-dependent adsorption results were also reported by Du et al. for the adsorption of Cr(VI) on flower-, wire-, and sheet-like MnO2-deposited diatomites [58].
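For clarity, the adsorption-capacity formula R = (C0 − Ct) × v/m used for these results can be written as a short Python sketch; the Cr(VI) concentrations in the example are hypothetical and chosen only to land near the reported maximum of about 15 mg/g.

```python
# Sketch of the adsorption-capacity calculation, R = (C0 - Ct) x V / m;
# the concentrations here are placeholders, not measured data.

def adsorption_capacity(c0_mg_l: float, ct_mg_l: float,
                        volume_l: float, adsorbent_mass_g: float) -> float:
    """Adsorption capacity R (mg of Cr(VI) per g of adsorbent)."""
    if adsorbent_mass_g <= 0:
        raise ValueError("adsorbent mass must be positive")
    return (c0_mg_l - ct_mg_l) * volume_l / adsorbent_mass_g

if __name__ == "__main__":
    # e.g. a hypothetical drop from 70.0 to 62.4 mg/L in 0.1 L with 0.05 g adsorbent
    print(f"R = {adsorption_capacity(70.0, 62.4, 0.1, 0.05):.2f} mg/g")  # ~15.2 mg/g
```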
Effect of Solution pH on Cr(VI) Adsorption
In addition to adsorbent concentration, the pH of the solution significantly affects the percentage of Cr(VI) adsorption by affecting the NAPs' surface charge and the chemical state of chromium. This effect was investigated by varying the pH of the solution from alkaline to acidic. The results displayed the maximum Cr(VI) adsorption (11.50%) at acidic pH 4.50; on the other hand, least adsorption (0.63%) was observed at alkaline pH 9.0. Intermediate adsorption of Cr(VI) (5.46%) was anticipated at neutral pH 7.0 (Figure 10b). These results demonstrated the acidic pH-dependent Cr(VI) adsorption activity. More acidic pH produces more Cr(VI) adsorption by the green-synthesized AI-MnO NAPs (adsorbent). Moreover, the decrease in adsorption of Cr(VI) on amine-crosslinked wheat straw (adsorbent) with the increasing pH of the solution towards alkaline was also reported by Xu et al. [37].
This might be because of the presence of Cr(VI) in three different forms in solution depending on the solution pH, such as in HCrO4− or Cr2O7 2− forms at highly acidic pH, and in the CrO4 2− form at alkaline pH (Figure 10c) [37,58]. At the higher acidic pH value of 4.50, the surface of green-synthesized AI-MnO NAPs becomes more positively charged. It attracts the anionic species (HCrO4− or Cr2O7 2−) of Cr(VI) by the electrostatic force of attraction (Figure 10c) [37]. This can be further explained by two potential mechanisms as follows: (i) the electrostatic attraction between the positively charged surface of AI-MnO NAPs and the anionic Cr(VI) species favors the Cr(VI) adsorption in acidic media; (ii) the reduction of Cr(VI) to Cr(III) needs a huge number of protons that are only available in the acidic condition [59].
Conclusions
In this work, we have successfully synthesized AI-MnO NAPs functionalized with biologically active phytomolecules of hydroalcoholic leaf extract of A. indicum for the first time via a robust, economic, and eco-friendly approach. The green-synthesized NAPs were further characterized using different spectroscopic techniques, namely, XRD, SEM, EDX, EDX mapping, FTIR, and UV-visible. The green-synthesized AI-MnO NAPs demonstrated excellent antibacterial and anticancer activities against multidrug-resistant gram-positive bacteria, gram-negative bacteria, and HeLa cancer cells, which showed their potential for use in biomedical applications. Our findings also suggest that their enhanced biological activities were the result of the synergetic effect of physical characteristics (smaller size, large surface area) and adsorbed biologically active phytomolecules on their surface. Moreover, the green-synthesized AI-MnO NAPs also demonstrated good photocatalytic and adsorption activities against MB dye and Cr(VI), which displayed their effective potential to treat different organic and inorganic pollutants. Hence, the green-synthesized AI-MnO NAPs have enormous potential for applications in cosmetics, wastewater treatment, and pharmacological and nutraceutical industries. In conclusion, the A. indicum green synthesis is economical and beneficial for the fabrication of plant-mediated, low-cost, less toxic, and more biocompatible nanomaterials. From a future perspective, the selection of plants having biologically active phytomolecules will develop a novel platform for the green fabrication of NAPs for their biomedical applications.
|
v3-fos-license
|
2017-10-15T10:04:08.018Z
|
2008-12-11T00:00:00.000
|
18652838
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ard.bmj.com/content/68/11/1715.full.pdf",
"pdf_hash": "c1cb6c1fd53a6f30d8a1947bea35757afb9806f3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:524",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c1cb6c1fd53a6f30d8a1947bea35757afb9806f3",
"year": 2008
}
|
pes2o/s2orc
|
Efficacy of prednisone 1–4 mg/day in patients with rheumatoid arthritis: a randomised, double-blind, placebo controlled withdrawal clinical trial
Objective: A randomised double-blind placebo controlled withdrawal clinical trial of prednisone versus placebo in patients with rheumatoid arthritis (RA), treated in usual clinical care with 1–4 mg/day prednisone, withdrawn to the same dose of 1 mg prednisone or identical placebo tablets. Methods: All patients were from one academic setting and all trial visits were conducted in usual clinical care. Patients were taking stable doses of 1–4 mg prednisone with stable clinical status, documented quantitatively by patient questionnaire scores. The protocol included three phases: (1) equivalence: 1–4 study prednisone 1 mg tablets taken for 12 weeks to ascertain their efficacy compared with the patient’s usual tablets before randomisation; (2) transfer: substitution of a 1 mg prednisone or identical placebo tablet every 4 weeks (over 0–12 weeks) to the same number as baseline prednisone; (3) comparison: observation over 24 subsequent weeks taking the same number of either placebo or prednisone tablets as at baseline. The primary outcome was withdrawal due to patient-reported lack of efficacy versus continuation in the trial for 24 weeks. Results: Thirty-one patients were randomised, 15 to prednisone and 16 to placebo, with three administrative discontinuations. In “intent-to-treat” analyses, 3/15 prednisone and 11/16 placebo participants withdrew (p = 0.03). Among participants eligible for the primary outcome, 3/13 prednisone and 11/15 placebo participants withdrew for lack of efficacy (p = 0.02). No meaningful adverse events were reported, as anticipated. Conclusion: Efficacy of 1–4 mg prednisone was documented. Evidence of statistically significant differences with only 31 patients may suggest a robust treatment effect.
The use of glucocorticoids in the treatment of rheumatoid arthritis (RA) has evoked controversy for more than half a century. [1][2][3][4][5] Disease modification was documented during the 1950s, 6 but toxicities of long-term glucocorticoids in pharmacological doses of prednisone or prednisolone of 10 mg/day or more, as was the clinical practice in the 1950s, 7 were inevitable. Therefore, from the 1950s through the 1980s, systemic glucocorticoids were recommended in RA only as ''bridging therapy'' while awaiting anticipated benefits of disease-modifying antirheumatic drugs (DMARDs), or for acute severe disease flares or life-threatening vasculitis.
A reassessment began during the 1980s, based on recognition of severe long-term outcomes of RA 8 9 and clinical experience indicating relatively limited toxicity associated with low doses of glucocorticoids. An open study, 10 a 24-week non-blinded clinical trial 11 and recent double-blind clinical trials [12][13][14][15][16][17] have recognised clinical benefit, including ''disease-modifying'' properties, of low-dose prednisone in slowing radiographic progression, confirmed in meta-analyses. 18 19 Reports indicating disease modification even with low doses of prednisone or prednisolone of 5-7.5 mg/day 12 16 17 are of particular interest, as doses of 10 mg/day are associated with adverse outcomes 20 including bone loss 21 and higher mortality rates. 22 23 Prednisone or prednisolone for RA generally is initiated with a dose of 10-20 mg/day and maintained at levels of 5 mg/day or more. The medical literature includes varying criteria for ''low-dose'' prednisone, generally 5 mg or 10 mg/day. A few clinicians, including the senior author, have treated most patients over the last decade with an initial dose of 3 mg/day.
The efficacy of prednisone in doses of <5 mg/day has not been established in patients with RA, and rheumatologists continue to disagree on the use of glucocorticoids. A double-blind clinical trial to analyse the efficacy of <5 mg/day prednisone would therefore appear desirable. A large multicentre prospective randomised double-blind clinical trial in patients with no previous glucocorticoid therapy, to be taken with their usual RA treatment, might appear ideal. However, resources for such a multicentre clinical trial have not been available. Therefore, with partial support from the United States Arthritis Foundation, we performed a single-centre withdrawal trial of prednisone <5 mg/day in the course of usual care.
METHODS

Patients
All patients were recruited from one academic clinical care setting at Vanderbilt University and all clinical trial visits were conducted during usual clinical care. Most patients with RA in this clinical setting have been treated with long-term prednisone 1-5 mg/day, with a usual initial dose of 3 mg/ day since the mid-1990s. Clinical efficacy without severe toxicity has been observed, 24 although almost all patients are also treated with methotrexate so the specific efficacy of prednisone could not be analysed without a clinical trial.
Withdrawal clinical trial protocol
Patients with stable clinical status who were taking stable doses of prednisone 1-4 mg/day in 1 mg tablets or one 5 mg tablet per day (although no patients taking 5 mg were actually enrolled in the trial) over the previous 12 weeks were invited to participate in a randomised double-blind placebo controlled prednisone withdrawal clinical trial. All participants gave informed consent to participate. The trial was approved by the Institutional Review Board of Vanderbilt University, and supported in part by the United States Arthritis Foundation.
The trial was designed to be broadly inclusive with few exclusion criteria. Inclusion criteria were: age at least 18 years; met American Rheumatism Association (ARA) criteria for RA; 25 had been taking a stable dose of 1-5 mg/day prednisone for at least 12 weeks with no anticipated dose change. Stable clinical status was documented by an absolute change of less than 3 units from 12 weeks earlier in routine assessment of patient index data 3 (RAPID3), an index of the three patient-reported outcomes, on a multidimensional health assessment questionnaire (MDHAQ) 26 for physical function, pain and global estimate of status, each scored 0-10, total 0-30, 27 completed by all patients at all visits as a component of the infrastructure of standard care. 28 Exclusion criteria were relatively few: no prednisone therapy; prednisone dose .5 mg/day; improving or worsening clinical status; anticipation of joint replacement or other elective surgery; uncontrolled hypertension, diabetes or other comorbidities; severe fibromyalgia; inability to complete English language questionnaires; and pregnancy or nursing.
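As an illustration only (not part of the trial protocol), the RAPID3 composite and the stability criterion described above can be expressed as a short Python sketch:

```python
# Illustrative computation of the RAPID3 index described above: three
# patient-reported 0-10 scores summed to a 0-30 composite, with "stable
# clinical status" taken as an absolute RAPID3 change < 3 units.

def rapid3(function_0_10: float, pain_0_10: float, global_0_10: float) -> float:
    scores = (function_0_10, pain_0_10, global_0_10)
    if not all(0 <= s <= 10 for s in scores):
        raise ValueError("each component must be scored 0-10")
    return sum(scores)  # 0-30 composite

def is_stable(rapid3_now: float, rapid3_12_weeks_ago: float) -> bool:
    return abs(rapid3_now - rapid3_12_weeks_ago) < 3.0

if __name__ == "__main__":
    baseline, current = rapid3(2.0, 3.5, 3.0), rapid3(2.3, 4.0, 3.0)
    print(baseline, current, is_stable(current, baseline))  # 8.5 9.3 True
```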
The protocol included three phases:
- Equivalence: all participants were given a 12-week (84-day; actually 100 days to ensure availability) supply of ''study prednisone'' tablets to take at the same dose as at baseline before entry into the clinical trial. These tablets were taken in lieu of the patients' usual prednisone obtained at their own pharmacies to ascertain similar efficacy of the study prednisone to the usual prednisone.
- Transfer: participants who reported ''equivalence'' over the 12-week period were assigned randomly to be ''transferred'' at a rate of a single 1 mg tablet per 4 weeks over the next 0-12 weeks from study prednisone tablets to either 1 mg prednisone or identical placebo tablets (table 1). The gradual transfer was performed to avoid abrupt reduction of prednisone usage in subjects randomised for transfer to placebo.
- Comparison: participants were maintained over 24 weeks following the ''transfer'' phase on the same number of either 1 mg prednisone or identical placebo tablets as at baseline. Each visit included assessment and recording of weight and blood pressure; completion of an MDHAQ by the patient and scoring of RAPID3 by the rheumatologist; and laboratory tests of erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), liver function and haematological status to monitor possible adverse events of prednisone or concomitant methotrexate or other medications. 29

Prednisone and placebo tablets

Prednisone 1 mg tablets and identical placebo tablets were purchased from Apotex Inc, Toronto, Canada. Tablets were packaged in bottles containing 100 tablets each. The bottles were relabelled at the trial site with two types of labels: ''Bottle A'', known to be prednisone, and ''Bottle B'' which contained ''unknown'' tablets (either prednisone or placebo). The appropriate number of Bottle A study prednisone tablets in bottles of 100 tablets (according to the daily dose at baseline (1-5 mg/day; no patient enrolled took 5 mg/day)) was given to each participant for a 100-day supply during the 12-week (84-day) ''equivalence'' phase.
Packets were prepared for visits 2, 3 and 4 based on the participant's daily baseline dose (1-5 mg/day), according to a randomisation scheme in groups of 4 (2 prednisone and 2 placebo) for each dose. Packets for the entire study were prepared for 68 possible participants, 8 each for prednisone dose levels of 1, 2, 4 or 5 mg/day and 36 for 3 mg/day, taken by the majority of patients before the trial. Study packets included the appropriate number of ''Bottle A'' bottles of 100 1 mg prednisone tablets and ''Bottle B'' bottles of 1 mg prednisone or identical placebo tablets (table 1).
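A hypothetical sketch of the blocked randomisation described above (blocks of 4 with 2 prednisone and 2 placebo assignments per baseline-dose stratum) is shown below; the function name and random seed are our own choices, not part of the study.

```python
# Hypothetical sketch of the randomisation scheme described above: within each
# baseline-dose stratum, assignments are generated in blocks of 4
# (2 prednisone, 2 placebo). The packet counts per stratum mirror the text
# (8 each for 1, 2, 4, 5 mg/day; 36 for 3 mg/day).

import random

def block_randomise(n_packets: int, block_size: int = 4, seed: int = 0) -> list:
    """Return a shuffled-in-blocks list of 'prednisone'/'placebo' labels."""
    assert n_packets % block_size == 0
    rng = random.Random(seed)
    assignments = []
    for _ in range(n_packets // block_size):
        block = ["prednisone"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments

if __name__ == "__main__":
    strata = {1: 8, 2: 8, 3: 36, 4: 8, 5: 8}  # baseline dose (mg/day): packets
    schedule = {dose: block_randomise(n, seed=dose) for dose, n in strata.items()}
    print(schedule[3][:8])
```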
Study visits
Visit 1 included an explanation of the trial and completion of informed consent. Participants were given a 12-week supply (with tablets for 16 extra days) of ''Bottle A'' 1 mg study prednisone tablets to take instead of their usual daily prednisone dosage for 12 weeks. This phase was designed to establish whether or not ''equivalence'' of the same dose of study prednisone to the patient's usual prednisone tablets could be seen.
Table 1 summarised the plan to ''transfer'' patients from low-dose prednisone tablets to study prednisone or placebo tablets: for each baseline dose and medication (bottle A, bottle B), it listed the number of tablets to be taken at weeks 0, 4, 8, 12 and 16 of the ''transfer'' phase. Each participant was given an individual schedule outlining specific dates to make changes in the number of tablets to be taken from bottle A and bottle B; no patients taking 5 mg at baseline were enrolled in the study.

Visit 2 occurred 12 weeks later. Participants who reported ''equivalence'' of study prednisone to their usual prednisone dose over the 12 weeks and elected to continue in the study were randomised to either 1 mg prednisone or identical placebo tablets for the ''transfer'' phase. Each participant received a specific written schedule with specific dates every 4 weeks to reduce by one the number of tablets to be taken from ''Bottle A'' (of 1 mg prednisone tablets) and to increase by one the
number of ''Bottle B'' (of unknown study tablets, either 1 mg prednisone or identical placebo tablets) over a 12-week period. Substitution of one ''Bottle B'' tablet for one ''Bottle A'' tablet occurred at 0, 4, 8, 12 and 16 weeks for 1, 2, 3, 4 and 5 mg baseline dose, respectively (no enrolled participant was taking 5 mg/day). Therefore, at the end of the 12-week period, each participant was taking only ''Bottle B'' study medication. The gradual tapering was designed to avoid an abrupt discontinuation of prednisone which might favour prednisone. Visit 3 occurred 4-12 weeks after visit 2 (or was omitted for participants with a baseline dose of 1 mg/day). All participants whose baseline daily dose was 2-4 mg prednisone then started taking only unknown Bottle B prednisone or placebo and began the ''comparison'' phase.
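One plausible reading of the tapering schedule described above and summarised in table 1 (first substitution at week 0, one further substitution every 4 weeks until only Bottle B tablets remain) can be sketched as follows; this is an illustration, not the study's actual scheduling tool.

```python
# Sketch (assumed from the description and table 1, not the study documents)
# of per-visit tablet counts during the "transfer" phase: one Bottle A
# (1 mg prednisone) tablet is replaced by one Bottle B tablet every 4 weeks.

def transfer_schedule(baseline_dose_mg: int) -> list:
    """List of (week, bottle_A_tablets, bottle_B_tablets) at each substitution."""
    if not 1 <= baseline_dose_mg <= 5:
        raise ValueError("baseline dose must be 1-5 mg/day")
    schedule = []
    for step in range(baseline_dose_mg):
        week = 4 * step
        bottle_b = step + 1
        schedule.append((week, baseline_dose_mg - bottle_b, bottle_b))
    return schedule

if __name__ == "__main__":
    for week, a, b in transfer_schedule(3):
        print(f"week {week:2d}: {a} Bottle A tablet(s), {b} Bottle B tablet(s)")
```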
Visit 4 occurred 12 weeks after visit 3. Participants completed the usual MDHAQ and the trial status was reviewed with the investigator.
Visit 5 occurred 12 weeks after visit 4, at least 24 weeks after participants had completed the ''transfer'' phase. The participants completed the final usual MDHAQ and prednisone was reinstated at the pretrial dose.
Clinical trial outcomes
The predetermined primary outcome was withdrawal (''dropout'') after visit 2 due to perceived lack of efficacy of study tablets (prednisone or placebo), ie, during the ''transfer'' or ''comparison'' phases of the trial, versus remaining in the trial until completion of the 24-week ''comparison'' period. Secondary outcomes included a change in any of the three RA Core Data Set variables 30 found on the MDHAQ (as well as the HAQ) for physical function, pain and global estimate, all scored 0-10, and RAPID3 0-30 composite scores. Weight, systolic and diastolic blood pressure and laboratory tests of ESR, CRP, haematology and liver profiles were recorded at each visit and analysed as indicators of possible adverse events.
Data management and statistical analyses
All ''case report forms'' were the routinely administered MDHAQ and a physician-completed data sheet that included blood pressure, weight and all medications. These data were entered into a Microsoft Access database maintained on all patients seen at each visit in this setting and transferred to Stata V.9.2 (College Station, Texas, USA). The prednisone dose at baseline of all participants was compared descriptively with the initial dose of prednisone taken by patients at their first visit to this setting 1-15 years earlier.
Differences between treatment groups were evaluated using the Wilcoxon rank sum test for continuous variables or the Fisher exact test for categorical variables. The primary analysis and an intention-to-treat analysis of all randomised participants were conducted using the Fisher exact test to assess statistical significance. Differences between treatment groups with respect to changes from visit 1 to visit 5 or final visit for physical function score (0-10), pain visual analogue scale (VAS) score (0-10), patient global VAS score (0-10), RAPID3 composite score (0-30), fatigue VAS score (0-10), morning stiffness (minutes), weight, systolic/diastolic blood pressure, ESR and CRP levels and other laboratory tests were evaluated using the Wilcoxon rank sum test.
Patient recruitment
Enrolment was conducted over 17 months from March 2005 to July 2006. Overall, 156 patients with RA were seen over this period. Although the trial was designed with liberal inclusion criteria and minimal exclusion criteria, only 37 of these 156 patients met the enrolment criteria and volunteered to participate. The reasons for non-participation included: 21 (13.5%) unwilling to discontinue taking prednisone, often noting previous efforts without success, at the advice of physicians, relatives and others; 21 (13.5%) clinically improving (RAPID3 lower by >3 units) or with new RA therapies; 9 (5.8%) clinically declining (RAPID3 higher by >3 units) with need for new therapies; 14 (9%) with severe fibromyalgia; 15 (9.6%) too far away for 3-monthly visits; 1 (0.6%) could not complete an English language questionnaire; 19 (12.2%) took a prednisone dose of .5 mg/day (all initiated by other physicians); 5 (3.2%) were not taking any prednisone; 4 (2.6%) with severe clinical status for whom the investigator regarded it as inappropriate clinically to discontinue prednisone; 3 (1.9%) pregnant or nursing; 5 (3.2%) with substantial comorbidities; and 2 (1.3%) with planned elective surgery. Thus, only 37 of the 156 patients with RA (23.7%) were eligible and volunteered to participate (table 2).
Of the 37 patients who agreed to participate in the trial, 6 (16.2%) reported that study prednisone was ineffective during the 12-week "equivalence" period compared with their usual prednisone (although the study tablets met US Food and Drug Administration (FDA) requirements) and declined to continue. Thus, 31 participants were randomised, 15 to the prednisone group and 16 to the placebo group.
Participant enrolment in clinical trial
The participants randomised to prednisone and placebo did not differ significantly in age or any quantitative or laboratory measure (table 3). The mean prednisone dose at baseline was 2.9 mg/day and the median dose was 3 mg/day in both groups. Other treatments taken included methotrexate at doses between 5 and 25 mg/week by all but 2 participants; hydroxychloroquine by 10 participants (8 in combination with methotrexate), 5 in the prednisone group and 5 in the placebo group; leflunomide by 2 participants in the prednisone group; etanercept by 3 participants, 2 in the prednisone group and 1 in the placebo group; and adalimumab by 1 participant in the placebo group.
Among the 31 participants, 22 had a baseline prednisone dose of 3 mg/day, 5 a dose of 4 mg/day, 3 a dose of 2 mg/day, and 1 a dose of 1 mg/day.
Withdrawal clinical trial results
Of the 15 participants randomised to the prednisone group, 2 were withdrawn for administrative reasons, one for an unexpected hysterectomy and the other for a recurrence of breast cancer. Of the 13 remaining participants in the prednisone group, 3 withdrew for lack of efficacy and 10 completed the 24-week "comparison" observation period (table 5).
Of the 16 participants randomised to the placebo group, 1 was withdrawn for administrative reasons; the patient had severe weight loss which was ultimately found to be due to depression, with discontinuation of all medications. Of the 15 remaining participants in the placebo group, 11 withdrew for lack of efficacy and 4 completed the 24-week "comparison" observation period (table 5).
Differences between withdrawals in the prednisone group and the placebo group were statistically significant (p = 0.021, table 5). An intent-to-treat analysis of all randomised participants also indicated significant differences (p = 0.032, table 5).
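As an arithmetic check (illustrative only, not the authors' Stata analysis), the primary outcome can be reproduced from the counts above: 3 of 13 prednisone participants versus 11 of 15 placebo participants withdrew for lack of efficacy, and a two-sided Fisher exact test on this 2x2 table gives p of approximately 0.021, matching table 5.

```python
# Re-computation of the primary-outcome comparison from the reported counts (illustrative).
from scipy.stats import fisher_exact

# Rows: prednisone, placebo; columns: withdrew for lack of efficacy, completed 24 weeks.
table = [[3, 10],
         [11, 4]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # expected: p ~ 0.021
```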
Participants in the placebo group had higher median changes (indicating poorer status) with worsening scores for physical function, pain, patient global estimate, RAPID3 and fatigue. Participants in the prednisone group remained similar to baseline at the conclusion of the trial (table 6), although none of the differences were statistically significant compared with the placebo group (p>0.05). Furthermore, no significant differences were seen between the groups for changes in ESR or CRP levels.
Adverse events
No meaningful toxicities were reported by the participants in either group, as anticipated, since all participants had been taking stable doses of 1-4 mg/day prednisone before the trial, many for long periods. No significant changes in weight or blood pressure were seen within either group or between groups.
DISCUSSION
The results of this withdrawal clinical trial indicate that patients who were transferred from long-term prednisone doses of 1-4 mg/day to identical placebo tablets were significantly more likely to withdraw over a subsequent 6-9-month period than those who were randomised to prednisone. These results may appear surprising as most rheumatologists initiate (and often maintain) prednisone treatment at doses higher than 3 mg. By contrast, most participants in the clinical trial reported here had never taken prednisone at a dose higher than 3 mg, and the efficacy of this dose compared with placebo was documented in the trial.

This trial has many limitations. First, the number of participants is small, although a finding of statistically significant differences with only 31 participants may imply a robust treatment effect. Second, all participants were from one academic clinical practice and may not be representative of all patients with RA. A multicentre trial to improve generalisability of the results would be desirable. Third, a trial of initiation of prednisone 3 mg/day in patients who had never been treated previously with prednisone, rather than withdrawal from prednisone, might give more definitive information. However, a period of years would be required to accumulate a sufficient number of patients from one rheumatologist, and resources for performance of a multicentre trial have not been available. Fourth, the trial was conducted entirely in the course of usual clinic visits, without a study coordinator who might have added rigor to the results. However, the costs of this trial were substantially lower than in usual clinical trials and the primary outcome of withdrawal for lack of efficacy was accounted for in all 31 enrolled patients. It might be possible to conduct large simple clinical trials 31 32 in RA using only patient self-report measures and indices that include only these data. Self-report measures and indices distinguish active from control treatment as significantly as joint counts, laboratory tests or indices requiring these data in reported clinical trials of RA. 33 34

It was disappointing that only 37 of 156 consecutive patients with RA seen over 17 months were eligible and volunteered to participate. The reported trial may underestimate the treatment effects of prednisone, given that a primary reason for non-participation was a desire not to discontinue prednisone on the basis of failure of (often many) previous attempts. It is not clear why the six participants who withdrew before randomisation may have experienced lower efficacy with study prednisone than with their own prednisone. Although generic medications are required to meet chemical criteria for equivalence, anecdotal information suggests that some patients may vary in response to different brands of generic medications.
Neither efficacy nor safety of long-term low-dose prednisone can be established definitively from the results of this clinical trial. Long-term safety remains of concern. Higher mortality rates have been associated with the use of prednisone in an earlier cohort of patients seen by the senior author at Vanderbilt University 22 and by others. 23 38 However, results of observational studies reporting adverse outcomes of glucocorticoids are confounded by indication, as patients with more severe clinical status are more likely to be treated with glucocorticoids. Furthermore, almost all patients in the previously reported studies had been treated with prednisone doses greater than 10 mg/day, many for extended periods.
Most participants in the present study never took doses of prednisone greater than 3 mg/day, with mean RAPID3 scores at baseline of <6 on a scale of 0-30, indicating low severity. 27 Limited data are available concerning long-term mortality outcomes in such patients, although MHAQ physical function scores of <1.2 on a scale of 0-10 (0.6 on a scale of 0-3) are associated with favourable long-term mortality outcomes compared with all patients with RA. 39 In one study of cardiovascular disease associated with long-term glucocorticoid use, patients whose dose was 5 mg showed no differences from control subjects. 40 Large prospective studies as well as long-term observations of patients such as those in the present study, treated with prednisone only in doses of 5 mg/day or less, are needed to clarify possible effects of very low-dose prednisone on mortality.
We conclude that this clinical trial documents the efficacy of low-dose prednisone in patients with RA. Although not analysed in this study because of the short time frame, we have observed minimal long-term adverse events in patients who have taken daily prednisone for more than 10 years, sometimes up to 20 years. A multicentre long-term (2 years) "de novo" clinical trial of prednisone in new patients who have not had any prior glucocorticoid treatment would be of considerable value.
Fungal diseases as neglected pathogens: A wake-up call to public health officials
The invasive diseases caused by fungi, the so-called systemic mycoses, profoundly impact human health. The Global Action Fund for Fungal Infections (GAFFI) also highlights the devastating impact of focal fungal diseases in individuals who often have intact immune systems. According to the Centers for Disease Control and Prevention (CDC), fungi are among the leading causes of opportunistic infections affecting patients with HIV/AIDS [16]. As with other neglected tropical diseases, these infections afflict the poorest people without access to safe drinking water, sanitation, and basic health services, and they can cause severe pain and disability throughout life, with long-term consequences for patients and the families of the affected person.
People of all ages suffer from serious fungal infections each year globally [11]. Notably, over 1.5 million of these individuals are estimated to die from their fungal disease [12].
Individual fungal diseases have profound impacts on human health. Around 220,000 new cases of cryptococcal meningitis occur worldwide each year, resulting in 181,000 deaths concentrated in sub-Saharan Africa [13]. More than 400,000 people develop Pneumocystis pneumonia annually and die without access to therapy [11]. In Latin America, histoplasmosis is one of the most common opportunistic infections among people living with HIV/AIDS, and approximately 30% of patients diagnosed with histoplasmosis in that region die from this disease [12]. Morbidity rates linked to fungal infections also represent an important health issue. For example, diseases such as chromoblastomycosis and eumycetoma lead to destructive deformations and debilitating conditions of the subcutaneous tissues, skin, and underlying bones, which result in social exclusion [14].
AIDS and opportunistic fungal diseases: Problem solved or current threat?
Along with patients on anticancer therapies and other immunosuppressive medications, individuals with advanced HIV have dramatically contributed to the excess numbers of deaths due to fungal diseases. The implementation of new therapeutic strategies has had an unquestionably positive impact on the health of individuals with HIV and, as a result, AIDS-related deaths have fallen by more than 50% since their peak in 2004. The global number of people living with HIV in 2018 was estimated at between 32.7 million and 44 million. In this group, up to 23.3 million people had access to antiretroviral therapy. In 2017, about 1.7 million new HIV infections were reported, and about 770,000 people died from this condition. It is noteworthy that up to 75 million people have been infected with HIV since the start of the pandemic, resulting in approximately 32 million AIDS-related deaths [15]. Hence, there remain large numbers of individuals who are not in care or whose immune systems are compromised by HIV. These compromised HIV-infected individuals, particularly those with CD4+ cell counts less than 200/mm 3 , are at high risk for invasive fungal diseases. Thus, the spread or control of AIDS is directly linked to the impact of invasive mycoses on public health.
Tuberculosis remains the leading cause of death among people living with HIV, accounting for about 1 in 3 AIDS-related deaths [15]. By the end of 2016, 1.2 million people living with HIV developed tuberculosis. However, it is important to reinforce that invasive mycoses have a similarly close relationship to AIDS. According to the Centers for Disease Control and Prevention (CDC), fungi are among the leading causes of opportunistic infections affecting patients with HIV/AIDS [16]. Even with the increasing availability of anti-HIV treatment in less developed countries, fungal infections, particularly cryptococcosis and histoplasmosis [12,13], are still a major problem for people living with HIV/AIDS. For example, meningitis caused by the genus Cryptococcus is (after tuberculosis) the second leading cause of death in people living with HIV [13]. Importantly, cryptococcal meningitis is a brain infection that, if left untreated, results in an agonizing death for people living with HIV [17].
Systemic mycoses are neglected diseases
Despite their alarming impact on human health, fungal diseases have been continually neglected over the years. According to Molyneux [18], neglected tropical diseases have particular characteristics. Firstly, they afflict the poorest people without access to safe drinking water, sanitation, and basic health services. Secondly, they are usually chronic and slowly developing, becoming progressively worse if left undiagnosed and untreated. The damage these diseases cause can be irreversible. Finally, neglected tropical diseases can cause severe pain and disability throughout life, with long-term consequences for patients and families of the affected person. People with neglected tropical diseases are often stigmatized and socially excluded, which can affect their mental health. High-income groups are rarely affected.
The number of diseases that meet the above criteria is regrettably higher than would be expected for the second decade of the current millennium. This number, however, is underestimated, as several important syndromes fit these criteria but are not formally recognized as such, including systemic mycoses. Indeed, most high-mortality mycoses remain ignored by public health authorities and decision-makers. The financial support for fungal disease research is far lower than the funding available for other infectious diseases that cause similar mortality [19,20]. For instance, for each human individual dying from malaria, US$1,315 are invested in research and development. Investment per death corresponds to US$334 for tuberculosis, US$276 for diarrheal diseases, and only US$31 for cryptococcal meningitis [19]. Still, there is no clear recognition of the importance of fungal diseases by international health agencies. For example, the World Health Organization (WHO) has recently included mycetoma, chromoblastomycosis, and "other deep mycoses" in the list of neglected tropical diseases [21], but specific information on WHO plans to combat fungal diseases is not yet available. Research on histoplasmosis, paracoccidioidomycosis, and sporotrichosis receives negligible funding [19]. Although these diseases are associated with high rates of mortality or the generation of conditions that hinder the performance of professional functions and social integration [14], none of them has been formally recognized as neglected diseases by WHO.
According to Morel [22], neglected diseases persist due to failures in science, market, and public health. Science failures occur when there is insufficient knowledge on the pathophysiology of infectious agents and the host response. Market failures are usually observed in diseases against which medicines or vaccines exist but at a prohibitive cost. Finally, public health failures occur in syndromes against which low cost or even free prophylactic tools and medicines are available but their use is limited by poor logistics and lack of governmental support.
Fungal diseases are clearly affected by the 3 types of failures described above. In this field, there has been a significant failure in science compared to diseases of medical importance recognized for decades or centuries, as previously mentioned. Of course, significant gaps in knowledge generation rates exist. Fungal infections consist of pathogenic processes triggered by eukaryotic microorganisms, which hinders the development of drugs that are toxic to the pathogen without affecting host tissues.
The fact that there are no licensed antifungal vaccines underscores another clear failure in science. Similarly, reliable diagnostic methods are available for a very limited number of mycoses [23], and therapeutic options are restricted to a few classes of drugs that too frequently are associated with both intrinsic and acquired resistance [24] and that are toxic and expensive [25]. In fact, innovative tools to combat invasive mycoses are rare and of slow development. For illustration, the most recently developed antifungals (echinocandins) were approved for clinical use in 2002 [26], reinforcing a major science failure in the area. It is noteworthy that this class of drugs is ineffective against various high-mortality mycoses [25].
Market failures have a profound impact on the control of fungal diseases. The deadliest fungal infections affect neglected populations, which results in a reduced market for drug commercialization and lack of interest from the pharmaceutical sector in the development of medicines, vaccines, and diagnostic tests for human mycoses. The main drug historically used for the treatment of severe disseminated mycoses is amphotericin B (AmB), whose discovery dates to 1955 [27], and it remains the standard first-line medication for certain fungal infections, such as cryptococcal meningitis. AmB formulations used for invasive fungal infections vary greatly in efficacy, safety, and cost. Conventional formulations are usually affordable but include significant side effects. The most effective and least toxic formulation is liposomal AmB, which can generate costs of up to US$100,000 per patient in different parts of the globe, including developing countries [28].
Liposomal AmB is highly effective when used in combination with other drugs. This pharmaceutical preparation was recommended by WHO as the preferred treatment for cryptococcal meningitis [29]. However, the high prices and unavailability of liposomal AmB in several countries have created major barriers to access to the most recommended treatment-as recognized by WHO itself-in developing countries. Liposomal AmB is registered and available for use (at high cost) in only 6 of 116 developing countries where fungal meningitis is a public health problem [11]. Prices are impeditive in many countries, revealing an unquestionable market failure.
Public health failures also impact fungal diseases negatively. According to GAFFI [11], several major antifungals are not available or registered in various regions where fungal diseases are most lethal. 5-Fluorocytosine, a low-cost antimetabolite that is beneficial to a number of patients with systemic mycoses when used in combination with other antifungal drugs, is not available and/or registered in many countries, including those highly affected by systemic mycoses [11]. Given the intrinsic difficulties and high costs of drug development and the evident market and public health failures in this field, it is more realistic and impactful to make rational use of the diagnostic and antifungal tests already available to minimize the number of deaths caused by fungal diseases. In a recent study, Denning [30] proposed actions to reduce deaths from fungal diseases on the basis of currently available diagnostic tests and generic antifungals. Assuming that diagnostic tests would be properly applied and that antifungal therapy would be administered promptly and following current international guidelines, it was estimated that by 2020 annual deaths from cryptococcal meningitis could fall from 180,000 to 70,000. Deaths due to Pneumocystis pneumonia would fall from 400,000 annually to 162,500. The 80,000 annual deaths attributable to disseminated histoplasmosis could be reduced by 60%. Annual deaths due to chronic pulmonary aspergillosis (56,288) could fall by 33,500. These actions would thus result in a total of 1 million lives saved over 5 years. Of course, the effective implementation of AIDS control and prevention campaigns in areas lacking such programs would also positively impact the reduction in deaths caused by fungal infections. Such actions have the potential to minimize a clear public health failure on the basis of the use of existing tools for diagnosing and treating invasive mycoses.
Present and future problems: The unknown
The epidemiology of fungal diseases is dynamic, and changes are difficult to predict. In 2012, the CDC reported an outbreak of fungal infections of the central nervous system that occurred among patients who received epidural or paraspinal injections of methylprednisolone. The majority of affected patients had meningitis caused by an extremely rare cause of fungal disease, namely Exserohilum rostratum [31]. This fungus is an example of an unexpected, emergent fungal disease, and it reinforced the perception of the pathogenic potential inherent in the fungal kingdom. The E. rostratum outbreak killed over 60 people out of 750 infected patients [32].
There is also a growing perception that climate change directly impacts the ability of fungi to cause damage to the human host. Recently, the multiresistant pathogen Candida auris has emerged as a serious global threat to human health, causing infections resistant to all major classes of antifungal drugs in immunocompromised patients [33]. C. auris differs from most other Candida species in several aspects. As recently reviewed by Lockhart, C. auris colonizes the skin rather than the gastrointestinal tract and is extremely resilient in the environment [34]. This resiliency has led to the fungus being associated with healthcare outbreaks, which have been exceedingly difficult to control due to the remarkable difficulty in eradicating the fungus from both patients and the environment [33]. Also of great concern, antimicrobial resistance in C. auris is more common than susceptibility to antifungals [34]. The spread of C. auris disease is linked to clonal isolates recovered from India, Venezuela, and South Africa between 2012 and 2015. Widespread use of antifungal drugs has been suggested as a determining factor for the emergence of C. auris [35]. Another hypothesis for the emergence of C. auris suggests that the fungus has recently acquired the virulence characteristics required to cause damage to human hosts. Although these explanations cannot be ruled out, it is unlikely that these changes occurred simultaneously on 3 continents. In this sense, it has recently been proposed that isolates of C. auris adapted to the human body temperature through selection from high-temperature regions [35]. Thus, this would be the first example of a novel human fungal pathogen that emerged as a result of global warming, which would explain several of its pathogenic characteristics. This observation demonstrates an important link between climate change and infectious diseases. Importantly, new threats to human health might still occur through climate adaptation mechanisms of zoonotic fungi, as proposed for C. auris.
Diseases that are known for decades still raise concerns. The city of Rio de Janeiro, Brazil, currently faces the largest sporotrichosis epidemic in history from a species, Sporothrix brasiliensis, that emerged locally [36]. Paracoccidioidomycosis is still one of the most important systemic mycoses in Latin America and the leading cause of mycosis mortality in immunocompetent individuals in Brazil [37]. Globally, the latest estimates suggest an annual occurrence of approximately 3 million cases of chronic pulmonary aspergillosis, over 200,000 cases of cryptococcal meningitis, 700,000 cases of invasive candidiasis, 500,000 cases of Pneumocystis jirovecii pneumonia, 250,000 cases of invasive aspergillosis, 100,000 cases of histoplasmosis, over 10 million cases of fungal asthma, and 1 million cases of fungal keratitis [12].
The need for improved diagnosis of fungal infections
Early diagnosis of mycoses is decisive to efficient therapy. Common methods for the laboratory diagnosis of fungal infections include direct microscopic examination of human or animal samples, histopathology, microbial culture, antigen detection, serology, and, in a few cases, molecular tests [38]. These tests are relatively efficient at identifying well-known pathogens as causative agents of human and animal syndromes. Difficulties are nevertheless present as fungi typically reproduce slowly, and culture methods may take as long as a month to identify common species such as Histoplasma sp. Moreover, susceptibility testing to guide clinicians is also problematic, and breakpoints are not available for several important human pathogenic fungi, including C. auris [39]. Moreover, as reviewed by Wickes and Wiederhold [23], detection of less frequently encountered fungi is considerably more complex because routine clinical laboratories may lack the expertise and appropriate equipment to identify pathogenic agents. In fact, a recent survey involving 129 major laboratory centers in 24 countries of Latin America and the Caribbean revealed that only 9% of these centers appear to potentially meet the minimum European Confederation of Medical Mycology standards for fungal laboratory diagnostics [40]. Furthermore, in the national laboratories of developing countries, there is an enormous demand for the diagnostics of other infectious (quite often epidemic) diseases, and the lack of trained personnel is an additional limitation.
The impact of this condition on human health is clear. For example, Exserohilum diseases before the 2012 United States outbreak manifested as rare systemic, cutaneous, corneal, and subcutaneous infections [41]. As the causative agent of the meningitis outbreak, E. rostratum was identified 1 month after the first meningitis case was reported, when the CDC announced that E. rostratum was recovered from unopened vials of steroid injections [42]. Similar problems occurred with necrotizing mucormycosis, a devastating complication of wounds caused by Apophysomyces sp., Saksenaea sp., and Lichtheimia [43]. Late laboratory diagnosis of cases of necrotizing mucormycosis caused by Apophysomyces trapeziformis resulted in 5 deaths in Missouri in 2011 [44]. Addressing the epidemic of C. auris has also been impeded by inherent deficiencies with classical laboratory methods utilized by many clinical laboratories as well as public health institutions [39]. These cases reinforce the notion that tests rapidly identifying infecting fungi have the potential to impact the course of fungal diseases beneficially.
Funding for research and innovation in fungal diseases
Funding for research on fungal diseases is unquestionably small compared to funding available for other infectious diseases that cause similar mortality [19,20]. As an illustration, cryptococcal meningitis, the fifth deadliest infectious disease, receives 4.3-fold less research funding than the disease caused by the bacterial pathogen Neisseria meningitidis [19]. Of concern, reports submitted between 2008 and 2017 on funding for neglected diseases show that cryptococcal meningitis was the only measurably funded fungal disease, accounting for 0.5% of the total invested [45]. Tuberculosis, for comparison, had a 34-fold higher investment. Other fungal diseases were not even included in these reports. Specifically, neglected mycoses of unquestionable clinical importance, such as paracoccidioidomycosis, mycetoma, sporotrichosis, and chromoblastomycosis, have not even been mentioned in the report, suggesting that these research areas have received negligible funding. These observations were fully confirmed by direct analysis of scientific articles declaring financial support from major international agencies with a history of supporting neglected disease research [45].
Reduced support for research and innovation in fungal diseases impacts knowledge generation directly. For example, tuberculosis and malaria were the focus of 8,827 and 5,687 scientific articles published in 2017, respectively. Fungal diseases, on the other hand, were much less investigated, with 213 articles on cryptococcosis, 80 on paracoccidioidomycosis, 51 on chromoblastomycosis, 53 on mycetoma, and 56 on sporotrichosis produced in the same period [45]. These numbers are probably linked to alarming facts, such as the aforementioned lack of vaccines capable of preventing fungal disease, less effective diagnostics, and a dearth of antifungal drugs in development.
Perspectives
There are ongoing initiatives to develop antifungal vaccines and drugs with the potential to control invasive mycoses [46]. However, the distance between promising laboratory results and the translation of knowledge into benefits to the general population is unquestionably long. In fungal diseases, this distance is apparently longer, considering the lack of investment in science and technology in association with the science, market, and public health failures discussed here. The situation is even more complex if one considers the emergence of multiresistant and still largely unknown pathogens such as C. auris. The impact of emerging infections of this nature on human health is still hard to predict, but, as C. auris is now spread across the globe, the reality is that such infections can lead to significant morbidity and mortality as well as have vast economic consequences. Thus, it seems clear that public health authorities and decision-makers need to more thoughtfully and closely consider invasive fungal diseases as a real and contemporary problem to avoid disasters historically observed in other models of infectious diseases. The fact that fungal diseases are not spread at the same rate as other microbial transmissible diseases causing epidemics does not mean they are less relevant in terms of the number of affected individuals. Furthermore, the fact that they are less studied represents an enormous risk given the new potential threats as a consequence of environmental deterioration and global warming.
Realistic discussions about how prevention, diagnosis, and control of fungal diseases will improve outcomes demand a separation between concrete actions using currently available tools and future preventive actions. Of course, prophylactic actions against as yet unknown conditions are complex and difficult to develop, but the recent history of emerging fungal diseases reveals a clear need for knowledge generation on fungal pathogens. Attention to emerging fungal pathogens is important because, even when they do not cause human disease through new and as yet unknown zoonoses, they can affect animal health with an impact on the economy. Also, they can affect wild animals with an unpredictable ecological impact on biodiversity. Stimulating basic science and innovative activities in the area is therefore essential to reduce the impact of poorly known or yet unknown fungal diseases on human health.
On the basis of currently available therapeutic and diagnostic tools, short-and mediumterm impact actions also need to be implemented. For example, in 2019, WHO reinforced the need to use the Histoplasma capsulatum antigen detection test to diagnose histoplasmosis [47]. This test allows the diagnosis of the disease in more than 85% of patients within 48 hours, which would hasten the implementation of lifesaving antifungal therapy. Without proper diagnosis, patients are usually treated for tuberculosis, which has similar clinical symptoms. Under these conditions, patients usually die within 1 to 3 weeks. It is estimated that 48,000 lives could be saved over 5 years if appropriate diagnosis and treatment approaches are implemented for histoplasmosis.
The above examples illustrate the complexity behind well-known and still poorly known fungal diseases. In both scenarios, concrete actions can be implemented. Support for basic research and technological development is obviously important, but making health professionals and decision-makers aware of the profound and ongoing impact of fungal diseases on human health is essential. The current situation, however, raises serious concerns, considering the funding limitations in the area and lack of public programs for prevention and control of fungal diseases. The high incidence of invasive mycoses in AIDS patients and the recent examples of C. auris and E. rostratum demonstrate that, without game-changing actions, the perspective on how fungal diseases will impact human health in the coming decades is extremely negative.
Angiogenesis, Lymphangiogenesis, and Inflammation in Chronic Obstructive Pulmonary Disease (COPD): Few Certainties and Many Outstanding Questions
Chronic obstructive pulmonary disease (COPD) is characterized by chronic inflammation, predominantly affecting the lung parenchyma and peripheral airways, that results in progressive and irreversible airflow obstruction. COPD development is promoted by persistent pulmonary inflammation in response to several stimuli (e.g., cigarette smoke, bacterial and viral infections, air pollution, etc.). Angiogenesis, the formation of new blood vessels, and lymphangiogenesis, the formation of new lymphatic vessels, are features of airway inflammation in COPD. There is compelling evidence that effector cells of inflammation (lung-resident macrophages and mast cells and infiltrating neutrophils, eosinophils, basophils, lymphocytes, etc.) are major sources of a vast array of angiogenic (e.g., vascular endothelial growth factor-A (VEGF-A), angiopoietins) and/or lymphangiogenic factors (VEGF-C, -D). Further, structural cells, including bronchial and alveolar epithelial cells, endothelial cells, fibroblasts/myofibroblasts, and airway smooth muscle cells, can contribute to inflammation and angiogenesis in COPD. Although there is evidence that alterations of angiogenesis and, to a lesser extent, lymphangiogenesis, are associated with COPD, there are still many unanswered questions.
Introduction
In 1628, William Harvey discovered that blood flows throughout the human body, being pumped by the heart through a single system of arteries and veins [1]. In 1661, Marcello Malpighi first identified the capillaries and, at the same time, Caspar Aselius discovered the lymphatic vessels [1]. Angiogenesis, a term coined by John Hunter in 1787, is the outgrowth and proliferation of capillaries from pre-existing blood vessels [2,3]. This process is distinct from other major forms of neovascularization, in which new blood vessels are formed from endothelial precursor cells (EPCs), also known as angioblasts [2,4].
EPCs share an origin with hematopoietic progenitors and assemble into a primitive vascular labyrinth of small capillaries in a process known as vasculogenesis [1]. The vascular plexus progressively expands by means of vessel sprouting and remodels into a highly organized vascular network of larger vessels ramifying into smaller ones. Nascent endothelial cell (EC) channels become covered by pericytes (PCs) and smooth muscle cells.

(Figure legend) VEGF-A is the main mediator of angiogenesis. Several isoforms of VEGF-A activate the tyrosine kinase receptors VEGFR1 and VEGFR2. VEGF-A signals mainly through VEGFR2, which is expressed at high levels by blood endothelial cells (BECs). VEGF-B and PlGF specifically activate VEGFR1, which is also expressed in BECs; its role in angiogenesis is less clear. PlGF and VEGF-B also bind to soluble VEGFR1 (sVEGFR1). VEGF-C and -D activate VEGFR3 and VEGFR2. VEGFR3 is largely restricted to lymphatic endothelial cells (LECs). Besides the three tyrosine kinase receptors, there are co-receptors for VEGFs such as neuropilins (NRPs). NRP1 associates with VEGFR1 and VEGFR2 to bind VEGF-A, -B, and PlGF. NRP2 associates with VEGFR3 to bind VEGF-C and -D to regulate lymphangiogenesis.
Angiopoietins (ANGPTs) are members of another family of naturally occurring promoters of embryonic and postnatal neovascularization [27]. In humans, the ANGPT/Tie signaling pathway includes the Tie1 and Tie2 receptors and ANGPT1 and 2 [28]. Tie2 is a tyrosine kinase receptor highly expressed in ECs [29], but is also found in certain immune cells such as human basophils and lung mast cells (HLMCs) [30]. Tie1 is considered an orphan receptor without a known ligand (Figure 2). Angiopoietin-1 (ANGPT1), expressed by pericytes and other vascular supporting cells, promotes angiogenesis by establishing and maintaining vascular integrity and quiescence [31]. ANGPT1 binds to the Tie2 receptor on ECs and stabilizes nascent vessels by protecting the adult vasculature against plasma leakage induced by VEGF-A [31][32][33]. ANGPT2 is produced by ECs and acts in an autocrine manner as a partial Tie2 agonist (i.e., it quenches Tie2 signaling in the presence of ANGPT1 and weakly activates Tie2 in its absence) [34]. Therefore, as an antagonist of constitutive ANGPT1/Tie2 signaling, ANGPT2 reduces vascular integrity [35,36]. Tie2 mRNA and protein are most abundant in the lung, which is uniquely dependent on Tie2 signaling [32]. ANGPT2 increases plasma and alveolar fluid in acute lung injury and is a mediator of epithelial necrosis with an important role in hyperoxic lung injury and pulmonary edema [37]. ANGPT1 overexpression also induces lymphatic vessel enlargement and sprouting.
Hepatocyte growth factor (HGF) is a potent inducer of tumor growth and the formation of metastasis [49]. HGF is secreted as an inactive precursor (pro-HGF), and proteolytic cleavage results in an active α-and β-chain heterodimer, which activates the tyrosine kinase receptor MET. HGF plays a major role in the modulation of angiogenesis and tumorigenesis [49]. Angiogenin is one of the most potent tumor-derived angiogenic factors [50,51]. It has been implicated as a mitogen for ECs, an immune modulator with suppressive effects on polymorphonuclear leukocytes, an activator of specific protease cascade, and an adhesion molecule [52]. Angiogenin is produced by macrophages, ECs, and peripheral blood lymphocytes [53].
Basic fibroblast growth factor (bFGF) belongs to a group of heparin-binding growth factors that stimulate EC proliferation and migration in vitro and angiogenesis in vivo [54]. bFGF plays a role in inflammatory conditions, wound healing [55], and pulmonary fibrosis [56]. There is evidence that angiogenin negatively regulates the expression of bFGF [54].
IL-17A and several homologous proteins with similar or distinct biological profiles (IL-17B, IL-17C, IL-17D, IL-17E, and IL-17F) are grouped in a cytokine family [64,65]. IL-17 promotes angiogenesis in humans by stimulating EC migration and regulating the production of various proangiogenic factors [66,67]. The promotion of angiogenesis by IL-17 may also result from enhancement of the action of bFGF, HGF, and VEGF-A [68].
Different angiogenesis inhibitors have been identified. Thrombospondin-1 (TSP1), the first antiangiogenic factor identified in the 1990s, prevents VEGF-A-induced angiogenesis by directly binding to it and interfering with its binding to cell-surface heparan sulfates [76]. TSP1 is a potent inhibitor of EC migration and proliferation and an inducer of endothelial apoptosis [77]. Endostatin, a potent endogenous angiogenesis inhibitor [78], blocks endothelial growth and migration and promotes apoptosis. It antagonizes VEGF-A effects [79] at the VEGFR2 level [80]. VEGF-A mRNA splicing generates two protein families that differ by their carboxy-terminal six amino acids named VEGF-A 165a and VEGF-A 165b [81,82]. VEGF-A 165a is the canonical pro-angiogenic isoform, whereas VEGF-A 165b is the anti-angiogenic isoform. VEGF-A 165b binds to VEGFR2 but does not bind to NRP1. Therefore, VEGF-A 165b does not stimulate EC responses and inhibits several VEGF-A 165a -mediated EC processes [83,84]. VEGF-A 165b can be expressed and released by human neutrophils [46].
Chronic Obstructive Pulmonary Disease (COPD) and Inflammation
Chronic obstructive pulmonary disease (COPD) is a major global epidemic, increasing worldwide as populations age [85,86]. COPD is the fourth-ranked cause of death worldwide, affecting approximately 10% of subjects older than 45 years [87]. There is compelling evidence that COPD is a complex and heterogeneous disorder with multiple endo-phenotypes and clinical presentations [88].
Inflammatory patterns in COPD have been referred to as inflammatory endotypes [89]. Neutrophilic inflammation is a hallmark of COPD and contributes to pivotal pathological features [88]. Eosinophilic inflammation can also be present as a stable endotype in a subgroup of COPD patients [90,91] and is associated with a favorable response to inhaled glucocorticoids (ICS) [92][93][94].
Whatever the underlying disease mechanisms, COPD patients are characterized by chronic inflammation of the airways and lung parenchyma, occasionally associated with systemic inflammation [95]. Cigarette smoke is the primary cause of COPD and is responsible for approximately 70% of COPD cases [96]. Other risk factors for COPD are air pollution, occupational exposure, respiratory infections, childhood asthma, and α1 antitrypsin (α1AT) deficiency [97] (Figure 3). Upon exposure to inhaled toxicants, bronchial epithelial cells (BECs) are activated and release preformed (i.e., alarmins: TSLP, IL-33, IL-25) and de novo synthesized cytokines (e.g., TNF-α, IL-1, IL-6, GM-CSF, and CXCL8). These mediators promote a cascade of signaling events leading to chronic pulmonary inflammation, airflow obstruction, and alveolar wall destruction in a susceptible individual. Chronic airway inflammation in COPD is associated with the activation of tissue-resident (e.g., macrophages, mast cells) and -infiltrating immune cells (e.g., neutrophils, eosinophils, basophils, CD8 + T lymphocytes) in the lumen and wall of airways and parenchyma [88,[98][99][100]. COPD development can also invoke pulmonary vascular remodeling. Pulmonary arteries show structural abnormalities even in mild COPD without arterial hypoxemia and in smokers with normal lung function [101]. There is also evidence that angiogenesis and, to a lesser extent, lymphangiogenesis, are dysregulated in COPD [102][103][104][105][106]. However, airway angiogenesis and lymphangiogenesis in COPD and emphysema have been surprisingly poorly studied considering the relevance of these conditions.

(Figure 3 legend) Risk factors for COPD development include cigarette smoke, air pollution, occupational exposures, childhood asthma, respiratory infections, and alpha-1 anti-trypsin (α1AT) deficiency. Upon exposure to inhaled toxicants, lung structural cells, including epithelial cells and fibroblasts, as well as endothelial cells, are activated. Damaged bronchial epithelial cells release alarmins (TSLP, IL-33, IL-25) that activate several immune cells, endothelial cells, and fibroblasts. These cells produce inflammatory mediators to recruit other inflammatory cells, such as neutrophils, macrophages, and lymphocytes, to the site of exposure. This augments the expression of inflammatory mediators, such as cytokines (e.g., TNF-α, IL-6), chemokines (e.g., CCL2, CCL7, CXCL1, CXCL5, CXCL8), LTB 4 , and proteases (e.g., neutrophil elastase (NE), cathepsins, and matrix metalloproteinases (MMPs)). This cascade of events can lead to chronic pulmonary inflammation, airflow obstruction, and alveolar wall destruction (emphysema) in a susceptible individual.
In a preclinical mouse model, IL-25 (also known as IL-17E) activated group 2 innate lymphoid cells (ILC2s) and caused pulmonary fibrosis [124]. Elevated concentrations of TSLP, IL-25, and IL-33 have been found in induced sputum of patients with COPD compared to controls [125]. To distinguish between asthma and COPD, which sometimes might be difficult in clinical practice, several biomarkers have been analyzed in the two groups of patients [126]. The authors found that asthma patients were characterized by higher levels of FeNO and peripheral blood eosinophils. It has been reported that IL-25 has the potential to promote angiogenesis [127].
Inflammatory Cells, Angiogenesis, and Lymphangiogenesis
The inflammation seen in the lungs of COPD patients involves both innate (macrophages, neutrophils, mast cells, eosinophils, basophils, natural killer cells (NK cells), γδ T cells, ILCs, and dendritic cells (DCs)) and adaptive immunity (B and T lymphocytes) [85]. There is also evidence that structural cells, including BECs and alveolar epithelial cells, ECs, fibroblasts, and myofibroblasts, can contribute to inflammatory mechanisms and angiogenesis in COPD.
Compelling evidence indicates that human macrophages are highly heterogeneous [129][130][131][132][133]. In the human lung, several subsets of macrophages have been identified [130][131][132]134], and it has been suggested that M1-like macrophages predominate in COPD patients [135]. However, further studies using single-cell RNA sequencing are needed to characterize the macrophage subpopulations in COPD patients. Recent evidence indicates that rhinovirus impairs the innate immune response to different bacteria in alveolar macrophages from patients with COPD [136]. Human rhinovirus also induced the release of cytokines (e.g., IL-6, TNF-α, IL-10, CXCL8) from macrophages.
Recently, we have found that two TSLP isoforms (long (lfTSLP) and short (sfTSLP)) and a TSLP receptor (TSLPR) are expressed in human lung macrophages (HLMs) [20]. TSLP, contained in HLMs, was released in response to LPS. These results prompted us to investigate whether HLMs could be a target of TSLP. We found that TSLP induced the release of angiogenic (VEGF-A and ANGPT2) and lymphangiogenic factors (VEGF-C) from HLMs [20]. These results highlight a novel immunological network involving epithelial-derived TSLP, TSLPR, and the release of angiogenic and lymphangiogenic factors from HLMs.
Mast Cells
Mast cells are prominent immune cells in human lung parenchyma [12,69,132] and play a pivotal role in coordinating lung inflammation [140]. Mast cell density is increased in bronchial biopsies from COPD patients compared to healthy controls [99]. Activated rodent mast cells release VEGF-A and FGF-2 [141], and mast cell supernatants induce an angiogenic response in the chorioallantoic membrane [142,143]. HLMCs constitutively express other VEGFs in addition to VEGF-A, namely the angiogenic VEGF-B and the lymphangiogenic VEGF-C and -D [144]. These VEGFs are often present as preformed mediators in mast cells [9]. PGE 2 and adenosine, two important proinflammatory mediators, induced the expression of VEGF-A, -C, and -D in HLMCs [144]. These findings indicate that HLMCs have an intrinsic capacity to produce several VEGFs, suggesting that these cells might regulate both angiogenesis and lymphangiogenesis. HLMCs are not only a source of VEGFs in the airways, but also a target for these angiogenic factors. Indeed, HLMCs express VEGFR1 and 2, two major receptors for VEGFs. Different VEGFs (VEGF-A, -B, -C, -D, and PlGF-1) exert chemotactic effects on HLMCs by engaging both receptors. Recently, we found that several bacterial superantigens can induce the release of angiogenic (VEGF-A) and lymphangiogenic (VEGF-C) factors from HLMCs. Interestingly, the epithelium-derived cytokine IL-33 potentiated the release of proinflammatory (i.e., histamine), angiogenic, and lymphangiogenic factors from HLMCs [145]. These results suggest that IL-33 might enhance the inflammatory angiogenic and lymphangiogenic activators of HLMCs in pulmonary disorders.
Neutrophils
Increased numbers of activated neutrophils are found in the sputum and BAL fluid of COPD patients and correlate with disease severity, although few neutrophils are found in the bronchial wall and lung parenchyma [88]. Smoking stimulates the production and release of neutrophils from bone marrow and survival in the respiratory tract, possibly mediated by GM-CSF and G-CSF secreted from lung macrophages. Neutrophils' recruitment to the lung parenchyma involves initial adhesion to activated ECs through E-selectin, which is overexpressed on ECs in the airways of COPD patients. Neutrophils migrate into the respiratory tract under various chemotactic factors such as LTB 4 , CXCL1, CXCL5, and CXCL8 [146]. These chemotactic mediators can be derived from alveolar macrophages, mast cells, T cells, and epithelial cells [146]. Neutrophils themselves might be a major source of CXCL8 [58]. Neutrophils from COPD patients are activated and have increased concentrations of myeloperoxidase [142,147]. Activated neutrophils secrete neutrophil elastase (NE), cathepsin G (CG), and proteinase 3 (PR3), as well as MMP-8 and MMP-9, which contribute to alveolar destruction. NE, CG, and PR3 are potent promoters of mucus secretion from submucosal glands and goblet cells [148]. During COPD exacerbations, there is a marked increase of neutrophils in the airways, resulting in the increased production of neutrophil chemotactic factors (e.g., LTB 4 and CXCL8) [149].
Activated human neutrophils release neutrophil extracellular traps (NETs) [150,151]. Increased components of NETs have been found in the sputum of both stable and exacerbating COPD patients, alongside an increased proportion of NET-producing neutrophils [152,153]. The abundance of NETs in sputum correlates with the severity of airflow limitation [152,154], loss of microbiota diversity [154], and overall severity of COPD [154]. Despite these observations, neutrophils isolated from the blood of patients with COPD exacerbations have an apparently reduced ability to form NETs compared to stable patients and healthy controls, despite the increased plasma levels of cell-free DNA [155]. Finally, it should be noted that NETs can directly and indirectly promote angiogenesis [8].
Human neutrophils constitutively express and contain several proangiogenic factors (VEGF-A 165 , VEGF-B, ANGPT1, CXCL8, and HGF) [46,156]. Human neutrophils, similarly to other circulating immune cells (e.g., basophils) [157], do not express lymphangiogenic factors (VEGF-C and -D). sPLA 2 selectively induces the release of proangiogenic factors from human neutrophils [46]. Of note, sPLA 2 -activated neutrophils also express the antiangiogenic isoform VEGF 165b [46]. The relevance of the latter observation in the context of COPD remains to be defined. More recently, we found that LPS-activated neutrophils release VEGF-A, which stimulates angiogenesis through the formation of tubules in vitro [58].
Eosinophils
Eosinophils have been identified in different anatomical compartments of COPD-affected lungs and are increased in patients with severe disease [100]. However, the role of eosinophils and their mediators in COPD is still uncertain. Increased eosinophil numbers have been described in the airways and BAL fluid of patients with stable COPD, whereas others have not found increased numbers in airway biopsies, BAL fluid, or induced sputum [158]. The presence of eosinophils in COPD patients seems to predict a more favorable therapeutic response to bronchodilators and ICS [94] and might indicate coexisting asthma or asthma-COPD overlap syndrome (ACOS) [159][160][161]. Up to 15% of COPD patients appear to have clinical features of asthma [142]. The mechanism for increased eosinophil counts in some patients with COPD is debated [162]. It has been suggested that damaged BECs release IL-33, which can induce the release of IL-5 from ILC2s [163]. IL-33 expression is increased in basal epithelial progenitor cells in COPD patients and is associated with increased levels of IL-13 and the mucin gene MUC5AC [119]. IL-33 is expressed in the lungs of COPD patients [164], and levels of IL-33 and its receptor ST2 are increased in the serum of these patients. Moreover, circulating IL-33 levels in COPD patients are correlated with peripheral blood eosinophils [118]. Finally, the exposure of PBMCs from COPD patients to combustion-generated ultrafine particles obtained from fuel induced the release of IL-33 [121].
The potential role of eosinophils and their powerful mediators in the pathophysiology of certain COPD endotypes has generated some enthusiasm for treating this heterogeneous disorder with monoclonal antibodies (mAbs) targeting IL-5 (i.e., mepolizumab) or IL-5Rα (i.e., benralizumab). In COPD patients with an eosinophilic phenotype, mepolizumab decreased the annual rate of exacerbations compared to a placebo group [165]. Benralizumab was not associated with a lower annualized rate of COPD exacerbations than placebo among patients with blood eosinophil counts ≥ 220 per mm 3 [166].
Human eosinophils synthesize and store in their granules several proangiogenic mediators such as VEGF-A, FGF-2, TNF-α, GM-CSF, nerve growth factor (NGF), and CXCL8 [167]. In addition, these cells promote EC proliferation in vitro and induce vessel formation in aortic rings and in the chick CAM assays [168].
Basophils
Although human basophils account for 0.5-1% of all leukocytes in peripheral blood, these cells play critical roles in clearing pathogens [169][170][171], in initiating allergic disorders [71,172], and in COPD [100]. Basophil density is increased in the lung tissue of COPD patients compared to smoking controls [100]. A significant correlation was found between basophils and eosinophils in the lungs of COPD patients. Activated human basophils express several isoforms of VEGF-A (VEGF-A121, VEGF-A165, and VEGF-A189), and their secretory granules contain VEGF-A [157]. The activation of human basophils induces the release of VEGF-A [157] and ANGPT1 [30]. Human basophils also express HGF [173]. VEGF-A has a chemotactic effect on basophils through the activation of VEGFR2. These cells do not express VEGF-C and -D and presumably play a role in angiogenesis, but not in lymphangiogenesis [157].
Lymphocytes
CD8 + and, to a lesser extent, CD4 + T cells, are increased in the lung parenchyma, bronchi, and bronchioles of COPD patients compared to asymptomatic smokers [174,175]. There is evidence that CD8 + T lymphocytes are both increased in number and have increased functional activity in COPD [175]. CXCR3 is highly expressed on effector T cells following activation by ligands such as CXCL10. CD8 + T lymphocytes themselves produce CXCL10, thus recruiting more CXCR3 + T cells to the lung, where they exert inflammatory and destructive effects. The overexpression of CXCR3 and its ligand CXCL10 by BECs could contribute to the accumulation of CD8 + and CD4 + T cells, which express CXCR3 [176].
ILCs are critical players in mucosal immunity. Group 1 ILCs (ILC1), group 2 (ILC2), and group 3 (ILC3) are a population of tissue-resident lymphocytes with pleiotropic roles in mucosal inflammation, including defense against pathogens, the maintenance of epithelial barrier homeostasis, the containment of microbiota, and tissue repair [177]. ILCs play an important role in the regulation of lung immunity and might be activated through danger signals and cell damage [178]. All three groups of ILCs have been identified in the human lung [179]. In COPD patients, there is an increase in the number of ILC3s, which secrete IL-17 and IL-22, and these cells might play a role in driving neutrophilic inflammation. Exposure to cigarette smoke inhibits ILC2 function, and this is associated with an exaggerated anti-viral response [116]. Moreover, exposure to cigarette smoke and viral infections induced the emergence of the ILC1 population in mice [180]. The same authors found that the frequency of circulating ILC1 was higher in COPD patients compared to healthy controls. Conversely, the frequency of ILC2 cells was lower in COPD patients compared to healthy smokers. A similar increase in ILC1 frequency has been reported in the lungs of COPD patients [181].
A distinct cluster of CD4+ T helper 17 (Th17) cells is characterized by the expression of the master transcription factor RORγt [65]. CD4+ Th17 cells, which secrete IL-17A and IL-22, are increased in the airways of COPD patients and might play a role in orchestrating neutrophilic inflammation [182,183]. Th17 cells produce the IL-17 family of structurally related cytokines, IL-17A through IL-17F. IL-17A, commonly known as IL-17, is the prototypical member of this family. It was reported that Th17 cells release IL-1β and IL-17 and exert lymphangiogenic effects [184]. IL-17A promotes angiogenesis in preclinical [66] and clinical models of vascular remodeling [185]. IL-17E (IL-25), a somewhat atypical member of the IL-17 family, is produced by bronchial epithelial cells [186] and tuft cells [187] and during respiratory viral infections [188]. B lymphocytes are also increased in the lungs of COPD patients, particularly in those with severe disease [189]. B cells can be organized into lymphoid follicles located in peripheral airways and lung parenchyma [190]. The expression of B-cell activating factor, an important regulator of B-cell function and hyperplasia, is increased in the lymphoid follicles of patients with COPD [191,192]. Recent evidence indicates that a subset of regulatory B cells (Bregs) with high levels of the surface markers CD24 and CD38, previously shown to exert immunosuppressive functions, is decreased in the peripheral blood of COPD patients [193].
Dendritic Cells
Dendritic cells (DCs) are an important link between innate and adaptive immunity [194]. The airways and lungs contain a rich network of DCs localized near the surface, so that they are ideally located to signal the entry of inhaled foreign substances [195,196]. Epithelium-derived cytokines (TSLP, IL-33, IL-25) are important modulators of DC functions [197][198][199]. DCs can activate a variety of other inflammatory and immune cells, including macrophages, neutrophils, and T and B lymphocytes, and therefore DCs might play an important role in the pulmonary response to cigarette smoke and other inhaled toxic chemicals [200]. DCs are activated in the lungs of COPD patients [201], and their activation correlates with disease severity [202]. The numbers of DCs are increased in the lungs of COPD patients, and cigarette smoke increases their survival in vitro [203]. Human DCs can produce biologically active VEGF-A [194]. DCs activated by different bacteria release VEGF-A, which induces neutrophil recruitment to the site of inflammation [204].
NK Cells
NK cells, as innate immune cells, contribute to the first line of defense of the human body against viral and bacterial infections and tumors [205]. NK cells have been implicated in maintaining immune homeostasis in the lung and in the pathogenesis of COPD [206,207]. However, the specific mechanisms of involvement of NK cells in COPD are still rather elusive [208]. NK cells make up 5-15% of the circulating lymphocytes. These cells are subdivided into two main subpopulations, CD56bright CD16− and CD56dim CD16+. CD56bright CD16− NK cells, accounting for about 10% of the peripheral blood NK population, mainly produce several cytokines (i.e., IFN-γ, IL-10, TNF-α, GM-CSF). CD56dim CD16+ NK cells, the predominant (approximately 90%) peripheral blood NK cells, are highly cytotoxic, producing perforin and granzymes and mediating antibody-dependent cytotoxicity [205]. In humans, NK cells represent 5-20% of the CD45+ lung lymphocytes [209]. Approximately 80% of lung NK cells show the CD56dim CD16+ phenotype, whereas the remaining 20% are CD56bright CD16− and CD56dim CD16− [207]. There is evidence that the low cytotoxic CD56bright CD16− phenotype exerts pro-angiogenic activity [210].
Several studies examining the frequency and activation status of NK cells in peripheral blood and induced sputum in COPD patients have provided contrasting results [208]. Thus, further studies are needed to elucidate the mechanisms of NK cells in the pathogenesis, endotypes, and exacerbations of COPD.
Structural Cells and Angiogenesis
Epithelial Cells
The bronchial epithelium constitutes a key component of the innate immune system, providing a physical and immune-modulatory barrier that is a first line of defense against environmental agents. Epithelial cells are activated by cigarette smoke and other inhaled irritants (i.e., biomass fuel smoke) to produce a plethora of inflammatory mediators (e.g., TNF-α, IL-1β, IL-6, GM-CSF, and CXCL8) [211]. There is compelling evidence that viral and bacterial products [212][213][214][215], smoke extracts [113,114], diesel exhaust [216], and cytokines [217] can induce the rapid release of epithelial-derived cytokines (TSLP, IL-33, and IL-25), also known as alarmins. These upstream cytokines can activate several immune (e.g., DCs, ILCs, macrophages, mast cells, neutrophils, eosinophils) and structural [e.g., fibroblasts/myofibroblasts, ASM cells, goblet cells, and ECs] cells [108,123]. Thus, epithelial-derived cytokines might play an upstream role in airway remodeling in COPD. In particular, TSLP expression in bronchial biopsies was increased in COPD patients compared to healthy ex-smokers and smokers [107]. Moreover, most COPD exacerbations are associated with viral infections, and rhinoviral infection induced the overexpression of TSLP [218].
VEGFs appear to be necessary to maintain alveolar cell integrity, and their blockade in rats induces apoptosis of alveolar cells and an emphysema-like pathology [219]. A reduction in peripheral lung VEGF concentrations is found in smokers and COPD patients, but levels of HGF, another growth factor, are increased in smokers and therefore might protect against the effect of reduced VEGF levels on alveolar integrity. In COPD patients, both VEGF and HGF levels are reduced, which might contribute to the development of emphysema [220].
The airway epithelium in COPD patients often shows squamous metaplasia, resulting from increased proliferation of basal BECs. Epidermal growth factor receptors (EGFRs) show increased expression in the BECs of COPD patients and might contribute to basal cell proliferation, resulting in squamous metaplasia and an increased risk of bronchial carcinoma [221]. Goblet cell hyperplasia, a typical feature of COPD, is a response to chronic airway insult due to cigarette smoke and other pollutants. EGFRs play an important role in mucus hyperplasia and secretion and can be activated by neutrophilic inflammation through NE secretion, which releases TGF-α [222] (Figure 2). Oxidative stress can also activate EGFRs and induce mucus hypersecretion [223].
Human BECs constitutively express and release significant amounts of VEGF-A in cell culture media at concentrations capable of stimulating EC growth. Hypoxia and TGF-β1 stimulated VEGF-A production in these cells [224]. Canine vascular smooth muscle cells (VSMCs) express VEGFR1, 2, and NRP1 at mRNA and protein levels and respond to VEGF-A in vitro [225]. These findings have been extended by showing that ASM cells also express several splice variants of VEGF-A (121,165,189,206) and constitutively secrete VEGF-A protein [226]. Certain cytokines (e.g., IL-1β, TGF-β) increase VEGF-A production by human VSMCs [227]. VEGF stimulation enhanced the production of MMPs by human VSMCs [228], and VEGF-A can induce fibronectin secretion by human ASM cells. These findings suggest that lung structural cells can contribute to angiogenesis through the local release of angiogenic factors.
The endothelium has long been known to be dysfunctional in COPD [229]. Endothelial dysfunction is associated with COPD severity and is related to FEV 1 and the percentage of emphysema on CT scans [230,231]. MicroRNAs (miRs) are small non-coding ribonucleic acids (RNAs) that regulate gene expression [232]. miR expression differs between COPD patients and healthy controls [233,234]. A recent study identified three miRs upregulated in COPD pulmonary endothelial cells: miR-181b-3p, -429, and -23c [234]. These miRs impair angiogenesis (tube formation and sprouting of endothelial cells). miR-driven changes in the pulmonary endothelium might represent a novel mechanism driving COPD through alterations in angiogenesis.
Angiogenesis and Lymphangiogenesis in Experimental Models of Chronic Airway Inflammation
The role of angiogenesis and lymphangiogenesis has been evaluated in different experimental models of COPD. A plethora of stimuli, including cigarette smoke [235], hypoxia [236], and cytokines (e.g., IL-1β and TGF-β) [227] increase VEGF-A production. Perfusion of isolated lungs under hypoxic conditions increased tissue VEGF-A and VEGFR1 and 2 mRNAs [237]. VEGF-A and VEGFR2 were similarly overexpressed in chronically hypoxic rats, suggesting that both acute and chronic hypoxia increase the lung tissue expression of VEGF-A and its receptors. The same investigators reported that the pharmacologic blockade of VEGFR boosted the expression of oxidative stress, alveolar cell apoptosis, and alveolar enlargement [238]. VEGF seems to protect ECs against apoptosis in models of rapidly growing vessels during fetal development or tumor angiogenesis [239].
Cigarette smoke is a complex mixture containing a myriad of oxidant molecules [240] and can increase oxidative stress. Cigarette smoke extract (CSE) down-regulates VEGF expression by epithelial cells, causes EC apoptosis, and shortens the VEGF-dependent survival of cultured ECs [238]. Chronic cigarette smoke exposure or administration of a VEGFR antagonist caused alveolar cell apoptosis and airspace enlargement [238,241]. VEGF-B, a selective agonist of VEGFR1, is expressed in the lung [242] and can stimulate angiogenesis in the pulmonary circulation through the interaction with VEGFR1 [243]. The role of VEGF-B in experimental COPD models and in patients with this disorder is largely unknown.
Lymphatic vessel hyperplasia plays a role in chronic airway inflammation. In mouse models of chronic respiratory tract infection with Mycoplasma pulmonis, lymphangiogenesis began slowly in airway inflammation, but after a few weeks, it overtook the remodeling and proliferation of blood vessels and persisted after the resolution of inflammation [244]. Lymphangiogenesis in inflamed airways is mediated by VEGF-C and -D, mainly derived from airway-immune cells (e.g., macrophages and mast cells), [12,20,45,58,137,144,145] through VEGFR3 signaling in lymphatic ECs. Further studies are needed to clarify the factors modulating the growth and regression of lymphatic vessels in chronic airway inflammation.
Angiogenesis and Lymphangiogenesis in COPD
Initial studies in bronchial biopsies of COPD patients did not find any significant increase in vessel number [245,246]. A subsequent immunohistochemical study on bronchial biopsies from patients with moderate COPD (GOLD 2) showed an increase in the number of vessels and the vascular area compared to controls. The increase in bronchial vascularity was associated with higher cellular expression of VEGF-A [102]. The immunohistochemical expression of VEGF-A was also greater in pulmonary arteries of smokers with normal lung function and patients with moderate COPD than in non-smokers [106]. VEGF levels in serum and induced sputum were higher in COPD patients than controls [103,104,247]. An interesting case-control study revealed that genetic polymorphisms of HIF-1α and VEGF are associated with the progression of COPD [248].
Kranenburg and colleagues [105] found that COPD was associated with increased VEGF expression in bronchial, bronchiolar, and alveolar epithelium, lung macrophages, ASM cells, and VSMCs. VEGFR1 and 2 were also higher in COPD patients. The authors found an inverse correlation between VEGF and FEV 1 and suggested that increased VEGF expression was an attempt to repair lung damage in COPD. COPD patients with acute exacerbations may have a transient increase in circulating concentrations of VEGF and C-reactive protein (CRP) and a higher neutrophil count than stable COPD and healthy controls [249].
The significance of the VEGF/VEGFR system in COPD and emphysema appears to differ. VEGF and its receptor VEGFR2 were decreased in lung extracts of emphysematous lungs [250]. Santos et al. examined surgical specimens from non-smokers, smokers with normal lung function, patients with moderate COPD, and patients with emphysema [106]. Although COPD patients showed an overexpression of VEGF, in patients with severe emphysema, the expression of VEGF-A in pulmonary arteries was low despite intense vascular remodeling. The authors suggested that VEGF-A expression varies with the severity of COPD and might be involved in pulmonary vascular remodeling at the early stages of the disease. VEGF expression in alveolar macrophages was downregulated in patients with emphysema compared to smokers without emphysema [251]. VEGF is a growth factor that maintains alveolar homeostasis, so the decrease in VEGF expression in emphysema might play a pathogenic role.
There is little information on factors besides VEGFs that could affect angiogenesis in COPD. It has been reported that the serum and BAL fluid levels of PlGF increase in COPD patients and are inversely correlated with FEV 1 [252]. Immunochemistry studies found that the expression of bFGF in the airways of COPD patients was greater than in asymptomatic smokers [253].
Collectively, these findings support the hypothesis that angiogenesis is a prominent feature of airway inflammation in COPD: increased vascularity and enhanced bronchial expression of angiogenic factors (mostly VEGF-A) are associated with COPD development. VEGF-A overexpression appears inversely correlated to the disease severity.
COPD is frequently associated with other concomitant systemic disorders [85,254]. For example, limb skeletal muscle dysfunction affects the morbi-mortality of these patients [255]. Capillary remodeling in response to exercise training is linked to angiogenesis [256]. The skeletal muscle angiogenic process (i.e., capillary creation and maturation) of COPD patients in response to exercise training is impaired compared to controls [257]. Moreover, women with COPD from biomass smoke have reduced serum levels of biomarkers of angiogenesis and tumor progression (e.g., FGF-2, HGF, sVEGFR2, sHER2/neu, sTIE-2) compared to women with COPD from smoking [258].
Therapeutic Opportunities
Given the potential role of angiogenesis and lymphangiogenesis in COPD, further studies are needed to investigate the role of angiogenic and lymphangiogenic inhibitors as a therapeutic approach for COPD treatment. In a mouse model of COPD induced by LPS injection and cigarette smoke inhalation, sunitinib, a tyrosine kinase inhibitor, has been shown to downregulate the expression of VEGF, VEGFR1, and VEGFR2, as well as the phosphorylation of VEGFR1/R2 [259]. Moreover, antiangiogenic nanotherapy has been shown to inhibit airway remodeling in a mouse model of asthma [260]. There is increasing evidence that cysLTs, major lipid mediators produced by human mast cells [69,70] and basophils [71], can promote angiogenesis [73][74][75] via the activation of CysLT2R. A CysLT2R antagonist has been shown to inhibit angiogenesis [73][74][75]. This class of compounds should be investigated to assess their effects in preclinical models of COPD. High-dose ICS therapy (2000 mcg/day of fluticasone propionate, FP) has been shown to reduce the airway vasculature in asthma patients [261]. In particular, ICS decreased the vessel number by 30%, VEGF staining by 40%, and angiogenic sprouting by 25%.
TSLP is expressed in the airways of COPD patients [107], and it is overexpressed in ASM from COPD patients compared to healthy subjects [111]. TSLP induces the release of angiogenic and lymphangiogenic factors from HLMs [20]. Tezepelumab is an anti-TSLP monoclonal antibody [87] that has been shown to improve lung function in severe asthma [262,263]. The efficacy of tezepelumab in preventing COPD exacerbations is presently under investigation (NCT04039113).
Different classes of inhibitors targeting specific angiogenic factors (e.g., VEGFs, ANGPTs) and their receptors (i.e., VEGFRs, Tie1/2) have been developed to inhibit angiogenesis and lymphangiogenesis [265,266]. These compounds, some of which have already been approved for clinical use, could be used to evaluate the role of angiogenesis/lymphangiogenesis in preclinical and clinical models of COPD.
Angiogenesis, a canonical feature of inflammation [11,12,57], is also altered in COPD [102][103][104][105][106]247,248,270]. Vascular abnormalities [271] and enhanced bronchial expression of angiogenic factors [105] have been associated with COPD development. It remains to be demonstrated whether these altered vascular responses might be involved in the pathogenesis of parenchymal and vascular remodeling in different stages (e.g., early and/or late) of the disease. VEGF and its receptors VEGFR1 and 2 may be involved in peripheral vascular and airway remodeling in an autocrine or paracrine manner. This system may also be associated with epithelial cell viability during airway wall remodeling in COPD.
The importance of lymphangiogenesis in COPD remains largely unknown. Two important lung-resident immune cells such as human macrophages [45,58,137] and mast cells [12,144,145] are major sources of two main lymphangiogenic factors (VEGF-C and -D). Intriguingly, recent evidence indicates that VEGF-C, differently from VEGF-D, can contribute to the resolution of inflammation [272]. The role of distinct (VEGF-C vs. VEGF-D) lymphangiogenic factors in different stages and phenotypes of COPD remains to be explored.
Asthma and COPD are airflow limitation diseases with similar clinical manifestations but different pathophysiologic mechanisms. The two inflammatory disorders can be successfully differentiated in the vast majority of cases [126]. Sometimes the presence of eosinophils in COPD patients might indicate co-existing ACOS [159][160][161]. Several studies have reported increased bronchial vascularity and the overexpression of angiogenic factors in asthma [273][274][275]. A marked increase in VEGF was seen in tissues and biological fluids from asthmatics, and the levels correlated with disease severity, but inversely with airway hyperresponsiveness [276][277][278]. By contrast, low VEGF levels have been noted in emphysema and VEGF blockade caused emphysema in murine models [241]. Thus, some authors have suggested that VEGF excess contributed to an asthma-like phenotype of COPD and VEGF deficiency to the development of pulmonary emphysema [279].
Several drugs and monoclonal antibodies that target the VEGF-VEGFR and the ANGPT-Tie pathways are in clinical practice and development for oncological and inflammatory applications [265,280]. We would like to speculate that further investigations should evaluate whether the correction of altered angiogenesis/lymphangiogenesis may prove beneficial in treating chronic inflammatory airway diseases. There is some evidence that cysteinyl leukotriene (CysLT) receptor antagonists can alter vascular permeability by reducing angiogenic factor expression in the airways [281]. Finally, agents that specifically inhibit various angiogenic factors (VEGFs, ANGPTs, etc.) and their receptors (VEGFRs, Tie1/2) controlling angiogenesis and lymphangiogenesis may offer novel strategies for treating microvascular changes in COPD.
Funding: This work was supported in part by grants from the CISI-Lab Project (University of Naples Federico II), TIMING Project, and Campania Bioscience (Regione Campania).
Acknowledgments:
The authors thank Gjada Criscuolo for her excellent managerial assistance in preparing this manuscript and the administrative staff (Roberto Bifulco, Anna Ferraro, and Maria Cristina Fucci), without whom it would not be possible to work as a team.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2023-03-01T16:03:16.875Z
|
2023-02-27T00:00:00.000
|
257249888
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "http://umu.diva-portal.org/smash/get/diva2:1743822/FULLTEXT01",
"pdf_hash": "8cf827c2e836a1bc72c87d830346fad03afccfda",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:530",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "0299fabe619927daab33e9d0ddc5d0f7517feb43",
"year": 2023
}
|
pes2o/s2orc
|
Contrasting impacts of warming and browning on periphyton
We tested interactive effects of warming (+2°C) and browning on periphyton accrual and pigment composition when grown on a synthetic substrate (plastic strips) in the euphotic zone of 16 experimental ponds. We found that increased colored dissolved organic matter (cDOM) and associated nutrients alone, or in combination with warming, resulted in a substantially enhanced biomass accrual of periphyton, and a comparatively smaller increase in phytoplankton. This illustrates that periphyton is capable of using nutrients associated with cDOM, and by this may affect nutrient availability for phytoplankton. However, warming weakened the positive impact of browning on periphyton accrual, possibly by thermal compensation inferred from altered pigment composition, and/or changes in community composition. Our results illustrate multiple impacts of climate change on algal growth, which could have implications for productivity and consumer resource use, especially in shallow areas in northern lakes.
Over the last decades, surface waters throughout the boreal northern hemisphere have been rapidly warming (O'Reilly et al. 2015) and receiving increasing concentrations of colored dissolved organic matter (cDOM) resulting in the browning of waters (Evans et al. 2005;Roulet and Moore 2006). The warming and browning of surface waters affect aquatic algae in contrasting ways. Although increasing temperatures may directly increase algal growth rates (Lürling et al. 2013;Hood et al. 2018), algae may employ compensation mechanisms for suboptimal warmer temperatures (Barton and Yvon-Durocher 2019;Liu et al. 2022), that on the longer term may result in a reduction of algal accrual. Increasing cDOM concentrations supplement dissolved organic carbon (DOC) and nutrients that stimulate algal growth but also cause light limitation resulting in a unimodal relationship of algal growth with cDOM in lakes (Jones and Lennon 2015;Seekell et al. 2015;Bergström and Karlsson 2019). In addition, nutrients, light, and temperature combined have interactive effects on aquatic algal growth (De Senerpont Domis et al. 2014;Endo et al. 2017;Burrows et al. 2021). For instance, algae become more efficient at utilizing nutrients at higher temperatures, resulting in a synergetic effect on algal growth (Rhee and Gotham 1981;Hamdan et al. 2021) and biomass accrual , where the net responses are further constrained by light. Free-floating algae (phytoplankton) are restricted to utilizing nutrients from the water and can position themselves at optimal light conditions, whereas periphytic (attached) algae are stationary and thus restricted to use the light reaching their location. Because of this contrasting mobility, periphyton and phytoplankton are two distinct algal communities. Periphyton is often more abundant compared to phytoplankton, especially in shallow areas of lakes or in lakes with clear water, but particularly understudied (Vadeboncoeur et al. 2002;Vander Zanden and Vadeboncoeur 2020). Hence, separate and/or interactive effects of light, temperature and nutrients caused by warming and browning on biomass accrual of periphyton are unresolved.
Periphyton includes algae growing on submerged surfaces with nutrient contents ranging from nutrient-poor rocks (epilithic algae) to nutrient-rich sediments (epipelic algae) from which it derives nutrients for its growth (Vadeboncoeur and Lodge 2000). Because of the relatively nutrient-rich sediment conditions, epipelic algae in northern lakes are primarily light-and not nutrient-limited (Vadeboncoeur et al. 2003;Ask et al. 2009;Puts et al. 2022). However, positive responses to in situ inorganic nutrient (Myrstener et al. 2018;Fork et al. 2020) and nutrients combined with cDOM additions (Vinebrooke and Leavitt 1998), as well as field studies (Hansson 1992), indicate that epipelic algae can also be nutrient limited, especially in nutrient-poor systems. Conversely, epilithic algae are constrained to utilize pelagic nutrients, and are often nutrientand/or light-limited (Vinebrooke and Leavitt 1998;Fork et al. 2020;Myrstener et al. 2020). Thus, among periphyton in general and phytoplankton the differences in growth-strategies are both nutrient-and light-related, and among epipelic and epilithic algae these differences are mainly nutrient-related. Therefore, it is unclear if ongoing warming and browning of lakes influence pelagic nutrient utilization among periphyton, which is plausible especially when considering periphyton growing on nutrient-poor substrate.
Aquatic algae have developed a variety of photosynthetic pigments in order to adapt to a wide range of light conditions, mostly to protect against harmful radiation causing oxidative stress but also to optimize light harvesting at different intensities and wavelengths. Chlorophyll α (Chl a) is often used as a biomarker for biomass in plants including algae (Wetzel and Likens 1991). Chlorophyll c (Chl c) occurs in mixed algae groups (i.e., diatoms, chrysomonads and brown algae; Jeffrey and Humphrey 1975) and is particularly rich in the carotenoid fucoxanthin (accessory pigment related to photo-protection; Young and Britton 1993). Although not perfect proxies, algal pigment compositions can be used as markers for algal communities, and especially carotenoids are successful taxonomic markers (Young and Britton 1993). Accordingly, pigment composition is often used to assess algal responses to varying light intensities both in experimental (Ehling-Schulz et al. 1997) and natural settings, for instance reflected by lake depth or turbidity (water color; Vinebrooke and Leavitt 1998;Hodgson et al. 2004). Yet, interactive effects of warming, nutrient supplementation, and changes in water color by cDOM additions on periphyton biomass and pigment composition in a natural setting remains understudied.
Here, we test interactive effects of warming (+2°C) and nutrient supplementation associated with cDOM on biomass of phytoplankton and periphyton growing on a nutrient-poor synthetic substrate in 16 experimental ponds. In addition, we test the response in pigment composition (Chl a, Chl c, and fucoxanthin) of periphyton to browning and warming. We hypothesize that: (1) periphyton and phytoplankton accrual increase with cDOM and associated nutrients (browning) alone or in combination with warming; (2) periphyton growing on nutrient-poor substrates can efficiently utilize pelagic nutrients associated with cDOM for its growth; and (3) periphytic pigment composition (Chl a, Chl c, and fucoxanthin) responds to both browning and warming.
Methodology and approach
Experimental design and monitoring
Using 20 (16 + 4 buffer ponds) experimental ponds at the Experimental Ecosystem Facility (EXEF), we tested periphyton and phytoplankton biomass accrual over a gradient of cDOM and associated nutrients combined with warming. We used dissolved organic carbon (DOC) as a proxy for cDOM, and measured associated (total and inorganic) nutrients and water color (Table 1) to assess the nutrient-supplementing and light-reducing effects of the supplemented cDOM (Jones 1992; Bergström and Karlsson 2019). The experiment was carried out between 15 June 2018 and 30 August 2018, which was an exceptionally warm summer for this area (Blunden and Arndt 2019). The EXEF ponds are naturally functioning ecosystems and contain a soft benthic habitat and naturally occurring primary producers and invertebrate consumers. The ponds were separated by impermeable dark-green polyvinylchloride (PVC) sheets; each is 11.5 m long, 6.7 m wide, and on average 1.5 m deep, and contains a 6.7-m-long natural shoreline (Fig. 1A). We established a 4 × 2 factorial design that included four increasing cDOM categories (duplicates) and warming (+2°C; Table 1; Fig. 1) and used the remaining four ponds to separate the warmed ponds from the ambient ponds. We chose a warming of 2°C because the Intergovernmental Panel on Climate Change (IPCC) stated that a global warming of 2°C is accompanied by heat extremes under which ecosystems will more often reach critical tolerance thresholds (IPCC, AR6, Ch. 1, 2021). We created the cDOM gradient by combining a continuously provided input of cDOM-rich water, collected biweekly from the naturally cDOM-rich small boreal river Hörneån located 45 km northwest of EXEF, with tap water derived from the Umeå municipality, which uses ground water as its source (control treatment; see Table 1 for the water chemistry of tap and river water). The water of each individual pond was warmed by a land-based individual heat exchanger and circulated through a filter cube (10 PPI) in each pond (see Capo et al. 2021; Hamdan et al. 2021 for specifics). The ambient ponds were subjected to the same circulation process without heat exchangers. The treatments commenced in autumn 2017, and the ponds were fish-free until 11 May 2018, when we introduced 43 adult sticklebacks (total biomass 38.5 g) into each pond as part of another study, and to make sure that top-down control (grazing) by chironomids and zooplankton on the algae was minimal (Mahdy et al. 2015; Carpenter et al. 2022).
We measured DOC, total nitrogen and phosphorus (TN and TP), dissolved inorganic nitrogen and phosphorus (DIN [NO₃⁻ + NO₂⁻] and PO₄³⁻), and pH every 3 weeks, starting 28 May (2 weeks before installing the periphyton strips). We took water samples from the surface with a 0.6-m-long water sampler and stored them dark in 1-L bottles with minimum air. We immediately processed the samples in the lab, and we measured pH directly after all other sampling in the lab. We filtered water for DOC, DIN, and PO₄³⁻ analyses through a 0.45-μm filter (Sarstedt) before storing. We acidified the DOC samples with HCl to an end concentration of 12 mM and stored them in a refrigerator before analysis. We kept DIN and PO₄³⁻ (filtered), and TN and TP (unfiltered), samples frozen until analysis. We retrieved incoming photosynthetically active radiation (PAR) from stations installed next to the ponds, and continuously monitored water temperatures in each pond in situ at 15-min intervals. Because of interference by a nest-building bisam rat causing high turbidity, we excluded one of the control ponds subjected to ambient temperature, and temperature data are missing just before the last sampling. Light attenuation coefficients (Kd) were calculated as the absolute slope of natural-logarithm-transformed PAR against depth.
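As an illustration of the light attenuation calculation just described, the short sketch below estimates Kd by regressing ln(PAR) against depth. This is not the authors' code (their analyses were run in R and SPSS); it is a minimal Python example, and the depth and PAR values are hypothetical, not measured data.

import numpy as np

# Hypothetical PAR-depth profile for one pond (example values only)
depth_m = np.array([0.1, 0.3, 0.6, 0.9, 1.2])        # sampling depths (m)
par = np.array([850.0, 610.0, 390.0, 250.0, 160.0])  # PAR (umol photons m-2 s-1)

# Kd is the absolute slope of the linear fit of ln(PAR) against depth
slope, intercept = np.polyfit(depth_m, np.log(par), 1)
kd = abs(slope)
print(f"Kd = {kd:.2f} m^-1")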
Algal biomass accrual and pigments
We measured Chl a as a proxy for both periphyton and phytoplankton algal biomass, on three and four sampling occasions, respectively (Fig. 1B; Table 1). Due to the presence of sticklebacks (fish), the abundance of pelagic zooplankton was low, indicating low grazing pressure by zooplankton on both phytoplankton and periphyton (Hamdan 2021). In addition, although present in the ponds, snails (Lymnaeidae sp.) and chironomids were rare on the strips (Koizumi et al. in rev.), and thus grazing on the periphyton strips was minimal. We grew the periphyton on vertical synthetic (polycarbonate) strips that were 60 cm long, 10 cm wide, and 0.75 mm thick. On 15 June, we deployed six strips per pond attached to small floaters to ensure the strips were always at the water surface, and we excluded the algae growing on the top 10 cm (Fig. 2C). Periphyton biomass accrual was measured over time by harvesting two strips on two sides (i.e., quadruplicate measurements per pond) into separate plastic containers using a plastic scraper, and samples were stored dark and frozen (−20°C) before further analysis. We estimated the overall biomass accrual based on the periphyton and phytoplankton measurements taken on the last date of the experiment, that is, periphyton that had grown on the strips during the whole experiment, and the Chl a from the water column during the last sampling (Fig. 1C). We freeze-dried the periphyton samples before extraction and used the whole sample for analysis. Periphytic Chl a (sensu Steinman et al. 2017), Chl c1+2 (here: Chl c; sensu Jeffrey and Humphrey 1975), and fucoxanthins (sensu Seely et al. 1972) were extracted in 90% acetone and estimated spectrophotometrically, measuring the full absorbance spectra at 1-nm band intervals. We took pelagic Chl a samples every 3 weeks from the same water used to measure the water chemistry, by filtering 100 mL onto Whatman GF/F filters; filters were extracted for 24 h in the dark in 95% ethanol before measuring with a spectrofluorometer (Perkin Elmer LS-55) with 433 nm as the excitation and 673 nm as the emission wavelength.
Data interpretation
We expressed the response of periphytic and phytoplankton biomass accrual to cDOM and associated nutrients, and warming, compared to the control treatment as log-transformed response ratios (RRx) on each sampling date (Fig. 1B). We combined the measurements from duplicate ponds and included all values to get a mean seasonal RRx (Table 1). We performed two-way ANOVAs for the accrual, RRx, and pigment composition, tested interactive effects (Table 2), and established 95% confidence intervals to assess if the responses to the cDOM and warming treatments differed. To compare periphytic and phytoplankton accrual (measured in mg cm⁻² and mg m⁻³, respectively), we calculated a pond average for phytoplankton Chl a (in mg m⁻²) by multiplying the measured Chl a values with the pond volume between 0 and 0.6 m depth and then dividing by the pond surface area. We expressed the pigment compositions Chl c and fucoxanthin in relation to the biomass of Chl a (g g⁻¹ and mg g⁻¹, respectively) and fucoxanthin to the biomass of Chl c (g g⁻¹), to remove the effect of biomass increase. We removed one erroneous measurement (one of the quadruplicate periphyton measurements, likely due to a dilution error) and performed analyses in R and SPSS (Supporting Information Data S1).
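To make these calculations concrete, the sketch below reproduces two of them in Python (the study itself used R and SPSS, so this is only an illustrative reimplementation): the log-transformed response ratio of a treatment relative to the control, and the conversion of volumetric phytoplankton Chl a (mg m⁻³) to an areal estimate (mg m⁻²) using the pond volume between 0 and 0.6 m depth and the pond surface area. All Chl a values are hypothetical, and the surface area is approximated from the nominal pond dimensions, ignoring the sloping shoreline.

import numpy as np

# Hypothetical Chl a values (example data, not the measurements)
treatment_chla = np.array([4.2, 5.1, 3.8])  # Chl a in one cDOM treatment
control_chla = np.array([1.9, 2.3, 2.1])    # Chl a in the control treatment

# Log-transformed response ratio relative to the control
rr = np.log(treatment_chla.mean() / control_chla.mean())

# Areal phytoplankton Chl a: volumetric Chl a times the volume of the
# top 0.6 m of the pond, divided by the pond surface area
surface_area_m2 = 11.5 * 6.7              # nominal pond length x width (m2)
volume_top_m3 = surface_area_m2 * 0.6     # volume between 0 and 0.6 m depth (m3)
chla_volumetric = 3.0                     # phytoplankton Chl a (mg m-3), hypothetical
chla_areal = chla_volumetric * volume_top_m3 / surface_area_m2  # mg m-2

print(f"RR = {rr:.2f}, areal phytoplankton Chl a = {chla_areal:.2f} mg m-2")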
Results and discussion
Effects of cDOM and warming on water physiochemistry
We successfully created cDOM categories that formed a DOC (4.5-11.5 mg L⁻¹) and nutrient (TN 159-476 μg L⁻¹, TP 9.2-37.9 μg L⁻¹) gradient, and a warming treatment with an average of +1.9 ± 0.3°C compared to the control (Fig. 1; Table 1). Indeed, the cDOM-rich river water had significantly (p < 0.05) higher DOC, TN, and TP concentrations compared to the tap water, with decreased light conditions (i.e., higher Kd) along the cDOM gradient (Table 1), illustrating the dual impact of cDOM on algal development, that is, by supplementing nutrients and constraining light (Jones 1992; Bergström and Karlsson 2019).
The inorganic nutrient (DIN and PO₄³⁻) concentrations were low in each cDOM category (Table 1), typical of pristine browned water bodies in northern Sweden (Bergström and Jansson 2000; Deininger et al. 2017). Concentrations of TN, TP, and DOC in the highest cDOM category were similar to those found in the river (Table 1; Jonsson and Byström unpubl. data). The DIN : TP ratios were also low (mass ratio < 3), suggesting N-limited conditions in all treatment ponds (Liess et al. 2009; Bergström 2010). TP and PO₄³⁻ concentrations increased more than DIN along the cDOM gradient (except for the ambient high-cDOM treatment; Table 1), indicating a tendency of declining DIN : TP and intensified N-limited conditions with increasing cDOM, similar to what has been reported for northern Swedish lakes with browning (Jansson et al. 2001; Isles et al. 2018). Although we did not find a (significant) effect of the temperature treatment on total or dissolved nutrients, it is possible that temperature increased microbial activity and cDOM mineralization (Hall et al. 2008; Gudasz et al. 2010) and nutrient availability for autotrophic algae (Stepanauskas 1999a; Stepanauskas et al. 1999b). This possible increase in microbial activity with warming may also have promoted competition for nutrients between periphyton and bacteria (Carr et al. 2005; Li et al. 2017).
Effects of cDOM and warming on algal accrual
Table 1. Seasonal average water physiochemistry (including the tap water and cDOM-rich river source water used to create the treatments; mean ± standard deviation) and accrual per treatment. Periphyton and phytoplankton accrual (mg Chl a m⁻²) represent values measured on the last sampling.
Periphyton and phytoplankton biomass accrual increased with cDOM in both ambient and warmed treatments, with interactive effects between temperature and cDOM for periphyton only, and with significantly higher accrual of periphyton relative to phytoplankton (Fig. 2A-C; Table 2). Our results thus confirm interactive effects of nutrients, temperature, and to some extent light on algal growth (De Senerpont Domis et al. 2014; Endo et al. 2017; Burrows et al. 2021), and possibly also on microbial growth (Carr et al. 2005; Li et al. 2017), here induced by warming and browning. Periphyton accrual (73-98% of total periphyton and phytoplankton accrual) well exceeded phytoplankton accrual, except for the warmed low-cDOM pond (40%), indicating a much higher nutrient uptake from the water by periphyton compared to phytoplankton (Fig. 2B). Since chironomid and zooplankton biomass was low in all ponds because of fish predation (Koizumi et al. in rev.), the effect of zooplankton grazing on phytoplankton and periphyton biomass accrual is negligible compared to the effect of the treatments. Nonetheless, in natural systems, the relative nutrient uptake of periphyton vs. phytoplankton depends on the relative availability of surface (periphyton habitat) vs. water volume (phytoplankton habitat) and should therefore be higher in small ponds and lakes with an increased lake surface to volume ratio. Seasonal average response ratios of phytoplankton biomass accrual increased with cDOM and associated nutrients, and there was a trend of increased accrual with warming (Fig. 2D; Table 2). These results indicate that enhanced phytoplankton accrual in response to additions of limiting nutrients by cDOM (Table 1; Seekell et al. 2014; Thrane et al. 2014) can be enhanced by higher water temperatures (Myrstener et al. 2018). In contrast, warming weakened periphyton biomass accrual, with consistently lower biomass accrual in warmed compared to ambient treatments (Fig. 2D; Table 2). A possible explanation for this difference could be that the phytoplankton community was better adapted to higher water temperatures compared to the periphyton community due to different optimum temperatures (Lürling et al. 2013; Liu et al. 2022), or altered competition between microbes and periphyton (Li et al. 2017). Yet, our results clearly illustrate that periphyton indeed utilize nutrients supplemented by cDOM when growing on nutrient-poor, hard substrates (here plastic strips) under satisfactory light conditions. Indeed, our results of higher accrual of periphyton than phytoplankton at all cDOM levels suggest that periphyton may be capable of constraining nutrient resources coupled to cDOM for phytoplankton, with slight modification by warming.
Fig. 2. (A-C) Accrual of periphyton (light green) and phytoplankton (dark green) for the ambient (top panels) and warmed (bottom panels) treatments, measured on the last sampling occasion, expressed as (A) average per m², (B) the relative accrual of periphyton and phytoplankton (%, expressed as the relative amount of periphyton), and (C) a picture of the periphyton strips exposed to the four cDOM treatments from the ambient ponds. (D,E) Seasonal average response ratios (RR) of biomass accrual per cDOM and temperature (ambient = blue, warmed = red) treatment of (D) phytoplankton and (E) periphyton. RR = log-transformed response ratio; error bars represent 95% confidence intervals, and comparative levels of significance are additionally indicated with letters.
Periphyton pigment composition
We also tested if the browning and warming treatments resulted in altered pigment composition, using Chl c and fucoxanthin relative to Chl a, and to each other, as proxy (cf. Anning et al. 2001). Chl c was abundant in all treatments and amounts increased accordingly with Chl a (see publicly available data), and Chl c : Chl a decreased along the cDOM gradient but not with warming ( Fig. 3A; Table 2). Fucoxanthin was detected at all treatments, and relative amounts (compared to Chl a and Chl c) responded to the cDOM treatments but showed a consistent and stronger increasing trend with warming, especially at low-and mid-cDOM categories (Fig. 3B,C; Table 2). Fucoxanthin : Chl c generally (insignificantly) decreased with the warmed treatment at all cDOM categories (Table 2). Such changes in pigment composition measured over a longer time may reflect changes in community composition, but they may also reflect changes related to growth mechanisms.
Our observed decrease in Chl c : Chl a with browning is consistent with other studies, but the warming-induced decrease of fucoxanthin : Chl a and fucoxanthin : Chl c is more surprising, since fucoxanthin is often related to photo-protection (Fig. 3; MacIntyre et al. 2002; Hodgson et al. 2004; Endo et al. 2017). Several enzymes involved in photosynthesis are temperature dependent and have an optimum temperature, with declining accrual at temperatures lower or higher than the optimum (Schoolfield et al. 1981; Raven and Geider 1988; Lürling et al. 2013). There is evidence that the independence of Chl c : Chl a from temperature, together with a decrease in the fucoxanthin : Chl a ratio, is characteristic of diatoms exposed to higher temperatures that may employ mechanisms to adapt to the high temperatures (Anning et al. 2001), as may be the case in our systems as well. Such "thermal compensation" mechanisms are debated and more often measured in experimental settings where optimal nutrient and light conditions are combined (Raven and Geider 1988), but are also observed in natural settings including community assemblages (Kingsolver 2009; Barton and Yvon-Durocher 2019; Liu et al. 2022). Our results therefore fit these patterns well, indicating that diatoms were likely abundant and caused the response to the additional warming on top of the extraordinarily warm temperatures during the experiment (see Blunden and Arndt 2019). In conclusion, the supplementing effect of nutrients by cDOM was dominant and caused an overall increase of periphyton accrual (Table 1; Fig. 2), which was likely impaired by thermal compensation, measured as decreased fucoxanthin : Chl a (and Chl c), especially for the mid-cDOM category (Fig. 3). In summary, our results suggest that browning and warming affect the accrual of periphyton growing on nutrient-poor substrates in contrasting ways. While in natural systems periphyton (growing on varying substrates) is often light limited due to water coloring by cDOM (Vinebrooke and Leavitt 1998), and additionally increases with temperature (Björk-Ramberg and Ånell 1985; Puts et al. 2022), here cDOM caused an increase in accrual due to nutrient supplementation, possibly because of the optimal position in the water column with respect to light and the nutrient-poor substrate. In addition, temperature had a negative effect on periphytic accrual (especially at the mid-cDOM category), likely by affecting periphytic algae communities and their growth strategies through thermal compensation mechanisms (occurring at all cDOM categories). Since most northern Swedish lakes have DOC levels up to 10.6 mg L⁻¹ (Bergström and Karlsson 2019), which is similar to our highest cDOM category (corresponding to DOC concentrations up to 11.5 mg L⁻¹), warming may, in addition to light (browning), constrain periphyton development in these ecosystems in the near future. Our results illustrate the complexity by which the impacts of browning and warming manifest. Considering the importance of periphyton for the productivity of many lakes (Finstad et al. 2014), the results in our study highlight the potency of browning and warming to change aquatic ecosystems.
Fig. 3. Seasonal averages per cDOM and temperature (ambient = blue, warmed = red) treatment of (A) Chl c relative to Chl a, (B) fucoxanthin relative to Chl a, and (C) fucoxanthin relative to Chl c. Error bars represent 95% confidence intervals, and comparative levels of significance are additionally indicated with letters.
|
v3-fos-license
|
2020-11-19T09:17:54.994Z
|
2020-11-15T00:00:00.000
|
228843859
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2071-1050/12/22/9511/pdf",
"pdf_hash": "3b752c6c1797eb42cb8e8845f0849c99a440b154",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:531",
"s2fieldsofstudy": [
"Economics",
"Business",
"Political Science"
],
"sha1": "2309913281b73d97a89594527ab7a01dd482bd4d",
"year": 2020
}
|
pes2o/s2orc
|
Alliances between For-Profit and Non-Profit Organizations as an Instrument to Implement the Economy for the Common Good
The model of the Economy for the Common Good (ECG) has cooperation as one of its main principles. This alternative economic model proposes to prioritize cooperation over competition to favor the creation of social value. From this point of view, strategic alliances between organizations can be used as an instrument that supports implementation of the ECG model. In recent years, alliances between for-profit and non-profit entities have been strengthened as a method to facilitate actions focused on social responsibility and sustainability. Moreover, the ECG model has become an adequate management framework for corporate sustainability. This work aims to connect alliances between for-profit and non-profit organizations with the ECG model. First, this connection is manifested in a theoretical way. This paper is going to analyze how such alliances can contribute to increasing the values of the ECG model: human dignity, solidarity and social justice, environmental sustainability, and transparency and codetermination. Afterwards, this work analyzes two cases of this type of alliance—Grupo Vips-Fundación Hazlo Posible and Danone Foods-Grameen Bank—to determine the benefits that this type of cooperation can provide to society. We study their motives and the benefits that they bring to the organizations and the community. Therefore, this work assesses how these types of alliances influence the different topics included in the Common Good Matrix. Moreover, we conduct a comparative analysis between both cases. This work demonstrates that, by implementing this type of strategic alliances, the creation of social value is favored, thus contributing to implementation of the ECG model.
Introduction
Cooperation between organizations is a practice whose history dates back to the beginning of economic exchanges. Nevertheless, it became more relevant when the environment started to be considered a significant variable in strategic analysis during the second half of the twentieth century. Economic dynamism and the globalization of markets restrict an organization's capacity to respond individually, which forces organizations to establish cooperation agreements to face environmental challenges. In consequence, alliances have become a critical strategic tool for business success [1]. The literature on this subject is extensive [2], and the names used to refer to this phenomenon are diverse: cooperation agreements [3,4], alliances [5,6], or coalitions [7], among others. Furthermore, alliances are analyzed from multiple perspectives and approaches [8]. Most authors focus on aspects such as their motives [9], the process of formation of the agreement [10], or their management [1].
A more recent stream of research analyzes alliances and their relationship with ethical behaviors, social responsibility, and sustainability in corporations [11,12]. These studies affirm that the success of cooperation agreements depends on the absence of opportunistic behaviors, on mutual respect, and on trust. These fundamental aspects of alliance management connect with the social values derived from social responsibility and business ethics. Therefore, alliances between for-profit and non-profit entities are being promoted in order to develop critical competencies to create social value [13][14][15][16][17].
Furthermore, there is another aspect which is currently generating a broader literature on the subject of corporate sustainability. Such an aspect is the relevance of social value creation and its relationship with economic value through what Porter and Kramer [18] refer to as shared value [19][20][21][22][23][24]. Dyllick and Muff [25] establish a matrix in which they explain the process through which a company can reach "true sustainability". The process allows for identification of a typology formed by four different stages: (1) business-as-usual (economic concerns), (2) business sustainability 1.0 (three-dimensional concerns), (3) business sustainability 2.0 (triple bottom line), and (4) business sustainability 3.0 (outside-in). These authors point out that the Economy for the Common Good (ECG) model is an organizational framework that focuses on business sustainability 3.0 [25] (p. 169).
The ECG model is an organizational framework created in 2010 in Europe (Austria) by economics professor and social activist Christian Felber [26]. According to this model, corporations can measure their contribution to the common good through the Balance Sheet for the Common Good. This tool uses a strategic matrix (the Matrix for the Common Good) to quantify the company's contribution to the creation of social and environmental value. This contribution relates four principal values (human dignity; solidarity and social justice; environmental sustainability; and transparency and codetermination) to five different stakeholders (suppliers, investors, employees, customers, and the social environment) [27]. This model replaces profit with the common good, and it gives priority to cooperation over competition. In this sense, the ECG is an organizational model that highlights cooperation between companies, both internally (intra-cooperation) and externally (inter-cooperation).
When analyzing the literature on strategic alliances, we can find a few works concerning the importance of strategic alliances between for-profit and non-profit organizations. From an empirical point of view, the literature is even scarcer. In this sense, the present work provides new literature on this subject. Furthermore, we intend to relate this type of cooperation agreement with sustainability models such as the ECG one to prove that such alliances can help implement this model. Once the work establishes the theoretical relationship between both, future investigations could quantify the contributions of these alliances to the ECG model by applying the Common Good Matrix.
The objective of this work is to relate the study of alliances with the ECG model to establish a theoretical connection between the two. To this end, this work analyzes empirically the contribution that alliances between for-profit and non-profit organizations can make to the ECG model.
Cooperation agreements between these two types of organizations contribute to the creation of social value, which is the principal purpose of the ECG model. Therefore, establishment of these types of alliances can contribute to implementation of the ECG model in companies. This work intends to establish the theoretical connection between both approaches and the empirical contributions that these types of alliances can make to the Common Good Matrix.
With this purpose, our work is structured as follows. After this introductory section, Section 2 presents the literature review. We divide this section into three subsections: (1) "2.1. Cooperation between for-profit and non-profit organizations", (2) "2.2. The Economy for the Common Good organizational model", and (3) "2.3. Cooperation in the ECG model". Following this, we describe the work method in Section 3. Afterwards, Section 4 focuses on the description of the case studies. This section is divided into two parts: (1) "4.1 Grupo Vips-Fundación Hazlo Posible" and (2) "4.2. Danone Foods-Grameen Bank". After this, Section 5 analyzes the results of the empirical study, employing a comparative analysis between both cases. To finish, Section 6 discusses the results of this work, and Section 7 contains the conclusions.
Cooperation between For-Profit and Non-Profit Organizations
Interfirm cooperation as a field of academic study within economics began to gain relevance in the 1970s. Since that time, numerous papers have highlighted the importance of cooperation for obtaining a competitive advantage [28][29][30]. The first papers on this subject regarded cooperation as a collusive practice that restrained competition [31][32][33][34][35][36]. However, in the 1980s, the approach to cooperation changed due to the application of Transaction Cost Economics [37,38]. Nowadays, cooperation is analyzed as a hybrid or intermediate form between the market and the firm. Interfirm cooperation reduces market inefficiencies (reducing transaction costs), and it facilitates access to resources and capabilities that the entities could not obtain individually [39,40].
From an organizational perspective, cooperation begins to gain interest as a field of academic study when the environment starts to be considered a critical element in the competition between organizations [41]. Even though there are previous references [42], the most relevant works emerge when cooperation is studied within the resource dependency approach. Such an approach states that cooperation between entities occurs as a consequence of mutual dependence [43,44]. This fact leads to a strategic approach to cooperation, which is considered an alternative strategy to mergers and vertical integration (internalization of activities). This approach comprises two main types of reasons why companies cooperate: (1) to access new resources and capabilities that the entities do not possess [45,46] and (2) to organize certain activities of the value chain [7,47,48].
The existing literature indicates that cooperation can be implemented through different methods-functional agreements, franchising, outsourcing, joint ventures, networks, etc.-which share common characteristics that help define the concept [28]. Such attributes are as follows: (1) partners keeping their legal independence, (2) the explicit and long-term nature of the collaboration, (3) partners sharing common and compatible goals, and (4) the organizations involved sharing their resources and capabilities.
The strategic perspective of organizations focuses the study of alliances on two different aspects [7,9,47,49]: (1) their motives and causes, and (2) their consequences (benefits and results of the alliances). Moreover, this perspective includes the study of the process of negotiating and managing the cooperation agreements [50][51][52].
With this work, we are interested in introducing a new motive that has been little analyzed by the academic field: the use of alliances to promote the implementation of socially responsible practices, ethical behaviors, and sustainability criteria [11,53,54]. Table 1 summarizes the main contributions concerning the relationship between strategic alliances and corporate social responsibility.
Table 1. Main contributions of alliances to corporate social responsibility.
Canzaniello et al. (2017) [30]: This paper analyzes the vertical alliances between companies and their suppliers in the supply chain under sustainability criteria.
Altamira (2000) [55]: This paper studies how ethical behaviors in alliances can avoid opportunistic behaviors and distrust between partners.
Michavilla (2011) [56]: This work states that the success of an alliance depends on mutual trust between partners.
Renart Cava (1999) [57]: The author concludes that the failure of an alliance depends on unethical behaviors and the loss of trust between partners.
Pérez López (1993) [58]: This paper states that transcendent motivations are the ones that avoid opportunistic behaviors, thus favoring the success of alliances.
Browning et al. (1995) [59]: The authors state that the success of alliances increases when the commitment between the people involved in the agreements goes beyond the contractual obligations. These ethical bonds create a moral community.
Table 1 shows that different authors have focused their studies on the relationship between ethics and the success of strategic alliances between organizations. Such studies have been conducted using different approaches, such as corporate governance [60], stakeholder theory [61], and the territorial perspective [62]. More recently, a large part of the literature on business cooperation has focused on the impact that social responsibility policies have on this type of agreement. These newer studies emphasize aspects such as the formation, development, and control of alliances [11,53,54].
To assess the impact that social responsibility measures have on cooperation agreements, Kliksberg [63] defined corporate social responsibility as a way of understanding the business world in which organizations are allies and drivers of environmental and social development. The author adds that capitalism and its individualistic ethics are not suitable foundations for social responsibility and that this system therefore does not meet the needs of today's society. Instead, ethics must be based on cooperation and mutual contribution [64].
Related to this aspect, Pulgar and Pelekais [11] identified the importance of strategic alliances as a tool to guarantee the effectiveness of social responsibility measures. By cooperating, partners in different or similar industries can unite their strengths in order to meet the needs and requirements of the society in which they operate. Additionally, Morales et al. [65] affirmed that social responsibility policies can be more effective if they are carried out through cooperation with entities that possess the key competencies for their implementation, that is, through strategic alliances with non-profit organizations. Consequently, these authors, together with Porter and Kramer [18], introduced the importance of alliances between for-profit and non-profit organizations for the creation of shared value.
Publications on strategic alliances between for-profit and non-profit organizations are still scarce and very recent. On this subject, we can find papers that focus on diverse topics, such as health and education [66], intersectoral alliances [67], cooperation agreements between private entities and non-profit organizations [68], intersectoral alliances that focus on corporate websites [69], alliances centered on the use of the balanced scorecard [70], cooperation agreements that focus on the implementation of marketing strategies [71], intersectoral alliances centered on reputation [72], alliances focused on strategic development [73], and alliances centered on the mediating role of altruism [74]. Table 2 summarizes the main contributions concerning the relationship between alliances between for-profit and non-profit organizations and corporate social responsibility.

Table 2. Main contributions of alliances between for-profit and non-profit organizations to corporate social responsibility.

Muthuri et al. (2009) [75] and Porter and Kramer (2011) [18]: Hybrid corporations are those capable of creating shared value at the intersection between economic and social value.
García et al. [14]: The authors state that these alliances are created as a response to the new interests of the stakeholders. The work establishes three perspectives or approaches: (1) transaction, occasional philanthropic actions that do not create value; (2) transition, continuous dialogue between the companies involved in the alliance but without considering the rest of the stakeholders; and (3) transformation, direct and active participation of all the stakeholders affected by the alliance. This work's contributions are related to the benefits obtained from alliances created to establish socially responsible practices.
With regard to cooperation between for-profit and non-profit organizations, Porter and Kramer [18] stated that, on the one hand, for-profit corporations can embrace the social and environmental aspects predominating in non-profit entities and that, on the other hand, non-profit organizations can ensure the economic viability they need to continue operating. The existing boundary between for-profit and non-profit organizations thus blurs. By cooperating, both entities compensate for each other's deficiencies, thus guaranteeing their survival. Therefore, these strategic alliances lead to the creation of shared value, which is based on the idea that organizations create economic and social value through their economic activity.
In the same line of thinking, Rodríguez et al. [16] focused on the impact that this type of cooperation has on both organizations. According to their studies, the positive impacts and benefits derived from strategic alliances between for-profit and non-profit organizations can be seen in three different factors: (1) for-profit organizations experience an increase in their number of partners as a consequence of improvement in their corporate image derived from the alliance; (2) employees from both entities undergo an increase in their motivation derived from a higher general and operative implication at work; and (3) these alliances promote the exchange of tangible and intangible resources and capabilities between both organizations. It is worth noting that this last aspect leads to a process of in-depth learning, which will remain with each one of the entities if the cooperation agreement ceases in the future. Moreover, the authors mention that this knowledge derived from the alliance will allow both organizations to compete in meeting the needs and requirements of every single stakeholder of the society in which the companies operate.
Concerning this topic, Carreras and Iglesias' [15] work shares some of the benefits explained above, but they added some new ones. On the one hand, with regards to the benefits experienced by capitalist organizations, they included (1) an improvement in their internal processes and the potential for innovation in their human resources policies, (2) the retention of employees due to an increase in their motivation, (3) higher visibility of the organization's value, (4) increase in the company's awareness of the reality of the society in which they operate, and (5) the implementation of social responsibility measures becoming easier thanks to the knowledge obtained from the non-profit entity. On the other hand, the positive impacts experienced by the non-profit entities are (1) access to economic and technical resources, (2) creation of new products and services as a consequence of the know-how acquired, (3) increase in their channels of distribution, (4) access to a broader number of partners due to the networks of the for-profit company, and (5) increase in their visibility.
Therefore, we can conclude that both types of entities obtain benefits, which explains the growing importance of this type of alliance in recent years [76]. Nevertheless, the distinctive importance of these agreements does not reside in this aspect, since alliances between for-profit organizations also benefit the entities involved. For this reason, many papers focus on the impact that these agreements have on the community where they are developed [13]. Concerning this aspect, Abenoza et al. [17] indicated that there are three positive impacts for the community derived from this type of cooperation.
In the first place, they stated that entities can solve a social problem more efficiently due to the exchange of resources and capabilities that takes place between them.
In the second place, the authors mention that organizations are able to generate social innovation as well [17]. As a consequence of the cooperation between for-profit and non-profit entities, organizations can develop new products, services, and technologies with significant social impact. That is, these alliances lead to the creation of innovative offers aimed at solving the social problems mentioned above. On the one hand, the non-profit entity detects the environmental or social challenge and brings its knowledge about this aspect, while on the other hand, the for-profit organization brings its professional experience in product or service development.
Finally, the third community benefit that Abenoza et al. [17] indicated in relation to these types of alliances is related to the local and global changes they promote. The authors referred to these cooperation agreements as strategic alliances developed to promote ethical practices. They named these practices "global action networks", which are described as coalitions created to promote not only local changes but also global ones. These agreements are based on (1) shared learning and (2) the establishment of innovative networks emerging from the technological exchange between the organizations involved. According to the authors, these agreements are created to encourage social transformation. Such social transformation is the reason why these alliances are not restricted to the local community in which the organizations operate but instead aim to have a broad, positive impact through innovation.
In the same line of thinking, Callejas and Salazar [77] indicated that the exchange of knowledge and technology between for-profit and non-profit organizations leaves a footprint on the community both globally and locally, a footprint that can be seen at the social and environmental levels. These types of alliances allow, through innovation, the development and implementation of new models of production and distribution of products and services designed for the common good. Moreover, these new models are environmentally friendly, and they need not be limited to the locality in which the organizations are established.
In conclusion, we can state that the significance of strategic alliances between for-profit and non-profit organizations resides in the impact that they have on the community. The creation of sustainable economic and social value is only possible through this model of cooperation, since these strategic alliances allow social development through economic development and vice versa [78].
The Economy for the Common Good Organizational Model
The ECG model emerged in Central Europe in 2010, during the financial crisis that began in 2008. It was created by Christian Felber, professor of economics at the University of Vienna and a social activist with Attac. Hence, the model has social origins, having been conceived as an alternative to the current global economic model. Its underlying premise is that economic growth and money cannot be considered ends or objectives in themselves; they must be instruments used to create social welfare and to improve the quality of people's lives. From this perspective, Felber proposes substituting "common good" for "for-profit" and replacing competition with cooperation [26]. The substitution of "for-profit" by "common good" does not imply that organizations will not obtain profits; it means that entities must favor the general interest through the creation of social value.
Cooperation and competition are two factors that have been analyzed together in the business literature. At first, they were considered opposing terms (when cooperation increases, competition decreases) [31][32][33]. Later, they were regarded as directly related concepts (cooperation favors competition) [4]. Nevertheless, it is important to mention that competition and cooperation are not incompatible; they can be combined in what is known as co-petition [79]. This concept arises from the application of game theory, according to which companies should follow a win-win strategy in which success in the market does not require the failure of competing companies, as there may be multiple winners [80][81][82][83][84]. Cooperation is thus used by organizations to become more competitive without the need to reduce or eliminate competition.
The ECG model was conceived as a proposal for a global economic model; nevertheless, its implementation focuses on the organizational sphere. For this purpose, the model uses a tool known as the Common Good Balance Sheet (CGBS), whose purpose is to assess the impact that an organization produces on society (social impact) and on the planet (environmental impact) [85]. The CGBS complements the balance sheet and the profit and loss statement, which measure the economic and financial structure of an entity. The CGBS consists of the Common Good Report, the Common Good Matrix, and the company's improvement plan.
The Common Good Matrix is a strategic matrix that relates the four fundamental values of the ECG model (human dignity, solidarity and social justice, environmental sustainability, and transparency and codetermination) with five main groups of stakeholders (suppliers, owners and financial service providers, employees, customers, and their social environment). Through the matrix, organizations can quantitatively measure their contribution to the common good. According to the score range obtained in the matrix, the company can be placed at the following rating levels: beginner (0-1), advanced (2-3), experienced (4-6), and exemplar (7-10). The highest possible score is a thousand points. Furthermore, this matrix can be applied to any organization, whether they are private (for-profit and non-profit corporations) or public. The matrix structure can be seen in Table 3.
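To make the scoring logic concrete, the sketch below represents a simplified 5 × 4 Common Good Matrix as nested dictionaries, maps a 0-10 score to the rating levels named above (applied here per topic, which is an assumption), and rescales the result to the 1000-point maximum. The equal weighting of topics, the example cell scores, and all function names are illustrative assumptions rather than the official ECG audit procedure.

```python
# Minimal sketch of a simplified Common Good Matrix aggregation.
# Assumptions: every topic is weighted equally and the total is a simple
# rescaling of the mean topic score; the official ECG methodology applies
# its own weighting and indicator set, which is not reproduced here.

STAKEHOLDERS = [
    "suppliers",
    "owners and financial service providers",
    "employees",
    "customers and other companies",
    "social environment",
]

VALUES = [
    "human dignity",
    "solidarity and social justice",
    "environmental sustainability",
    "transparency and codetermination",
]


def rating_level(score: float) -> str:
    """Map a 0-10 score to the rating levels described in the text."""
    if score <= 1:
        return "beginner"
    if score <= 3:
        return "advanced"
    if score <= 6:
        return "experienced"
    return "exemplar"


def common_good_score(matrix: dict, max_total: int = 1000) -> int:
    """Rescale the mean of all 20 topic scores (0-10 each) to a 0-1000 total."""
    cells = [matrix[s][v] for s in STAKEHOLDERS for v in VALUES]
    return round(sum(cells) / (len(cells) * 10) * max_total)


if __name__ == "__main__":
    # Hypothetical self-assessment: every topic scored 5 except two examples.
    matrix = {s: {v: 5 for v in VALUES} for s in STAKEHOLDERS}
    matrix["suppliers"]["transparency and codetermination"] = 8  # e.g. topic A.4
    matrix["customers and other companies"]["solidarity and social justice"] = 2

    print(common_good_score(matrix))  # 500 with these example scores
    print(rating_level(8))            # "exemplar"
```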
Cooperation in the ECG Model
Cooperation appears implicitly and transversally in every one of the 20 topics comprised within the matrix. In order to quantify these topics, the matrix uses different types of indicators. Detailed information on the matrix structure and the process through which the scores are obtained can be found on the website of the International Association for the Common Good Economy [86]. By analyzing each of its indicators, we can identify those aspects related to cooperation that appear explicitly in the matrix. Therefore, throughout the ECG Matrix, we find references to cooperation in both forms, implicit and explicit. The remainder of this section focuses on these references.
In the first place, by studying the first interest group of the matrix (suppliers), we analyze the relationship between the organization and the suppliers across the supply chain. In this case, alliances have significant relevance, since we consider these types of relationships as vertical alliances within the supply chain. This block of the matrix introduces the concept of "sustainable supply chain". A sustainable supply chain is one in which a situation of abuse of market power between the company and its suppliers does not take place. Moreover, it includes the establishment of cooperation agreements with local suppliers, thus favoring proximity sourcing and relationships.
Furthermore, this block includes the conditions of the cooperation agreement between the company and the supplier. These conditions derive from sustainability criteria and are specified through the four values placed horizontally in the matrix, which together represent sustainable supply chain management. Cooperation is reflected explicitly in the fourth value, "A.4. Transparency and codetermination in the supply chain". In this case, the ECG model suggests establishing cooperative relations with suppliers based on transparency and participation.
In the second place, by studying the second interest group of the matrix (owners and financial service providers), the company's relationship with its financial providers (primarily credit institutions) is analyzed. In this block of the matrix, horizontal alliances between the organizations and the financial entities take place. These alliances-when we apply the sustainability criteria-can be understood as coalitions with ethical and social banks. This aspect implies the establishment of cooperation agreements with financial providers, which are based on socially responsible investments. In this case, the company can participate in solidarity-based financing in order to implement projects with a positive social and environmental impact. The implementation of these projects will be conducted employing subordinated loans, microcredit, and crowdfunding or by providing financial aid directly. It is important to mention that this matrix includes a negative indicator (it implies subtracting points from the matrix): the possibility that the organization participates in a hostile takeover bid.
In the third place, by studying the third interest group of the matrix (employees), the relationships between people by using tools based on participation and co-decision are analyzed. In this case, cooperation is studied from an internal perspective known as intra-cooperation. This one uses different types of methods, such as cooperative work or collaborative decision-making. This block of the matrix shows that an organizational culture based on respect, trust, and appreciation fosters cooperation between people, thus including an open, transparent, and bottom-up communication system. Moreover, the matrix includes indicators related to diversity-age, gender, ethnicity, and religion-and equal opportunities. Nevertheless, the aspect that focuses the most on cooperation is the one related to transparency and codetermination or internal democratic participation. In this sense, the first aspect valued is transparency or access to the company's relevant information; by accessing information, people can build their own opinion and can participate actively. A second important aspect to consider is the legitimacy of the organization's management; employees are the ones who must legitimate directors through dialogue, contribution, and participation. There is also a third aspect related to participation of employees in the decision-making process of the company; workers can give ideas, and they can take action in the entity's decision-making process. To this end, ECG theory suggests the following concrete actions: making bottom-up decisions, sharing responsibility, covering all levels of the organization, and applying direct democracy in relevant decisions.
In the fourth place, by studying the fourth interest group of the matrix (customers and other companies), the relationship of the company with its clients (favoring their participation in those decisions that directly affect them) and with other entities (enhancing inter-cooperation) is analyzed.
With respect to the customers, ECG theory proposes to maintain a transparent, honest, and horizontal relationship with them. Furthermore, the model suggests facilitating their access to the purchasing process by actively participating in decisions that affect them. According to ECG theory, such access could be promoted by creating customer groups, working committees, etc. Clients can participate in the improvement or development of existing products or services as well as in facilitating their diffusion among customers.
With regard to cooperation with other entities, collaboration should be based on an attitude of appreciation and on equal and horizontal treatment. This section of the matrix is based on three aspects: (1) considering organizations in the same sector as complementary in the market, (2) working with other companies to offer joint solutions to customers, and (3) offering disinterested support to other entities in a situation of need. Together, these three aspects correspond to what the theoretical framework referred to as co-petition. Some of the matrix indicators related to co-petition are (1) the percentage of time and resources spent on collaborative product and service development, (2) the percentage of time aimed at enhancing cooperation with other organizations from the same or different market segments and sectors, and (3) the number of civil society projects shared with other entities. Finally, solidarity towards other organizations is implemented by promoting mutual disinterested assistance through the exchange of employees, assignments, financial resources, and technology. In this block of the matrix, hostile behavior towards other organizations, abuse of a dominant position in the market, and the pursuit of win-lose strategies are assessed negatively.
In the fifth and last place, by studying the fifth interest group of the matrix (the social environment), the relationship of the company with society is analyzed. This block focuses on the relationship between the organization and the region where it develops its economic activity. Here, we can include alliances between for-profit organizations and NGOs or other non-profit entities, as well as cooperation agreements with other interest groups such as neighborhood and environmental associations. The theoretical aspects previously analyzed in this work with regard to alliances between for-profit and non-profit organizations can be applied to this block of the Common Good Matrix.
Moreover, this section of the model comprises different ways of collaborating with the community by means of voluntary contributions. Such contributions can be monetary or in kind, and they can also take the form of using the company's network to support civil society projects. Besides this, cooperation with community interest groups, such as local institutions and NGOs, represents another way to collaborate.
These contributions and impacts are based on transparency; they are possible because the organization provides external interest groups with truthful and complete information regarding its actions and their direct effects on the social environment. In connection with this, direct and active participation by the community's interest groups should be encouraged by consulting them and involving them in the company's decision-making process, especially in those aspects that directly affect the social environment. Information manipulation and lack of transparency are assessed negatively in the matrix.
By studying the indicators proposed and reflected in the Common Good Matrix, we can quantify the role that cooperation plays in the implementation of the ECG model.
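As a minimal illustration of how such cooperation-related indicators might be tallied in practice, the sketch below computes the three co-petition indicators mentioned in the customer block from a set of invented activity records; the record format and figures are assumptions for illustration only, not the data collection prescribed by the ECG audit.

```python
# Illustrative computation of the three co-petition indicators listed above.
# The activity records and their fields are invented; the ECG audit defines
# its own data collection and scoring rules, which are not reproduced here.

activities = [
    {"hours": 120, "collab_dev": True,  "coop_other_org": False},
    {"hours": 300, "collab_dev": False, "coop_other_org": False},
    {"hours": 80,  "collab_dev": False, "coop_other_org": True},
    {"hours": 100, "collab_dev": True,  "coop_other_org": True},
]
shared_civil_society_projects = 3  # simple count for the third indicator

total_hours = sum(a["hours"] for a in activities)
pct_collab_dev = 100 * sum(a["hours"] for a in activities if a["collab_dev"]) / total_hours
pct_coop_other = 100 * sum(a["hours"] for a in activities if a["coop_other_org"]) / total_hours

print(f"Time on collaborative product/service development: {pct_collab_dev:.1f}%")
print(f"Time on cooperation with other organizations:      {pct_coop_other:.1f}%")
print(f"Civil society projects shared with other entities: {shared_civil_society_projects}")
```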
Method
The present study adopts a qualitative methodological approach, specifically the case study method. According to Yin [87], this methodology is the most appropriate one here due to the critical and peculiar character of the cases and the unique object of study that they comprise. This approach enables investigating and drawing conclusions in a way that would not be possible otherwise. Moreover, multiple case studies (research in which several cases are analyzed at the same time) enable the investigation and comparison of examples and facts so that, finally, reality can be described [87]. In the same line of thinking, authors such as Rule and John [88] state that it is possible to test and affirm theories with the case study method. Their study indicates that qualitative methodologies can test and verify theories through the study of cases, employing deductions that move from the general to the specific. This generalization is described by Yin [89] as "analytic generalization", a process that takes place when a theory can be used as the basis against which to compare the empirical results obtained from the case study.
Since the final objective of this work is to test the theories presented in the theoretical framework, an empirical study was conducted consisting of the analysis and comparison of two different cases. These cases were selected for two main reasons: (1) the visibility of the participating organizations and (2) the amount of information published regarding the cooperation agreements between them.
The two selected cases were studied on the basis of publicly available information. We chose the cases based on the impact that they have on the community and the ease of obtaining sufficient and adequate information. This information was obtained from two types of sources: (1) the institutional websites of the entities that are part of the alliances and (2) papers published in academic journals and other public documents.
In order to meet the work's purpose, the study is divided into three sections: (1) an individual study of the strategic alliance between Grupo Vips and Fundación Hazlo Posible, (2) an individual study of the strategic alliance between Danone Foods and Grameen Bank, and (3) a comparative analysis of both cases with the aim of discerning if there is a common pattern concerning the implications of these cooperation agreements for the community and their relationship with the ECG.
Grupo VIPS-Fundación Hazlo Posible Case Study
On the one hand, Grupo Vips is a family business founded in Madrid in 1969, when its founder, Plácido Arango, opened the first establishment under the name Vips. Although the organization initially had only one chain of restaurants, nowadays the group comprises six large chains and ten restaurants, with more than 350 establishments and a workforce of more than 10,000 employees [90].
On the other hand, Fundación Hazlo Posible is a private non-profit organization founded by José Martín and Catalina Parra in 1999 to stimulate the involvement of society in solidarity and social causes by using information and communication technologies. With this objective, they created an online portal web under the name Canalsolidario.org [90]. This portal web was intended to work as a bridge between the activities of the NGO and the society. Today, Canalsolidario.org is a digital newspaper specialized in social information, while the foundation is focused on three action lines: awareness-raising and communication, volunteering, and knowledge management. The entity has a workforce of 17 employees and a census of 500 volunteers who support the organization guided by the values of commitment, diversity, energy, and creativity [90].
Cooperation between both organizations began when Maite Arango (vice-president of the Vips Group Council) and Elena Acín (director of Fundación Hazlo Posible) met at the first Internet Fair in Madrid in 1999. Acín attended the fair to present the foundation's new portal web Canalsolidario.org with the aim of accessing private funding, while Arango was looking for a social project to which Grupo Vips could commit [90]. After several meetings, both entities decided to start collaborating on the portal web Canalsolidario.org in 2000. When cooperation started, the portal was renamed Hacesfalta.org after Grupo Vips made an initial investment of €106,000. The decision to start cooperating was taken quickly since both organizations shared the same goal: to increase the visibility of a new type of volunteering in Spain, namely virtual volunteering. With this purpose, both entities unified their knowledge: Grupo Vips contributed its customer service experience and Fundación Hazlo Posible contributed its consulting skills. After this first contact, during the following years, in order to avoid complications related to compliance with the agreements, both entities set up formal governance mechanisms such as the signing of annual agreements and the submission of monthly reports to monitor the development of the project [90].
In 2004, Fundación Hazlo Posible launched a new corporate volunteering program as an extension of Hacesfalta.org in which different organizations were able to participate. To manage this program, Grupo Vips created a new corporate social responsibility department, so that the relationship between the organizations came to combine formal and informal mechanisms. Nevertheless, it is important to mention that, in addition to these formal administrative procedures, two main aspects explain the long-term nature of this alliance: (1) a strong level of engagement and interaction between the entities and (2) trust, which has been an essential governance mechanism for the development of this cooperation agreement.
One year later, in 2005, both entities developed a project known as Brazos Abiertos (open arms). This new project, which was part of the corporate volunteering program, was aimed at incorporating new employees into their organizations and at integrating immigrants arriving in Spain into Spanish society. Due to the growth of the different projects carried out, organizational changes became necessary. These adjustments led to a renewal of the management positions in both organizations and to the incorporation of several new skilled employees into the workforce [90].
Over the next three years, as a consequence of the financial crisis of 2008 and the rapid growth of the alliance, Fundación Hazlo Posible detected the need to look for new sources of financing. For this purpose, in 2010, both organizations signed a three-year contract consisting of two aspects: (1) Grupo Vips would reduce the amount of money it invested in the non-profit organization, and (2) Fundación Hazlo Posible would look for other sponsorships and advertising contracts. Thus, in 2010, the strategic alliance was strengthened by opening the non-profit organization to new sources of funding [90].
Nowadays, both entities work together on the Hacesfalta.org project and on a new one incorporated in 2019 as part of the corporate volunteering program, named Red Talento Que Impacta [91]. This new project promotes strategic relationships between for-profit and non-profit organizations to foster the transfer of knowledge and learning between both types of entities. Therefore, Red Talento Que Impacta enhances and strengthens the creation of social value in the community [92].
Concerning the impact of this alliance on the community where the organizations are established, we can find different benefits. First of all, since the creation of the first volunteering portal web, both entities have contributed to enhancing the volunteering and social responsibility culture in Spain. Since the creation of this cooperation agreement, hundreds of non-profit and for-profit organizations have actively participated in several solidarity initiatives, thus fostering the solidarity and corporate responsibility values [90]. Furthermore, this new organizational culture fosters communication between the different departments of Grupo Vips, thus leading to an increase in the motivation of employees, alongside a more robust leadership of its professionals [90].
In the second place, the strategic alliance has also contributed to technological development. As a result of the cooperation agreement, the portal web Canalsolidario.org and subsequent projects were created and intensively developed, initiatives that had a strong social repercussion. Therefore, we can state that, through this type of cooperation agreement, entities can develop new, innovative products and services aimed at solving social problems [17]. Moreover, this technological development also had a positive impact on the organizations involved in the cooperation agreement [90]. This pioneering technological development has contributed to strengthening the reputation and corporate image of Grupo Vips. As stated by Rodríguez et al. [16], an improvement in corporate image derived from the alliance enables the recruitment of new partners, alongside an increase in the reputation of the company. Therefore, we can conclude that both entities have benefited to a great extent since the cooperation agreement was established.
On the one hand, as a result of an increase in its visibility and a significant rise in its economic resources, Fundación Hazlo Posible has experienced a considerable growth of its partners: nowadays, the non-profit entity has 53 allies, and it is part of ten social networks [93]. On the other hand, Grupo Vips has undergone a clear improvement in its corporate image. Currently, the group is considered one of the most well-known businesses within its sector as a consequence of its social activity [94]. Since the establishment of the alliance, the for-profit organization has been awarded several distinctions provided by institutions such as MERCO (Spanish Corporate Reputation Monitor) or the organization Fundación Empresa [94]. Moreover, Grupo Vips is invited to academic forums to talk about the key to its success. Besides, the entity is cited in different publications and academic journals related to this subject. These recognitions have fostered the company's growth: before the alliance, the group had a workforce of 2800 employees, while nowadays, it has 9300 workers distributed among the 400 establishments and six chains that the business currently has [94].
Therefore, we can observe that, through cooperation between for-profit and non-profit organizations, companies bring benefits to society as well as to the cooperating businesses themselves. Thanks to the collaboration, Fundación Hazlo Posible and Grupo Vips have built a culture based on diversity, innovation, teamwork, and efficiency, a culture that benefits every single interest group of the agreement [90]. Through the strategic alliance, both organizations share resources and capabilities, thus giving the for-profit organization access to those it did not possess before the alliance. Such resources enable the transfer of knowledge that makes the development of the portal web possible, which brings benefits for both entities and for the community.
Through their repeated joint activities based on trust, both organizations created a unique synergy that brought them a sustainable competitive advantage, which allowed the alliance activity and its benefits to last over time. Porter and Kramer [18] stated that this sustainable competitive advantage is based on shared value creation: through cooperation between for-profit and non-profit organizations, entities create social value that benefits the economic activity of the partners involved and extends the positive impact they have on society. Therefore, we can observe that cooperation between Grupo Vips and Fundación Hazlo Posible assists in the creation of social value and the common good, thus fostering implementation of the ECG model. Table 4 summarizes the main contributions of the case study to implementation of the ECG model from a cooperation perspective.
Danone Foods-Grameen Bank Case Study
On one side, Danone Foods is a food business founded by Isaac Carasso in Barcelona in 1919 to improve people's health through progress in the process of fermentation. Although the group's first factory was established in Barcelona, the company later expanded to France, where it inaugurated its first yogurt production plant in 1929. Nowadays, the corporation has four different business lines: dairy products, water, infant nutrition, and medical nutrition. Moreover, the group operates in more than 130 countries around the world [95].
On the other hand, Grameen Bank is an entity created by Muhammad Yunus in Bangladesh in 1983 to offer microcredit to the most impoverished population of the region without the need for guarantees. Yunus' idea was to provide funding to the poorest people on terms and conditions they could afford, enabling them to carry out small activities that could contribute to the development of the community. Currently, Grameen Bank has 2568 branch offices with a workforce of 21,751 employees. Moreover, the organization has a base of nine million borrowers in more than 80,000 villages. Besides, the methods employed by Grameen are applied to projects in more than 58 countries, such as the United States, Canada, France, Norway, and the Netherlands, among others [96].
The strategic alliance between the two organizations dates back to 2005 when, while visiting Paris, Yunus received an invitation from Franck Riboud (CEO of Danone Group) to meet. During the meeting, the founder of Grameen suggested to Riboud the creation of a cooperation agreement between the two entities to jointly address child malnutrition in Bangladesh, particularly in rural areas of the country. To this end, in 2006, the joint venture Grameen Danone Foods was created as a social enterprise [97]. This business model focuses on the resolution of social and environmental issues: the profits obtained from the social enterprise's activities are reinvested in the organization, creating a feedback loop that generates growth while contributing to development. Even though the social enterprise is built around the improvement of a social problem, the entity must still be cost-effective; it must generate profit that can be reinvested into the organization to sustain a continuous process of progress [98].
Hence, the social joint venture Grameen Danone Foods started to produce a new yogurt under the name Shoktidoi ("strong yogurt"). The enterprise fortified this product with essential nutrients, as it is specifically designed to address the nutritional deficiencies of children in Bangladesh. The product was sold at a price of eight cents, making it affordable for the most impoverished population, since it is intended for children in need [99].
In 2006, they established the first production plant in the city of Bogra, a factory 50 times smaller than the standard plants of Danone Group, which supplied a rural population living within roughly 30 km of the Shoktidoi factory [100]. Since the establishment of this plant, the joint venture has invested around 700 thousand dollars in the region. Besides, more than seven million people have benefited from access to health, hygiene, nutrition, and employment [101].
With regards to the impact that the strategic alliance Grameen-Danone Foods has brought to different groups of the community in which the enterprise is located, it is worth noting that this cooperation agreement shows benefits in the environmental, economic, and social spheres.
Firstly, to reduce the environmental footprint of the social enterprise, the plant located in the city of Bogra uses solar energy to heat water and clean the facilities [97]. This shows that, by using this renewable energy source, the social entity contributes to putting the ECG model into practice. In this case, by reducing the environmental effects of its activity, the enterprise focuses on the aspects of ecological solidarity and sustainability [27].
Secondly, concerning the economic impact of the strategic alliance in Bangladesh, it is important to mention that the social enterprise has become a significant source of income creation for the Bogra region and its surrounding areas. On the one hand, 475 farmers located in the area around the plant provide the milk used for the production of the yogurts. Moreover, these yogurts are manufactured by 30 local employees, and they are distributed by 250 micro-entrepreneurial women from the area who sell the products door-to-door. In total, Grameen Danone Foods employs 1600 employees from the area in which the plant is located [97].
Hence, we can observe that different agents of the community are involved in the manufacturing and distribution processes of the nutritious yogurts. Consequently, the joint venture improves the nutrition of children at the same time that it creates a business fabric that offers opportunities for employment and, therefore, for progress to different sectors of the community.
As in the environmental sphere, through its activity, Grameen Danone Foods also contributes to implementing the ECG model in the economic sphere. In this case, the joint venture focuses on the values of social justice and human dignity, which are reflected in the extensive contribution to the community and the social impact that the social enterprise represents [27].
Thirdly, concerning the social impact of the cooperation agreement in the community, we can find the following benefits. On the one hand, the plant situated in Bogra produces 100,000 units of yogurt daily, which means that 100,000 children from the area receive nourishment that covers 30% of their daily needs for vitamin A, iron, iodine, and zinc [99]. Thus, the strategic alliance achieves one of their main objectives: the improvement of nutritional conditions of children in Bangladesh, especially those in rural areas.
On the other hand, the coalition has helped to eradicate the exclusion of the poorest groups of the community by creating a large number of local jobs, as was mentioned above. This job creation has favored access of people from rural areas to goods and services they could not afford previously, such as access to finance, drinking water systems, or electricity. These aspects have led to a significant reduction in inequality of the rural population with respect to more developed areas. Therefore, the joint venture also contributes to the ECG model with the values of justice, human dignity, and solidarity [85], thus actively promoting development of the community.
Therefore, thanks to the strategic alliance, Grameen Bank and Danone Foods created shared value, based on the idea that, through their economic activity (in this case, the manufacture of yogurts), the organizations involved in the agreement create both economic and social value [18]. Furthermore, with the creation of a social enterprise, both firms diluted the historical definition of each type of entity. They created a hybrid organization in which the for-profit company covers the social and environmental aspects predominating in the non-profit organization, which, in turn, ensures the economic viability necessary to continue operating [18].
Finally, it is worth noting that both organizations have also benefited from the strategic alliance. On the one hand, since the social enterprise was established, Danone has experienced an increase in its number of partners. Moreover, nowadays, the for-profit entity can implement social responsibility policies quickly as a consequence of the transfer of knowledge derived from the alliance, thus strengthening its corporate image [95]. The creation of the social enterprise Grameen Danone Foods became the foundational project of Danone Communities, a social business network aimed at assisting local entrepreneurs with funding and training [99]. Currently, Danone Communities consists of 11 social businesses that include several for-profit organizations [99]. This network has significantly improved the group's brand positioning. According to the RepTrak index (an indicator of corporate reputation issued by the Reputation Institute) [102], during the period 2010-2014, Danone Group was positioned first in the ranking, and the corporation was recognized in 2015 as the company with the best reputation in the food and beverage sector [100].
On the other hand, since the establishment of the strategic alliance in 2006, Grameen Bank has founded seven social enterprises [103], thus reaching a vast number of people in need.
Therefore, we can conclude that, through this cooperation model in which organizations shared their different resources and capabilities, the social enterprises derived from the alliances can solve a social issue more efficiently than operating separately [17]. The social enterprise Grameen Danone Foods, by producing and locally distributing affordable yogurts to the population, enabled an improvement in the nourishment of the children. Moreover, it generated an ecosystem in which different sectors of the community participated. This factor increases the economic capacity of the rural people from the area so that it generates social and economic development there. Thus, we can observe that the joint venture Grameen Danone Foods contributes to the implementation of the ECG model through its cooperative activity in which capital is not the goal but an instrument for social transformation [26]. Table 5 summarizes the main contributions of the case study to implementation of the ECG model from a cooperation perspective.
Results
After analyzing the cooperation agreements, we can state that there is a common pattern between both cases concerning the benefits provided to the community. The strategic alliances studied solve a social issue by sharing their different resources and capabilities.
On the one hand, Grupo Vips (by providing funding and contacts) and Fundación Hazlo Posible (by contributing its social responsibility knowledge) expand a portal web aimed at involving organizations and different sectors of society in volunteering. Hence, we can observe that Grupo Vips-Fundación Hazlo Posible solves a community problem through social innovation [17]. In this case, the organizations involved contribute to technological development through their portal web, a technology with a significant impact and social benefit.
On the other hand, Danone Foods (by contributing with its food processing knowledge) and Grameen Bank (by providing its social enterprise and Bangladesh market knowledge) improve the nourishment of the children in the area. Moreover, the joint venture assists in the economic progress of Bogra by creating employment.
Furthermore, the entities involved in the cooperation agreements experience benefits for themselves. Grupo Vips, Fundación Hazlo Posible, Grameen Bank, and Danone Foods have benefited from a significant increase in their number of partners. Moreover, their networks have been expanded as a consequence of their greater visibility in the community. Likewise, since the establishment of the alliance, Grupo Vips and Danone Foods are considered some of the most reputable companies within their sectors [94,95].
Finally, it is worth noting that these cooperation agreements lead to the "Creating Shared Value" approach referred to by Porter and Kramer [18]. Both strategic alliances (Grupo Vips-Fundación Hazlo Posible through social innovation and Grameen Bank-Danone Foods through economic progress) generate social benefits which allow the creation of economic benefits. As a result of their work together, the organizations involved in the strategic alliances can redefine the for-profit and non-profit entity concepts. Thus, the cooperating enterprises become a hybrid organization that ensures its economic viability and survival by advancing business competitiveness while fostering its economic and social environment [18]. Table 6 summarizes the similarities between the two case studies analyzed. As we can observe, both cases have a positive impact on the social environment. Such impacts can be seen in three different aspects: (1) the interaction of the organizations with the social agents, (2) the reduction of their environmental footprint, and (3) their support to the community. Furthermore, by promoting decent working conditions, socio-occupational integration, and the culture of participation, the entities have a positive impact on the employees as well. Finally, both agreements favor the funding of social projects, thus contributing to the values of solidarity and social justice. Therefore, we can observe that both cooperation agreements contribute to implementation of the ECG model. On the one hand, both cases place cooperation ahead of competition (a basic principle of the ECG model) [27]. On the other hand, the strategic alliances operate according to the values considered as contributing to the common good: human dignity, solidarity, sustainability, social justice, and participation [26]. Grupo Vips-Fundación Hazlo Posible and Grameen Bank-Danone Foods generate social value and shared value, thus contributing to implementation of the ECG model.
Discussion
From the present study, we can conclude that cooperation is a crucial instrument when implementing sustainability-based management models. Such models require command of all three dimensions of sustainability (economic, social, and environmental), a combination of capabilities that not all companies possess.
To this end, strategic alliances enable organizations to access the resources and capabilities needed to develop the three dimensions of sustainability that they cannot attain individually. From this perspective, alliances between private for-profit organizations and non-profit entities become a particularly suitable tool for the implementation of corporate sustainability. Thus, for-profit organizations possess the resources needed to ensure economic sustainability while NGOs provide adequate resources and capabilities for social and ecological sustainability. Hence, as stated by Porter and Kramer [18] within their "Creating Shared Value" approach, both types of entities can create shared value in which economic, social, and environmental value reinforce each other. Furthermore, from the authors' approach, we can conclude that those entities with a higher probability of survival are the ones referred to as hybrid organizations since such organizations are capable of creating economic and social value cooperatively.
As noted in the theoretical framework, papers related to this approach to cooperation have begun to appear in recent years. Although publications studying this aspect are still few, some already analyze this perspective with interesting results and conclusions. In this regard, this work is a clear contribution to this line of investigation. On the one hand, it analyzes strategic alliances between for-profit and non-profit organizations, thus reinforcing the literature in this field. On the other hand, it connects alliances with corporate sustainability models, thus providing a clear innovation in this field of study. In particular, these contributions are driven by studying the impact that cooperation has when implementing the Economy for the Common Good model [27,85,104].
From the empirical study conducted throughout this work, we can state that there is a direct and positive relationship between cooperation and the implementation of the Economy for the Common Good model. By analyzing the case studies, we can affirm that strategic alliances favor the creation of social and environmental value for the different stakeholders of the organization. By using the Common Good Matrix, we have identified the actions of the alliance that create social value. We observe that the positive impacts derived from the cooperation agreements concentrate on some stakeholders such as the social environment, the employees, and the funders. In both case studies, organizations create social value by means of financing social projects, generating a participative corporate culture, and supporting the community. Furthermore, the matrix includes aspects such as the eradication of poverty and the socio-occupational integration of some groups, which are essential priorities of these types of alliances.
With these elements, several values of the ECG model are promoted. Such values are human dignity, solidarity and social justice, environmental sustainability, and transparency and codetermination. Likewise, one of the cooperation agreements studied also creates social value for suppliers, customers, and other companies. Moreover, it is important to highlight how alliances foster local suppliers and social entrepreneurship in communities where the cooperation agreements are developed.
It is worth noting that the literature on strategic alliances between for-profit and non-profit organizations has not yet been developed in depth. On the one hand, according to the classification carried out by García et al. [14] regarding the three different approaches that this type of alliance can adopt, we can conclude that most of these cooperation agreements are still in the first stage. Thus, the majority of these alliances follow the transaction approach, which is based on occasional philanthropic actions. Therefore, such alliances create neither shared value nor common good value.
On the other hand, the most advanced alliances are based on the transition approach, in which continuous dialogue between the for-profit and the non-profit organization takes place. Nevertheless, this perspective does not consider the rest of the stakeholders, thus creating shared value but not contributing to the creation of value for the common good. To reach the common good, alliances must adopt the transformation approach mentioned by García et al. [14], which is based on direct and active participation of all the stakeholders affected by the alliance. From this point of view, the ECG model can help to strengthen this type of cooperation agreement. Consequently, not only can the ECG model benefit from such alliances but, at the same time, these alliances can be enhanced through the ECG model. Therefore, there is a direct and reciprocal relationship between the two.
Conclusions
The first purpose of this work was to establish a theoretical relationship between strategic alliances between for-profit and non-profit organizations and the ECG model. In this sense, the present work contributes to the literature on both topics. On the one hand, this work provides a new perspective on the study of this type of alliance. Firstly, we analyze the motives that lead organizations to establish this type of cooperation agreement, connecting them with ethics and corporate social responsibility. Secondly, we analyze the impacts and benefits that these alliances create for the cooperating entities and for the community in which they operate.
On the other hand, this work provides a new perspective on the study of the ECG model, since it analyzes cooperation as one of the primary keys of the model. Although the model proposes interfirm cooperation as a determining factor for business success, there is no literature on this subject. Therefore, this work is the first theoretical study to approach the ECG model from this perspective.
The second objective of this work was to establish a practical application of the theoretical relationship between strategic alliances between for-profit and non-profit organizations and the ECG model. To this end, an empirical study based on the analysis of two case studies has been conducted. In this sense, our contribution is innovative and original, as there are no published empirical studies on this subject.
Finally, although the present work provides multiple case studies, it may be of interest to develop future research on alliances between for-profit and non-profit organizations so that the taxonomy described can be expanded. Besides, it might be relevant to carry out an empirical study based on a quantitative analysis in which the relationships between the different types of alliances and their social and environmental impacts can be analyzed.
A new paper on this topic could consist of applying the Common Good Matrix to a specific alliance between for-profit and non-profit organizations. The matrix indicators would then be used to quantitatively measure the contributions of these alliances to the common good.
|
v3-fos-license
|
2023-01-20T14:11:52.283Z
|
2015-07-01T00:00:00.000
|
256013633
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://genomemedicine.biomedcentral.com/track/pdf/10.1186/s13073-015-0187-6",
"pdf_hash": "142055bb01b884aebeee5841e74bf1d5503e5195",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:532",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "142055bb01b884aebeee5841e74bf1d5503e5195",
"year": 2015
}
|
pes2o/s2orc
|
Transferring genomics to the clinic: distinguishing Burkitt and diffuse large B cell lymphomas
Classifiers based on molecular criteria such as gene expression signatures have been developed to distinguish Burkitt lymphoma and diffuse large B cell lymphoma, which help to explore the intermediate cases where traditional diagnosis is difficult. Transfer of these research classifiers into a clinical setting is challenging because there are competing classifiers in the literature based on different methodology and gene sets with no clear best choice; classifiers based on one expression measurement platform may not transfer effectively to another; and, classifiers developed using fresh frozen samples may not work effectively with the commonly used and more convenient formalin fixed paraffin-embedded samples used in routine diagnosis. Here we thoroughly compared two published high profile classifiers developed on data from different Affymetrix array platforms and fresh-frozen tissue, examining their transferability and concordance. Based on this analysis, a new Burkitt and diffuse large B cell lymphoma classifier (BDC) was developed and employed on Illumina DASL data from our own paraffin-embedded samples, allowing comparison with the diagnosis made in a central haematopathology laboratory and evaluation of clinical relevance. We show that both previous classifiers can be recapitulated using very much smaller gene sets than originally employed, and that the classification result is closely dependent on the Burkitt lymphoma criteria applied in the training set. The BDC classification on our data exhibits high agreement (~95 %) with the original diagnosis. A simple outcome comparison in the patients presenting intermediate features on conventional criteria suggests that the cases classified as Burkitt lymphoma by BDC have worse response to standard diffuse large B cell lymphoma treatment than those classified as diffuse large B cell lymphoma. In this study, we comprehensively investigate two previous Burkitt lymphoma molecular classifiers, and implement a new gene expression classifier, BDC, that works effectively on paraffin-embedded samples and provides useful information for treatment decisions. The classifier is available as a free software package under the GNU public licence within the R statistical software environment through the link http://www.bioinformatics.leeds.ac.uk/labpages/softwares/ or on github https://github.com/Sharlene/BDC.
Background
Gene expression patterns represent an attractive molecular phenotype for the classification of cancer [1][2][3][4]: they represent the functional state of the cancer cell that results from the perturbation of cellular processes such as signal transduction and genetic regulation, and whose underlying cause may be mutations or other changes in the cancer cell genome [4]. DNA microarrays have made gene expression measurements at the whole genome scale affordable for routine clinical diagnostics, and this has led to the development of gene expression signatures that may inform prognosis or treatment [5][6][7][8]. Blood cell cancers, leukaemia and lymphoma, are particularly attractive targets for gene expression signatures since they result from cells undergoing a complex pathway of differentiation, where cellular identity is largely defined by the pattern of gene expression, and where errors in differentiation or maturation are reproducibly manifest in cancers as aberrant patterns of gene expression [9]. Despite this, transfer of gene expression signatures into clinical practice has not proved straightforward [10,11]. Different measurement technologies have emerged (e.g. microarrays, RT-PCR and RNA-seq) but, until recently, these have not been applicable to routine samples that are mainly formalin fixed and paraffin embedded (FFPE) in most centres. Furthermore, reproducibility between laboratories has proved challenging [12]. Equally, continual improvements in methodology, although welcome, raise the issue of transferability of signatures to newer platforms and can frustrate the clinical need for robust and fixed standards [13,14]. Here we present a case study in the transfer of gene expression classifiers from the research literature into clinical practice.
We have adopted the example of Burkitt lymphoma (BL). This is a highly proliferative neoplasm that occurs sporadically in North America and European countries, but also has a variant associated with HIV infection and an endemic form common in Africa which is associated with Epstein-Barr virus (EBV) [15]. The criteria used to establish a diagnosis of BL have varied since its original description based on morphologic grounds in the endemic form, but it is now accepted that it is associated with translocation between the MYC oncogene and an immunoglobulin gene [16], normally in the absence of chromosomal translocations involving oncogenes associated with diffuse large B cell lymphoma (DLBCL) [17,18], and more recent studies have revealed further commonly associated mutations [19][20][21]. This is a case study of high clinical relevance, since treatment of BL requires intense chemotherapy (e.g. R-CODOX-M/IVAC: rituximab, cyclophosphamide, vincristine (Oncovin), doxorubicin, methotrexate, ifosfamide, etoposide (Vepesid) and cytarabine (Ara-C)) [22], while in contrast DLBCL outcome is not improved by intensification of chemotherapy and is treated with a milder regime as first-line therapy (e.g. R-CHOP: rituximab, cyclophosphamide, doxorubicin (hydroxydaunomycin), vincristine (Oncovin), prednisolone) [23]. However, the group of cases described as "B cell lymphoma, unclassifiable, with features intermediate between diffuse large B cell lymphoma and Burkitt lymphoma" [24] has received increased attention. These are likely to share some but not all pathogenetic features of classic BL, or arise as a result of alternative primary molecular events that nonetheless deregulate the common oncogenic pathways [25,26]. This group appears to respond poorly to either intensive treatment or R-CHOP-like regimes [27][28][29]; the underlying mechanism remains largely unknown and the appropriate treatment still needs to be established.
Two seminal studies [30,31] introduced gene expression-based classifiers to distinguish cases of BL and DLBCL based on data sets from different array platforms. Hummel and co-workers [31] adopted an approach whereby the set of classic BL samples was systematically extended on the basis of overall similarity in gene expression patterns to less clear cases. This semi-supervised approach using 58 genes effectively defined a new class called 'molecular Burkitt lymphoma'. On the other hand, Dave and coworkers [30] based their supervised Bayesian method on independent expert pathology assignment of cases to the BL/DLBCL classes, and created a classifier based on 217 genes. The two classifiers are thus different in nature: they depend on relatively large gene sets with limited overlap and can be viewed as different gene expression-based definitions of BL.
Here, starting from the above work, we investigate optimal classification algorithms and gene lists to recapitulate the original classifiers, and by examining the transferability of the optimal classifiers between data sets we effectively compare the definitions of BL applied in each data set and classifier. Our own clinical data are based on RNA extraction from FFPE samples using the Illumina DASL (cDNA-mediated Annealing, Selection, extension and Ligation) technology, while the above classifiers were based on RNA extracted from fresh-frozen samples and different Affymetrix arrays. RNA in FFPE samples is more degraded, and although experimental protocols are improving, the data from this source remain significantly noisier, and the change of measurement platform could have an equally significant effect. Nevertheless, FFPE data are likely to be the clinical reality for the foreseeable future, particularly in diagnostic laboratories responsible for large geographical areas with many hospitals. We investigate the production of a classifier based on a reduced gene set that can be effectively transferred between different gene expression measurement platforms in publicly available data sets and our own clinical data, and make a preliminary assessment of its likely clinical utility.
Data sets
The data sets used in this study are summarized in Table 1. Five public data sets were downloaded from the Gene Expression Omnibus [32]. GSE4732 was split into two subsets derived from different array platforms, here referred to as GSE4732_p1 and GSE4732_p2. Classifier development employed GSE4732_p1 and GSE4475, and the other data sets were used in testing transferability of classifiers.
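As a rough illustration of this step, the sketch below retrieves one of the public series with the Bioconductor GEOquery package; the paper does not state which tool was used for the downloads, so GEOquery and the object names here are assumptions.

    ## One possible way to retrieve a series used above from the Gene Expression Omnibus.
    library(GEOquery)          # Bioconductor; an assumption, not necessarily the authors' tool
    library(Biobase)

    gse   <- getGEO("GSE4475", GSEMatrix = TRUE)   # list of ExpressionSet objects
    eset  <- gse[[1]]
    expr  <- exprs(eset)       # probe x sample expression matrix
    pheno <- pData(eset)       # sample annotation (diagnosis, BL confidence, ...)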
We also included 249 FFPE samples (GSE32918) from a previous study [33], together with 93 further samples run on the same platform (Illumina DASL version 3 arrays) and 250 samples run on version 4 arrays in this study. Technical replicates were assessed both within each platform and between the two platforms to examine reproducibility and consistency. The quality of each sample was checked before further analysis, and the details are described in Additional file 1. The newly analyzed samples have been submitted to the Gene Expression Omnibus with accession number GSE69053.
Ethical approval
This study is covered by standard NRES (National Research Ethics Service) ethics approval for Haematological Malignancy Diagnostic Service (HMDS; St James Hospital, Leeds) local cases and treatment was not modified as a consequence of the study. The re-analyses of data from the LY10 and RCHOP14/21 clinical trials are separately covered by each trial's ethical approval. This research is fully compatible with the Helsinki declaration.
Data preparation
Preparation was done in R. All Affymetrix data sets except GSE4732_p1 were processed with the affy package [34] from raw data, and expression summarization was done with the rma algorithm [35] with quantile normalization. Gene identifiers were mapped with hgu133a.db [36] and hgu133plus2.db [37] packages. GSE4732_p1 was generated by an older custom array format and for this we used normalized expression data and gene identifiers provided by the authors. Pre-processing (including quality control) and expression summarization for the Illumina data sets was done with the lumi package [38] applying a vst transformation [39] and quantile normalization. Where multiple probes represented the same gene, the expression for the gene was summarized with the average value. All gene symbols were then checked with HGNChelper package [40] and updated to the latest approved symbol if necessary.
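A minimal sketch of the Affymetrix branch of this pipeline is given below (RMA with quantile normalization, then averaging probes that map to the same gene symbol). The CEL-file path and the probe-to-symbol vector are placeholders, and the lumi/vst branch and the HGNChelper step are omitted.

    library(affy)

    raw  <- ReadAffy(celfile.path = "CEL_files/")   # hypothetical local directory of CEL files
    eset <- rma(raw)                                # background correction, quantile normalization, summarization
    mat  <- exprs(eset)                             # probes x samples, log2 scale

    ## 'gene_symbols' stands in for the mapping obtained from the hgu133a.db / hgu133plus2.db
    ## packages: a character vector with one gene symbol per probe (row of 'mat').
    gene_symbols <- probe_to_symbol                 # placeholder

    ## Average rows (probes) that represent the same gene.
    gene_mat <- rowsum(mat, group = gene_symbols) / as.vector(table(gene_symbols))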
Classifier performance assessment
Performance of classifiers was assessed using standard measures (overall error rate, overall accuracy, precision and recall within each class). Unless otherwise stated, performance was assessed by tenfold cross-validation when considering performance within a particular data set. We also assessed transferability of classifiers by training on one data set and testing on another. Further detail of these processes is provided in the "Results" section.
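For concreteness, the measures named above can be computed from a table of predicted against true labels as in the short sketch below (two classes, with BL treated as the positive class; changing the 'positive' argument gives the within-class precision and recall for DLBCL).

    ## Standard measures from a confusion table of predicted vs. true class labels.
    performance <- function(pred, truth, positive = "BL") {
      tab <- table(Predicted = pred, Truth = truth)
      tp  <- tab[positive, positive]
      fp  <- sum(tab[positive, ]) - tp
      fn  <- sum(tab[, positive]) - tp
      list(accuracy   = sum(diag(tab)) / sum(tab),
           error_rate = 1 - sum(diag(tab)) / sum(tab),
           precision  = tp / (tp + fp),
           recall     = tp / (tp + fn))
    }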
Classification algorithms
We tested a total of ten algorithms (Bayes Net, Naïve Bayes, libSVM, SMO, Neural Network, Random Forest, Function Tree, LMT (logistic model tree), REP Tree and J48 pruned tree) within GSE4732_p1 and GSE4475, respectively, using the Weka [41] machine learning tool. Our aim was not to compare methods, but rather to find a method able to recapitulate, to an acceptable level of accuracy, the classifications within these data sets. All algorithms were therefore given default parameters (except that 100 trees were used for the Random Forest), and parameters were subsequently optimized only for the algorithm chosen for the remainder of the work. Initial investigations of the different algorithms were carried out separately within each of GSE4732_p1 and GSE4475. Both of these data sets are associated with a classifier developed by their authors, and we used the gene lists from these classifiers as the initial feature sets for the algorithms above.
Parameter optimization
We optimized parameters for one classification method, the support vector machine (SVM) implemented in libSVM [42]. Four common kernels are implemented in libSVM and we chose the most commonly used and recommended, the radial basis function (RBF). In this case parameter optimization involves the kernel parameter γ and the trade-off parameter c. We used the automatic script easy.py provided with libSVM for a parameter grid search to select the model parameters: the search range for c was 2^-5 to 2^15 with a step of 2^2, the range of γ values was 2^3 to 2^-15 with a step of 2^-2, and the cross-validation fold was 5 [43]. Note that parameter optimization was carried out by cross-validation within the training data, avoiding potential over-fitting that could result from using the complete data set.
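The authors used the easy.py script shipped with libSVM; an equivalent sketch of the same grid search in R, via the e1071 wrapper around libSVM, is shown below. The feature matrix x (samples x genes) and the label factor y are placeholders.

    library(e1071)

    tuned <- tune(
      svm, train.x = x, train.y = y,
      kernel = "radial",
      ranges = list(
        cost  = 2^seq(-5, 15, by = 2),   # c: 2^-5 ... 2^15 in steps of 2^2
        gamma = 2^seq(-15, 3, by = 2)    # gamma: 2^3 down to 2^-15 in steps of 2^-2 (same grid, ascending here)
      ),
      tunecontrol = tune.control(cross = 5)   # 5-fold cross-validation within the training data
    )

    tuned$best.parameters   # selected (cost, gamma) pair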
Probability calculation
In the case of the SVM classifier applied to our Illumina data set, the BL probability is a posterior class probability obtained from libSVM, employing an improved implementation of Platt's posterior probability function for binary classification [44].
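In e1071, which wraps the same libSVM code, this corresponds to fitting with probability = TRUE and reading the class-probability attribute of the prediction, as sketched below; the data objects and the cost/gamma values are placeholders, not the tuned values used in the study.

    library(e1071)

    fit <- svm(x_train, y_train, kernel = "radial",
               cost = 8, gamma = 2^-7,          # illustrative values only
               probability = TRUE)              # makes libSVM fit Platt's sigmoid

    pred    <- predict(fit, newdata = x_test, probability = TRUE)
    bl_prob <- attr(pred, "probabilities")[, "BL"]   # posterior P(BL) for each test sample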
Classifier gene set comparison
Subsequent development of classifiers involved a number of gene lists derived from those used in the authors' classifiers for GSE4732_p1 and GSE4475 by consideration of issues such as availability of a gene expression measure for the gene on all platforms, robustness to over-fitting, and transferability to unknown data derived from different measurement platforms, as detailed in "Results" and "Discussion". In addition, we also tested the ten genes [45] used in a recent classifier that employs data from the NanoString [46] platform.
Cross-platform normalization
Z-score, rank and two more sophisticated methods, XPN and DWD [47,48], implemented in the CONOR package [49], were used to examine the effect of different cross-platform normalization methods. Z-score normalization operates for each gene independently, producing a normalized expression value in each sample as z = (x − m)/s, where x is the un-normalized expression value of the gene and m and s are the mean and standard deviation of x over all samples. For rank normalization, r = R/N − 0.5 is the normalized value, where R is the rank of the sample with respect to the N other samples on the basis of the expression of the gene concerned. Z-score and rank normalization have potential deficiencies, but also have the advantage of being applicable to data from methods such as RT-PCR and NanoString, which are designed to measure the expression of only relatively small gene sets.
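Written out directly, the two simple normalizations above amount to the following, applied gene-by-gene to a genes x samples expression matrix:

    ## z = (x - m)/s for each gene across samples
    zscore_normalise <- function(mat) {
      t(apply(mat, 1, function(x) (x - mean(x)) / sd(x)))
    }

    ## r = R/N - 0.5, where R is the within-gene rank of each sample
    rank_normalise <- function(mat) {
      n <- ncol(mat)
      t(apply(mat, 1, function(x) rank(x) / n - 0.5))
    }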
Software implementation
The developed classifier was implemented in the BDC package using the R package mechanism [50], and is available from the authors. The package provides a list of options for classifier gene set, cross-platform normalization method and data set to train the model along with reasonable default settings.
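The package interface is not documented in this text, so the call below is purely a hypothetical usage sketch: the function name and arguments are invented to illustrate the options listed (gene set, cross-platform normalization method, training data) and do not describe the package's actual API.

    library(BDC)

    result <- bdc_classify(               # hypothetical function name
      expression    = my_expr_matrix,     # genes x samples matrix (placeholder)
      gene_set      = "28",               # e.g. the 28-gene list
      normalisation = "zscore",           # or "rank"
      training      = "GSE4475_strict"    # training set / BL definition
    )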
Comparison of data sets and existing classifiers
The two existing classifiers were developed within GSE4732_p1 and GSE4475, respectively. Table 2 summarizes the gene sets used in these classifiers, the total numbers of genes measured on the corresponding platforms and the overlaps of these gene sets. The two classifiers use substantially different gene sets, with limited overlap, and in neither case are expression measurements of all classifier genes available on the other platform. It is impossible, therefore, to test a straightforward reimplementation of either classifier on the data sets that were not used in its development. Our aim, therefore, was to construct new classifiers and gene sets, based on those already existing, which adequately recapitulate the results of existing classifiers but are applicable to all data sets.
Recapitulation of existing classifications
We developed classifiers using feature sets corresponding to the 214 gene list from the original classifier in GSE4732_p1, and the 58 gene list from the original classifier in GSE4475. Figure 1 shows the performance of a range of machine learning methods in both data sets (for detailed figures see Table S1 in Additional file 2). In GSE4732_p1 it is possible to achieve very low overall error rates of around 1 %. In GSE4475 we investigated two definitions of BL: BL probability assigned by the authors as >0.95 (strict) and >0.5 (wide), assigning other samples as DLBCL. Using the strict definition, very low error rates are again possible (<2 %). On the other hand, errors are larger with the wider definition, indicating that the classes are less well defined in terms of gene expression when this approach is adopted, and arguing in favour of using the stricter definition. Overall, given the level of uncertainty in the actual classification of intermediate cases, we consider that these results reproduce the original classifications to an acceptable level of accuracy.
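For clarity, the strict and wide training labels amount to simple thresholds on the per-sample BL probabilities supplied with GSE4475 (bl_prob below is a placeholder numeric vector of those probabilities):

    strict_labels <- factor(ifelse(bl_prob > 0.95, "BL", "DLBCL"))   # strict definition
    wide_labels   <- factor(ifelse(bl_prob > 0.5,  "BL", "DLBCL"))   # wide definition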
Optimization of SVM parameters and classifier gene list selection
Motivated by the fact that no platform has gene expression measurements for all the genes used in either original classifier, and aiming to reduce gene lists where possible because classifiers based on fewer features are less complex and less susceptible to over-fitting, we next sought to optimize the gene list for our classifier. At the same time we investigated the effect of optimizing SVM parameters. We considered further gene lists based on the existing classifiers: the 21 genes common to both original classifiers; the 28 genes for which measurements are available in GSE4732_p1 and are part of the classifier used in GSE4475; and the 172 genes that are part of the classifier genes used in GSE4732_p1 and available in GSE4475. A further list of 60 genes was newly identified by comparing the differentially expressed genes of the high confidence cases in each data set (45 BL against 232 DLBCL in GSE4732_p1, and 44 mBL (molecular BL as defined by the authors) against 129 non-mBL in GSE4475; further details are given in Additional file 1). The results presented in Fig. 2 show that optimization of SVM parameters results in a modest (up to around 1 %) increase of accuracy over the use of default parameters. More importantly, they show conclusively that classifiers based on small gene lists perform at least as well as their larger counterparts. The 28 gene list matches the performance of the full list in both data sets with only insignificant reductions in accuracy and was selected for future work. We also tested a recently published list of ten genes [45] developed with NanoString data. This list is insufficiently represented on the platform used in GSE4732_p1, with only six genes. We found it to perform similarly to our 21/28 gene lists in GSE4475 (Table S2 in Additional file 2), but in the absence of applicability to other test data sets we did not consider this gene list further. The five gene lists used to test the classifiers are provided in Additional file 3.
Transfer of classifiers between data sets
Normalization of data to produce an expression measure that is comparable between platforms is an essential first step in producing transferable classifiers. We compared four cross-platform normalization methods, Z-score, Rank, XPN and DWD. The Z-score and Rank methods are the least sophisticated, but could be applied to data for small numbers of genes measured by most technologies. The other methods are more sophisticated and there is evidence that they perform better in some applications [32,49], but they require measurements of many genes, such as those typically produced by microarrays. Table 3 shows the results of training a 28 gene SVM classifier on either GSE4732_p1 or GSE4475 and testing it on other data sets using different data normalization methods. All methods give similar results under the same training and test conditions, indicating that it is of no disadvantage to adopt one of the less sophisticated methods. First of all we considered the simple comparison of classifiers trained on one data set (GSE4732_p1 or GSE4475) and tested on the other. Table 3 shows that a classifier trained on GSE4732_p1 performs reasonably when tested on GSE4475 with the strict BL definition in the latter data set, giving error rates (recall) around 9 % for BL and <2 % for DLBCL. Conversely, training on GSE4475 (strict) and testing on GSE4732_p1 again gives good performance (errors around 4 % for BL and 1 % for DLBCL), indicating the classifier adopted on GSE4732_p1 corresponds to a BL criterion similar to the GSE4475 strict stratification. As would be expected, training with the wide definition of BL in GSE4475 reduces the BL error rate observed when testing on GSE4732_p1 to 2 % with a corresponding increase of the DLBCL error rate to around 5 %.
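A compact sketch of this train-on-one, test-on-the-other experiment is given below; both matrices are reduced to the shared classifier genes and normalized gene-wise before prediction. All objects (train_mat, test_mat, train_labels, test_labels, genes28) are placeholders, and zscore_normalise is the helper defined in the earlier sketch.

    library(e1071)

    tr <- zscore_normalise(train_mat[genes28, ])   # genes x samples
    te <- zscore_normalise(test_mat[genes28, ])

    fit  <- svm(t(tr), train_labels, kernel = "radial")   # svm() expects samples x features
    pred <- predict(fit, t(te))

    table(Predicted = pred, Truth = test_labels)   # per-class error rates as in Table 3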
The performance of the above classifiers on other available data sets is also reported in Table 3. GSE4732_p2 is formed from a subset of the samples in GSE4732_p1 but with measurements from a different array platform (Table 1). It is surprising, therefore, that the classifier trained on GSE4732_p1 performs relatively poorly on this data set (BL error rates 15-21 % depending on normalization method), and the classifier trained on GSE4475 performs worse (BL error rates of 27-33 %). This effect is explored more thoroughly in Fig. 3 (top panel), which illustrates how different definitions of BL in the training data (GSE4475) affect the classifier. It is clear that with respect to this data set, the two consistent classifiers developed above adopt a narrower definition of BL, assigning cases with a weaker BL signal to the DLBCL category, and that a better classification result can be obtained by using a wider BL definition in the training set. GSE10172 is a smaller data set generated by the group (Klapper, Molecular Mechanisms in Malignant Lymphomas Network Project of the Deutsche Krebshilfe) who produced GSE4475. Classifiers trained on either GSE4475 (strict) or GSE4732_p1 produce a zero error rate for DLBCL cases but higher errors for BL; however, this is a relatively small data set and these findings may not be significant. Nevertheless, it is again the case that the classifier trained on the wide definition of BL in GSE4475 does produce a more accurate classification in GSE10172 (Fig. 3, bottom left panel), according to the classification given in that data set.

Fig. 3 (caption): Classification results of GSE4732_p2, GSE10172, GSE17189 and GSE26673 when the classifier was trained with a variety of thresholds, with a heatmap of the 28 classifier genes showing the Z-score normalized expression values. The training set threshold is adjusted according to data set GSE4475 and the class probability given to each sample by the original classifier; for example, training set Th = 0.9 means that only samples with a confidence over 0.9 in GSE4475 are included to train the classifier, and Strict and Wide refer to the strict and wide definitions used previously. In test set GSE10172, the GEO-Class bar shows both the class label and the BL probability from the original data set for each sample. The figure shows that when trained with the GSE4475 strict data set, the classifier has a strict definition of BL similar to that of GSE4732_p1 but is not very effective in recognizing BLs in GSE4732_p2, nor endemic BL (eBL) and HIV-related BL (HIV-BL) cases. GEO, Gene Expression Omnibus.
GSE17189 and GSE26673 are different in character, containing endemic BL (eBL) and HIV-related BL cases in contrast to the sporadic cases from the other data sets. Table 3 shows that the two classifiers trained with strict definitions of BL perform poorly with this data (BL error rate > 50 %). The lower right panel of Fig. 3 shows that cases of eBL have a similar gene expression pattern to the sporadic cases but generally with a weaker signal, explaining the high error rates from the strictly trained classifiers and the improvement in this when a wider definition is adopted. Many HIV-related BL cases on the other hand appear to have gene expression patterns related at least as strongly to DLBCL cases as they are to sporadic BLs and do not classify as BL with any choice of training data. Although sharing many pathologic features with sporadic BL, the eBL and HIV-related BL cases do have a distinct pathogenesis and gene expression. Some classifiers can recognize eBL seemingly well, but we suggest that training these classifiers on data for sporadic BL and applying it to eBL or HIV-related BL would not be advised. Given the distinct clinical settings of these disease variants, this does not pose a significant issue in relation to development of an applied gene expression-based classification tool.
To conclude, these studies show that despite using substantially different methods and genes, classifications within GSE4732_p1 or GSE4475 represent a largely consistent definition of BL that can be used as a basis for a classifier that uses fewer genes and transfers well between the two data sets. While this classifier does not apparently perform as well on other smaller and more diverse data sets, inconsistencies are largely related to intermediate cases and depend on where the boundary between classes is placed in a spectrum of cases in the training data. A similar test of the training set effect when testing on GSE4732_p1 is shown in Additional file 4.
Illumina DASL data sets
Following the above investigations, we trained a 28 gene-based SVM, the BL and DLBCL classifier BDC, on the GSE4475 data set with a BL probability threshold of 0.95, and applied it to our Illumina data sets (Table 1) using several cross-platform normalization methods. Despite the results on the smaller data sets above indicating some advantage to a wider definition of BL, we preferred in this case the stricter definition (p = 0.95) because of its stronger consistency within and between the two larger data sets that were used in training studies. Of 592 samples in the version 3 and version 4 data together, 556 (93.9 %) have the same classification independent of normalization method. For some cases the data sets contain replicates; 124 cases have a replicate on version 3 and version 4 together (including cases replicated within each version and some cases that are not replicated within a version but that have data from both versions). The variance of the BL probability of the total of 124 replicates is given in Fig. 4 (top). Again this shows that if replicates show large variability, this is largely independent of normalization method. The Z-score normalization produces the smallest overall variance, and this was used subsequently.

Fig. 4 (caption): Classification consistency of the replicates from different platforms. Top: the variance of all replicate samples from the same patient when the data are normalized by the Z-score, Rank, DWD and XPN methods, respectively. Bottom: the BL probability of each replicate (whether replicated in only one version or in each version) of the corresponding patient; bigger dots indicate version 4 data, smaller dots version 3 data, orange dots refer to micro-dissected tissue, and green dots to normally dissected tissue.
The detailed results for all replicated cases are shown in Fig. 4 (bottom). This shows that the cases where the BL probability is most variable between replicates tend to be intermediate cases with BL probabilities closer to 0.5. It is also clear that version 4 data (with improved initial mRNA reverse transcription) generally give a stronger BL signal (BL probabilities closer to 1.0), probably reflecting better experimental treatment of BL samples, which, by their very nature, are more prone to significant degradation. Finally, it is clear that some of the larger variability between replicates occurs when one replicate is a tissue micro-dissection. Micro-dissection was performed on a subset of tumours following morphological inspection, with the aim of enriching for tumour content and/or the most adequately fixed area of the tissue. This would be expected to give stronger tumour-specific expression, as shown in previous experiments [33], and leads to a clearer classification of BL in the majority of cases.
Comparison of original clinical diagnosis with gene expression-based classification
Our final BDC classification was based on reducing the Illumina data set to a single replicate for each case, choosing version 4 data in preference to version 3, micro-dissected tissue in preference to usual sampling, and otherwise choosing the newest array data. This gave a classification for 403 samples. The current clinical diagnosis of these samples is based on a range of immunophenotypic and molecular (fluorescent in situ hybridization, FISH) data as previously reported [28], and the agreement of this with the gene expression-based classification is shown in Table 4, where DLBCL-diagnosed cases with a known chromosomal rearrangement of the MYC gene are considered separately. Generally there is a high level of agreement between the two diagnoses (85 % of clinically diagnosed BL cases classified as BL, and 96 % of clinically diagnosed DLBCL cases classified as DLBCL). Of the 11 clinical BL cases classified as DLBCL by BDC, three had classic BL characteristics, indistinguishable on conventional criteria from BL, but the remainder of the group included a high level of aberrant cases, with non-classic MYC rearrangement and/or discrepancies in immunophenotype. Of the ten diagnosed DLBCL cases predicted as BL, three showed a BL phenotype without MYC rearrangement. We also looked further at the small group diagnosed as DLBCL but with MYC rearrangement detected. This group is of particular interest; many of these cases are now classified as "lymphoma with features intermediate between BL and DLBCL", and though many studies have reported a poor prognosis, currently there is no specific treatment for this group [51][52][53]. In our data set (Table 5), 35 R-CHOP-treated cases in this group were classified into ten BL plus 25 DLBCL by BDC: the survival rate (remaining alive or in complete remission after treatment; for details see Table 5) of each class was 30 % and 68 %, respectively. Although these numbers are small, the survival difference observed suggests some advantage to gene expression classification that might eventually be examined in more detail in future trials. We note also that the survival rate (68 %) observed for intermediate cases classified as DLBCL by BDC is not significantly different from that for DLBCL as a whole (Kaplan-Meier, p = 0.4, compared with the R-CHOP-treated DLBCLs without MYC rearrangement; full information is provided in the Gene Expression Omnibus data set).
Discussion
The work presented here provides an important step in establishing an optimized, parsimonious and open access gene expression-based classifier for BL. By using the results of one classifier and its associated data set for training, and the other as test data, we have shown that two substantially different classifiers in the research literature have a high degree of concordance and that their results can be recapitulated, at least within the level of uncertainty associated with intermediate cases. We have also shown that this unified classifier can be successfully applied to other public data sets and to data from routine clinical samples. In the context of our own clinical data, the classifier shows a high degree of concordance with the original diagnosis.
At a technical level, the reduction of the gene set compared with the original classifiers is a substantial advantage, making the classifier simpler and opening the possibility of using other measurement technologies such as quantitative PCR or NanoString in clinical applications. In addition, our detailed exploration of different training sets is noteworthy, since classifiers developed so far have largely been trained and tested within single data sets. Clearly the output of a classifier for borderline cases is critically dependent on the labelling of similar cases in the training data: our study maps the effect of changing training classification criteria in detail, and highlights differences in the classification of borderline cases between different data sets when examined in the context of gene expression criteria. Our final decision was to train the classifier on a two-way definition of BL based on the original class of GSE4475, but this nevertheless assigns fewer cases as BL than indicated in some other public data sets. Other recent work in the field has also highlighted the possibility of using reduced gene sets [45,54] for classification and also paraffin embedded samples, in these cases using data from the NanoString platform, which measures expression of a user-defined gene panel. It is an open question whether clinical use is better served by genome scale measurements (e.g. Affymetrix or Illumina arrays, RNA-seq) for each case, or possibly more precise measurements of just those genes needed for classification. However, the work reported here relies on genome scale measurements provided in publicly available data sets: this enabled our detailed comparison of different classifiers and their transferability, and the production of a consensus. This is not possible in general with NanoString data sets, since they seldom contain all the genes required by other classifiers. Our approach has been to leverage as much value as possible from existing data sets and previous classification work. We would support genome scale data generation from clinical samples in the future because it is of much greater utility in research and in the detailed comparison of competing methodologies.
Dependence on training data highlights the underlying difficulty in this and many similar studies, which is the lack of a 'gold standard' against which to evaluate new classifiers. Even though disease categories like BL and DLBCL have developed over many years with a variety of phenotypic and molecular diagnostic criteria, there are still a significant number of cases which are complex and neither expert pathological assessors nor recent molecular classifiers can effectively distinguish them. An alternative evaluation is to examine survival separation or treatment response, which is the primary clinical concern, and we used our own clinical data to examine outcome on the same treatment for cases where gene expression classification disagreed with the original diagnosis. Such discordant cases are relatively few even in a large data set, and the next step will be to make this evaluation in more cases as they become available. However, it is important to note that the treatment options in the setting of B-cell malignancies are likely to evolve at a high rate in the near future, and thus the use of clinical outcome with currently conventional therapy is likely to be an unstable parameter against which to assess the value of classification.
Our decision to develop a binary classifier for BL versus DLBCL, instead of introducing a third intermediate class, is related to the issues described above. Since there are only two main treatment regimes, a third class is not clinically useful. We prefer a classifier that makes a decision one way or the other on intermediate cases, bearing in mind that uncertainty is reflected in the associated class probabilities. It would be naïve to suggest that such a classifier could be the sole basis for treatment decisions, but it can effectively add to the weight of evidence a clinician might consider. Recent studies have also defined mutations and other genetic features commonly associated with these diseases [20,21,55]. It remains an open question whether the diseases are better distinguished by these genetic features or by a gene expression phenotype. However, it seems likely that a combination of both information sources as the basis of future classifiers could lead to increased robustness in the context of heterogeneous diseases and the inevitable noise associated with all measurements on clinical samples.
We have previously developed an applied gene expression-based classifier for the separation of DLBCL cases into so-called "cell of origin" classes in samples derived from FFPE material [33]. This tool is currently being applied in a routine clinical setting in the context of a phase 3 clinical trial, and the BDC tool developed in this work could be applied with this to provide a more complete diagnostic pathway in routine clinical practice.
Conclusions
The identification of cases of BL is clinically critical. Classic cases of this disease are treated effectively with intense regimens but not with the standard treatment for DLBCL. However, an intense regimen is more costly, less convenient and unsuitable for weaker patients who may not withstand the toxic challenge. Intermediate cases therefore represent a significant difficulty. Our data show that it would be naïve to suggest that gene expression-based classification can solve this problem, but that it does have a potential role to play. We suggest that in cases with a standard diagnosis of DLBCL, gene expression could be used alongside other evidence and phenotypic features in deciding whether to treat with more intensive therapy. Future work should evaluate this suggestion, alongside the incorporation of genetic data in classification.
Additional files
Additional file 1: Additional methods on gene selection and quality checking.
Additional file 2: Additional tables of tested classifier results.
Additional file 3: Gene sets tested in different classifiers.
Additional file 4: Performance of the classifier trained with different BL definitions tested on GSE4732_p1 with a heatmap of Z-score normalized 28 classifier gene-expression values. The training set threshold is adjusted according to data set GSE4475 and the class probability given to each sample by the original classifier; for example, training set Th=0.9 means only include the samples that have a confidence over 0.9 in GSE4475 to train the classifier, and Strict and Wide refer to the strict and wide definition used previously. The GSE4475 (strict) trained classifier classifies cases similar to the original category in the paper, while other training sets would classify a small group of DLBCL cases as BL. However, the heatmaps of those cases exhibit similar expression patterns as classic BL, suggesting these are intermediate cases with less confidence for which class they belong to.
|
v3-fos-license
|
2019-07-02T13:47:47.134Z
|
2019-06-27T00:00:00.000
|
195764786
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0218879&type=printable",
"pdf_hash": "1a1cb529a63b57eabc854af69a604d0aad09a037",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:535",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "1a1cb529a63b57eabc854af69a604d0aad09a037",
"year": 2019
}
|
pes2o/s2orc
|
Corneal nerve healing after in situ laser nerve transection
Purpose We have previously reported that lamellar dissection of the cornea transects stromal nerves, and that regenerating neurites form a dense net along the surgical plane. In these experiments, we have disrupted the stromal nerve trunks in situ, without incising the cornea, to determine the regeneration events in the absence of a surgical plane. Methods Thy1-YFP mice were anesthetized and in vivo images of the corneal nerves were obtained with a wide-field stereofluorescent microscope. A far infrared XYRCOS Laser attached to 20X objective of an upright microscope was used to perform in situ transection of the stromal nerves. 3 types of laser transections were performed (n = 5/group): (i) point transection (a single cut); (ii) segmental transection (two cuts enclosing a segment of nerve trunk); and (iii) annular transection (cuts on all nerve trunks crossing the perimeter of a 0.8 mm diameter circular area centered on the corneal apex). Mice were imaged sequentially for 4 weeks thereafter to assess nerve degeneration (disappearance or weakening of original fluorescence intensity) or regeneration (appearance of new fluorescent fronds). Beta-3-tubulin immunostaining was performed on corneal whole-mounts to demonstrate nerve disruption. Results The pattern of stromal nerves in corneas of the same mouse and in corneas of littermates was dissimilar. Two distinct patterns were observed, often within the same cornea: (i) interconnected trunks that spanned limbus to limbus; or (ii) dichotomously branching trunks that terminate at the corneal apex. Point transections did not cause degeneration of proximal or distal segment in interconnected trunks, but resulted in degeneration of distal segment of branching trunks. In segmental transections, the nerve segment enclosed within the two laser cuts degenerated. Lack of beta-3 tubulin staining at transection site confirmed nerve transection. In interconnected trunks, at 4 weeks, a hyperfluorescent plaque filled the gap created by the transection. In annular transections, some nerve trunks degenerated, while others regained or retained fluorescence. Conclusions Interconnected stromal nerves in murine corneas do not degenerate after in situ point transection and show evidence of healing at the site of disruption. Presence or absence of a surgical plane influences corneal nerve regeneration after transection.
Introduction
The cornea is the most densely innervated structure in the human body and the trigeminal ganglion provides sensory innervation to the cornea. Corneal nerves influence stimuli (touch, temperature and pain) perception and blink reflex, tear formation and maintenance of hydration as well as wound healing and avoidance of injury [1][2][3][4][5][6][7]. Ocular diseases such as Neurotrophic Keratitis and Dry Eye Disease cause considerable morbidity which is attributed to corneal nerve dysfunction [8][9]. Additionally, several studies have reported that routine surgical procedures in ophthalmic practice such as corneal transplantation, photorefractive keratectomy (PRK), radial keratotomy and laser-assisted in situ keratomileusis (LASIK) cause disruption and dysfunction of corneal nerves [10]. Corneal nerve regeneration spans several years after surgical transection and the nerve density never returns to pre-surgery levels [11,12]. Although there has been a lot of progress in the field over the years, there is a paucity in our knowledge and understanding of the underlying mechanisms governing corneal nerve regeneration hence the reason for its high relevance in the ophthalmology field [13]. Prior studies from our lab using the thy1-YFP (yellow fluorescent protein) transgenic mouse model (in which the nerves fluoresce yellow) have shown that after lamellar surgery nerve regeneration occurred via sprouting at the proximal end of stromal trunks and these regenerated nerves sometimes do not demonstrate the same nerve pattern as observed before surgery [14]. We have also reported the infiltration of myeloid-derived YFP fluorescent cells in the cornea after nerve transecting surgery [14][15][16]. Although there are numerous studies on nerve loss, healing and regeneration after injury to the corneal nerves due to surgery, there is relatively scant knowledge on corneal nerve loss and regeneration in non-surgery scenarios of neurotrophic corneas wherein there is an absence of a surgical plane. In our current study, we investigated the regeneration or healing of corneal nerves after an injury delivered directly and specifically to the corneal nerve trunks without creating any surgical planes. Such an injury was likely to somewhat replicate a disease of the nerve trunk without any surrounding anomaly. We proposed these investigations since our previous studies showed that corneal nerves regenerate along a surgical plane created by manual lamellar dissection of the cornea or an excimer laser annular keratectomy.
Ethics statement
All animal experiments were conducted in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The animal protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Illinois at Chicago (Protocol Number: 13-159). Thy1-YFP neurofluorescent homozygous adult mice (6-8 weeks old) were purchased from Jackson Laboratories (Bar Harbor, ME), and colonies were established by inbreeding. For in vivo experiments, mice were anesthetized with intraperitoneal injections of ketamine (20 mg/kg; Phoenix Scientific, St. Joseph, MO) and xylazine (6 mg/kg; Phoenix Scientific). For terminal experiments, mice were sacrificed according to the IACUC protocol. Euthanasia was performed by CO 2 inhalation followed by cervical dislocation in adult animals. These procedures were chosen based on their reproducibility and the fact they cause no discomfort to the animals. These methods are consistent with the recommendations of the Panel on Euthanasia of the American Veterinary Medical Association. All efforts were made to minimize suffering.
In situ laser transection
Experiments involving in situ transection of the mouse corneal stromal nerves were performed using a far-infrared XYRCOS Laser (Hamilton Thorne Inc; Beverly, MA) which permits noncontact ablation of targeted membranes or structures. The XYRCOS laser module consists of a high power, Class 1, 1460 nm infrared laser plus RED-i target locator integrated into a 20X objective and is compatible with most inverted microscopes. The XYRCOS laser attaches to the turret just like a typical objective and allows full use of all the microscopes standard features, such as fluorescence and Hoffman imaging. In addition, the laser is factory-aligned and locked in place to ensure safe ablation. In our setup the XYRCOS Laser was attached to a Zeiss AxioExaminer A1 Upright Microscope (Carl Zeiss Microscopy, Thornwood, NY). After initial baseline images (day 0) before surgery using a Zeiss Stereolumar microscope (details in next section), three types of nerve transection were performed using the XYRCOS laser (Pulse: 200 μs; Power: 100%); (i) single nerve cut in either an interconnected trunk or stromal nerve ending; (ii) segmental nerve injury on an interconnected stromal nerve trunk; and (iii) annular nerve transections. The mice were followed up after surgery and sequential stereomicroscopic images were taken over the next 4 weeks after nerve transections to assess changes in fluorescence intensity and pattern of regenerating nerve fronds, if any. Only stromal nerves were included in the analysis. Subbasal hairpin nerves were excluded.
In vivo stereofluorescent microscopy
Initial baseline (Day 0) and serial imaging after nerve transection surgeries was performed using a fluorescence stereomicroscope (StereoLumar V.12, Carl Zeiss Microscopy, Thornwood, NY) equipped with a digital camera (Axiocam MRm) and software (AxioVision 4.0) as described previously [14]. An anesthetized mouse was placed on the stereoscope stage. Seven microliters of proparacaine (0.5%, Bausch & Lomb, Tampa, FL) was applied for 3 min, and the pupil was constricted with 0.01% Carbachol intraocular solution (Miostat, Alcon) for 5 min. Z-stack images were obtained at 5-μm intervals and compacted into one maximum intensity projection (MIP) image after alignment using Zeiss AxioVision software. Brightfield images were taken (S1 Fig) to confirm corneal transparency after nerve transection surgeries.
Corneal whole-mount preparation and confocal microscopy
Mice were sacrificed and corneal whole mounts were prepared as described previously [14]. Corneas were excised and directly fixed in 4% paraformaldehyde (PFA) overnight at 4˚C. The corneas were washed thrice with PBS for 5 min each at room temperature. This was followed by permeabilization of the corneas in PBS containing 3% NP40, 3% BSA, 0.25% gelatin, 5mM EDTA for 12 h at room temperature. The corneas were then washed thrice with PBS for 5 min each at room temperature. The corneas were then blocked in Blocking solution (PBS containing 0.025% NP40, 1% BSA and 2.5% Donkey Serum) overnight at 4˚C. Furthermore, the corneas were incubated with primary antibody at appropriate dilution in blocking solution overnight at 4˚C followed by 3 washes with PBS for 5 min each at room temperature. Fluorescence-coupled secondary antibodies in blocking solution at appropriate dilutions were then added to the corneas and incubated overnight at 4˚C. The corneas were then washed with PBS thrice for 5 min each at room temperature. Corneas were mounted onto glass slides with a drop of 4', 6-diamidino-2-phenylindole (DAPI)-containing mounting medium and covered with a coverslip. Primary and secondary antibodies used were rabbit anti-beta III Tubulin (Abcam; ab18207; dilution 1:100) and donkey anti-rabbit Alexa Fluor 594 IgG (Abcam; ab150076; dilution 1:1000) respectively. To study the corneal nerve topography, we acquired confocal Z-stack images of corneal whole-mounts and performed 3D reconstruction using an LSM 710 confocal microscope (Carl Zeiss Meditec GmbH) and images were further processed with the Zeiss LSM Zen Imaging and Analysis Software.
Statistical analyses
Statistical analyses were performed using Microsoft Excel. A Student's t-test was used to compare mean values between groups. Results are shown as Mean ± SEM.
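The original analyses were run in Excel; an equivalent sketch in R is shown below, with group_a and group_b standing in for two placeholder vectors of measurements.

    sem <- function(x) sd(x) / sqrt(length(x))        # standard error of the mean

    c(mean_a = mean(group_a), sem_a = sem(group_a),
      mean_b = mean(group_b), sem_b = sem(group_b))   # values reported as Mean ± SEM

    t.test(group_a, group_b, var.equal = TRUE)        # Student's t-test comparing the group means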
Architecture and pattern of corneal nerves
The branching pattern of corneal nerves shows considerable dissimilarity between the left and right eyes of the same mouse and between littermates (Fig 1). Two distinct patterns were observed, often within the same cornea: (i) interconnected trunks that span limbus to limbus; and (ii) dichotomously branching trunks that terminate near the corneal apex.
Corneal nerve laser transection
To characterize corneal nerve healing and regeneration in the absence of a surgical plane, in situ transection of the mouse corneal stromal nerves was performed using a far-infrared XYRCOS Laser (Hamilton Thorne Inc; Beverly, MA), which permitted non-contact ablation of targeted membranes. Nerve trunk fluorescence was lost in the ablated area (Fig 2, arrow). After a single pulse with the laser (pulse: 200 μs; power: 100%), the size of the transection zone was 11.33 ± 0.25 μm (Mean ± SEM).
To confirm that the laser pulse causes nerve ablation, we performed transection of neurites in in vitro trigeminal ganglion cell cultures (Fig 3A1-3A3). After a single laser pulse to a neurite (Fig 3. A1, arrow), a zone of loss of fluorescence was seen (A2, arrow), that progressed to disappearance of the whole neurite (Fig 3. A3, arrow). To further confirm that the laser pulse causes nerve ablation and not just loss of fluorescence, we performed immunohistochemistry of a nerve structural protein (beta-3-tubulin) after in situ nerve ablation. Absence of beta-3-tubulin staining in the zone of laser ablation confirms that the nerve trunk is discontinuous in the ablated area.
Nerve healing after in situ nerve transection
Three types of laser nerve transections were performed in murine corneas using the XYRCOS laser; (I) Nerve transection at a single point along the trunk in either an interconnected nerve spanning limbus to limbus or dichotomously branching nerve that ends near the corneal apex; (II) Nerve transection at two points on an interconnected nerve; and (III) Annular nerve transections. Sequential stereomicroscopic images were taken over the next 8 weeks after nerve transections to assess changes in fluorescence intensity and pattern of regenerating nerve fronds.
I. Single point transection of corneal nerves
In interconnected nerves (Fig 4A, green dotted line) a single point ablation was performed. The continuous nerve trunk before ablation (Fig 4B) developed an area of discontinuity after laser ablation (Fig 4C, red arrow). The laser ablation did not cause degeneration of proximal or distal segment in interconnected trunks. By weeks 3 to 4, a hyperfluorescent plaque developed at the ablated nerve site (Fig 4D). By weeks 6 to 7, the nerve trunk appeared continuous but the area of injury remained hyperfluorescent (Fig 4E). Mice were sacrificed at week 8 and whole mount confocal imaging was performed. Several cells were seen in the area of hyperfluorescent plaque (Fig 4F). The orientation of cells and hyperfluorescent plaque was perpendicular to the nerve trunk. This is very different from orientation of cells in an intact (untransected) nerve (Fig 5A-5C). In contrast to the healing events in interconnected nerves, single point transections in dichotomously branching nerves resulted in degeneration of the segment distal to the laser ablation (Fig 6A and 6B).
II. Two point transection of corneal nerves
Two-point transections were performed in interconnected nerves. Nerve segment degeneration was the more common response (n = 5/7 corneas) (Fig 7). Healing of the ablated area occurred infrequently (n = 2/7 corneas). During healing a hyperfluorescent plaque filled the gap created by the transection.
III. Annular transection of corneal nerves
In annular nerve transections, single point cuts were made at nerves intersecting a concentric circular area (~600 μm radius). Annular transections caused some nerve trunks to degenerate, while others retained or regained fluorescence after an initial loss of fluorescence (Fig 8). In some areas aberrant nerve regeneration was seen characterized by regenerating nerve fronds.
Discussion
Our study yielded the following findings: (1) nerve patterns in corneas were found to differ between mice, including littermates; (2) predominantly two nerve patterns occur in the cornea, namely (a) interconnected nerve patterns with nerve trunks spanning from one end of the limbus to the other, and (b) open-ended, dichotomously branching nerve patterns terminating at the corneal apex; and (3) nerve transection caused an initial zone of clearance at the point of transection.

The key question we wanted to address in our current study was whether nerve regeneration would occur when no surgical plane is present. This question is highly relevant to disease conditions that specifically affect the nerves within the cornea, for example in diabetes or other immune/inflammatory etiologies of keratitis, where there are no surgical planes. In infectious and non-infectious keratitis, nerves may be damaged in the vicinity of the infiltrate in a manner similar to the point transections performed in our experiments. We recognized that results from our prior investigations that surgically incised the cornea could be extrapolated to neurotrophic corneas resulting from surgical interventions such as LASIK surgery, but those findings may not be extrapolatable to diseases of nerves that are not attributable to corneal surgery. Prior studies from our group [14,15] involved the use of lamellar corneal flap surgery, which yielded a wide-area transection of corneal nerves [14,17]. Lamellar corneal flap surgery in our thy1-YFP mouse model caused an influx of inflammatory cells at the transected site at post-operative day 3 and subsequent aberrant regenerative sprouting from the proximal stump of stromal trunks in a nerve pattern and density different from pre-operative levels. With in situ nerve transection we did not observe aberrant nerve regeneration after single point transections. Nerve trunks either healed (with a hyperfluorescent plaque at the site of transection) or degenerated. A limited amount of aberrant nerve regeneration was observed only in annular transections. Since published studies from our lab have already reported that "hairpin-like subbasal nerves" in the thy1-YFP mouse are prone to changes in nerve distribution pattern even in the absence of surgical intervention [14], all our nerve regeneration studies were carried out with reference only to corneal stromal nerves.

Nerve damage in the cornea is usually associated with common symptoms of ocular irritation, photophobia or pain without accompanying ocular surface disease, or occurs in patients with neurotrophic keratitis who may have no pain but significant ocular surface disease due to hypoesthesia [18][19]. Nerve damage due to surgical transection, such as during cataract surgery, causes severe neuropathic pain [20]. Despite major developments and advances to ensure accuracy and precision in the field of corneal laser refractive surgery, procedures such as Laser In Situ Keratomileusis (LASIK), Photorefractive Keratectomy (PRK), Laser-Assisted Subepithelial Keratectomy (LASEK) and Epi-LASIK cause nerve damage and trigger wound healing and changes in keratocytes [21]. One of the major causes of post-refractive surgery Dry Eye Disease (DED) is sensory denervation caused by nerve transection during flap creation and excimer photoablation [22][23]. Erie et al. have shown that subbasal nerve fiber density was 98% lower than pre-operative levels [24] and that the center of the ablation zone showed a complete absence of branched nerve fibers 3 months post-surgery.
Both Moilanen and Erie have demonstrated that subbasal nerve density was reduced by 87%, 75% and 60% at 3, 6 and 12 months after PRK, respectively, and returned to preoperative levels at 2 and 3 years postoperatively [24][25]. In another confocal microscopy study, Erie's team demonstrated faster recovery of subbasal nerve density in the central cornea after PRK compared with LASIK [26].
In other studies on BAK-induced neurotoxicity [16], we found that topical application of BAK to the mouse eye caused two forms of stromal nerve neurotoxicity: (i) reversible neurotoxicity (axonopathy), involving an initial disappearance of nerve fluorescence and its subsequent reappearance in the same nerve pattern as before treatment; and (ii) irreversible neurotoxicity (degeneration), characterized by complete loss of nerve fluorescence and gradual reappearance in a new pattern and location compared to baseline. Our reversible neurotoxicity data were similar to those of Shriver and Dittel [27], who suggested that in thy1-YFP mice loss of yellow fluorescence correlated with a disruption of axonal function, and that when inflammation resolved and the mice recovered, the axonal dysfunction was reversed. In the case of irreversible neurotoxicity (nerve degeneration), however, where recovery was by regeneration, the density and pattern of the regenerated nerves differed greatly from normal innervation, and their functional characteristics were unknown.
In conclusion, we performed in situ transection of mouse corneal stromal nerves using a far-infrared XYRCOS Laser, which permitted non-contact transection of corneal nerves. We observed evidence of nerve healing. Further studies need to be carried out in order to evaluate and characterize the functional significance of nerve healing after in situ nerve ablation.
Supporting information S1 File. Materials and methods for corneal Brightfield imaging and fluorescein staining after laser nerve transection using in vivo stereofluorescent microscopy. (DOCX)
|
v3-fos-license
|
2024-02-11T16:20:27.665Z
|
2024-02-08T00:00:00.000
|
267599390
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://publications.eai.eu/index.php/phat/article/download/5076/2882",
"pdf_hash": "5edb24803bcd5092e46c52237db5747da99c8717",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:536",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"sha1": "2b2510999860a8c32602c3303e7eed2e9da6343e",
"year": 2024
}
|
pes2o/s2orc
|
Speckle Noise Removal from Biomedical MRI Images and Classification by Multi-Support Vector Machine
INTRODUCTION: Image Processing (IP) methods play a vital role in medical imaging for diagnosing and predicting illness, as well as for monitoring a patient's progress. IP methods are utilized in many applications, for example in the field of medicine. OBJECTIVES: Images obtained by Magnetic Resonance Imaging (MRI) and X-rays are analyzed with the help of image processing. METHODS: Such imaging is costly to the patient, and because of several non-idealities in the imaging process, medical images are frequently corrupted by impulsive, multiplicative, and additive noise. RESULTS: Noise frequently affects medical images by replacing some of the original image's pixels with new ones whose luminance values are less than the allowed dynamic luminance range. CONCLUSION: In this research work, speckle noise is removed with the help of a Mean Filter (MF) and the images are classified using a Multi-SVM classifier. The entire system was developed using Python programming.
Introduction
Medical images are obtained from patients who undergo X-ray and MRI scans at established scan centers [1]. In the medical domain, noise identification is carried out in two stages: the first stage applies a criterion for estimating the presence of impulsive noise; if the outcome of this criterion is negative, the image is processed in a second stage with another criterion to determine whether the noise is multiplicative or additive. The main source of speckle noise is the waves or radiation emitted during scanning at the scan center. Many types of noise are present in medical images: in X-ray images the level of Poisson noise is very high, while MRI and ultrasound images contain more speckle noise, which reduces prediction accuracy during classification [2]. Among all noise types in medical imaging, speckle noise plays the main role in degrading the prediction of diseases or of the patient's current status. The proposed system therefore aims to remove this speckle noise in an efficient manner.

Today, image processing methods are applied in a number of medical applications, such as brain tumour classification, liver image validation, and cancer diagnosis. Breast cancer is the most aggressive type of cancer currently affecting women, particularly in developing countries, and the associated risk rises with age [3]. Because the exact aetiology of breast cancer is still unknown, prevention is difficult. The survival rate can nevertheless be increased by earlier disease detection and treatment. By removing noise, image denoising techniques enhance the quality of noisy images. Given that noise and other important image properties, such as edges and textures, occupy relatively similar upper frequencies, it may be difficult to tell them apart, which can result in the loss of some image components.

Ayana et al. (2022) introduced a method for de-speckling breast ultrasound images known as rotationally invariant block matching (RIBM-NLM). The method was compared with three well-known de-speckling techniques on breast images acquired from open and commercial databases. Its MSE value, lower than that of state-of-the-art techniques, shows how well the RIBM-NLM method reduces speckle while preserving fine detail, and its higher PSNR value implies that it outperforms existing filters in terms of de-speckling, particularly for low-quality images. In addition, the RIBM-NLM method requires less computation time than previous methods [4]. Chakravarthy et al. (2019) used a successful method to remove impulse noise from digital mammograms. The technique is built upon statistics such as the mean, median, and standard deviation [5]. It determines the updated intensity to be substituted in the impulse region by computing those measurements in the vicinity of the acquired mammography images [6]. The suggested method is iterative and aims to remove impulsive noise without affecting the image's boundaries and other critically important regions [7].

In light of the background of the field and the associated difficulties, the objectives of this study are as follows:

• To suggest the best filtering method for denoising biomedical MRI images.
• To determine whether the suggested denoising method is effective, using the mean square error (MSE) and peak signal-to-noise ratio (PSNR) on images corrupted by noise such as salt-and-pepper, Poisson, and speckle noise (a brief metric sketch follows this list).
• To compare the suggested filter's accuracy with that of already-used techniques.
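The two quality metrics named in these objectives can be computed as in the following minimal Python sketch. It assumes 8-bit images (peak value 255) represented as NumPy arrays; the function names are illustrative and are not taken from the paper's own implementation.

import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between a reference image and a test image."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    return float(np.mean((reference - test) ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit images."""
    error = mse(reference, test)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / error)

A lower MSE and a higher PSNR indicate a denoised image that is closer to the reference.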
Literature Review
One of the key challenges in medical image processing (IP) is removing noise from medical images. Abdulaziz Saleh Yeslem Bin-Habtoor et al. (2016) applied three kinds of filters to images and compared the efficiency of the filtering techniques. The choice of filter cannot be made independently of the type of application, so selecting a denoising approach appropriate to the application at hand is very important. The authors applied adaptive median filtering to medical images and compared the outcome with other filtering concepts such as mean and median filters. Their experimental results show that the proposed filter is suitable for medical images [8].

Ultrasound machines play a key role in disease identification. The main issue during disease identification is the deformation of visual signals caused by the transmission of the wave signals; this deformation is called speckle noise. Denoising is one of the major stages for exact disease diagnosis, and the present requirement of healthcare institutions is to preserve information with as little noise as possible. Speckle noise decreases the image contrast level. K. Karthikeyan et al. (2011) presented a study to reduce speckle noise in ultrasound data. The authors use a hybrid system that combines wavelet-based BayesShrink, PDE-based anisotropic diffusion, and SRAD filtering. Standard filters are used to evaluate the proposed hybrid filter, and the investigation demonstrates that the hybrid filter is effective and produces the best-quality denoised, clear, and smooth images [9].

In the current scenario, noise removal from medical images is a very active research topic. Speckle noise increases the negative impact on the image interpretation process. In recent years, various efforts have been made to design an efficient denoising model, but many methods are still not effective because of their computational cost, destruction of image features, and limited speckle suppression. Ahmed S. Bafaraj (2019) presents a novel approach for speckle noise removal in medical images using an efficient, optimized gain-combined model. The suggested model contains three important phases: first, the speckle-noise datasets are selected; then the optimized gain-combined models are applied with various filters; finally, the histogram results are displayed and quantitative and qualitative metrics are measured. Among the various filters, the ideal filter shows a better noise ratio, superior structural similarity, and a lower mean square error [10]. Noise elimination approaches are an important step in medical image handling for studying the structure of the human anatomy, and various denoising models such as filters are used to manage noise [11]. The reported outcome is that the new WB-Filter approach, discussed further below, produces optimum-quality medical images [14].
Problem Statement:
• In most cases, accuracy is low and inconsistent when classification algorithms are applied.
• The MSE is high during training and testing; a higher MSE implies lower accuracy.
• Precision and recall rates are unsatisfactory in every iteration, which makes the model perform poorly on any dataset.
• If the dataset is huge, the constructed model suffers from long execution times during simulation.
• Some of the constructed models are very complex and difficult to interpret.
Proposed Methodology
Ultrasound imaging is a non-invasive, real-time, and radiation-free imaging modality compared to other imaging technologies such as X-rays, CT, and MRI. However, speckle noise invariably distorts ultrasound images because of coherent imaging. Denoising is therefore essential for enhancing ultrasound image quality. As a pre-processing phase, denoising aids post-processing tasks such as image segmentation, classification, and registration. The block diagram of the proposed noise removal model is shown in Figure 1. The negative exponential distribution of the speckle noise intensity is given by Eqn. (3),

p(I) = (1/u) exp(-I/u), I >= 0        (3)

where u represents the mean speckle intensity. According to S. Anitha, L. Kola P et al. [14], the standard model of speckle noise is illustrated by Eqn. (4),

g(x,y) = f(x,y) s(x,y)        (4)
Here (x,y) denote the spatial coordinates, g(x,y) the acquired image, f(x,y) the scene, and s(x,y) the speckle noise. In this research work, speckle noise is removed using the Mean Filter (MF) approach. The MF is the simplest filter for decreasing the noise level of an image: every pixel in the image is replaced by the average value of its neighbours. According to B. Deepa et al. [15], the MF approach in image processing is expressed by Eqn. (5),

g(i,j) = (1/M) Σ_{(k,l) ∈ N} f(k,l)        (5)

where M stands for the total number of pixels in the neighbourhood N, f(k,l) denotes the given image, and g(i,j) denotes the processed image. The classification process is then used to classify the filtered images based on their features. SVM is a kind of supervised machine learning model that assists with classification or regression problems [15][16][17]. The main aim of the SVM model is to discover the best boundaries among the possible results. The SVM model handles complex data with a selected kernel method and aims to maximize the separation between the given data points according to the defined labels. The performance of the existing and suggested systems is compared in the accompanying Table 1 and Figure 2.
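As a rough illustration of the mean-filter step expressed in Eqn. (5), the Python sketch below averages each pixel's 3x3 neighbourhood; the window size and the use of scipy.ndimage.uniform_filter are assumptions made for the example, since the paper does not state its implementation details.

import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter_denoise(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel by the average of its size-by-size neighbourhood (cf. Eqn. (5))."""
    return uniform_filter(image.astype(np.float64), size=size)

# Example: simulate multiplicative speckle on a synthetic scene and denoise it.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))           # simple gradient "scene"
speckled = scene * rng.exponential(scale=1.0, size=scene.shape)   # multiplicative speckle, cf. Eqn. (4)
denoised = mean_filter_denoise(speckled, size=3)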
Results and Discussions
OC_8702 data set: This data collection was acquired from an internet repository. The dataset comprises 275 female patients who came in for scans to check for uterine issues. Peak (Ciphergen), LogFC, p-value, corrected p, Peak (Cormwell), and other key characteristics are included. Let us now see how the suggested Multi-SVM handles speckle noise removal and segmentation. Input images were taken from online datasets and passed to the algorithm. The algorithm first resizes the image, after which the noise in the image is examined and eliminated using the Wiener filter. The data is then segmented using Multi-SVM. The accompanying Figure 3 displays the input image, the noisy image, and the denoised image. In our experiments, the PSNR of the noisy image is 20.194928 dB and the PSNR of the denoised image is 24.473850 dB.
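For orientation, the following is a hypothetical Python sketch of the classification stage described here: a multi-class SVM trained on feature vectors from denoised images and scored by accuracy. The synthetic features, the three-class labels, the train/test split, and the scikit-learn SVC with an RBF kernel are illustrative assumptions rather than the authors' exact pipeline.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical data: each row is a feature vector extracted from a denoised image,
# and each label marks a diagnostic class.
rng = np.random.default_rng(42)
features = rng.normal(size=(220, 64))      # 220 images, 64 features each (placeholder values)
labels = rng.integers(0, 3, size=220)      # 3 hypothetical classes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=42)

# scikit-learn's SVC handles multi-class problems with a one-vs-one scheme by default.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))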
Figure 3. Output Analysis
The following Table 2 and Figure 4 present the accuracy obtained after simulation. The total number of images used for testing is 220. After testing, the accuracy of the Decision Tree ranges from 88.2% to 90.1% and the MSVM accuracy ranges from 88.7% to 90%. These results indicate that the proposed algorithm performs best.
Nalin Ku et al. (2017) conducted research with Wiener, Gaussian, and median filters on medical images. The common noises affecting MRI images are speckle, salt-and-pepper, and Poisson noise. The three filters mentioned above were tested against a variety of noises, including speckle, salt-and-pepper, and others, and their performance was evaluated based on the filters' age, histogram level, and image clarity level. According to the outcome of this work, a median filter is suitable for eliminating Poisson and salt-and-pepper noise from grey-level images, the Wiener filter is preferred for removing Gaussian and speckle noise, and the Gaussian filter handles blur in the given images. The authors used 20 different images in their noise elimination experiments [12]. Nowadays, Artificial Intelligence (AI) techniques and ML models are widely applied in the medical field to diagnose diseases. Speckle noise in particular affects all types of medical images, so reducing speckle noise is necessary in the medical domain; this noise decreases the accuracy of classification results. Pichid Kittisuwan et al. (2018) recommended a new speckle-noise elimination technique using Bayesian estimation with wavelet methods owing to its efficiency and low processing time. The authors propose a MAP (Maximum a Posteriori) estimator for handling speckle noise; this new method produces better denoising outputs [13]. S. Rameshkumar et al. (2016) implemented a novel filter known as the WB-Filter for denoising medical images. The WB-Filter is a mixture of median and bilateral filters. It operates at every pixel of an image, replacing it with a weighted mean of the intensity values of neighbouring pixels, and decreases the MSE between the measured and desired values. This filter is mainly suitable for removing speckle and Gaussian noise. The resulting image quality is evaluated using MSE, RMSE, and PSNR values.
Figure 1. Proposed Noise Removal Model

Speckle noise mainly occurs in laser, ultrasound, radar, and MRI images. The amplitude of this noise has a Gaussian distribution and can be represented as in Eqn. (1),

A = A_r + j A_q        (1)

where A_r denotes the in-phase (real) component and A_q the quadrature component of the speckle amplitude. The speckle noise intensity is then described by Eqn. (2),

I = A_r^2 + A_q^2        (2)
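To make this formation model concrete, the sketch below simulates speckle from two independent zero-mean Gaussian components and checks that the resulting intensity is approximately negative-exponentially distributed; the unit variance and sample size are arbitrary choices for the demonstration.

import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                                    # assumed standard deviation of each component
a_r = rng.normal(0.0, sigma, size=100_000)     # in-phase component (Gaussian)
a_q = rng.normal(0.0, sigma, size=100_000)     # quadrature component (Gaussian)

intensity = a_r ** 2 + a_q ** 2                # speckle intensity, cf. Eqn. (2)

# For Gaussian components the intensity is negative-exponentially distributed
# with mean 2 * sigma**2 (cf. Eqn. (3)); its mean and standard deviation should agree:
print(intensity.mean(), intensity.std())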
From Table 1 and Figure 2 we can see that the performance of existing algorithms is as follows: Naive Bayes (NB) 68%, K-Nearest Neighbour (KNN) 74%, Random Forest (RF) 82%, and Decision Tree (DT) 89%, while the proposed Multi Support Vector Machine (MSVM) achieves 91%.
Table 1: Existing and Proposed Performance Analysis
Table 2: Accuracy Comparison of DT and MSVM

Conclusion

IP techniques are mainly used in the medical domain for identifying and classifying diseases at an early stage. Various kinds of noise arise from different causes, and in the medical domain this noise degrades image quality. Denoising can assist healthcare professionals in detecting diseases. In this research work, speckle noise is removed from MRI images using the MF. The filtered images are then classified using the Multi-SVM model. The findings of the suggested system's Python implementation are compared with those of more conventional methods. In future work, neural network algorithms, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), may provide results that are more accurate than those of the suggested approach.
Figure 4. Accuracy Comparison of DT and MSVM Comparison Graph
|
v3-fos-license
|
2019-05-13T13:06:08.807Z
|
2019-04-23T00:00:00.000
|
150735140
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.16997/jdd.328",
"pdf_hash": "104d9fbefec50588844cca4007e171bf4718169f",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:538",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "104d9fbefec50588844cca4007e171bf4718169f",
"year": 2020
}
|
pes2o/s2orc
|
Democratic Self-Determination and the Intentional Building of Consensus
This paper defends two fundamental but under-theorized insights coming from the theory of deliberative democracy. The first is that consensus is valuable as a precondition of democratic collective self-determination, since it ensures that democratic decisions display an adequate degree of integrity and consistency and therefore that the polity can act as a unified agent. The second is that consensus in this integrity-building role is essential if citizens need to act as decision-makers; it ensures that the decisions that issue from the exercise of their political rights are meaningful, and that they are so as the intended result of their joint agency. Aggregative approaches, which do not acknowledge this role of consensus, offer an atomistic account of voting and other political rights, and model the outcomes of democratic decision-making as unintended aggregative consequences of individual votes. In these models, democratic political agency and the decision-making power of citizens are curtailed, because citizens do not exert any intentional control on the final outcome of the decision-making process in which they participate. Although the insight on these shortcomings comes from the deliberative camp, I show that the most prominent accounts of how deliberation is supposed to further consensus in its integrity-building role can be subject to the same criticisms. In fact, in these models consensus is achieved as a by-product of people's engaging in deliberation. Although interactive, these approaches are still atomistic and unintentional. As an alternative, I propose a model of democratic decision-making that acknowledges the role played by the citizens' intentional consensus-building through the strategic use of their political rights.

Author Biography

Valeria Ottonelli is Associate Professor of Political Philosophy at the University of Genova, Italy. Her main research interests are in the normative theory of democracy and in the theory of justice in migration. Her work appeared in Political Studies, The Journal of Political Philosophy, Critical Review of Social and Political Philosophy, International Migration Review and Politics, Philosophy and Economics.
Democratic Self-Determination and the Intentional Building of Consensus
In the normative theory of democracy, consensus can play various roles and functions.In some theories, consensus works as an ideal point toward which democratic decision-making should tend for democracy to meet the presuppositions of citizens' communicative rationality (Habermas, 1996;Lafont, 2006) or their relations as free and equal (Cohen, 1989).Other theories stress the role of consensus as a guarantee against unfreedom and domination; the broader the consensus within a political community, the fewer who will be coerced into following rules they do not endorse. 1 A third role in which consensus features in democratic theory is as evidence-given the appropriate conditions-of the epistemic reliability of democratic decisions (Nino, 1996; for an updated discussion, see Landemore & Page, 2015).Finally, consensus has been recently theorized as motivated by aversion towards the destructive effects of untamed political conflict (Ani, 2014).
Besides these prominent functions of consensus in democratic theory, there is one that has been less explicitly theorized, but which nevertheless underlies many debates on the feasibility of the democratic ideal. This is the role of consensus as a guarantee of the unity and integrity 2 of the polity in its decision-making capacity. Integrity, in this context, is defined as the unity of agency of the polity through time, as instantiated by the coherence and stability of its lawmaking and decisions. Integrity is guaranteed when majorities converge on a common and stable voting pattern, so as to ensure that the collective decisions made through majority rule will be consistent and intelligible. I will hereafter refer to consensus in this specific role, which serves as a guarantee for integrity, as integrity consensus.
This role of consensus is crucial. In fact, integrity consensus is a necessary condition of the collective self-determination of the political community, which many see as a defining principle-along with political equality-of democracy itself. Democratic self-determination means that the members of the polity jointly and collectively determine the course of their political action and the laws that will rule their domestic and foreign affairs. Consensus is crucial to the fulfilment of the principle of collective self-determination because self-determination requires some degree of integrity and unity of agency of the polity, at least in the minimal sense that its acts are not incoherent or its decisions meaninglessly casual. A divided community, in which different and scattered majorities take over cyclically and randomly, cannot provide unity of agency.
Integrity consensus consists in a convergence on common policy alternatives that is broad enough to generate a stable and meaningful course of action at the collective level, thus providing the grounds for the required unity of agency of the democratic polity.
In fact, it is exactly consensus in this integrity-building role that has come under the attack of the long line of political theorists and scientists who have sought to discredit the notion of popular self-determination as meaningless and suspect, from Hans Kelsen (1955) to Joseph Schumpeter (1942) and to the well-known anti-populist manifesto by William Riker (1982). Summarising this line of thought, Riker famously argued that we cannot make sense of the ideal of popular self-determination because such an ideal presupposes that the polity displays at least a minimal degree of consistency or "integrity" through time, and such consistency requires a degree of consensus that cannot be expected in conditions of pluralism, that is, in a society where people have different value systems and worldviews. Indeed, in such a society people are likely to have different orderings of the alternatives on the agenda, and when this happens the aggregation of votes through democratic procedures is liable to generate cycling majorities that contradict each other. In other words, if we want self-determination, we need to have integrity consensus; but integrity consensus is impossible and undesirable in a pluralist society, in which people have different views of how the alternatives on the political agenda should be ranked, and vote accordingly; therefore, the ideal of self-determination is unachievable and should be purged from democratic theory.
In this paper I have three aims. My first aim is to present an argument in defence of the importance of integrity consensus, drawing from some powerful insights-mainly developed within the deliberative camp-on the essential connection between integrity and self-determination at the collective level and the role of democratic citizens as decision-makers (first Section of the paper). My second aim is to show that, notwithstanding the fact that these insights were mainly developed within the deliberative camp, the most popular models of how integrity consensus can be achieved through democratic deliberation fail in two respects: a) they do not accommodate pluralism, which, in accord with a longstanding and established tradition of thought, I assume to be a permanent and defining feature of liberal democratic societies; 3 and b) they do not do justice to the notion of citizens as decision-makers (second and third Sections), if we believe that decision-making comprises an element of intentionality regarding the meaningfulness of one's decisions. My third aim is to sketch an alternative
account or model of how integrity consensus can be produced, centred on a dimension of political agency I call 'political prudence.' This model, in which political prudence complements deliberative agency, provides a better account of the decision-making powers of the citizens of a democratic polity and is more respectful of pluralism than the standard models that suppose that integrity consensus can be produced through deliberative agency alone (fourth and fifth Sections of the paper).

3 The assumption that in a free democratic society people will have different systems of values and worldviews is shared not only by scholars like Kelsen (1955) or Riker (1982), but by most thinkers in the liberal democratic tradition, from Mill to Dewey and to Rawls and Habermas. On the relevance of this assumption for the theory of deliberative democracy, see for example Bohman (1996); Mansbridge et al. (2010); Chambers (2018, p. 63).
The Importance of Integrity Consensus
Can and should we really do without integrity consensus as a defining element of democracy, as suggested by Riker and other critics?
As just recounted, this may appear to be a necessary conclusion if we want to preserve pluralism.In fact, social choice theory has shown that in conditions of pluralism any collective choice procedure that fulfils some minimal democratic desiderata is liable to lead to inconsistent sets of decisions, which makes the requirement of consistency and integrity impossible to fulfil.Conversely, if a democratic polity needs to escape inconsistency and voting paradoxes, it must achieve a degree of integrity consensus and homogeneity that in our pluralist societies is unlikely to occur and would be unreasonable to expect (Hardin, 1993;Knight & Johnson, 1994;van Mill, 1996).
According to this line of thought, however, not much is lost if we give up integrity consensus, since we can make perfect sense of the democratic ideal without it.In fact, it can be argued that the three distinctive principles of democracy-participation, freedom and equality-can be honoured by a 'liberal' model of democratic decision-making (Riker, 1982), in which what counts is that each and every member of the polity has an equal chance to affect the political process by participating through democratic rights.More recently, Fabienne Peter (2009) has responded to the worries about the voting paradoxes discovered by social choice theory and the newer literature on the paradoxes of judgment aggregation (Pettit 2001;List & Pettit, 2002;Dietrich, 2006) by pointing out that what really counts for democratic legitimacy is that decisions are arrived at through a fair and reasoned process.
In short, according to this line of thought integrity consensus, and the ideal of self-determination that it makes possible, should be dropped from democratic theory because they are incompatible with pluralism.Moreover, dropping integrity consensus and democratic self-determination does not cause any significant loss, because we can make sense of individual democratic rights without presupposing that they serve the purpose of building a unified, meaningful line of action at the collective level.
Both of these conclusions have been challenged by powerful arguments developed from within the deliberative conception of democracy.
Let's consider first, in this Section, the conclusion that not much is lost when we drop integrity consensus-and thus self-determination-as a democratic ideal. Contrary to this claim, some defenders of deliberative democracy have pointed out that if we give up the possibility of self-determination, we miss something crucial about the rights and prerogatives of the individual members of a democratic polity (see especially Bohman, 1996; Richardson, 1997, 2002). In fact, such rights lose much of their meaning and value if they are decoupled from the notion that they guarantee the individual participation in a collective process of decision-making. In a democracy, individuals are granted political rights that allow them to act as decision-makers over meaningful lines of action rather than participants in a random, meaningless and haphazard process that only guarantees a roughly equal chance to achieve one's ends or to protect one's negative liberty.
The poverty of accounts like Riker's (1982) in elucidating democracy as participation in a collective decision-making process has been exposed from various angles. Jürgen Habermas (1996), for example, has claimed that if we understand democracy simply as an institutional procedure for accommodating on a random basis the conflicting interests of different social groups, then we cannot make sense of the actual practice of democracy, in which citizens seem to act on the belief that they can affect the democratic process so as to produce rational and just decisions. If citizens did not believe this, parties' programs, public debates, and elections as we know them would lose much of their point and significance. David Estlund (2008) has pressed a similar point against purely proceduralist accounts of democracy. If democracy were simply a fair process for selecting the winning opinions or interests, there would be no point in adopting majority rule rather than a random decision-making device like tossing a coin. Habermas's (1996) argument is meant to vindicate the discursive side of democracy, while Estlund's (2008) is meant to show that democracy has an essential epistemic component. From these arguments, though, two more fundamental points can be drawn that are relevant for our present purposes. The first is that to make sense of democratic voting, we need to think of it as capable of producing meaningful decisions rather than random choices. Integrity consensus, as a minimal precondition of meaningfulness, cannot be given up so light-heartedly. Second, if we want to account for the role of the participants in the democratic process as decision makers, such meaningful decisions must come through institutional procedures that are responsive to their individual choices (votes). Citizens must be able to see that the decisions taken at the collective level (and their meaningfulness) are affected by their choices as voters. Majority rule is essential to democracy, while a random decision procedure such as coin-tossing is not, because majority rule, unlike selection by lot, responds to how each and every citizen decides to cast their vote.
We can further sharpen these insights. Consider a decision procedure by which each citizen has the right to express their requests or claims by a vote, and then an impartial judicial body adjudicates conflicts between them in a meaningful and principled way, thus producing a unified and consistent line of action. 4 This procedure would be responsive to people's votes, and would issue meaningful decisions. However, in this process citizens would not act as decision-makers, but simply as sources of claims and bearers of interests that would constitute, so to speak, the raw material of meaningful decisions. Democracy, unlike the procedure just outlined, gives citizens the power and duty to directly participate in the production of meaningful collective decisions, by casting votes in a way that aims at a given result, in conjunction, of course, with the votes of all the other participants.
This means that if we want to account for the role of democratic citizens as decision-makers, rather than mere providers of inputs that feed a process they do not control, we need to conceive of democratic processes not simply as meaningful and responsive to citizens' vote, but as responsive to citizens' votes as intentionally aiming at producing meaningful results; in other words, the achievement of meaningful decisions must not be the product of chance, or of the intervention of some third party, but must be intended by citizens as they participate in decision making.
The comparisons with the drawing of lots and with the third-party hearing procedure, then, throw light on two major and interrelated shortcomings of conceptions of democracy such as Riker's (1982) and their dismissal of integrity consensus as a precondition for meaningful, unified lines of action at the collective level. The first is that they are atomistic conceptions of democratic decision-making. They conceive of individual participation in the political process as the separate expression of personal preferences or opinions rather than the active participation in a collective decision-making process. In these accounts of democracy citizens do not make decisions, because the final outcome of democratic procedures is determined, so to speak, 'behind their back': decisions are made as a result of their desiderata or preferences but are eventually determined by processes of aggregation and adjudication in which they do not participate as decision-makers. Atomism prevents them from acting as decision-makers because democratic decision-making is a collective process, which requires contributing with one's vote to a common effort to bring about a meaningful decision by a majority (or to oppose it by a minority).
The second and consequent shortcoming of these conceptions is that they conceive the outcomes of the democratic process as unintentional. Of course, in a process such as democracy in which many people participate with equal influence, for each of the participants the result cannot be 'intentional' in the sense that they have personal and absolute control over it. However, the results of the democratic process can be intentional in the sense that each of the participants aims at producing-by interacting with all the other participants-the final content and meaning of the outcome of the collective process of decision-making.
In sum, by construing democratic decision-making as atomistic and unintentional, Rikerian models of democracy do not simply rule out democratic self-determination and integrity consensus.They also offer a very impoverished and shallow reconstruction of the value and functioning of democratic prerogatives, and of the individual political agency of democratic citizens.Integrity consensus is an essential condition for collective decisions to be meaningful and therefore for democratic participation to be conceived as a process of collective decision-making, rather than a random device fed by an atomistic pattern of individual votes.
Deliberation, Integrity Consensus and Pluralism
The insights about the essential connection between individuals' role as decision-makers and self-determination on which I built my argument in the last section mostly come from the deliberative camp.They aimed at vindicating not only the importance of self-determination for the democratic ideal but also a richer model of political agency than the one assumed by minimalist accounts of democracy like Riker's (1982).In the deliberative model, political agency does not simply consist in the expression of one's wishes and opinions, but in the participation in a reasoned collective process of decision-making, in which citizens exercise what is called discursive rationality (Bohman, 1996;Dryzek, 1992).
Another important, and related, suggestion coming from the deliberative camp responds to the second major claim of the Rikerian model of democracy, that is the claim that the kind of consensus required for collective self-determination and integrity is impossible to achieve, and in any case undesirable, because such integrity consensus is incompatible with pluralism.
In response to the claim that integrity consensus is impossible to achieve, some deliberativists have suggested that once we construe political agency as the participation in a reasoned collective process of decision-making, then we also find the way to overcome the paradoxical results exposed by social choice theory, thus proving that integrity consensus is not only needed, but also possible (for some classical statements see Cohen, 1989;Dryzek, 2002;Elster, 1986;Sunstein, 1988), and therefore that democratic decisions can be meaningful.
The fundamental idea is that democratic decision-making turns out to be a random and meaningless process only if we assume, like Riker's model does, that democracy consists in the aggregation of idiosyncratic preferences or opinions that people develop and entertain in private. If instead we conceive of democracy as a decision-making method in which voting only comes at the end of a process of collective reasoning, then we also produce the conditions for avoiding cycles and inconsistent sets of decisions, thus ensuring an adequate level of integrity and unity at the collective level. The factual premise that underlies this claim is that by deliberating-by exchanging reasons in public-the democratic polity will naturally converge towards a reasoned consensus, and reasoned consensus guarantees consistency and integrity in the decisions made through majoritarian devices.
This factual premise, in turn, is based on the assumption that public deliberation has a transformative effect on individual preferences (Christiano, 1993;Goodin, 1986;Pildes & Anderson, 1990;Sunstein, 1991).Deliberation makes people justify their preference orderings in public; this induces them to revise their preference orderings in such a way that they will be justifiable according to reasons and principles others can share.Selfish or idiosyncratic preference orderings will not withstand public scrutiny and will therefore be discarded or revised (Elster, 1998b;Goodin, 1986).The process will then naturally order individual preferences in common and publicly shared patterns.In other words, it will produce consensus (Cohen, 1989;Elster, 1986).
These claims have been challenged by a copious empirical and theoretical literature. 5 But even if it be granted for the sake of the discussion that deliberation does produce consensus, we are left with the second main challenge to collective integrity, which is the claim that, even if integrity consensus is possible, it is undesirable because it suppresses pluralism, which is a defining and valuable feature of a free, democratic society (Knight & Johnson, 1994; Rescher, 1993). In fact, the suppression of difference is politically suspect (Sanders, 1997; Young, 1996) and the dissolution of disagreement may even undermine rational public discourse (Friberg-Fernros & Schaffer, 2014; Sunstein, 2003). 6

5 Serious doubts have been raised about the assumption that in a pluralistic society, in which people are free to reason in public, the natural outcome of these deliberative interactions would be consensus (Knight & Johnson, 1994; Ottonelli, 2010; Van Mill, 1996). For an assessment of empirical data, see Steiner, 2012, Ch. 6. For overviews of the empirical literature, see Thompson (2008) and Ryfe (2005).

6 For a recent overview and discussion of the contrast between pluralism and consensus in deliberative democracy, see Martí (2017).
The preoccupation with difference and pluralism, as is well known, has made many leading deliberativists give up consensus as an ideal or a goal of deliberation (Mansbridge et al., 2010). However, some strands of the theory of deliberative democracy specifically preoccupied with integrity consensus-i.e., with the specific role that consensus plays in ensuring the conditions for integrity and unity of agency at the collective level-have not abandoned this ideal; for it may be argued that, contrary to other functions of democratic consensus, integrity consensus does not require especially high levels of homogeneity in people's views. 7 All that is needed is for the polity to display a sufficient degree of consensus to make its decisions meaningful and consistent.
In fact, a well-known discovery of social choice theory is that for majority voting to be immune from paradoxical and inconsistent results it is enough that individual orderings of the alternatives on the political agenda be 'single-peaked' (Black, 1958), meaning that they can be represented along a common spatial dimension.Borrowing this model from social choice theory, some mainstream deliberative theories of democracy have replaced the ideal of full consensus as a guarantee for integrity with less demanding ideals, such as 'metaconsensus', 'agreement at a metalevel' or consensus on the dimensions of political decisions (Dryzek & List, 2003;Dryzek & Niemeyer, 2006;List, 2002;List, 2018;List & Koenig-Archibugi, 2011;List, Luskin, Fishkin, & McLean, 2013).These lighter forms of consensus do not demand that citizens have identical preferences with regard to the available policy options; they only require that they share the same ways of conceptualizing such policy options and their implications along the same dimensions.It can be shown that if this requirement is fulfilled, then the individual orderings of the options on the political agenda will be 'single-peaked' (Dryzek & List, 2003;Dryzek & Niemeyer, 2006;Miller, 1992), and therefore majoritarian decision-making will produce decisions that display consistency and integrity.
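To make the single-peakedness point concrete, the following minimal sketch in Python (with a hypothetical three-voter, three-option profile, not drawn from the works cited above) contrasts a preference profile that generates cycling majorities with a single-peaked profile over the same options, for which pairwise majority voting yields a transitive ordering.

from itertools import combinations

def pairwise_majorities(profile):
    """Majority winner for each pair of options, given a list of voter rankings (no tie handling)."""
    options = profile[0]
    winners = {}
    for a, b in combinations(options, 2):
        a_votes = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
        winners[(a, b)] = a if a_votes > len(profile) / 2 else b
    return winners

# A pluralist profile whose majorities cycle: A beats B, B beats C, yet C beats A.
cyclic = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

# A single-peaked profile on the dimension A-B-C: majorities are transitive, with B the winner.
single_peaked = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"]]

print(pairwise_majorities(cyclic))
print(pairwise_majorities(single_peaked))

In the first profile no option is stable under majority rule, while in the second the shared evaluative dimension makes the majority outcome consistent.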
The role of deliberation in achieving consensus has been revised accordingly: we cannot hope for deliberation to produce full convergence on how people rank different policy alternatives on the agenda, but we can still expect deliberation to produce consistent and rational collective decisions to the extent that it will generate a common dimension along which to conceptualize and evaluate such alternatives (List, 2002).This common conceptualization, by fitting the requirement of single-peakedness, will ensure that the outcomes of democratic procedures display integrity.So, for example, in a society that is deeply divided about the immigration policies to adopt, people will have very different views about how to rank the available options in terms of the right of immigrants to access to the national territory, their enjoyment of social and economic rights, and the acquisition of citizenship.But through deliberation people might come to characterise and weigh those disparate stances along one and the same evaluative dimension, for example exclusion/inclusion, ranging from those positions that most value excluding foreigners from all sort of access, to those that most value their full access and integration into society.If this shared conceptualization is made possible, then it can be shown that the choices made by majority voting will be consistent and display a unified line of action, despite the deep pluralism that characterizes the society in this policy area.
Why the Deliberative Approach to the Building of Integrity Consensus Does Not Fulfil Its Promises
The models that rely on deliberation as a device for producing integrity consensus (from now on, D-models), by stressing the importance of discursive rationality as an essential component of democratic agency, promise to rescue the notion of popular control and self-determination as essential to the democratic ideal.According to this line of reasoning, a democratic process guided by discursive rationality is conducive to consensus to a degree sufficient to guarantee integrity and consistency at the collective level, thus avoiding the paradoxical outcomes that would make the idea of self-determination nonsensical.Moreover, to the extent that D-models allow for lighter forms of consensus than full substantive unanimity, they promise to achieve such a goal without undermining pluralism.In this section, I would like to raise doubts about the validity of these claims.
Let us consider first the claim that D-models do not undermine pluralism.We have seen that this claim relies on the notion that to guarantee integrity and consistency in democratic decisions we do not need to achieve a high degree of uniformity in how people rank the options on the political agenda and vote on them.Integrity consensus, that is the kind of consensus that is needed in order to ensure collective integrity, can simply amount to a sufficient degree of uniformity, or to the 'metaconsensus' over shared dimensions of policy evaluation that ensures single-peakedness.However, the apparent attenuation of these uniformity requirements does not really reconcile D-models with pluralism.In fact, the problem with D-models does not lie as much in the degree of uniformity that they require in order to ensure integrity as in the way in which such uniformity, however mitigated, is supposed to be promoted by deliberation.
As we have seen, in fact, D-models explain the integrity-promoting role of deliberation by claiming that deliberation creates common patterns of evaluation, opinion and principles that guarantee a sufficient degree of convergence in how people cast their votes.This represents a serious threat to pluralism, because pluralism concerns above all exactly such a variety of patterns of evaluation, opinions and principles.
Most importantly, this also holds for the varieties of the D-model that rely on 'metaconsensus'-the representation of political issues along the same conceptual dimensions, which guarantees single-peakedness.As we mentioned, metaconsensus is compatible with different preference orderings of the alternatives on the political agenda and different ways in which people cast their vote; however, it requires that the members of a polity conceptualize the issues on the political agenda in exactly the same way.This substantially curbs pluralism by making everyone look at the world through the same conceptual lenses (Ottonelli & Porello, 2013).Again, it can be argued that pluralism does not concern so much how people rank policy alternatives but most importantly consists in people's having different worldviews-different ways to conceptualize their social world and represent different policies, their impact and their relations to underlying values and principles.For instance, in our example of a community divided on immigration policies, the reduction of all their different stances to the dimension exclusion/inclusion significantly shrinks the variety of ways in which the difficult political choices involved can be conceptualised, ruling out other possible dimensions of the debate, such as the relation to local and global economic justice, or the degree in which personal freedom and free movement are achieved.
In sum, with regard to pluralism, the problem with the D-models of how integrity consensus can be built-including the apparently less demanding variety based on 'metaconsensus'-is not so much that they produce uniformity in voting patterns, but that they promise to achieve such an outcome by narrowing the cognitive and evaluative landscape of the members of the polity.
Let us now turn to a second problem with the way D-models promise to rescue the integrity-building role of consensus. In D-models, integrity consensus, and the consistency of collective decisions that such consensus makes possible, are generated as side effects of deliberation, and therefore cannot be seen as the intentional product of participants' political agency. In this respect, deliberative accounts of consensus do not rescue democratic participation from the main shortcomings of Rikerian models: atomism, by which democratic decision-making is conceived as the product of separate decisions at the individual level, and unintentionality, by which the outcomes of the democratic process of decision-making are produced as unintended results of the participants' involvement.
These shortcomings appear evident if we look at how deliberation is assumed to produce consensus in D-models. As previously mentioned, according to D-models the convergence of preference orderings, or their alignment along single-peaked patterns, is achieved because deliberation acts as a 'filter', by discarding those preference orderings that cannot be publicly justified to others (Dryzek, 1990; Dryzek & List, 2003, quoting Elster, 1998a; Goodin, 1992).
However, this filtering is produced as a side effect of the participants' engagement in discursive processes in which they aim at providing reasons and making claims that others can also share.The final vote they take on the issues on the political agenda is fully determined by this process, rather than by their conscious attempt to steer collective decision-making in a way that instantiates integrity and consistency.In deliberative theories of the democratic process, consensus-whenever it emerges-is a by-product of deliberation (Fuerstein, 2014).
It might be objected that this is not true of all accounts of deliberation.In Habermasian accounts, for example, participants do not simply aim at exchanging reasons that can be shared by others, but actively aim at reaching (rational) consensus because such a consensus is an essential presupposition of their discursive practices.However, this is not enough to make the process intentional in the sense relevant for the present discussion.In fact, if we want to obviate the faults of Rikerian models of the democratic process and fully account for citizens as engaged in a process of collective decision-making, we need to conceive of citizens as pursuing consensus in view of its importance for achieving meaningful decisions.That is, consensus must be pursued in its integrity-building role, as a guarantee of collective integrity and democratic self-determination.In D-models, instead, participants are at best interested in consensus simply as an ideal endpoint of their rational deliberation.When such consensus is reached, and therefore the choices produced through the democratic process display the degree of consistency and integrity necessary for them to be meaningful, this further outcome happens without being actively sought by the participants in deliberation.In this sense, the reaching of consensus-in its integrity-building role-can be described as unintentional.
It might be objected that this discussion assumes too narrow a view of what it takes for integrity consensus to be intentional. After all, in D-models participants in deliberation know that integrity consensus follows, as a natural by-product, from their deliberative activities, even if they do not directly aim at it. Why should this not count as an intentional result? In reply to this possible objection, compare this to the imagined procedure that we discussed in the first Section of the paper, in which people submit their votes to a commission that subsequently mingles them into a series of policies that display integrity and consistency, and therefore a meaningful and unified line of action. Also in this case the participants know that a meaningful result will ensue from their participation and take an active part in producing such a result by submitting their vote; however, this is not enough to make them the intentional authors of the meaningfulness and unity of such a line of action. In the same way, participants in a deliberative process that does not aim-among other things-at producing integrity consensus cannot be said to be the intentional authors of the unity of such a line of action.
This feature of D-models pairs with their atomism. In D-models a participant's stance on the decision to be made after deliberation, and the way she votes, is independent of how other people vote. It might be objected that this charge of atomism assumes that D-models display only a form of 'weak dialogicality' (McMahon, 2000). In 'weakly dialogical' models, people exchange reasons in public but then issue separate individual judgments about the decision to be made. But this is not the only way we can conceptualize deliberation in D-models. In 'strongly dialogical' models not only the deliberative stage in which individual judgments are formed and transformed, but also the very moment at which people vote or otherwise express their conclusive views about what should be done are collective and reciprocal-each participant expresses her judgment while knowing that others will do the same. In William Rehg's (1991) words, "I can be rationally convinced of the worthiness of a norm only if I suppose that others are rationally convinced, which in turn depends on their supposing that I am rationally convinced. If this is not to be a vicious circle, then rational conviction must be something that we arrive at together" (p. 44). 8 Appealing to strong dialogicality, however, is not enough to dispel the charge of atomism in the sense relevant for our discussion. If citizens must act as decision-makers, and therefore must directly aim to achieve the consensus needed to produce meaningful decisions, then the decision-making process must be non-atomistic in the sense that it must allow them to mindfully coordinate their actions. An account of decision-making is genuinely non-atomistic in the sense specified, then, when in making such an effort of coordination each voter takes other people's votes as an independent reason for deciding how to vote. In D-models, instead, voters do not take into account the way other people vote or express their political preferences as an independent reason for action, but only as evidence of other people's judgments and only to the extent that they are supposed to head in the right direction. This is required by the deliberative nature of the process. When I am deliberating with others, to take their vote as a basis for my voting, I need to assume it reflects good or valid reasons; this means that in casting my vote what I actually take into account is not my fellow citizens' votes, but the reasons behind those votes. In this sense, although in deliberation people influence each other by arguing and exchanging reasons, the deliberative model of decision-making is no less atomistic than Riker's model.

8 Quoted by McMahon (2000, p. 521).
In sum, D-models of democratic decision-making, notwithstanding that they offer a richer account of political agency than Rikerian and minimalist models, still represent democratic decision-making as atomistic in the sense that votes
are fully independent of each other, and are unintentional in the sense that integrity consensus is not aimed at by participants but is instead a by-product of their deliberative exchanges. Therefore, D-models cannot rescue the fundamental notion that, in a democracy, citizens must act as decision-makers.
The Missing Element: Political Prudence
If we want to fully rescue the integrity consensus that underlies self-determination, we need a different account of consensus-building from the one provided by D-models. Such an account must explain consensus-building as the intentional product of the agency of participants in democratic decision-making.
In this section, I suggest that what is needed is to add a further dimension of political agency to the discursive rationality instantiated in D-models.I propose to call this missing element political prudence (PP).
By political prudence I mean the general capability of devising the opportunity and rationale for engaging in the different political activities allowed by democratic political rights.Examples of the exercise of political prudence are the decision whether to participate in a general strike against the government; whether to desert the ballot box as a form of protest; whether to engage in deliberation and pursue one's political vision by trying to convince the general public of the rightness of one's views, or whether to resort to compromise and accept a middle ground with one's political opponents; and more generally all the instances in which it behoves citizens to judge how best to exercise their political rights.
Many deliberative theories of democracy, and notably all those that allow for compromises and deviations from pure deliberation, seem to rely on the capacity for political prudence: they assume that citizens may judge when the moment has come to relinquish the purely deliberative mode and engage in negotiations and compromises (Bohman, 1996;Habermas, 1996;Mansbridge, 1999;Mansbridge & Martin, 2013), or, conversely, when they should try to push for a given political issue to be debated in the deliberative mode.However, in deliberative accounts, such a capacity is seldom theorized as a distinct component of democratic agency.
Among the tasks of political prudence, I want to suggest, is also the assessment of the reasons, circumstances and means for achieving integrity consensus.In a model of how integrity consensus is achieved through the resort to political prudence (from now on, PP-model), citizens engage in deliberation and other political activities enabled by their democratic rights while at the same time keeping an eye on the overall effects of their activities on collective decisionmaking.This may guide their choices and actions in at least two ways.
First, it can encourage citizens to think about the right timing for pressing an issue for public deliberation, in consideration, for example, of the destabilizing effects that it may have on the political system and existing majorities, or of the proximity of electoral junctures that would make such effects dangerous for political stability.Deliberation does not flow by itself, and the order and mode in which specific issues come to the fore in the public sphere is decided by political parties, movements, associations and individual citizens.These decisions have an enormous impact on the capacity of the polity to preserve integrity and unity of agency.In fact, in his attack on the populist ideal, William Riker (1982) provided various examples of how throwing a divisive issue on the agenda, or framing the underlying principles in an inflammatory way, can create cycles, paradoxes and inconsistency.9There is no reason to believe that this knowledge can only be used (as Riker suggests) by malevolent political actors who aim at destabilizing existing majorities.It can be used, and is often used, by the citizens of a democratic polity to maintain consensus and integrity when they are needed.
Second, and very importantly, citizens can also decide to vote 'insincerely' or 'strategically'-that is, in a way that does not reflect the way they would order the policy options if they had merely to decide on the basis of their own conscience or preferences.Strategic voting is often associated with manipulation, by which voters falsely represent their preferences to produce an outcome that favours their preferred options.However, strategic voting can also be used for other purposes than self-advantage.It can be instrumental to the pursuit of the consensus that is necessary to make meaningful collective decisions, when citizens realize that 'sincere' voting would create instability or inconsistent results at the collective level, thus compromising collective integrity and the meaningfulness of democratic choices.
Upon reflection, we can see that the PP-model offers a familiar description of powers and activities already instantiated in our democratic polities and protected by democratic rights.The introduction of the notion of political prudence is not a call for dramatic changes in our institutional practices; rather, it is meant to account for the importance of the ample leeway democratic rights offer citizens in terms of intentionally steering collective decision-making through their political activity and engagement.
If we conceive the building of integrity consensus as the product of intentional actions and decisions by the citizens of a democratic polity, then we also recognize that it is up to them whether and to what extent they want to pursue such an aim.The premise of our argument so far is that the members of a democratic polity have an interest in acting as decision-makers, rather than mere gears in a mechanical device that produces random selections among policy alternatives.This interest must translate into the possibility of intentionally taking action to ensure the consistency of democratic decisions and the integrity of the democratic polity.However, this is an interest that citizens always need to balance against other considerations, including the dictates of their conscience or the need to keep dissensus alive.In other terms, conceiving of consensus-building as the product of the intentional actions of citizens, as the PP-model does, also leaves citizens the room to decide how to balance and negotiate the unity of agency of the polity with other interests that pull in the opposite direction.So, for example, the PP-model recognizes that citizens can decide when restoring collective integrity would require too high a price to pay in terms of justice, so that they should pursue their view of justice by arguing for it and voting accordingly.Moreover, this model recognises that citizens have the discretion to decide when it would be a good thing, after all, if the existing consensus and integrity were broken. 10 The PP-model of how consensus is built in democratic decision-making is obviously-almost trivially-immune from the flaws that affect the models we have previously discussed.First, in the PP-model, democratic decision-making is not atomistic, since citizens, in casting their votes, take into account the consequences their votes will produce once aggregated with the votes of others.This means that the way other people will vote counts as an independent reason for deciding how to vote, rather than mere evidence for what counts as a good decision, as is the case with D-models.Convergence and consensus are created as the result of an intentional pursuit of coordination by each and every citizen.
Second, in the PP-model the seeking and achievement of integrity consensus can be intentional. Citizens cast their vote and participate in public deliberation having in view the overall results of their actions in terms of the integrity and consistency of their political community as a self-directing polity. Integrity-building consensus, then, will not be achieved mechanically, or as a by-product of democratic activities that are directed to other goals (be that deliberating, compromising, 11 or other forms of political interaction), but it will be the product of the exercise of intentional actions by citizens.
10 This is a main difference from those models of democratic decision-making that are based on a pre-commitment to abide by the decisions of the majority (Gilbert, 2006), as well as from those models that make the acceptance of the position of the majority conditional on the quality of the deliberative process (Moore & O'Doherty, 2014). 11Henry Richardson (2002) has addressed the problem of agreement within a democratic polity by appealing to 'deep compromises.'These compromises are motivated by the joint intention to achieve convergence in the light of the realization that each party has its own separate and legitimate aims and goals.Richardson's compromises are not atomistic, because they require coordination.However, they are not a satisfying solution to the problem we are considering here, because like other kinds of compromise they cannot guarantee integrity at the collective level (Besson, 2005); and, whenever they happen to produce such integrity, integrity is not the object of the intentional action of the participants in the process.
Third, the PP-model can find room not only for pluralism, but also for open dissent.This is true in two senses.First, in the PP model the substantive consensus that guarantees collective consistency and integrity does not need to be reached through a transformation and reductio ad unum of people's underlying views, values and principles.Second, as we mentioned political prudence allows for a careful and calculated balancing of the need to reach collective integrity and meaningful decisions with the contrasting interest in expressing one's differences with, or open dissent from, the views of the majority.The control that political prudence leaves to the democratic public allows minorities to consider acceding to the consensus only to the extent strictly required to ensure integrity at the collective level.
Coordination, Motivations, and the Risk of Manipulation
The PP-model of democratic decision-making, whatever its merits, may not look feasible.The exercise of political prudence, even if limited to the achievement of collective integrity, requires a high degree of coordination and the ability to forecast the overall results of one's choices once they interact with other people's actions through democratic procedures.In response to this concern, two important points need to be stressed.
First, the PP model is just a normative and regulative model.It does not ensure citizens will always be able to meet its standards.What such a model does is to make sense of our intuition that in democratic government an essential connection exists between individual agency and collective self-determination and that this connection requires that citizens may intentionally contribute to steering the course of collective decision-making.
Second, the objection overstates the coordination problems involved in the PPmodel, and overlooks some of the powerful instruments citizens can count on in devising their deliberative and voting strategies.A complete account of how coordination problems relate to how collective integrity can be overcome exceeds the limits of present discussion.However, I would like to mention two obvious tools we are currently using, in our democracies, to coordinate our actions with those of our fellow citizens and forecast their deliberative and voting behaviours.The first tool is parties and other political organisations.Parties themselves are powerful means for coordination (Budge, 2006;Dewan & Myatt, 2007;Sartori, 2005), but their internal life and relation to the rest of the polity are also a fundamental source of information about the intentions and aims of their affiliates.A second tool is opinion polls.These are very controversial elements of our current practices, especially when they are uncritically taken as plain expressions of 'public opinion' (Bourdieu, 1973), or when they are subject to manipulation by party leaders to fake consensus (Herbst, 1995;Jacobs & Shapiro, 1995).However, they play an essential informative function for the democratic public.Polls change, and we may conjecture that this happens because citizens reorient their intentions in light of data acquired though previous polls (Fey, 1997).
Even if concerns about the actual capacity for coordination are dispelled, doubts can be raised about whether the democratic public can ever develop the right motivations for engaging in such complex practices.This motivational challenge can be addressed by recalling the response that some deliberativists have offered to a parallel objection that was directed against the idea that citizens would be motivated to engage in deliberation.They have rightly pointed out that, in spite of being presented as 'realistic', the alternative models that depict democracy as a game for the competition of interests are unable to explain the actual practice of democracy and fail to offer a credible account of why citizens should be motivated to participate in such a dismal game.In the same vein, it may be argued that political prudence, far from being unrealistic, is what makes sense of the actual practice of democracy.Citizens would not be as motivated to participate if they knew they could not exercise any direct control on the meaningfulness of democratic decisions, or if they knew their participation would be likely to issue in inconsistent and paradoxical decisions.So, they have an inherent interest in steering the decision-making process in such a way that its final results make sense and display integrity, and that they do so as a result of citizens' actions and choices.
A final, important concern may be raised about the fairness and non-manipulability of citizens' voting as guided by political prudence. The PP-model may seem to hand too much power and discretion to participants, and we may worry about possible abuses and path-dependence. Minorities can be induced to consensus by the threat of breaking up collective integrity. 12 Even worse, when a sufficient degree of consensus is lacking and needs to be built through the conversion of some participants to voting patterns that will guarantee consistency and stability, which groups get saddled with that task may depend on arbitrary circumstances and may unfairly burden disempowered minorities. These effects are supposedly not possible in D-models, in which the whole process of consensus-building is guided by public argument and therefore is not subject to arbitrariness. However, three important facts need to be recalled about the PP-model. First, the danger of manipulation can be tamed by a high degree of publicity and transparency in the institutional tools used for coordinating votes. For example, measures can be taken to ensure the independence, openness and non-manipulability of the data provided by opinion polls, and the internal life of parties can be made more transparent to the general public. Second, in the PP-model the exercise of political prudence is not supposed to replace deliberation, but to complement it. Moreover, the PP-model extends deliberation not only to the merits of different proposals but to the very process of elaborating collective strategies for arriving at integrity-building consensus. Arguments for voting in a way that does not reflect one's judgment on what the correct decision would be, arguments for withdrawing one's consensus and openly expressing dissent, and arguments for doing so even when this threatens collective integrity, should be made public and be part of the deliberative process. 13 So, if deliberation has any power to resist manipulation and arbitrary path-dependence, it could be counted upon also in the PP-model.
Conclusion
Integrity consensus is an essential element of the democratic ideal because it guarantees that citizens can act as decision-makers rather than mere givers of inputs into random and potentially inconsistent decision procedures. However, if our valuing integrity consensus is grounded in our interest in citizens' self-determination and agency, then such consensus must be reached in ways that allow for self-determination and agency. I have argued that purely deliberative models of how integrity consensus is to be achieved cannot fulfil this requirement. If we want to account for the integrity-building role of consensus as a guarantee of democratic self-direction, along with the exercise of deliberative rationality we need to explicitly theorise a dimension of political agency-political prudence-that consists in exercising intentional control on the overall effects at the collective level of one's actions in the course of collective decision-making.
CFD Application to Hydrogen Risk Analysis and PAR Qualification
A three dimensional computation fluid dynamics (CFD) code, GASFLOW, is applied to analyze the hydrogen risk for Qinshan-II nuclear power plant (NPP). In this paper, the effect of spray modes on hydrogen risk in the containment during a large break loss of coolant accident (LBLOCA) is analyzed by selecting three different spray strategies, that is, without spray, with direct spray and with both direct and recirculation spray. A strong effect of spray modes on hydrogen distribution is observed. However, the efficiency of the passive auto-catalytic recombiners (PAR) is not substantially affected by spray modes. The hydrogen risk is significantly increased by the direct spray, while the recirculation spray has minor effect on it. In order to simulate more precisely the processes involved in the PAR operation, a new PAR model is developed using CFD approach. The validation shows that the results obtained by the model agree well with the experimental results.
Introduction
During severe accidents, hydrogen can be generated in watercooled reactors by metal-steam reaction.Hydrogen which is released into the containment may form combustible or even detonable gas mixture in the containment.As one of the mitigation measures against severe accidents in the Qinshan-II NPP, the containment spray system which starts when the containment pressure reaches the threshold value is utilized to prevent the containment overpressure.In the viewpoint of hydrogen risk, the spray operation is concerned due to two aspects.In one respect, the condensation introduced by the operation of the containment spray reduces the steam concentration, which leads to an increase of the hydrogen concentration and adds to the hydrogen combustion or detonation risk.In the other respect, the containment spray brings a gas temperature difference in the containment which promotes the gas mixing and leads to a more uniform distribution of the gas concentration.In this paper, a threedimensional CFD code GASFLOW [1] is utilized to simulate the gas mixing and distribution in the LBLOCA and to evaluate the effect of the containment spray on the hydrogen risk.The case assuming no activation of the containment spray is selected as the base case.Two other cases assuming different containment spray operation strategies are involved in this paper.One of the spray cases considers only the direct spray (indicated as Case A, hereafter), while the other simulates the complete spray operation according to the design of the containment spray system, which includes the direct and recirculation spray (indicated as Case B, hereafter).The same source term is used in all the cases.
As one of the major hydrogen mitigation measures, the passive autocatalytic recombiner (PAR) has been widely used in nuclear power plants.In most studies presented in open literatures, including lumped parameter code analysis and CFD code analysis, the PAR is simply simulated by introducing energy and mass source terms obtained from empirical correlations.According to the state-of-art report on PAR proposed by the PARSOAR project [2], a theoretical PAR model is recommended for the CFD analysis codes, such as GASFLOW, TONUS.In this paper, a PAR model is developed based on the CFD approach, in order to provide more insight into the processes inside a PAR.
Hydrogen Risk Analysis for Qinshan-II NPP
2.1. Containment Geometry and Mitigation Systems. The containment of Qinshan-II NPP is a large dry containment which consists of a cylindrical part and a spherical dome, as indicated in Figure 1. The height of the containment is about 60 m, and the diameter is about 38 m [3]. The compartments are mainly located below the operation deck at the height of 20 m. The main components of the two primary loops are symmetrically arranged in the containment. The deck at the height of 4.5 m supports the major heavy components including the steam generators (SGs), the reactor coolant pumps (RCPs) and the safety injection tanks (SITs). Similar to most pressurized water reactors (PWRs), the pressurizer (PZR) is located in a room next to one of the SG towers. The bottom of the PZR room is at a height of about 11 m. The PZR relief tank room is located on the 0 m floor, directly under the PZR room. The refueling pool extends from 6 to 20 m in height. It connects with the reactor cavity in the center of the containment and reaches the containment wall. There are other small rooms accommodating valves, piping and heat exchangers on the 0 m floor and the underground floor. All the above-mentioned rooms are located inside a cylindrical missile shielding wall which protects the containment from ejected missiles. The space above the operation deck is much more open: only the SG towers and the PZR room extend beyond the deck, and a crane is installed in the dome. GASFLOW can generate structured meshes in both Cartesian and cylindrical coordinates. Given the characteristics of the containment geometry, the cylindrical coordinate system is selected. The mesh size is adjusted according to the location of the structures in the containment so that the geometry can be described with a coarser mesh system without increasing the computational cost. In the mesh system, 18, 60 and 51 cells are arranged in the radial, circumferential and axial directions, respectively. The free volume of the containment is about 50,000 m³. The average cell volume is about 1.4 m³. The total wall surface area is about 24,000 m², most of which is concrete. The components of the primary loop are treated as adiabatic because they are covered with an insulating layer. In order to mitigate the hydrogen risk during severe accidents, 22 passive autocatalytic recombiners (PARs) of Siemens type are installed in the Qinshan-II NPP containment compartments. Table 1 lists the position and type of each PAR. Each PAR is simulated with a single mesh cell.
In Qinshan-II NPP, two separate containment spray systems are installed. According to the design of the spray systems, the containment can be depressurized with only one of them during severe accidents. Each system includes two nozzle rings, as shown in Figure 2, on which about 250 nozzles are attached. The mass flow is uniformly distributed to every nozzle. Heat exchangers in the system control the temperature of the spray water. The spray systems are designed to operate in two modes: direct spray mode and recirculation spray mode. The direct spray starts when the pressure in the containment reaches 2.36 bar. During the direct spray, the spray water comes from the refueling tank, and the temperature at the nozzle outlet is in the range from 20 to 40 °C; in this paper, the temperature of the spray water is taken to be 27 °C. In about 30 minutes, the water in the refueling tank is used up, and the spray then switches to the recirculation mode. In the recirculation mode, the spray water is pumped from the water sump in the containment. Because the water in the sump originally comes from the primary loop or the spray water and is at a high temperature, the temperature of the recirculation spray water is higher and is designed to be from 40 to 120 °C; in this paper, the water temperature of the recirculation spray is taken as 77 °C. According to the design of the system, the spray mass flow rates in the direct and recirculation spray modes are 814 and 1050 tonnes per hour, respectively [3].
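The switching logic described above (direct spray triggered at a containment pressure of 2.36 bar, roughly 30 minutes of direct spray until the refueling tank is exhausted, then recirculation at a higher water temperature and flow rate) can be summarised in a short sketch. This is only an illustration of the stated logic, not GASFLOW input; the function and variable names are hypothetical.

```python
# Illustrative sketch of the Qinshan-II spray-mode logic described in the text.
# All names are hypothetical; numerical values are taken from the paragraph above.

DIRECT_SPRAY_THRESHOLD_PA = 2.36e5   # containment pressure that triggers the direct spray
DIRECT_SPRAY_DURATION_S   = 30 * 60  # refueling tank lasts about 30 minutes
DIRECT_FLOW_T_PER_H, DIRECT_T_C = 814.0, 27.0    # direct spray: flow rate, water temperature
RECIRC_FLOW_T_PER_H, RECIRC_T_C = 1050.0, 77.0   # recirculation spray: flow rate, water temperature

def spray_state(time_s, pressure_pa, direct_start_s=None):
    """Return (mode, flow_t_per_h, water_temp_C, direct_start_s) for one time step."""
    if direct_start_s is None:
        if pressure_pa >= DIRECT_SPRAY_THRESHOLD_PA:
            direct_start_s = time_s                  # direct spray starts
        else:
            return "off", 0.0, None, None
    if time_s - direct_start_s < DIRECT_SPRAY_DURATION_S:
        return "direct", DIRECT_FLOW_T_PER_H, DIRECT_T_C, direct_start_s
    return "recirculation", RECIRC_FLOW_T_PER_H, RECIRC_T_C, direct_start_s
```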
Physics Model.
In GASFLOW two approaches are provided for two-phase flow simulation [4].In the case that the spray model is not activated, the homogeneous equilibrium approach is applied automatically, which assumes the liquid and gas phases are in both thermodynamic and mechanical equilibrium.Because the containment spray brings strong transient and thermodynamic non-equilibrium, in order to more exactly simulate the interaction between the liquid and gas phases during the spray operation, GASFLOW offers another approach in which the thermodynamic nonequilibrium between liquid and gas phases is considered, but the difference of mechanical behavior between the liquid and gas phases is still neglected.The GASFLOW spray model has been validated with TOSQAN experiments and provided satisfactory predictions of the experiment data [5].
The spray simulation induces a much smaller time step size and increases the computation time by several times.Due to heavy time consumption in the spray simulation, turbulence model is not used in all the cases because the computational cost becomes unacceptable while using the turbulence model.According to the experience from Karlsruhe Research Center (FZK), GASFLOW predicts nearly the same flow field and vortex formation with and without turbulent models in a coarse grid [6], such as the grid used in containment geometry model.The heat conduction inside the structures is simulated in a one-dimensional approach.The PARs installed in the containment were simulated with the standard model provided by the GASFLOW code.
Accident Scenario.
The hydrogen/water source term for this analysis was obtained by scaling the GKN surge line LBLOCA source term reported in [6], as indicated in Figure 3. The break is located in the lower part of the SG tower next to the pressurizer room. At the beginning, a large amount of saturated water is discharged to the containment. The water discharge flow rate decays promptly due to the limited coolant inventory in the primary loop. After the saturated release period, the water in the reactor core is heated by the decay heat, which raises the water temperature to a superheated condition. At about 1400 seconds, hydrogen generation and release start. About 270 kg of hydrogen is released into the containment during the first 7000 seconds. At 5930 seconds, the peak hydrogen release rate occurs due to an enhancement of the steam/zirconium reaction after the failure of the core support. Besides the source term, Figure 3(a) also indicates the two spray periods. The start time of the direct spray is determined according to the pressure variation obtained in the base case: in about 60 seconds, the pressure in the containment reaches the threshold value and the direct spray starts.
Thermal Hydraulics.
At the beginning of the LBLOCA accident, along with a heavy discharge of water to the containment, the pressure and gas temperature inside the containment increase sharply.The hot steam mixes promptly and intensively with the atmosphere and soon spreads through the containment, which results in strong condensation on the structure surface.The condensation helps delaying the containment pressurization in the severe accident.Figures 4 and 5 compare the pressure and gas temperature variation in three cases.In all the cases, a sharp pressure and temperature increase occurs and the pressure reaches the maximum at the beginning of the blowdown.Although different approaches are utilized to deal with the two phases in the base case and spray cases, large discrepancies of the global behavior are not observed before the spray activation in the cases.For the base case, the pressure and gas temperature variation inside the containment is mainly affected by the water injection and the condensation on the structure.The computation result shows that, with the current source term, the pressure does not exceed the design value.In Case A and B, the pressure and the gas temperature inside the containment show lower peak values and decrease fast after the activation of the direct spray until its shutdown.Without the circulation spray, the pressure and gas temperature rebounds to high level in Case A because the hot steam continues being discharged after the shutdown of the direct spray.In Case B, the recirculation spray holds the pressure and the gas temperature slightly higher than the lowest value induced by the direct spray.Figure 6 presents the bulk evaporation rate in the containment during all the cases.The direct containment spray introduces intensive bulk condensation (negative value in Figure 6).Conversely, the recirculation spray brings bulk evaporation.The evaporation of the spray water is beneficial with respect to hydrogen risk because it increases the steam concentration and builds up an inertial atmosphere which resists the hydrogen combustion.With respect to hydrogen risk, the outlet temperature of the recirculation spray should be optimized to enhance the evaporation of the spray water.Figure 7 gives the total condensation rate on the structure surface.During the direct spray phase in the Cases A and B, the condensation on the structures is at a much lower rate than that in the base case due to the enhancement of the bulk condensation which reduces both the gas temperature and steam concentration.The condensation on the structures is one of the major ways in which heat is transferred from the atmosphere to the structures.Hence, a low condensation rate leads to a low heat transfer rate from the gas to the structures, as indicated in Figure 8.However, during the recirculation spray in Case B, the condensation on the structures is observed to be at a similar rate with that in the base case.In Case B, both the bulk evaporation and the surface condensation are enhanced by the recirculation spray.The evaporation of spray water takes away a lot of sensible heat from the atmosphere and contributes to stabilize the gas temperature inside the containment.Comparing both the bulk evaporation rate and the surface condensation rate in the Cases A and B shows that the recirculation spray generally increases the steam inventory inside the containment, which reduces the hydrogen combustion or detonation risk.Hence, it can be concluded that besides controlling the pressure and gas temperature inside the 
containment, the recirculation spray can build a comparatively inertial atmosphere for hydrogen.
2.5. Flow Field. Figure 9 presents the flow fields during the heavy hydrogen release period in all the cases. In the base case and Case A, similar flow fields are observed. Because the released gas is hydrogen-rich, of high temperature and of low density, a buoyant jet flow forms above the SG tower in which the break is located. The jet flow is deflected by the dome and flows downward into the other side of the containment. A large-scale vortex can be observed in the upper space. Because the condensation and the convective heat transfer on the structures remove steam and heat from the atmosphere, the gas near the structures is heavier than the gas in the bulk; hence, a downward flow can be observed near the structure surfaces. During the hydrogen release peak in the base case and Case A, the magnitude of the velocity is generally less than 0.5 m/s. A chaotic flow field is induced by the recirculation spray, as indicated in Figure 9(c); the flow velocities in Case B are much higher than those in the other two cases. As mentioned in Section 2.2, the GASFLOW spray model uses the mechanical equilibrium assumption when dealing with the two-phase flow, whereas in reality the heavier liquid phase is more inclined to fall than the gas phase.
The assumption inevitably leads to an artificial flow.In the actual situation, mechanical interaction between the liquid and gas phases could lead to a flow pattern different from the obtained results.
Hydrogen Recombination, Hydrogen and Steam Distribution.
In the analyzed scenario, the hydrogen release can be generally divided into two periods.The first period lasts from 1400 s to 3500 s.In this period, the global hydrogen volume fraction in the containment reaches 3%, but the flammable clouds (at hydrogen concentration above 4%) rarely appear.Due to the hydrogen-oxygen recombination, the hydrogen concentration can be reduced to less than 3% before the second hydrogen release period starts.During the second period, the hydrogen release is discontinuous.However, intensive hydrogen release between 5900 and 6000 s could lead to extremely high local hydrogen concentration.Figure 10 presents snapshots of hydrogen and steam clouds right after the hydrogen release peak.A clear gas stratification can be observed in both the base case and Case A. Hydrogen-rich clouds are enveloped by steam-rich clouds during most of the time, which provides an inertial atmosphere for hydrogen and prevents early hydrogen combustion.The combustible hydrogen cloud in Case A is of the biggest size among the three cases.Compared with the base case, the steam concentration is low in the spray cases.Due to strong mixing induced by the recirculation spray, hydrogen stratification is not observed in Case B, as shown in Figure 10(c).Hence, the direct spray reduces the steam volume fraction and increases the hydrogen volume fraction, while the recirculation spray does not lead to an increase of hydrogen concentration but prevents the hydrogen concentration stratification.
In the Siemens PAR correlations [4], the recombination rate depends on the pressure and inlet hydrogen and oxygen concentration.Although the pressure is very different in the analyzed cases, as mentioned in Section 2.4, the total recombination rate of 22 PARs does not show great difference in all the cases, as shown in Figure 11.Generally, the evolution of the recombination rate is in the same trend.Following the hydrogen release into the containment, the recombiners start up when the inlet hydrogen concentration reaches the startup threshold (2 vol.%).Along with the hydrogen accumulation in the containment, the recombination rate ascends.At the end of the first hydrogen release period, the recombination is at an almost stable rate.During the second period, when the hydrogen release is discontinuous and at a quite low rate, the recombination rate reduces smoothly and slowly.The oscillation of recombination rate is observed in this period in Case B because strong flow caused by the spray brings the strong variation of hydrogen concentration at the inlet of PARs.After 5900 s, the spray cases show a higher hydrogen removal capability than the base case due to higher global hydrogen concentration.In GASFLOW, the volume flow rate through the PAR is deduced from the recombination rate obtained from Siemens correlations.In this case, the recombination rate is affected only by the gas species concentration and pressure at the inlet of PAR.However, BMC Zx test results [7] suggest an increase of the volume flow rate through PARs and recombination rate due to the spray.
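The Siemens correlations themselves are not reproduced in the paper, but their general structure (a removal rate that grows with pressure and with the limiting reactant concentration at the PAR inlet, plus a 2 vol.% hydrogen start-up threshold) can be sketched as follows. The coefficients a and b below are placeholders for illustration only, not the Siemens or GASFLOW values.

```python
def par_recombination_rate(p_bar, x_h2, x_o2, a=0.1, b=0.05, threshold=0.02):
    """Generic Siemens-type PAR correlation sketch (arbitrary units of kg H2/s).

    The rate grows linearly with pressure and with the limiting reactant:
    hydrogen-lean operation is limited by x_H2, oxygen-lean by x_O2/2.
    The coefficients a and b are illustrative placeholders only.
    """
    if x_h2 < threshold or x_o2 <= 0.0:
        return 0.0                      # PAR not started, or no oxidant available
    x_eff = min(x_h2, 0.5 * x_o2)       # limiting-reactant volume fraction
    return (a * p_bar + b) * x_eff
```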
In order to characterise the hydrogen mixture and the hydrogen combustion risk in the containment, the volume of the sigma cloud is used in this paper. The sigma cloud is the volume of the hydrogen-air-steam mixture with a combustion expansion ratio higher than the critical value obtained from experimental data [8]; flame acceleration could occur in a sigma cloud. Figure 12 presents the evolution of the total sigma volume inside the containment in the three cases. During the analyzed accident, a sigma volume peak can be observed in all the cases. The maximum sigma volume in the spray cases is larger than that in the base case, while the maximum sigma volumes in Cases A and B are not very different. However, a closer comparison shows that the sigma volume in Case A is larger than that in Case B at the second peak, and smaller at the third peak. This implies that the recirculation spray reduces the hydrogen risk during the slow hydrogen release period, but can slightly increase the local hydrogen concentration around the hydrogen source and lead to a larger sigma volume at the moment when the peak hydrogen release rate occurs.
Passive Autocatalytic Recombiner.
A passive autocatalytic recombiner consists of a vertical channel and a stack equipped with a catalyst bed in the lower part, as presented in Figure 13. During severe accidents, the catalyst is in contact with the gas mixture in the containment. Hydrogen reacts with oxygen at the catalyst surface and generates steam, as indicated in Figure 14. The reaction heat released at the catalyst surface causes a buoyancy-induced flow which increases the inflow rate, thereby feeding the catalyst with a larger amount of hydrogen and ensuring a high recombination efficiency. The buoyancy-driven circulation ensures a continuous gas supply to the PAR [2]. The catalyst sheets can be heated up to 900 K or even higher, so a considerable amount of heat is also transferred from the catalyst to the environment by thermal radiation. The left part of Figure 14 shows a typical channel between two catalyst sheets. For small and medium recombiners of Siemens type, both the height and the depth are about 15 cm, and the width of the flow channel is less than 1 cm. In the PAR, the gas velocity u is of the order of 1 m/s, and the gas temperature can vary from 300 K to 700 K. Assuming the gas in the PAR is dry air, the Reynolds number of the flow between the catalyst sheets is Re = 2ud/ν ≈ 400-1250, so the flow in the channel is considered laminar.
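The quoted Reynolds-number range can be checked with a few lines of arithmetic. The kinematic viscosities of dry air used below are approximate textbook values at atmospheric pressure (an assumption, since the paper does not list them), which is why the resulting range only roughly matches the quoted Re = 400-1250.

```python
# Reynolds number Re = 2*u*d/nu for the channel between two catalyst sheets.
u = 1.0      # gas velocity, m/s (order of magnitude given in the text)
d = 0.01     # channel gap width, m (text: "less than 1 cm")

# Approximate kinematic viscosity of dry air at 1 atm, m^2/s (assumed textbook values).
nu_air = {300: 1.6e-5, 400: 2.6e-5, 500: 3.8e-5, 600: 5.3e-5, 700: 6.8e-5}

for T, nu in nu_air.items():
    Re = 2.0 * u * d / nu
    print(f"T = {T} K: Re = {Re:.0f}")   # roughly 300-1250, i.e. laminar flow
```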
Model Development.
A two-dimensional PAR model is developed to simulate the flow in the channel, the heat transfer between the catalyst sheet and gas flow, the heat conduction in the catalyst sheet and the chemical reaction on the catalyst surface.The variation of flow velocity, temperature and gas concentration in the depth direction is then neglected.The continuity equation, Navier-Stokes equation, and energy equation are coupled and solved with the SIMPLE algorithm.The Boussinesq assumption is applied to consider the buoyancy caused by heatingup.Since the flow is laminar, no turbulence model is utilized in the present model.For the radiation heat transfer, the emissivity and absorption ratios of the catalyst sheet are assumed to be one.The view factor can be easily obtained for the parallel and perpendicular plates in a two-dimensional model, as indicated in Figure 15.An environment temperature is assigned at the inlet and outlet of the channel to calculate the radiation heat transfer between the catalyst and the environment.The effect of steam in the heat radiation is currently not considered in this model.In the catalyst plate two-dimensional heat conduction is simulated.It was observed that the temperature difference can be neglected in the normal direction of catalyst plate surface.It can be concluded that one-dimensional heat conduction will be enough.
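The paper does not spell out how the view factors between the parallel and perpendicular plates of Figure 15 are evaluated; for two-dimensional strips a standard option is Hottel's crossed-strings rule, sketched below as a generic implementation rather than the authors' own formula.

```python
import math

def view_factor_2d(a1, a2, b1, b2):
    """Hottel's crossed-strings rule for 2-D strips A (a1-a2) and B (b1-b2).

    Assumes the two strips see each other without obstruction. Taking the
    absolute difference makes the result independent of endpoint ordering.
    """
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    uncrossed = dist(a1, b1) + dist(a2, b2)
    crossed = dist(a1, b2) + dist(a2, b1)
    return abs(crossed - uncrossed) / (2.0 * dist(a1, a2))

# Two parallel catalyst strips of width 0.15 m separated by a 0.01 m gap:
w, gap = 0.15, 0.01
F = view_factor_2d((0, 0), (w, 0), (0, gap), (w, gap))
print(f"F(plate -> facing plate) = {F:.3f}")  # close to 1 for a narrow gap
```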
Besides the basic equations, the concentration equations (1) are solved for all the gas species except nitrogen. A one-step reaction model developed by Ikeda et al. [11] is applied to simulate the chemical reaction on the catalyst surface, given by the rate expression (3). In (3), the gas temperature in the cell next to the catalyst surface is used in order to avoid an extremely high reaction rate. Based on the reaction rate obtained from (3), the source terms for the energy and species equations can be readily calculated.
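Since equations (1) and (3) are not reproduced in this excerpt, the following sketch only illustrates the general shape of such a one-step surface model: an Arrhenius-type rate evaluated with the gas temperature of the near-wall cell, from which the species and energy source terms follow. The pre-exponential factor and activation energy are placeholders, not the values of Ikeda et al.

```python
import math

R_GAS = 8.314          # universal gas constant, J/(mol K)
DH_H2O = 2.42e5        # J per mol H2, approximate heat of reaction H2 + 1/2 O2 -> H2O(g)

def surface_reaction_rate(phi_h2, T_cell, A=10.0, Ea=1.5e4):
    """Arrhenius-type one-step rate sketch, mol/(m^2 s).

    phi_h2 : hydrogen molar concentration in the near-wall cell, mol/m^3
    T_cell : gas temperature of the cell next to the catalyst, K
    A, Ea  : placeholder pre-exponential factor and activation energy
    """
    return A * phi_h2 * math.exp(-Ea / (R_GAS * T_cell))

def source_terms(rate):
    """Species and heat sources per unit catalyst area implied by the rate."""
    M_H2, M_O2, M_H2O = 2.016e-3, 32.0e-3, 18.016e-3   # molar masses, kg/mol
    return {
        "H2":  -rate * M_H2,          # kg/(m^2 s) consumed
        "O2":  -0.5 * rate * M_O2,    # kg/(m^2 s) consumed
        "H2O": rate * M_H2O,          # kg/(m^2 s) produced
        "heat": rate * DH_H2O,        # W/m^2 released at the catalyst surface
    }
```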
Model Validation.
The REKO-3 experiment results [9] are utilized to validate the model.REKO-3 experiments are conducted by Forschungszentrum Juelich, Germany.The test section of the REKO-3 facility consists of four catalyst sheets forming three flow channels.The facility provides the measurement of catalyst temperature and gas concentration at different heights.Experiment results are obtained at different inlet velocities.
Figures 16 and 17 compare the numerical results with the experimental data at three inlet velocities. The hydrogen volume fraction at the inlet is 4% in all cases. Among all the cases, the model gives the best prediction at the lowest inlet velocity (0.25 m/s). A clear deviation of the catalyst temperature near the inlet is observed for the other two cases. An increasing catalyst temperature leads to a significant heat loss from the catalyst to the environment, especially in the inlet region where both the temperature and the view factors to the environment are high. The deviation of the catalyst temperature can be reduced by optimizing the environment temperature and by setting the exact emissivity and absorptivity of the catalyst material. In the cases where the inlet velocity is 0.5 m/s and 0.8 m/s, the model overestimates the recombination. This could be caused by overestimating the chemical reaction rate on the catalyst or by overpredicting the mass transfer to the catalyst. Generally, the model gives a satisfactory prediction of the experimental results.
Conclusion
The hydrogen analysis with the CFD code GASFLOW is conducted to investigate the effect of spray modes on the hydrogen risk in the Qinshan-II NPP containment during a large break loss of coolant accident (LBLOCA). The direct spray sharply depresses the pressure and temperature in the containment and reduces the heat transfer from the atmosphere to the structures. However, the direct spray mode (Case A) is still not capable of controlling the pressure and gas temperature during the accident, due to the strong release of hot steam after the shutdown of the direct spray. A considerable evaporation of the recirculation spray water is observed. Compared with Case A, an enhancement of the condensation on the structures is also observed during the recirculation spray (Case B). Because the evaporation induced by the recirculation spray is generally stronger than the enhancement of surface condensation, the steam inventory inside the containment is increased by the recirculation. During the hydrogen release peak, a chaotic mixing flow field is produced by the recirculation spray, while a regular natural convection flow forms in the other two cases. From the aspect of hydrogen safety, the direct spray increases the global hydrogen concentration and the maximum sigma volume, but does not prevent the stratification. The recirculation spray does not increase the global hydrogen concentration inside the containment and promotes mixing, but it can increase the local hydrogen concentration near the hydrogen release source. The effects of the containment spray on the PAR performance are found to be minor.
A CFD recombiner model is developed in order to provide more detailed insight into the processes along the catalyst sheets. The model is validated with the data from the REKO-3 experiment [9] and gives satisfactory predictions. Further work is needed to develop a full recombiner model by incorporating the chimney part. For implementation of the CFD PAR model in containment analysis, appropriate boundary conditions are needed at the inlet of the PAR.

Nomenclature
c_i: Mass fraction of the ith gas species
D_i: Diffusion coefficient of the ith gas species, m²/s
d: Gap width between two catalyst plates, m
R: Universal gas constant, J/(mol·K)
Ṙ: Reaction rate, mol/(m³·s)
Re: Reynolds number
S_ρ,i: Mass source term of the ith gas species per unit volume, kg/(m³·s)
T: Temperature, K
u: Velocity in x direction, m/s
v: Velocity in y direction, m/s
ρ: Gas density, kg/m³
φ_H2: Hydrogen molar concentration, mol/m³
ν: Kinematic viscosity, m²/s
Figure 3: Source term and spray activation.
Figure 9: Flow fields during the hydrogen release peak.
Figure 10: Hydrogen and steam clouds at the moment when the hydrogen release reaches its peak value (Case B at 5950 s).
Figure 15: View factor of parallel and perpendicular plates.
Figure 17: Average hydrogen concentration in the flow channel, model prediction versus experiment.
The Potential of Spent Coffee Grounds in Functional Food Development
Coffee is a popular and widely consumed beverage worldwide, with epidemiological studies showing reduced risk of cardiovascular disease, cancers and non-alcoholic fatty liver disease. However, few studies have investigated the health effects of the post-brewing coffee product, spent coffee grounds (SCG), from either hot- or cold-brew coffee. SCG from hot-brew coffee improved metabolic parameters in rats with diet-induced metabolic syndrome and improved gut microbiome in these rats and in humans; further, SCG reduced energy consumption in humans. SCG contains similar bioactive compounds as the beverage including caffeine, chlorogenic acids, trigonelline, polyphenols and melanoidins, with established health benefits and safety for human consumption. Further, SCG utilisation could reduce the estimated 6–8 million tonnes of waste each year worldwide from production of coffee as a beverage. In this article, we explore SCG as a major by-product of coffee production and consumption, together with the potential economic impacts of health and non-health applications of SCG. The known bioactive compounds present in hot- and cold-brew coffee and SCG show potential effects in cardiovascular disease, cancer, liver disease and metabolic disorders. Based on these potential health benefits of SCG, it is expected that foods including SCG may moderate chronic human disease while reducing the environmental impact of waste otherwise dumped in landfill.
Coffee Waste Products-From Farm to Landfill
Spent coffee grounds (SCG) are the ultimate waste product in the consumption of coffee as a beverage. Coffee beverage consumption continues to have a remarkable economic, social and cultural impact across the globe [1,2]. Worldwide, the estimated coffee bean production from July 2020 to June 2021 was ~175 million 60-kg bags [3], approximately 10.5 million tonnes. Coffee production requires farming, harvesting, pulping of the coffee cherry, fermentation and hulling in wet methods, roasting and brewing (Figures 1 and 2) [4]. Significant wastage occurs during all stages of production, with the major impact being in developing countries, where most coffee is grown, such as Brazil, Vietnam, Colombia, Indonesia and Ethiopia. The economies of these countries rely heavily on coffee production. For example, in Brazil, more than 8 million people (around 4% of the population) are employed in coffee production by the coffee farms, excluding retail outlets, hospitality and other businesses involving commercialisation of coffee products [5]. Brazil's coffee production constituted 5% of total export revenue at over USD 4.8 billion in 2019/2020 [6,7]. Therefore, the economic and environmental impacts of coffee farming are mainly observed in developing countries in tropical areas; these impacts include subsistence farming, heavy application of pesticides and fertilisers, destruction of complex tropical ecosystems, decreased soil fertility and contaminated water resources [8][9][10][11], which collectively can increase health risks for local communities [12,13]. The estimate of waste produced from coffee production worldwide is at least 6-8 million tonnes every year [14,15]. This value exclusively represents by-products of coffee production such as cascara, husk, mucilage, silverskin and post-brewing spent coffee grounds (SCG), and excludes contaminated soil and water [16].
Figure 2. Coffee by-products and their potential applications. Data sourced from [14,17,18].
The major coffee buyers are developed countries with large populations such as the USA, which imported about 29 million 60-kg bags (about 1.75 million tonnes) of coffee worth about USD 6.9 billion in 2021 [19]. The USA is one of many countries that produced very limited amounts of coffee, with production of 2270 tonnes in Hawaii and California in 2021 [20,21]. Post-consumption coffee waste includes SCG as well as increased slowly decaying microplastics [22] and nanoplastics [23] in the environment, with unknown long-term health risks. As an example, Planet Ark in 2016 in Sydney estimated that Australians use 1 billion coffee cups each year, producing 60,000 tonnes of plastic waste [24]. Therefore, unlike the waste products of coffee production, post-consumption pollution is mainly a problem of developed countries, the highest consumers. SCG production provides an increased 50-100% of the weight of the coffee beans as a standard double-shot espresso uses around 30 g of dry ground roasted coffee beans to yield 45-60 g of moist SCG [25]. The Planet Ark report in 2016 estimated that the 921 cafes in Sydney produced more than 3000 tonnes of SCG per year, with 93% ending up in landfill [24]. Thus, developing processes to use SCG in industrial or food production may be the more feasible approach to limiting the environmental and health damage from coffee waste, as these methods are more likely to be implemented in developed countries.
However, there are few studies investigating the health benefits of SCG, although there is a growing interest in its potential, due to the presence of many bioactive compounds that are also found in coffee beverages [26,27]. Our aim in this review is firstly to explore the production and identification of waste products in coffee production and consumption. Secondly, we will evaluate the potential health benefits of SCG, mainly using studies on coffee as a beverage and the compounds found in SCG. Thirdly, understanding the health benefits could allow the development of functional foods at affordable costs using SCG as an easily available product rather than complete purification of individual bioactive compounds. Functional foods provide health benefits beyond their nutritional values and thus have the potential to reduce the risk of chronic diseases. As an additional benefit, use of SCG in functional foods could reduce environmental damage by post-consumption coffee waste.
Bioactive Ingredients of SCG
The first step in value-adding to a complex mixture such as SCG is an understanding of the compounds present in the mixture, including the chemical changes during the process of roasting. By definition, the compounds present in SCG from roasted coffee powder are the compounds that have not been extracted during the production of the beverage. Low-molecular-weight bioactive compounds extracted into coffee as a beverage include caffeine, chlorogenic acids (CGA), trigonelline, tryptophan alkaloids and diterpenes such as cafestol and kahweol [28]. Further, coffee as a beverage produced by hot-brew processes contains approximately 1000 volatile organic compounds, which vary with growing and post-harvest conditions [29]. Cold-brew coffee differs from hot-brew coffee in the extent of compounds extracted during brewing rather than in the compounds that are present [30]. Roasting leads to the production of melanoidins by the non-enzymatic Maillard reaction, which make up about 13-25% of the dry weight of the roasted coffee powder [31,32]. The chemical structures of the many melanoidins are only partly known, but the compounds include polysaccharides such as galactomannans and arabinogalactans, denatured proteins and CGA [31]. After roasting, coffee powder also contains carbohydrates (38-42%), proteins (8-14%), phenolic compounds (3-4%), lipids (11-17%), minerals (5%), fatty acids (3%), caffeine (1-2%) and trigonelline (1%) [29]. Degradation products during roasting include the carcinogen acrylamide; practical solutions to reduce acrylamide production have been proposed [33]. The SCG remaining after production of hot-brew coffee include carbohydrates such as 8-15% celluloses and 30-40% hemicelluloses, 20-30% lignins, 7-21% lipids and minerals, and 13-17% proteins, together with phenolic compounds (12 mg/g), caffeine (14.5 µg/g) and CGA (31.8 µg/g) [34]. Further, the isolation and characterisation of these components have been reviewed [34]. Similar information on SCG produced after cold-brew extraction processes is not available.
The chemical composition of coffee produced by either hot-brew or cold-brew processes will alter depending on factors including farming practices and extraction methods. The differences in concentrations between extraction processes imply that the content of bioactive compounds, including caffeine and CGA, remaining in SCG will depend on the extraction method used in the production of the beverage. The major compounds in coffee extracted by either hot-brew or cold-brew processes include caffeine, CGA, trigonelline and the diterpenes kahweol and cafestol [29,[35][36][37]. Hot-brew processes with Ethiopian Arabica coffee showed the highest caffeine and CGA concentrations in espresso coffees, up to 3-6 times higher than in Moka and filtered coffees [37]. The most efficient espresso method used 14 g of fine powder and extraction for one minute at 93 °C and a pressure of 9 bar [37]. The highest caffeine extraction from a 95% Robusta and 5% Arabica blend was in an espresso machine using 7.5 g of powder and 25 mL water at 92 °C and a pressure of 7 bar [35]. Unlike hot-brew coffee, cold-brew coffee extraction is a low-temperature, long-contact process, with different reported procedures to extract medium-roasted Arabica coffee under various conditions, such as using 50-100 g powder/L at 8 °C for 24 h [30] or 25 g powder/L for 282 min at 20 °C [35]. Extraction of caffeine and CGA by cold-brew procedures at room temperature reached a steady-state condition after around 400 min [36]. Further, per-cup caffeine and CGA contents were greater in cold-brew processes than in hot-brew espresso coffees [37]. The highest scores in sensory evaluation of cold-brew coffee, characterised by strong sweetness, fruity and floral flavours, medium bitterness and acidity, and a creamy body, were found after a 14 h extraction of coarse-ground medium-roasted coffee at a room temperature of 20 °C [38]. Cold-brew coffee showed increased floral flavour when compared to hot-brew coffee, and hot-brew coffee exhibited increased bitterness, sour taste and rubber flavour [39]. The differing results from different extraction procedures mean that it is not possible to extrapolate the daily doses of caffeine and CGA from the number of cups of coffee consumed per day [37].
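When comparing extraction efficiencies across studies, it can help to keep the reported brewing conditions in a structured form. The sketch below simply records the conditions cited above; fields marked None reflect values not stated in the corresponding studies, and the record layout itself is illustrative rather than taken from any of the cited papers.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrewProtocol:
    """One set of brewing conditions cited above; None marks unreported values."""
    name: str
    dose_g_per_l: float             # coffee dose per litre of water
    temp_c: float                   # extraction temperature, degrees Celsius
    time_min: Optional[float]       # contact time, minutes
    pressure_bar: Optional[float]   # extraction pressure, bar (None = ambient)

protocols = [
    # Espresso from a 95% Robusta / 5% Arabica blend: 7.5 g powder in 25 mL water.
    BrewProtocol("hot espresso, Robusta blend [35]", 1000 * 7.5 / 25, 92, None, 7),
    # Cold brew of medium-roasted Arabica, midpoint of the reported 50-100 g/L dose.
    BrewProtocol("cold brew, refrigerated [30]", 75, 8, 24 * 60, None),
    BrewProtocol("cold brew, room temperature [35]", 25, 20, 282, None),
]

for p in protocols:
    t = "n/a" if p.time_min is None else f"{p.time_min:.0f} min"
    print(f"{p.name}: {p.dose_g_per_l:.0f} g/L at {p.temp_c} °C, {t}")
```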
Value-Adding to SCG Outside the Health Industry
Utilisation of coffee by-products including SCG by industry has been a worldwide topic of research [14]. As a valuable industrial resource [27], industrial uses may provide the expertise in purifying compounds from SCG for therapeutic studies and also provide the financial support for these studies. This section will summarise some of the potential industrial uses of SCG including animal feed, biofuels, nutraceutical, cosmetic, fertilisers, composting and biopesticides [17,40,41].
Raw materials from coffee waste such as polysaccharide-rich fraction in SCG provide viscous and stable liquid solutions that are suitable for use as raw materials for biodegradable films or coating for packaging [42,43]. Such alternative materials for nonbiodegradable fossil fuel-derived plastic packaging have been the target of research as many countries make the commitment to replace plastic packaging with environmentally friendly bioplastics by 2030 [44,45]. The biodegradable industry is growing, worth over USD 250 billion in 2020 and is expected to reach over USD 380 billion by 2028 [46]. SCG contain nitrogen and other important minerals required in both compost and fertilisers, and so could be used by the agriculture industry [4,14,27,47]. The current high costs of agricultural fertilisers, including nitrogen fertilisers, and compost prices at around 1320 AUD/tonne are expected to keep increasing [48]. Nutrient density increases in coffee plantations have been reported using SCG as part of fertilisers [49]. Potentially toxic responses to SCG to soil due to caffeine and high amounts of antioxidants [50] can be decreased by farming earthworms to decrease the caffeine content of SCG [51,52], making SCG then usable for the composting and fertilising industry.
Biofuels such as bioethanol, biogas and biodiesel can be produced from SCG, thus redirecting large amounts of coffee waste as a sustainable source of biofuel [53]. The commitment to reduce and eliminate fossil fuel use by generating renewable energy resources is the focus of almost the entire world [54]. The oil (10-30% of dry weight of SCG) extracted using ultrasound from SCG has the potential to be re-utilised to produce biodiesel [18,55,56]. Further, hydrothermal liquefaction of SCG has also been investigated as a viable option for producing crude bio-oil without the need of oil extraction [57]. In addition, fermenting the remaining oil-free SCG carbohydrate compounds showed the potential to use this waste to produce bioethanol [55]. Research findings are very promising, even though the methods to produce biofuels from SCG need improvements in scalability and efficiency [14]. SCG may be an efficient, low-cost and certainly environmentally friendly source of antioxidants, polyphenols and biomaterials for pharmaceutical products [26,58]. These compounds have been tested in cosmetics as anti-ageing and protective agents such as sunscreens, natural fillers and preservatives [58,59].
SCG, along with other coffee plant wastes, have been successfully used as a sustainable, cost-effective and healthy food additive in baked products, granolas, slow-cooked meals, seasoning for barbecues and desserts [60][61][62][63][64]. The production of food additives is a growing industry that turns over close to USD 45 billion a year [65]. SCG have been used in the preparation of baked products such as cookies and cakes as well as in the production of beverages, including alcoholic beverages [66]. Cookies prepared using SCG showed the presence of caffeine, phenolic acids and polyphenols such as CGA [63]. Further, CGA extracts from coffee have been used in fried doughnuts, soymilk, wheat bread, liquid Khask, dark chocolate, yogurt and even instant coffee, which may increase the health benefits of these foods [67]. Increasing attention towards using coffee waste as a food additive will help to provide sustainable economic options for coffee farmers and reduce environmental impacts [66,68].
As SCG are a good source of caffeine, melanoidins and polyphenols such as CGA, they can be used as a raw material for the isolation of these compounds. A range of extraction methods has been used to recover these compounds from SCG [69,70], including conventional solvent extraction, high hydrostatic pressure-assisted extraction, ultrasound-assisted extraction and microwave-assisted extraction [69][70][71][72]. Further, extraction methods have also been studied for coffee oil, which is a rich source of fatty acids and caffeine [73,74]. SCG remaining after coffee oil extraction can be used for the extraction of galactomannan, diterpenes and mannose [74], and the leftover material can then be fermented into bioethanol [74]. This scheme, termed biorefinery, can generate many valuable components from SCG biomass that would otherwise be discarded in landfill, opening many avenues for commercially viable products [41]. Thus, biorefinery using SCG can provide many valuable products including biodiesel, hydrocarbon fuel, bio-hydrogen, glycerine, many pharmaceutical-grade bioactive compounds, bioethanol, bio-oil, biochar, polymers and biogas [75].
Health Benefits of SCG
Although there are relatively few studies describing the physiological effects of SCG, they suggest that SCG intake is safe and may improve health. We have reported that modulation of gut microbiota by SCG from a hot-brew process, probably by melanoidins, reduced body weight, abdominal fat mass, systolic blood pressure and plasma triglycerides, improved glucose tolerance and improved the structure of the heart and liver in a rat model of diet-induced metabolic syndrome [76]. Coffee and SCG have been linked with changes in gut microbiota, including increases in Bifidobacterium and decreases in Clostridium and Escherichia coli [77][78][79][80][81]. Further, CGA from coffee has shown non-polysaccharide-based prebiotic effects in an in vitro study through selective growth of human faecal microbiota [82]. These beneficial changes can help to improve the short-chain fatty acid profile produced by the gut microbiota and hence improve its composition and function. Pilot human studies found that consuming cookies enriched with SCG containing prebiotics promoted short-term satiety and reduced overall energy consumption even without other lifestyle changes [83]. Studies with SCG in humans include a small randomised controlled single-blind parallel-group study and a pilot crossover randomised single-blind controlled study. Both studies observed better outcomes when participants ingested an extract of SCG antioxidant fibre; however, SCG as a whole also showed positive effects when compared to placebo [83,84]. An in vitro study suggested that SCG prebiotic fibre increased short-chain fatty acid production, resulting in gut microbiota modulation [85]. A small human clinical trial examining chronotype and circadian locomotor activity in young adults found that consumption of antioxidant fibre from SCG improved the quality and length of sleep, associated with increased fermentation in the colon and short-chain fatty acids [84]. Furthermore, the inclusion of SCG with gluten-free (rice) flour in cookies improved sensory acceptance, with higher nutritional value as a source of fibre and polyphenols [86]. Figure 3 summarises the potential health benefits based on existing experimental evidence on SCG.
Health Benefits of Compounds in SCG
Bioactive compounds found in SCG have been researched for over 20 years, with evidence of therapeutic effects when sourced from coffee [87,88]. The health benefits associated with the consumption of these compounds are directly associated with dose and frequency as well as with the source of the compounds (for example, isolated pure compound vs. compound in coffee form). A summary of findings and types of studies is presented in Table 1. This section summarises the existing evidence on the compounds present in SCG and the health benefits associated with their consumption.

Table 1. Studies analysing the components of spent coffee grounds and their benefits.
Chlorogenic acid
- Human: Regulated blood pressure [89,90]; improved insulin secretion and uptake of glucose by intestinal cells [91]; improved dyslipidaemia and endothelial function [92,93]; improved fasting glucose in patients with impaired glucose tolerance [94,95]; body weight and waist circumference reduction [95]
- In vitro: Improved lipase reaction [96]
- Animal: Reduced accumulation of fat in the liver and reduced blood lipids [97,98]; improved body weight and reduced visceral fat [97]

Caffeine
- Human: Improved cognitive health in patients with degenerative disease [99]; better performance in tests in age-related cognitive impairment [100]; enhanced memory and cognitive performance in young adults [101]

Trigonelline
- Animal: Improved specific neuron function [102]; improved memory in Alzheimer-induced mice [103]; suppressed oxidative stress and inflammation in the brain [102,104]; reduced blood glucose and improved lipids in metabolically ill animals [105,106]
- In vitro: Promoted regeneration of neuronal networks by neurite outgrowth [107]

Melanoidins
- In vitro: Antioxidant activity [108]; antibacterial activity against Gram-negative and Gram-positive bacteria [109]; antioxidant activity and activation of other gene-protective mechanisms in different cell lines [110]
- Ex vivo: Antioxidant activity and activation of other gene-protective mechanisms in human gut tissue [110]; fermentation by gut bacteria, activation of antioxidant pathways and modulation of gut bacteria population [111]
Chlorogenic Acids (CGA)
CGA is an abundant polyphenol found in plant foods. Coffee is the major source of CGA for humans, with amounts varying from 0.5 to 6 g per 100 g of dry coffee prior to the brewing process [36]. CGA is also present in SCG, and preliminary experiments using SCG indicate compound activity [87,88].
Effects on cardiovascular health include potential benefits in regulating blood pressure, endothelial function and dyslipidaemia [36,89,92,93,112]. Specific mechanisms of action by which CGA may have a direct effect on blood pressure and endothelial function include increases in nitric oxide (NO) bioavailability by inhibiting reactive oxygen species (ROS), NADPH oxidase and superoxide anion generation [93]. Other mechanisms of action to improve cardiovascular risk factors such as dyslipidaemia include increased uptake of fatty acids in the liver and reduction in plasma low-density lipoprotein cholesterol in both animal and pilot human studies [93,115].
CGA effects on glucose metabolism may provide an alternative and non-invasive approach for the treatment and prevention of chronic metabolic diseases such as type 2 diabetes [112]. CGA reduced fasting blood glucose concentrations in patients with impaired glucose tolerance at various doses and treatment durations [94,95]. CGA may act similarly to metformin, one of the most commonly prescribed pharmaceutical drugs for type 2 diabetes, as an insulin sensitiser [116]. Mechanisms by which CGA may assist glucose metabolism include improving intestinal and adipocyte glucose absorption, potentially decreasing plasma glucose concentrations, as well as influencing the glycaemic impact of foods and the release of carbohydrate-specific digestive enzymes [95,117].
CGA decreased obesity, inhibited in vivo lipase enzymatic action and prevented lipid absorption [96,98,113,114]. CGA improved body weight, visceral fat accumulation and liver function, and decreased inflammatory cell infiltration, in obese, hypertensive rats fed a high-fat and high-carbohydrate diet [97]. In humans, reductions in body weight and in most markers associated with obesity and with glucose and lipid metabolism were reported [114]. Further studies are needed to compare the differences in activity and concentration of CGA from different sources, including coffee (beverage) and SCG, and from different extraction methods (hot or cold brew), since temperature may be relevant to this compound.
Caffeine
Caffeine is widely known for its mild stimulant effects, temporary energy boosts and occasional changes in mood [118]. Caffeine is absorbed within 30-45 min of consumption, and blood concentrations may take up to two hours to peak [119]. Caffeine is predominantly ingested as coffee in our diet, with an average double-shot coffee providing around 150 mg [118,120], and it is also found in SCG [88].
Similar to CGA, caffeine can affect cardiovascular health. The effect of caffeine on cardiovascular health depends on factors such as dose, time of ingestion, variation in absorption and hepatic metabolism [121]. The mechanisms by which caffeine affects the cardiovascular system include a reduction in cytoplasmic calcium concentrations in vascular smooth muscle cells through cyclic adenosine monophosphate, and an increase in cytoplasmic calcium in endothelial cells that favours the endogenous synthesis of NO [121]. The main cardiovascular effect of caffeine is an increased concentration of NO and, consequently, vasodilation.
The effects of caffeine on the nervous system are widely researched. One of the mechanisms by which caffeine affects the brain is antagonism of adenosine receptors, which increases the release of excitatory neurotransmitters such as glutamate and noradrenaline [122,123]. Caffeine potentially improves cognitive symptoms and has protective characteristics in neurodegenerative diseases such as Parkinson's disease [99,[123][124][125][126][127]. A widespread concern with caffeine is its potential negative effects in those with existing cardiovascular conditions [128,129]. However, consumption of up to six cups of caffeinated coffee a day was not associated with an increased risk of cardiovascular outcomes, even in those with a history of hypertension and other cardiovascular diseases [120]. In addition, a meta-analysis showed that those who consume three to five cups of caffeinated coffee a day have a lower incidence of coronary artery disease, stroke and death due to cardiovascular causes [130]. However, long-term consumption or overconsumption of caffeine may cause addiction, insomnia, migraine and other adverse effects [131].
Trigonelline
Trigonelline is a pyridine alkaloid and a methylation product of vitamin B3 (niacin) [132], found in plant foods such as barley, cantaloupes, corn, onions, soybeans, tomatoes, peas, fenugreek seeds, coffee and SCG [133]. A 250 mL volume of brewed coffee provides 27 mg of trigonelline [134]. Higher concentrations of trigonelline are found in green coffee beans of the C. arabica species, and trigonelline is converted into N-methylpyridinium and nicotinic acid during roasting [135,136].
In the nervous system, trigonelline improves the function of specific neurons and, in some cases, the potential to regenerate certain neurons; it is therefore a possible intervention for neurodegenerative diseases that are currently incurable [102,103,107,137]. β-Amyloid peptide accumulation is a common risk factor and cause of Alzheimer's disease [138]. The similarity of trigonelline to cotinine, an anti-Alzheimer's drug, prompted studies examining whether trigonelline had any affinity to interact with the β-amyloid peptide, and the results were promising [139]. Trigonelline was effective in suppressing oxidative stress, astrocyte activity and inflammation, preventing neuronal loss in the hippocampus and alleviating Alzheimer's disease in mice [102]. Neuroinflammation is also a contributor to Alzheimer's disease [140]. An animal study showed anti-inflammatory effects and improvement of memory with trigonelline in the brains of lipopolysaccharide-treated adult mice [104]. These positive results could be due to higher concentrations of brain-derived neurotrophic factor, lowered oxidative stress and decreased concentrations of tumour necrosis factor α, interleukin 6 and acetylcholinesterase [104]. Recently, a comprehensive animal study confirmed that trigonelline recovered memory function in a mouse model of Alzheimer's disease [103]. The anti-Alzheimer's disease effects of trigonelline in this study were confirmed by the reconstruction of neuronal networks after brain damage.
Cafestol and Kahweol
Cafestol and kahweol are the main diterpenes in coffee, making up about 15% of its total lipids [141].
Kahweol is largely found in C. arabica beans, whereas 16-O-methylcafestol ester is found mainly in C. robusta [142]. However, cafestol is found in both C. arabica and C. robusta [143]. Coffee consumption was associated with elevated serum cholesterol concentrations due to the presence of cafestol and kahweol esters [144,145]. Diterpenes are extracted from coffee during the brewing process, and when coffee is filtered, diterpenes are almost completely removed. In SCG, the presence of these compounds will also depend on the preparation method.
There are few data on the bioavailability and pharmacokinetics of cafestol and kahweol, especially in disease states, with most data coming from healthy individuals. An estimated 30% of cafestol is broken down in the stomach by gastric juices, with the remaining 70% absorbed in the duodenum at a rate of 84-93% [146]. Kahweol is absorbed similarly, at a slightly higher rate of 91-95% in the small intestine [146].
Most of the evidence on the health benefits of these compounds relates to their ability to suppress the activity, migration and proliferation of cancer cells. Kahweol acetate and cafestol inhibited the proliferation and migration of prostate cancer cells, whereas other coffee compounds did not show the same effect [147]. The synergistic effects of both compounds may allow lower concentrations to be effective in inhibiting prostate cancer progression. These findings could be important for those consuming unfiltered coffee, as concentrations of diterpenes are much higher than in filtered coffee. The mechanisms of action were described as an ability to induce apoptosis, suppression of the epithelial-mesenchymal transition, and a reduction in androgen receptors and the chemokine receptors CCR2 and CCR5, preventing cancer cell migration and proliferation [147].
Anti-angiogenic activity of cafestol and kahweol has been reported in in vitro experiments, with protective effects against cancer cell proliferation and migration in endothelial cells; angiogenesis plays an important role in cancer cell proliferation and migration [148].
Other health benefits of cafestol and kahweol include anti-diabetic and anti-inflammatory activity [149]. These compounds show anti-diabetic actions by increasing insulin secretion and glucose uptake by skeletal muscle and by activating AMP-activated protein kinase, which mimics metformin action [150,151]. Both compounds showed the ability to inhibit inflammatory mediators such as prostaglandin E2 and NO synthesis in lipopolysaccharide-activated macrophages, indicating their anti-inflammatory activity [152].
Melanoidins
Melanoidins are nitrogen-containing polymers produced during the non-enzymatic Maillard browning reaction; present in coffee beverages and SCG, they serve as a marker differentiating roasted from green coffee beans. Melanoidins are not unique to coffee or SCG, as other foods such as bread, roasted cocoa and beer undergo a Maillard reaction during preparation and so also contain melanoidins [153]. Hot-brewed coffee is likely to be the main source of melanoidins in the human diet [154]. Melanoidin concentrations vary, making up around 25% of the dry weight of roasted coffee beans, or slightly more with darker roasting, and around 29% in brewed coffee [155]. Melanoidins provide specific characteristics to foods, such as flavour and brown colour [31]. Published biological activities of melanoidins include antioxidant, antimicrobial, prebiotic fibre and antihypertensive actions, as well as the ability to change xenobiotic enzymatic activity [153].
A recently published study concluded that melanoidins from coffee undergo only minor digestion in the upper gastrointestinal tract [156]. Melanoidins can be fermented by gut bacteria to produce short-chain fatty acids, modulating the bacterial population. This fermentation may also release phenolic compounds, which can then be absorbed, increasing phenolic absorption from foods containing melanoidins. Modulation of gut bacteria by short-chain fatty acid production reduced symptoms of metabolic diseases [157,158].
The potential antioxidant activity of melanoidins in human health has been linked with protection against oxidative damage and is strongly related to the degree of roasting [159]. Their ability to bind undesirable dietary metals also prevents oxidative damage [160]. High-molecular-weight fractions of coffee were able to completely inhibit lipid peroxidation in rat liver microsomes [161]; however, when isolated compounds were tested, they failed to duplicate the protective action alone, so two non-melanoidin compounds may be responsible for the protective actions. In different in vitro assays, Maillard reaction products such as melanoidins showed antioxidant activity against oxidation of human low-density lipoproteins similar to that of the polyphenol compounds found in unroasted or lightly roasted coffee [155]. There are limited published data on the in vivo antioxidant effects of coffee consumption, and these cannot be specifically attributed to melanoidins, as coffee also contains polyphenols such as CGA. The antioxidant effects of roasted and brewed coffee have mainly been attributed to melanoidins, as other antioxidant compounds in coffee are decreased by the heat of the roasting and brewing processes [162]. Other applications of melanoidins originating from foods other than coffee, such as antioxidants and modulators of Phase I and II detoxification enzymes, were briefly described in a review and could also be applicable to coffee [162].
Although there is robust evidence on these compounds when sourced from coffee, studies analysing each compound and its biological activity when sourced from SCG are needed. Understanding the biological responses to compounds including caffeine, CGA, trigonelline, polyphenols, melanoidins and other antioxidants when sourced from SCG, rather than from coffee (beverage), may provide options to test the therapeutic benefits of these compounds. SCG may be a sustainable resource for bioactive compounds with established health benefits, safety and efficacy for human consumption [18,27,88]. However, SCG are currently not utilised to their full potential.
Conclusions, Challenges and Future Directions
SCG can contribute to a wide variety of sustainable products, including animal feed, biofuels, fertilisers, compost and biopesticides. However, SCG applications can go beyond non-health-related purposes due to the presence of bioactive compounds in potentially therapeutic doses, such as CGA, caffeine, trigonelline, cafestol, kahweol and melanoidins. SCG, which like hot-brew coffee contain these compounds, have the potential to attenuate well-known metabolic disorders, including NAFLD, type 2 diabetes and cardiovascular disease.
This review shows that the literature on responses to SCG as a functional food component is often preliminary, so many more studies are required to understand what other compounds are present in coffee by-products, especially SCG, and how these could benefit human health. Moreover, it is important to elucidate the precise mechanisms by which the bioactive compounds obtained from SCG support health-related effects. As with other functional foods, SCG may face challenges in clinical studies [163]. Some of these challenges may include limited industry funding to support studies, unsuitable placebos, maintaining food products and their safety, and limited opportunities to check compliance [163]. Thus, greater focus on food by-products such as SCG by industry and funding bodies will help in taking the initial steps towards obtaining clinical data suitable for appropriate translation of the clinical outcomes.
As studies on the health benefits of SCG are limited, it is important to understand the differences between hot and cold extraction methods as well as differences in the responses to these compounds when consumed in coffee beverages and in SCG. Ultimately, more scientific investigation can promote economic and health benefits worldwide. The change from considering SCG a waste product to one with widespread health and industrial uses could benefit both coffee producers and consumers. Finally, SCG as a low-cost raw material can provide affordable functional food products when used directly. However, purified nutraceutical product development can dramatically increase the cost of the process and hence of the product, suggesting that direct use of SCG is the most viable and affordable functional food option.
Limitations of current studies include the lack of precise characterisation of the components of cold-brew coffee and SCG, as well as the absence of longer and larger clinical trials in patients with chronic diseases. Further, limited attempts have been made to extract bioactive compounds from SCG for developing nutraceuticals. These nutraceuticals could serve as a viable longer-term option for human consumption as supplements. Large-scale epidemiological or clinical studies in people consuming such supplements derived from SCG will be important in confirming any reduction in disease risk.
Idiopathic inflammatory myopathies: pathogenic mechanisms of muscle weakness
Idiopathic inflammatory myopathies (IIMs) are a heterogeneous group of complex muscle diseases of unknown etiology. These diseases are characterized by progressive muscle weakness and damage, together with involvement of other organ systems. It is generally believed that the autoimmune response (autoreactive lymphocytes and autoantibodies) to skeletal muscle-derived antigens is responsible for the muscle fiber damage and muscle weakness in this group of disorders. Therefore, most of the current therapeutic strategies are directed at either suppressing or modifying immune cell activity. Recent studies have indicated that the underlying mechanisms that mediate muscle damage and dysfunction are multiple and complex. Emerging evidence indicates that not only autoimmune responses but also innate immune and non-immune metabolic pathways contribute to disease pathogenesis. However, the relative contributions of each of these mechanisms to disease pathogenesis are currently unknown. Here we discuss some of these complex pathways, their inter-relationships and their relation to muscle damage in myositis. Understanding the relative contributions of each of these pathways to disease pathogenesis would help us to identify suitable drug targets to alleviate muscle damage and also improve muscle weakness and quality of life for patients suffering from these debilitating muscle diseases.
Idiopathic inflammatory myopathies (IIMs) include polymyositis (PM), dermatomyositis (DM) and sporadic inclusion body myositis (sIBM). The clinical features of these diseases include muscle weakness, fatigue and elevated muscle enzymes in serum, and their histological characteristics include mononuclear cell infiltration and myofiber degeneration. Immunological features include autoantibodies and autoreactive lymphocytes, with unusual over-expression of major histocompatibility complex (MHC) class I molecules on the surface of the affected myofibers. MHC molecules present processed non-self and self-antigenic peptides to T-lymphocytes and mediate immune response. The relative contribution of the autoimmune component to myositis pathogenesis is not yet known. Recent data suggest that innate immune activation and metabolic defects occur in the myositis muscle, suggesting a role for these pathways in disease pathogenesis [1][2][3]. Thus, the emerging paradigm indicates that not only innate and adaptive immune mechanisms but also intrinsic defects in skeletal muscle contribute to muscle weakness and damage in myositis. The muscle microenvironment is complex, and we propose that active interactions occur between innate, adaptive, metabolic and homeostatic pathways in muscle in these diseases.
Innate immune mechanisms
Innate immunity, also known as native immunity, is considered the early line of host defense. The innate immune system includes physical barriers (epithelial surfaces), phagocytic cells (neutrophils, macrophages, eosinophils, etc.), natural killer (NK) cells, the complement system, and cytokines. Innate immune cells primarily detect pathogen-derived antigen structures with common patterns, but not fine differences, through Toll-like receptors (TLRs) and nucleotide-binding oligomerization domain (NOD)-like receptors (NLRs), to initiate pro-inflammatory responses. We discuss TLRs, NLR-inflammasomes, NF-kB, and cytokines in the context of muscle inflammation below. All the information discussed in this section is summarized in Figure 1.
TLR signaling in skeletal muscle
TLRs are trans-membrane receptors expressed on immune and non-immune cells that recognize pathogens as well as self-molecules. Altogether, 13 TLRs have been identified in mice and humans. All TLRs, except TLR-3, signal via myeloid differentiation primary response gene 88 (MyD88), the central adaptor protein, and induce activation of the nuclear factor-kB (NF-kB) pathway, the master controller of inflammation. TLR-3 signals via Toll interleukin (IL)-1 receptor domain-containing adaptor inducing IFN-β (TRIF) and activates the NF-kB pathway or type I interferons (IFNs) [1,2,14]. TLRs recognize patterns in microorganisms termed pathogen-associated molecular patterns (PAMPs) and endogenous ligands termed damage-associated molecular patterns (DAMPs), and initiate immune signaling [15,16]. PAMPs are associated with infectious agents (e.g., bacteria, fungi and viruses) whereas DAMPs are host-encoded molecules released during tissue injury, necrosis and cell death. DAMPs include nucleic acids (RNA, DNA), cytosolic heat shock proteins and nuclear high mobility group box protein 1 (HMGB1), and extracellular matrix proteins such as fibrinogen and fibronectin [5,6,17].
Figure 1. Innate immune mechanisms of muscle damage in myositis. Skeletal muscle undergoes continuous injury and repair in response to a variety of physiological (exercise) and pathological (infection) insults and releases damage-associated molecular patterns (DAMPs) from dead and damaged cells (Step 1). DAMPs initiate innate immune signaling by binding to surface or endogenous TLRs on various cells including skeletal muscle fiber, infiltrating macrophages (Mϕ), myeloid dendritic cells (mDCs), plasmacytoid DCs (pDCs), capillaries, and other cell types such as fibroblasts (Step 2) [4][5][6]. This innate signaling through TLR and other innate immune receptors induces the secretion of pro-inflammatory cytokines and chemokines [e.g., type 1 interferons (IFN-α, IFN-β), TNF-α, IL-1, IL-12 and IFN-γ] into the microenvironment (Step 3). These cytokines and DAMPs bind to their respective receptors on muscle and capillaries [e.g., tumor necrosis factor receptor (TNFR), IL-1 receptor (IL-1R)] and exert downstream effects (Step 4) [7][8][9][10]. Cytokines and/or chemokines directly cause damage to capillaries and hypoxia in the affected muscle. Cytokines such as TNF-α can directly induce cell death of muscle cells, while NF-kB is known to block MyoD and inhibit formation of new muscle fibers [11][12][13]. Thus this pathway not only effectively enhances the death of existing muscle fibers but also inhibits formation of new muscle fibers, leading to the loss of skeletal muscle mass and weakness in these disorders.
DAMPs have been shown to stimulate TLRs, resulting in immune activation and the release of cytokines, and in a self-sustaining autoinflammatory response that contributes to chronic inflammation in the affected tissue [18][19][20][21].
Excessive physical activity and strenuous exercise in normal individuals lead to modest elevations in serum muscle enzymes such as creatine kinase (CK), whereas myositis patients generally show a significant increase in CK, suggesting that skeletal muscle leakiness and damage occur in this disease. It is likely that some DAMPs leak from the injured skeletal muscle and engage their receptors on both skeletal muscle and immune cells, thereby perpetuating the inflammatory process. In fact, muscle biopsies of myositis patients show significantly increased expression of TLR-2, TLR-3, TLR-4, and TLR-9 in the skeletal muscle and infiltrating cells, as well as enhanced expression of cytokines such as IFN-γ, IL-4, IL-17, TNF-α, IL-6 and type 1 IFNs. These findings suggest that TLRs are engaged in the milieu of affected muscle and that the downstream genes are activated [7][8][9]. Further, IFN-β and IFN-γ have been shown to enhance MHC class I expression on immature muscle precursors, suggesting that these cells may be one of the sources of local type 1 IFNs and that the regenerating fibers are potential targets of immune attack in myositis muscle [22].
More recently, one study has independently validated the enhanced expression of TLR-2, -4, and -9, along with MyD88 mRNA transcripts, as well as enhanced protein levels, in all subtypes of inflammatory myopathies [10]. Evidence for activation of TLR-4, MyD88, and the NF-κB pathway has also been shown in a myosin-induced experimental autoimmune myositis (EAM) mouse model [23]. The enhanced expression of transcripts such as IFN-γ, IL-12p40, and IL-17, along with the expression of the co-stimulatory molecules CD80 and CD86 in the inflammatory milieu of the affected muscle, suggests a link between the innate and adaptive immune systems in the muscle microenvironment [10].
Recognition of the DAMPs that activate the TLR pathway in myositis muscle is slowly emerging. For example, the histidyl-tRNA-synthetase (HRS) protein has long been associated with myositis, since it was identified as the antigen of the myositis-specific autoantibody Jo-1. Previous studies indicated that cleaved HRS serves as a chemokine by binding to CCR5 and facilitates immune cell infiltration into muscle [24]. More recent studies indicate that the N-terminal portion of the HRS protein binds to TLRs, and immunization with HRS peptides induces both autoantibody formation and immunoglobulin class switching in mice. A loss of TLR-4 inhibits class switching, and a loss of TRIF inhibits both class switching and autoantibody secretion [25]. The exact mechanisms by which HRS is cleaved and released from muscle cells are unclear, but there is evidence that HRS-expressing immature muscle cells express high levels of MHC class I and therefore likely become targets of cytotoxic T-cells and granzyme B-mediated cleavage of the HRS antigen [26].
Another well-characterized DAMP involved in myositis pathogenesis is high mobility group box protein 1 (HMGB1). High expression of HMGB1 was detected not only in the cytoplasm of muscle, infiltrating cells and endothelial cells, but also in the interstitial space in myositis muscle, suggesting its potential to engage TLRs in this milieu [4]. Exposure of muscle fibers to HMGB1 induced an irreversible decrease in calcium release from the sarcoplasmic reticulum during fatigue induced by repeated tetanic contractions [27]. A recent study reported that HMGB1-induced muscle fatigue occurs via the TLR-4 pathway in muscle and that the HMGB1-TLR-4 pathway plays a role in the pathogenesis of myositis [4].
Taken together, these studies clearly suggest that TLRs, acting through MyD88-dependent and/or -independent mechanisms, induce pro-inflammatory signals in myopathic muscle. It is likely that new advances in this field will identify additional novel DAMPs in myositis muscle. Blocking DAMP-induced, MyD88-dependent and -independent TLR pathways using chemical and genetic methods may provide additional insights into these mechanisms. Although there are substantial gaps in our knowledge of the relationship between myositis and TLRs, and their stimulation by endogenous DAMPs, the accumulating evidence suggests that TLRs are the connecting link that mediates interactions between innate and adaptive responses and in turn activates NF-kB signaling cascades in myositis.
NF-kB and NLR-inflammasome activation in skeletal muscle
The NF-kB pathway is one of the predominant regulators of a variety of essential biological processes, including inflammation. In myositis, both immune and skeletal muscle cells modulate inflammation via the NF-kB pathway. NF-kB is a ubiquitous transcription factor composed of a heterodimer with two subunits, p65 (RelA)/c-Rel/RelB and p50. NF-kB is kept sequestered in an inactive form in the cytoplasm through an interaction with its specific inhibitor IkBα. When a stimulus is received, the upstream IkB kinase (IKK) phosphorylates IkBα, leading to its proteasomal degradation. Free NF-kB then translocates to the nucleus, where it regulates the expression of several pro-inflammatory genes, including TNF-α and IL-1β. We have previously demonstrated that unusual overexpression of MHC class I on the muscle fibers of myositis muscle can also cause the activation of NF-kB, including the induction of ER stress response pathways [27]. Further evidence suggests that downstream NF-kB target genes such as intercellular adhesion molecules (ICAM) and MCP-1 are also highly up-regulated in myositis muscle. Several groups have independently validated NF-kB activation in inflammatory myopathies and its role in modulating the immune response, myogenesis and muscle repair [11][12][13][28].
NLR-inflammasomes are intracellular multi-protein complexes formed by the adaptor molecule apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC), caspase-1, and members of the NLR family such as NLRP1, NLRP3 and NLRC4. NLR-inflammasomes are also activated by PAMPs/DAMPs and result in the secretion of pro-inflammatory cytokines [29,30]. Although the process is not yet completely understood, the general consensus is that inflammasomes are activated through three signaling pathways: 1) potassium efflux, 2) generation of reactive oxygen species, and 3) production of cathepsin B [31]. More recently, our group has shown that normal primary skeletal muscle cells are capable of secreting IL-1β in response to combined treatment with the TLR-4 ligand lipopolysaccharide and the P2X7 receptor agonist ATP, suggesting that not only immune cells but also muscle cells can actively participate in inflammasome formation, implicating skeletal muscle cells in perpetuating a pro-inflammatory environment [32].
The inflammasome pathway is connected to the TLR signaling pathway. TLR-2/4 signaling results in the synthesis of pro-IL-1β, and inflammasomes process pro-IL-1β into mature IL-1β; signaling by released extracellular ATP via P2X7 receptors (DAMP signaling) facilitates the secretion of mature IL-1β from skeletal muscle cells [32]. Another recent study has characterized the mechanism of IL-1β secretion following respiratory syncytial virus (RSV) infection of airways [33]. This study underscored the requirement for the TLR-2/MyD88/NF-κB pathway prior to the activation of inflammasomes and subsequent IL-1β release in the affected tissue [33]. In sum, these findings suggest a possible cross-talk between the TLR and inflammasome pathways. In myositis, the activation of inflammasomes and the subsequent release of cytokines in affected muscle have not yet been investigated; however, the enhanced expression of TLRs together with IL-1α and IL-1β in areas surrounded by inflammatory cells suggests that the TLR-inflammasome pathway is active in myositis muscle [34]. Therefore, it is possible that the cytokines released from the activation of inflammasome pathways can stimulate innate and adaptive immune cells and further augment the secretion of either pro-inflammatory or anti-inflammatory cytokines.
Cytokines and chemokines in skeletal muscle
Cytokines are produced by a wide variety of cells and regulate immune cell activation and infiltration in affected tissues. The most commonly reported cytokines in myositis include pro-inflammatory cytokines such as IL-1α, IL-1β, TNF-α and transforming growth factor (TGF)-β [34][35][36][37][38][39]. IL-1α was predominantly expressed in capillary endothelial cells of PM, DM and sIBM muscle biopsies, suggesting a prominent role for endothelial cells in myositis pathology [34,35]. Furthermore, IL-1α has been suggested to play a role in myofibrillar protein breakdown and muscle regeneration; however, these claims are yet to be proven [36]. The pathogenic role of TNF-α in myositis muscle is not completely understood; however, it has been hypothesized to attract immune cells by enhancing transendothelial cell trafficking in affected muscle [37]. In addition, TNF-α has been hypothesized to activate immune cells and induce MHC class I expression in the myositis muscle. TGF-β was proposed to play a pro-fibrotic role based on the correlation between its expression and connective tissue proliferation in DM muscle [39]. A plethora of studies have also reported the expression of additional cytokines and chemokines in myopathic tissues [40][41][42][43][44][45][46][47][48][49][50] (Table 1).
Even though a majority of the reports suggest that cytokines have a pro-inflammatory role in myositis muscle, one recent study reported a protective role for some cytokines. This study reported enhanced expression of neurotrophin receptor p75NTR on the muscle fibers of DM, PM and sIBM patients [52]. p75NTR binds to various neurotrophin-like cytokines such as NGF, BDNF, NTF3 or NTF4, and protects muscle cells against IL-1β induced cell death. Taken together, these studies indicate that cytokines and chemokines have different roles in the affected skeletal muscle.
Adaptive immune mechanisms
Adaptive immunity to self-antigens is induced in autoimmune diseases. This arm of immunity predominantly includes autoreactive lymphocytes and autoantibodies. Initial reports have indicated that there are differences in the lymphocyte subsets seen in PM, DM and sIBM; however, recent studies have indicated that those differences are not clear-cut and that T-cells (CD4, CD8), B-cells, macrophages, and DCs are present in all inflammatory myopathies. All the information discussed in this section is summarized in Figure 2.
T-cells and CTL-cell-mediated injury
T-cells are involved in cell-mediated immune responses within the adaptive immune system. These cells express surface receptors (T-cell receptors; TCR) that recognize peptide fragments of foreign proteins when presented on the MHC molecules of antigen-presenting cells. Functional subsets of T-cells include CD4+ T helper cells (which recognize MHC class II-presented peptides) and CD8+ cytotoxic T-cells (which recognize MHC class I-presented peptides). The role of CD4+ and CD8+ T-cells in inflammatory myopathies has been recognized; however, their precise roles in the pathogenesis of myositis are not completely understood. In the pathology of DM, CD4+ T-cells are thought to play a major role; in contrast, CD8+ T-cells seem to be the predominant actors in PM [59,60]. CD8+ T-cells infiltrating myositis muscle have been shown to express perforin-1 and granzyme-B enzymes, indicating that they have a cytotoxic effect on the affected muscle (Figure 2) [58]. Recent studies demonstrate the presence of CD28 null T-cells, Th17 cells, and T-regulatory cells in the muscle of PM and DM patients [53,56,57] (Figure 2). CD28 null T-cells arise as a result of a chronic inflammatory stimulus (such as viral infection) and are generally long-lived and pro-inflammatory in nature. Likewise, Th17 cells produce IL-17 and IL-22; IL-22 has both tissue-protective and pro-inflammatory properties. The contribution of Th17 cells to the inflammatory process in autoimmune diseases such as rheumatoid arthritis is well delineated. Regulatory T-cells, which express CD25, reduce inflammation and tissue damage by inhibiting the function of antigen-presenting cells and T-effector cells. Even though the presence of different T-cell subpopulations in myositis muscle has been well documented, their precise role in muscle pathology is not yet clear.
B-cells and autoantibodies
B-cells derived from the bone marrow migrate to secondary lymphoid organs to elicit antigen-specific humoral immune responses. B-cells and terminally differentiated plasma cells have been reported not only in PM and DM but also in sIBM, indicating a role in the pathogenesis of these diseases [61]. More recent reports of an up-regulation of B-cell activating factor (BAFF) have also suggested that local maturation of B-cells into antibody-producing plasma cells may occur in myositis muscle [61,62]. Despite the presence of lymphoid aggregates, it is highly unlikely that B-cell maturation occurs in the muscle; rather, these B-cells may serve an antigen-presenting function.
Dendritic cells connect the innate and adaptive arms of the immune system
There is clear evidence that innate and adaptive immune cytokines influence each other. For instance, IL-18 stimulates the secretion of IFN-γ and TNF-α via a Th1-mediated response [84,85]. Similarly, IL-1β binds to the IL-1 receptor on dendritic cells and induces the production of IL-23 via a Th17-mediated response, and IL-33 binds to the IL-1 receptor-related protein (ST2) and enhances the secretion of IL-10 and IL-13 through Th2-mediated responses [86]. IL-33 also induces the secretion of IL-13, IL-10 and TGF-β by stimulating mast cells and T-reg cells [86]. These interactions through cytokines highlight that innate and adaptive immune processes are interrelated, and studies to understand their role in muscle disease pathogenesis are imminent.

Table 1. Additional cytokines and chemokines reported in myopathic tissues and their roles.
- TGF-β: Pro-fibrotic [39]
- IL-17: IL-6 production and HLA class I in muscle cells [40,41]
- IL-15: T-cell activation, development of NK cells and NK-T-cells [51]
- Type 1 interferons (IFN-α, IFN-β): Enhance type 1 interferon-inducible transcripts (ISG15, MX1, IFIT3 and IRF7) [42][43][44]
- Leukotriene B4: Chemo-attractant [45]
- Macrophage inflammatory proteins (1α, 1β): Contribute to ongoing muscle inflammation [46]
- RANTES: Chemo-attractant [46]
- Resistin/adipocyte-secreted factor: Pro-inflammatory, probably involved in metabolic dysregulation [47][48][49]
- TWEAK: Impairs muscle differentiation and myogenesis [50]
Note: IL-15 and IL-6 are also called myokines.

DCs are bone marrow-derived immune cells that connect the innate and adaptive immune systems. DCs are considered professional antigen-presenting cells, and their main function is to prime and activate naïve T-lymphocytes. Immature DCs express CD1a and blood dendritic cell antigen 2 (BDCA2) surface markers, whereas mature DCs express DC-LAMP, CD83 and fascin surface markers. We have previously shown that DC-LAMP-positive dendritic cells are highly enriched in perivascular inflammatory sites in juvenile and adult DM patients, along with molecules that facilitate dendritic cell transmigration and reverse transmigration (CD142 and CD31) [87]. Both immature and mature DCs have been found in DM and PM biopsies [88,89]. Recent studies have reported that myeloid DCs may regulate type I IFN-mediated induction of cytokines and chemokines in DM muscle, indicating an association between DCs and type I IFN signatures in myositis muscle [90]. More recently, plasmacytoid DCs (pDCs) have also been implicated in myositis pathology. pDCs are innate immune cells with a plasma-cell morphology that express CD4 or the myeloid-cell markers MHC class II, CD36, CD68 and CD123 [91]. pDCs characteristically produce type I IFNs and other chemokines in response to virus-derived nucleic acids via the activation of endosomal TLR-7 and TLR-9 pathways (Figure 1), and they may serve as an essential link between innate and adaptive immune mechanisms through the secretion of type 1 IFNs and other cytokines [92,93].

Figure 2 summarises these adaptive immune mechanisms. In the presence of the respective cytokines, naïve T-cells may serve as effector T-cells and in turn produce discrete sets of cytokines that affect a variety of cell types (Step 2) [53]. Th1 cells, through IFN-γ, generate M1 macrophages, which secrete TNF-α, IL-6 and IL-1 and damage cells. Th2 cells, through IL-4, TGF-β and IL-10, generate M2 macrophages that are known to help tissue repair and remodeling in the affected tissues [54,55]. Th2 cells also help stimulate B-cell maturation and differentiation into plasma cells that produce autoantibodies and further initiate complement-mediated damage to capillaries and induce hypoxia (Step 3). Cytotoxic CD28−/− T-cells and regulatory T-cells (Tregs) reduce inflammation and tissue damage by inhibiting the function of antigen-presenting cells and T-effector cells [56,57]. It is also known that activated CD8 T-cells differentiate into cytotoxic T-cells (CTL) and exert cytotoxic effects on the affected muscle through the secretion of perforin-1 and granzyme-B enzymes (Step 4) [58]. Thus the myositis muscle microenvironment is complex, with both tissue repair and tissue-damaging mechanisms in play at all times; the relative ratios of these pathways determine disease severity and progression.
Macrophages are tissue-based phagocytic cells derived from peripheral monocytes. They carry out a multitude of functions, including antigen presentation to T-cells and scavenging of necrotic tissues via phagocytosis. The different types of macrophages in the muscle clearly influence the type of adaptive immune response (e.g., Th1 or Th2). Distinct subpopulations of macrophages have been described: M1 macrophages, in association with Th1 cells, produce pro-inflammatory mediators and are involved in the phagocytosis of microorganisms and neoplastic cells, whereas M2 macrophages are Th2-associated and are involved in tissue remodeling/repair and the production of anti-inflammatory molecules. Depending on their stage of activation, macrophages exhibit different surface markers: MIF-related protein (MRP) 14 and 27E10 represent early-stage markers, whereas 25F9 is a late-activation marker. Infiltration of macrophages into myositis tissues and the presence of CD163-positive (M1) macrophages have been described in myositis muscle [4,54,55]. Characterization of macrophage subtypes in PM and DM muscle indicated that they express both early (MRP14 and 27E10; M1 macrophage) and late (25F9; M2 macrophage) activation markers, as well as inflammatory markers such as iNOS and TGF-β [54,55]. These studies indicate that both M1 and M2 macrophages exist in the myositis muscle and that their relative proportions may vary depending on the stage of the disease process. Therefore, interactions between innate immune cells/cytokines and lymphocytes appear to be dynamic and alter with the type and stage of the disease.
Non-immune mechanisms
Because of the presence of immune cells, it is generally thought that myofiber damage is the consequence of an immune response to muscle-derived antigens. However, several observations suggest the involvement of non-immune mechanisms in myositis pathology: 1) the lack of a correlation between the degree of inflammation and skeletal muscle weakness; 2) the lack of a response to potent immunosuppressants in some myositis patients; and 3) the lack of any amelioration of clinical disease even after complete removal of inflammatory infiltrates from the myositis muscle. Here we describe the literature related to skeletal muscle homeostasis and metabolism that supports a role for non-immune mechanisms in myositis pathology. Hereditary IBM (hIBM) is an autosomal recessive muscle disorder tied to a mutation in UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase (GNE), which codes for a rate-limiting enzyme in the sialic acid biosynthetic pathway. The pathogenesis of hIBM is considered non-inflammatory and is not discussed in this review. All the information discussed in this section is summarized in Figure 3.
Metabolic/energy pathways in skeletal muscle
Mitochondrial energy-related metabolic pathways play a prominent role in skeletal muscle because of the high demand for energy in these cells. Mitochondria can regulate various signaling pathways via the production of ATP, NADH and reactive oxygen species. Emerging evidence indicates a probable dysregulation of mitochondrial energy pathways in inflammatory muscle diseases [99,105]. Studies have reported abnormal succinic

Table 2. Some of the important autoantibodies reported in inflammatory myopathies.
One of the often-overlooked features of myositis is the apparent acquisition of metabolic defects within the skeletal muscle. These defects are generally described as deficiencies of glycolytic enzymes and other proteins found preferentially in fast-twitch fibers. One of the oldest proposed metabolic defects in inflammatory myopathies is an acquired deficiency of AMPD1, a rate-limiting enzyme in the purine nucleotide cycle [106,107]. Recently, our group demonstrated that AMPD1 mRNA, protein expression and enzyme activity are significantly reduced in the MHC class I mouse model of myositis, as compared to healthy littermate mice [103]. A cause-and-effect relationship between AMPD1 and muscle weakness has been demonstrated by reducing the levels of AMPD1 in normal mice. The most novel observation was that a significant loss of AMPD1 enzyme activity and muscle strength occurs prior to the appearance of infiltrating lymphocytes. These results suggest that the metabolic deficiencies seen in myositis are independent of the action of infiltrating autoreactive lymphocytes.

Figure 3 summarises these non-immune mechanisms of muscle damage [27,[94][95][96][97][98]. Induction of EOR activates the downstream NF-kB pathway, leading to pro-inflammatory cytokine production and a reduction in new muscle formation by inhibiting MyoD; it also induces cell death mechanisms via the activation of caspases 12, 3 and 7 as well as calpain pathways (Step A) [27]. Innate cytokines, mitochondrial energy-related metabolic pathways, and purine nucleotide pathways are interconnected in myositis muscle. For instance, IL-1 reduces the production of nitric oxide (NO) and causes mitochondrial dysfunction by affecting NADH reductase and succinate CoQ [99][100][101][102]. Likewise, unknown cytokines reduce the expression of rate-limiting enzymes of the purine nucleotide cycle, including AMPD1, in skeletal muscle; this acquired deficiency of AMPD1 causes muscle weakness and fatigue in myositis (Step B) [103]. Activation of TRAIL forms autophagosomes and induces autophagy (Step C) [104]. TLR signaling leads to inflammasome activation, IL-1 secretion and pyroapoptosis in the affected muscle. There are active interactions between the autophagy, ER stress, inflammasome and purine nucleotide pathways; although these pathways are interconnected, they are represented as linear pathways in the figure for easier understanding. Thus, several non-immune and metabolic pathways directly and indirectly contribute to muscle weakness and damage in myositis.
At this time, it is unclear what factors/cytokines regulate AMPD1 levels in skeletal muscle. Evaluation of the AMPD1 promoter has indicated that cytokines are likely to modulate AMPD1 expression in skeletal muscle. For example, the cytokine IL-15 has the potential to serve as a link between inflammation and muscle metabolism. IL-15 was first described as a weak ligand for the IL-2 receptor complex, and as such is capable of stimulating T-cell proliferation, among other immunomodulatory effects. Recent work has shown that IL-15 signaling affects the formation of fast-twitch fibers in mice; in the absence of the IL-15 receptor, muscle fibers appear to convert from fast-twitch to slow-twitch fibers [108]. Furthermore, strong staining for IL-15 has been detected in myoblasts but not in mature muscle fibers [51]. These results are particularly interesting, considering the previously mentioned evidence that immature fibers may become a focal point of inflammation as a result of the secretion of IL-15, and the subsequent loss of these IL-15-positive fibers might explain the observed shift toward slow-twitch fibers in myositis patients [51]. Even though the precise role of these metabolic pathways in the myofiber damage seen in myositis is not yet clear, it is possible that innate TLR pathways and proinflammatory cytokines regulate these mechanisms.
Endoplasmic reticulum stress
A non-immune role for MHC class I has been reported in myositis. Muscle-specific overexpression of MHC class I causes the myositis phenotype in mouse skeletal muscle [109]. Studies have reported an induction of endoplasmic reticulum stress as the result of an unusual up-regulation of MHC class I in myositis muscle [27,[94][95][96]. More recently, studies to understand the role of endoplasmic reticulum stress in muscle pathology reported the expression of classical markers of endoplasmic reticulum stress (GRP78, GRP94 and calreticulin) in the affected skeletal muscle of both mice and humans [27,97,98,110]. A recent study has reported the presence of stress response proteins and heat shock proteins (Hsp) in IIM patients [111]. More specifically, the authors have examined the effects of chronic inflammation on the distribution of Hsp families 70 and 90 in muscle biopsies. Their results have indicated that regenerating, atrophic and vacuolated muscle fibers show an upregulation of both protein families, whereas infiltrating cells show enhanced levels of Hsp 90 family proteins. These results indicate a differential expression of stress proteins in muscle cells and immune cells. Thus, the authors suggest that chaperones play multifaceted roles in inflammatory muscle tissue. For more detail and a comprehensive discussion of the relationship between endoplasmic reticulum stress and muscle pathology, readers are referred to a recent review on this subject [112].
Autophagy
Autophagy is the lysosomal degradation of a cell's own proteins or organelles. Evidence of autophagy is often seen in PM and sIBM. Muscle biopsies from humans with sIBM and PM with mitochondrial pathology display the autophagosome marker LC3-II [99]. However, the precise role of autophagy in muscle diseases is controversial. It is likely that autophagy has both beneficial and adverse effects, depending on the cell stage and disease process involved. The in vitro inhibition of lysosomal autophagic enzymes has been reported to activate γ-secretase, which cleaves amyloid precursor protein to release the self-aggregating amyloid-β fragment [113]. We have demonstrated that TNF-related apoptosis-inducing ligand (TRAIL) and markers of autophagy are upregulated in myositis muscle fibers. Incubation of skeletal muscle cells with TRAIL induces IκB degradation and NF-κB activation, suggesting that it mediates the activation of NF-κB as well as autophagic cell death in myopathic muscle [104]. Another recent report has also indicated that TNF-α induces macroautophagy and subsequent expression of MHC class II on muscle cells [114]. More importantly, blockade of TNF-α with monoclonal antibodies has been shown to improve C protein-induced myositis (CIM) in mice, suggesting a probable role for autophagic pathways in myositis pathology [115]. In addition, immunomodulators such as fibrinogen and HMGB1 are correlated with the progression of myositis and are believed to induce autophagy by signaling through TLR-4, indicating a probable association with innate immune mechanisms [116]. Even though these findings indicate that autophagy plays a role in myofiber damage in myositis, further studies are needed to show how and when these autophagic mechanisms are triggered in the affected muscle.
Conclusions
The emerging picture indicates that myositis is a complex disease with multiple pathogenic pathways simultaneously contributing to muscle damage and weakness. Among these, the most prominent are the innate, adaptive immune and metabolic pathways. Innate immune pathways link the adaptive and metabolic arms of the disease processes. Additional new pathways and the precise interactions between these components are likely to be described in the future, and the relative contribution of each of these pathways to pathogenesis remains to be elucidated. However, it is clear that targeting the adaptive immune system alone is unlikely to provide significant relief from muscle weakness and damage in this group of disorders. New therapies are needed to modulate both the innate immune and metabolic components of the disease processes in order to obtain significant amelioration of the myositis phenotype.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SR and KN were involved in drafting all sections of the manuscript and revising it critically for important intellectual content. WC and TBK were involved in writing the non-immune mechanisms section. All authors read and approved the final manuscript.
|
v3-fos-license
|
2018-11-04T14:44:26.212Z
|
2018-11-03T00:00:00.000
|
53213551
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcimmunol.biomedcentral.com/track/pdf/10.1186/s12865-018-0270-z",
"pdf_hash": "445d584a7e72a3c4411f61e80e964d617355497a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:544",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6c322bb230f62d3dd5d1461d933b7962a16b117f",
"year": 2018
}
|
pes2o/s2orc
|
Altered endotoxin responsiveness in healthy children with Down syndrome
Background Down syndrome (DS) is the most common syndromic immunodeficiency with an increased risk of infection, mortality from sepsis, and autoinflammation. Innate immune function is altered in DS and therefore we examined responses in CD11b and Toll like receptor 4 (TLR-4), which are important immune cell surface markers upregulated in response to Lipopolysaccharide (LPS) endotoxin, and the immunomodulator melatonin. Neutrophil and monocyte responses to LPS and melatonin in children with Down syndrome (DS) who were clinically stable were compared to age-matched controls. Whole blood was incubated with LPS and melatonin and the relative expression of CD11b and TLR-4 evaluated by flow cytometry. Results Children with DS had an increased response to LPS in neutrophils and intermediate monocytes, while also having elevated TLR-4 expression on non-classical monocytes compared to controls at baseline. Melatonin reduced CD11b expression on neutrophils, total monocytes, both classical and intermediate sub-types, in children with DS and controls. Conclusion Melatonin could represent a useful clinical adjunct in the treatment of sepsis as an immunomodulator. Children with DS had increased LPS responses which may contribute to the more adverse outcomes seen in sepsis.
Background
Down syndrome (DS) is caused by an extra copy of genetic material from chromosome 21, and is the most prevalent chromosomal abnormality, affecting approximately 1 in 550 births in Ireland [1], and 1 in 700 births in the USA [2]. Co-morbidities associated with DS include developmental disability, congenital heart disease (CHD), gastrointestinal tract anomalies, and an increased risk of haematological malignancy [3]. In addition, it is the most common genetic syndrome associated with abnormal immune function and immune defects [4]. There is significant evidence of immune dysregulation in Down syndrome including T-cell and B-cell lymphopenia due to impaired expansion of these cell lines in infancy [5], a smaller thymus gland with reduced naïve T-cell and regulatory T-cell numbers [6], suboptimal antibody responses to vaccination [7][8][9][10], and abnormal levels of serum cytokines [11][12][13].
Children with Down syndrome are, therefore, at increased risk of infection, especially in early childhood, particularly respiratory tract infections [14]. Hilton et al. [15] reported a higher risk of admission to hospital and intensive care with respiratory tract infections (RTIs) in children with DS. Mortality from sepsis is 30% greater in patients with DS in comparison to children without DS who also had sepsis [16].
It is challenging to attribute the increased incidence of infections and sepsis seen in this cohort to a specific deficit of the immune system. A normal innate immune system is crucial in providing first line defence against infection. Neutrophils and monocytes are crucial cellular components of the innate immune system. Defective phagocytic activity and neutrophil chemotaxis have previously been reported in DS [17,18]. Monocyte function in DS is poorly described. Increased numbers of the non-classical (CD14dim/CD16+) pro-inflammatory monocyte sub-type have been described in DS in comparison to controls [19]. This monocyte population has previously been implicated in sepsis and chronic disease [20].
CD11b is a cell surface marker involved in mediating neutrophil and monocyte adhesion and diapedesis [21] and is an indicator of activation. Dysfunction in neutrophil adherence and migration has been shown to increase the risk of infection in adults and neonates [22]. Toll-like receptor 4 (TLR-4) is the key receptor involved in lipopolysaccharide (LPS) endotoxin recognition and activation of the innate immune system [23], and has also been implicated in the pathogenesis of autoimmune conditions such as systemic lupus erythematosus (SLE) and rheumatoid arthritis [24].
Immunomodulators can alter responses to infection, alleviate autoimmunity and ultimately improve patient care. Melatonin is an endogenous hormone which mediates its anti-inflammatory effects by modulating pro-inflammatory cytokines and de-activating the inflammasome, thereby ameliorating outcomes in endotoxaemia [25,26]. Melatonin has a very good safety profile and is used in paediatrics in sleep management [27]. Clinical trials in adults and neonates with sepsis have demonstrated improved clinical outcomes [28,29].
We hypothesized that children with DS have altered neutrophil and monocyte function which contributes to their increased susceptibility to infection and increased mortality from sepsis. We aimed to evaluate the in vitro effect of LPS endotoxin, and the anti-inflammatory melatonin on CD11b and TLR4 expression on neutrophils and monocytes in children with DS.
Study population
This study was approved by the ethics committees in the National Children's Hospital, Tallaght and Our Lady's Children's Hospital, Crumlin (OLCHC), Dublin, Ireland. All parents and participants received verbal and written information on the study and written consent was obtained in advance of recruitment. There were two patient groups studied: a) healthy children with Down syndrome < 16 years old attending the dedicated Down syndrome clinic; all children were clinically well with no recent fever or evidence of infection and were undergoing annual routine health surveillance; and b) age-matched controls: healthy controls attending phlebotomy or for day case procedures. Blood sampling occurred at induction of general anaesthesia and controls had no recent fever or evidence of infection.
Experimental design
All blood samples (1-3 mL) for in vitro experiments were collected in a sodium citrate anti-coagulated blood tube and analysed within 2 h of sample acquisition.
Blood sampling coincided with routine phlebotomy or with induction of anaesthesia for day case procedures. Whole blood was incubated at 37°C for 1 h with the pro-inflammatory stimulant lipopolysaccharide (LPS; E. coli 0111:B4; SIGMA Life Science, Wicklow, Ireland) at 10 ng/mL, the anti-inflammatory agent melatonin (SIGMA Life Science, Wicklow, Ireland) at 42 μM, and both combined.
Blood samples were incubated with a dead cell stain (100 μL; Fixable Viability Dye eFluor 506, Invitrogen, California, USA) diluted to working concentration in phosphate buffered saline (PBS). The following fluorochrome-labelled monoclonal antibodies (mAb) were added to each sample (2.5 μL per tube): CD14-PerCP, CD15-PECy7, CD16-FITC, CD66b-Pacific Blue and TLR4-APC (BioLegend®, California, USA), and PE-labelled CD11b (BD Biosciences, Oxford, UK; 10 μL per tube). PBA buffer (PBS containing 1% bovine serum albumin and 0.02% sodium azide) was used to make up the antibody cocktail. Samples were incubated in the dark for 15 min. Next, 1 mL of FACS lysing solution (BD Biosciences, Oxford, UK) was added to each tube, and the samples were then incubated for 15 min in the dark. Cells were pelleted by centrifugation at 450 g for 7 min at room temperature, washed twice with PBA buffer and fixed in 300 μL of 1% paraformaldehyde. The final cell pellet was resuspended in 100 μL PBA buffer and analysed on a BD FACS Canto II flow cytometer.
Quantification of cell surface antigen expression
The expression of CD11b and TLR-4 antigens on the surface of neutrophils and monocytes was evaluated by flow cytometry on the BD FACS Canto II cytometer. Neutrophils were delineated based on SSC-A and CD66b positivity as previously described [30]; monocytes were defined based on SSC-A and CD66b negativity, and subsets based on relative CD14+/CD16+ populations: classical (CD14+/CD16-), intermediate (CD14+/CD16+), non-classical (CD14dim/CD16+) (Fig. 1). A minimum of 10,000 events were collated and relative expression of CD11b and TLR-4 was expressed as mean channel fluorescence (MFI), and analysed using FlowJo software (Oregon, USA). Every sample was processed and analysed by the same researcher (DH), thereby reducing variability in results.
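The gating logic described here (neutrophils by CD66b positivity, monocyte subsets by relative CD14/CD16 expression) can be made concrete with a small sketch. The snippet below is purely illustrative: the column names, event table and encoding are assumptions made for this example, not the authors' actual cytometer export or FlowJo workflow.

```python
import pandas as pd

# Hypothetical per-event table; column names and the boolean/"high"/"dim" encoding
# are assumptions for this illustration, not the study's real export format.
events = pd.DataFrame({
    "CD66b_pos": [True, False, False, False],
    "CD14":      ["high", "high", "high", "dim"],
    "CD16_pos":  [False, False, True, True],
    "CD11b_MFI": [1450.0, 980.0, 1210.0, 640.0],
})

def classify(row) -> str:
    """Mirror the gating rules given in the text."""
    if row["CD66b_pos"]:
        return "neutrophil"               # CD66b+
    if row["CD14"] == "high" and not row["CD16_pos"]:
        return "classical monocyte"       # CD14+/CD16-
    if row["CD14"] == "high" and row["CD16_pos"]:
        return "intermediate monocyte"    # CD14+/CD16+
    if row["CD14"] == "dim" and row["CD16_pos"]:
        return "non-classical monocyte"   # CD14dim/CD16+
    return "other"

events["population"] = events.apply(classify, axis=1)
# Relative CD11b (or TLR-4) expression per population, summarised as MFI:
print(events.groupby("population")["CD11b_MFI"].mean())
```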
Statistics
Statistical analysis was done using paired and unpaired t-tests to compare mean results between two independent cohorts. Significance was defined as p < 0.05. Results shown are expressed as mean +/− standard error of the mean (SEM) unless otherwise stated. Data were analysed with FlowJo software (Oregon, USA) and GraphPad Prism.
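As a rough illustration of the comparisons described (the study itself used GraphPad Prism), the paired and unpaired t-tests and the mean ± SEM summaries could be sketched in Python with SciPy; all values below are made up for the example and are not study data.

```python
import numpy as np
from scipy import stats

# Made-up MFI values, for illustration only (not study data).
ds_group      = np.array([820.0, 910.0, 1005.0, 760.0, 880.0])
control_group = np.array([1010.0, 1120.0, 950.0, 1080.0])

# Unpaired t-test: two independent cohorts (DS versus controls).
t_unpaired, p_unpaired = stats.ttest_ind(ds_group, control_group)

# Paired t-test: the same children before and after LPS stimulation.
baseline = np.array([820.0, 910.0, 1005.0, 760.0, 880.0])
post_lps = np.array([1650.0, 1900.0, 2100.0, 1500.0, 1800.0])
t_paired, p_paired = stats.ttest_rel(baseline, post_lps)

# Results reported as mean +/- SEM, significance taken at p < 0.05.
print(ds_group.mean(), stats.sem(ds_group), p_unpaired < 0.05, p_paired < 0.05)
```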
Patient characteristics
There were 23 healthy children with Down syndrome (DS) with a mean ± SD age of 8.67 ± 4 years (y), of which 13 were female (57%), and 21 healthy controls with a mean age of 7.4 ± 4.60 y, of which 10 were female (48%). In the DS cohort, children with a history of significant congenital heart disease requiring surgery in infancy (n = 7) were all clinically stable with no further cardiology intervention. All control participants had no significant medical history. Both groups were well at the time of blood sampling with no recent history of infection.
Effects of LPS endotoxin on CD11b expression
Neutrophil baseline CD11b expression in children with DS was significantly lower compared with controls (p = 0.045). Following incubation with LPS, CD11b significantly increased in both groups (Fig. 1a: DS p < 0.0001; control p = 0.0001). When comparing the fold increase in CD11b expression from baseline, children with DS had a significantly higher rise after LPS stimulation (DS versus controls: 116% versus 62.4%; p = 0.03; Fig. 3a).
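The percentage rise quoted here is simply the increase in MFI relative to baseline. A minimal sketch, using invented MFI values chosen only to reproduce the reported 116% magnitude:

```python
def percent_rise(baseline_mfi: float, stimulated_mfi: float) -> float:
    """Percentage rise relative to baseline: 100 * (stimulated - baseline) / baseline."""
    return 100.0 * (stimulated_mfi - baseline_mfi) / baseline_mfi

# Invented example values: a rise from 1000 to 2160 corresponds to a 116% increase.
print(percent_rise(1000.0, 2160.0))  # 116.0
```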
CD11b expression on total monocytes showed no difference at baseline or after LPS stimulation between both groups.
Effects of LPS endotoxin on TLR4 expression
Neutrophil TLR-4 expression at baseline was not significantly different between children with DS compared to controls (p = 0.57). After LPS incubation there was no significant response in TLR-4 expression in either cohort (DS p = 0.15 vs control p = 0.057; Fig. 3a). On comparing the mean percentage rise in TLR-4 expression after LPS, there was a 9.4% rise in children with DS versus 28.7% in the control group (Fig. 5a; p = 0.23).
TLR-4 expression on total monocytes did not show any difference at baseline between children with DS and controls (p = 0.24). TLR-4 expression post LPS treatment increased significantly in controls (p = 0.016) but did not reach significance in the children with DS (p = 0.07). Non-classical monocyte (CD14dim/CD16+) TLR-4 expression was found to be significantly higher at baseline in children with DS compared to controls (p = 0.02; Fig. 3e). There were no significant differences in TLR-4 expression after LPS stimulation in either cases or controls (Fig. 5e; p = 0.96).
Effects of melatonin on CD11b expression
Neutrophil CD11b expression decreased significantly after melatonin treatment in both cohorts (DS p < 0.0001; controls p < 0.0001); compared with baseline, the mean percentage fall in CD11b expression was 25.8% in children with DS versus 23.1% in controls (p = 0.63). There were no differences in the mean percentage fall in CD11b expression when comparing LPS-treated samples and those treated with LPS and melatonin in both cohorts (Fig. 4a; p = 0.64).
Total monocyte CD11b expression reduced significantly after melatonin incubation in children with DS (n = 12; p = 0.02), but not in the control group (n = 17; p = 0.12). The mean percentage fall in CD11b MFI was 19% in children with DS versus 3.4% in controls (Fig. 4b; p = 0.24). CD11b expression in classical and intermediate monocytes also fell after melatonin, whereas non-classical monocytes showed a significant increase in CD11b expression after melatonin in children with DS (p = 0.03) and a non-significant increase in controls (p = 0.1). The mean percentage rise in CD11b expression after melatonin was 45% in children with DS versus 15.3% in controls (Fig. 4e; p = 0.12).
Effects of melatonin on TLR-4 expression
Neutrophil TLR-4 expression showed no significant change after melatonin treatment in either group. The mean percentage fall in TLR-4 expression was 4.4% in children with DS and 1.3% in controls (p = 0.82). Comparing LPS and LPS + melatonin treated samples, there was a 17.5% mean reduction in TLR-4 expression on neutrophils of children with DS compared to a fall of 4.8% in controls (p = 0.48; Fig. 5a).
Total monocyte TLR-4 was significantly reduced after melatonin incubation in both groups (DS p = 0.03; controls p = 0.05). The average percentage fall in TLR-4 expression after melatonin treatment was 13% in children with DS versus 11.4% in controls (Fig. 5b; p = 0.81). Monocyte subset analysis of the effect of melatonin on TLR-4 expression showed no significant reduction in either group.
Discussion
Neutrophil CD11b expression at baseline was significantly lower in children with DS compared with controls. Following LPS treatment children with DS upregulated CD11b, and this was significantly greater than controls. Novo et al. [18] reported that, at baseline, CD11b expression on neutrophils was not significantly different between children with DS (n = 12) and controls, although the smaller numbers and older population in this study may contribute to these findings. Our research suggests that although the level of CD11b may be lower under normal conditions, after contact with endotoxin there is an increased ability to activate and mobilise neutrophils in response to this stimulus. Neutrophils in children with DS may be hyper-responsive to endotoxin, which may have detrimental effects in the setting of sepsis. Adults with sepsis and renal injury in the absence of hypotension, have been shown to have increased activation of neutrophils with upregulation of CD11b [31], worsening prognosis. Furthermore, neutrophil mediated lung injury in sepsis, and multi-organ dysfunction (MODS) have been associated with increased CD11b expression on these cells [32,33]. A blockade of this receptor could have potential benefits in these clinical contexts [34]. In paediatric studies LPS hyper-responsiveness has been demonstrated through increased CD11b expression on neutrophils and monocytes of neonates with encephalopathy [35,36], these infants having developed significant immune dysregulation. Zhou et al. [37] examined TLR4 signalling and the CD11b response on polymorphonuclear cells in mice. The authors concluded that TLR4 mediates CD11b upregulation and is key for PMN activation in response to LPS. Further correlation between CD11b and TLR4 has been described by Guang et al. [38] who reported that CD11b mediates TLR4 signalling and trafficking in a cell specific manner in dendritic cells and macrophages, having a crucial role in balancing the innate and adaptive response to LPS. It appears that the two receptors are inter-linked and have important regulatory roles on one another in initiating the innate immune response.
Zhang et al. [39] demonstrated that mice deficient in CD11b exposed to Mycobacterium tuberculosis developed more severe granulomas, higher leucocyte recruitment and elevated pro-inflammatory cytokines. This demonstrates the immunomodulatory effect neutrophil CD11b expression exerts on the host response to infection. A persistent inflammatory response can be seen in autoimmunity, which has a higher prevalence in DS; recent studies suggest that reduced CD11b is associated with chronic inflammation in SLE and lupus nephritis [40,41]. Neutrophil CD11b is also decreased in septic shock and correlated with poorer outcomes [42,43]. In this context, the increased incidence of both autoimmunity and sepsis in DS is particularly noteworthy [16,44]. We demonstrated that melatonin caused a predominant decrease in CD11b expression in both cohorts (Fig. 4). We also showed that children with DS exhibited a hyper-responsive CD11b response to LPS in neutrophils (Fig. 4a). In the acute setting of sepsis/SIRS an upregulation of CD11b may be associated with deleterious effects [45]; furthermore, a positive correlation between CD11b expression and the degree of systemic inflammation has been described [46], making melatonin a potential adjunct in acute sepsis/SIRS.
The classical monocyte (CD14+/CD16-) accounts for the largest proportion of monocytes (80-85%) and its main functions include antigen presentation and phagocytosis [47]. We found classical monocytes exhibited significantly higher CD11b expression at baseline, and greater fold increases in CD11b after LPS than other monocyte sub-populations in both groups. This sub-group also displayed the largest rise in TLR-4 after LPS compared with other monocyte subpopulations in both cohorts. This suggests that classical monocytes are significantly pro-inflammatory with the largest CD11b and TLR4 response to LPS than any other sub-population. Regarding differential CD11b expression on monocyte subsets Tak et al. [48] reported no significant differences, whereas another study examining differential in vivo activation of monocyte subsets reported the most significant rise in CD11b on the intermediate monocyte [49]. However, these studies [48,49] characterised CD11b expression after lower doses of LPS with longer incubations in an adult in vivo setting as compared to our study which was undertaken in a paediatric cohort. Monocyte CD11b was highest on classical and intermediate monocytes [50].
Intermediate monocytes (CD14+/CD16+) are elevated in the setting of acute illness such as sepsis in children [51]. In our study, there was a significant rise in CD11b expression after LPS stimulation in children with DS but not in controls on intermediate monocytes. This adds to the evidence that there are hyper-responsive elements to the innate immune system in children with DS. Indeed, intermediate monocytes produce significant quantities of TNF-α once activated [50]. Previous studies have demonstrated elevated levels of TNF-α in patients with DS compared with healthy controls [13] at baseline. Intermediate monocytes demonstrated the greatest TLR-4 at baseline compared with other monocytes in both groups which has also been demonstrated in adults [20].
Non-classical monocytes (CD14dim/CD16+) have been implicated in both acute and chronic disease and have a pro-inflammatory phenotype with increased production of IL-1β and TNF-α [47]. This monocyte sub-group had significantly lower CD11b and TLR-4 expression in both groups at baseline. Furthermore, non-classical monocytes demonstrated a relative hypo-responsiveness to LPS versus the other sub-populations. Boyette et al. [50] assessed the phenotype, function, and differentiation of monocyte subsets, and reported that non-classical monocytes had the lowest CD11b MFI and that there was the smallest response in this subset following TLR-4 stimulation. We found baseline TLR-4 expression was significantly raised in children with DS versus controls. The TLR-4 response plays a significant role in fighting infection but may also be responsible for the dysregulated inflammation seen in septic shock [52]. Williams et al. noted an increased mortality in mice with polymicrobial sepsis who exhibited early up-regulation of TLR-4, and improved survival in those with suppressed TLR gene expression [53]. Suppression of TLR-4 activation, pro-inflammatory cytokine release, and developing endotoxin tolerance is important in limiting the adverse effects of sepsis. Furthermore, a failure of this protective negative feedback process may contribute to increased mortality in sepsis [54].
We demonstrated that melatonin has an anti-inflammatory influence on innate immune function by reducing CD11b expression on neutrophils and total monocytes in children with DS and controls, thereby inhibiting neutrophil and monocyte activation and migration. Although there is a paucity of literature on the effect of melatonin on CD11b, Alvarez-Sanchez et al. reported a reduction in CD11b in melatonin-treated mice [55]. A significant reduction in TLR-4 expression only occurred in total monocyte populations. Melatonin may act as a TLR-4 antagonist and may modulate TLR-4-mediated inflammatory genes through the myeloid differentiation factor 88 (MyD88)-dependent and TRIF-dependent signalling pathways [56], thereby attenuating inflammation.
Melatonin has beneficial immunomodulatory effects in the setting of sepsis by inhibiting mitochondrial dysfunction and inflammation, reducing nitrosative and oxidative stress [29]. Melatonin has robust antioxidant and free radical scavenging activity [57,58], and melatonin administration also impairs NF-κB transcriptional activity, reducing pro-inflammatory cytokine (IL-1β, TNF-α, IFN-γ) release and inhibiting activation of the NLRP3 inflammasome [59]. Melatonin improved survival and clinical outcomes in neonates with sepsis versus controls [28,60,61]. We have demonstrated that the immunomodulatory effects of melatonin in sepsis can also be broadened to include reducing neutrophil and monocyte activation.
Melatonin increased CD11b expression on non-classical monocytes and to a significant level in the children with DS. However, it has been shown that melatonin can have pro-inflammatory actions in response to endotoxaemia. Effenberger et al. [62] reported that melatonin enhanced the general immune response following LPS treatment. Melatonin may have differing actions on distinct cell lines, with the pro-inflammatory non-classical monocyte being preferentially activated. Further evaluation of the immunomodulatory properties of melatonin in children with DS will allow assessment of its potential as a therapeutic agent.
Conclusion
This research highlights important differences in the innate immunity of children with DS versus age-matched controls. To our knowledge this has not been studied previously in this population. Children with DS have an increased response to LPS in neutrophils and intermediate monocytes, while also having elevated TLR-4 expression on non-classical monocytes compared to controls. These variations may be a contributory factor in a heightened/dysregulated innate immune response, which may have deleterious effects, leading to the worse outcomes seen in sepsis in these children. Lastly, melatonin could represent a useful clinical adjunct in the treatment of sepsis as an immunomodulator and our study suggests its anti-inflammatory effects also influence neutrophil and monocyte function.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Authors' contributions
The manuscript is being submitted on behalf of all the authors and is the original work of all authors. DH was responsible for recruitment, sample acquisition, lab experiments, analysis and writing the main draft of the manuscript. FM, NL, ER, JB and OF were responsible for patient recruitment, sample acquisition, and reviewing and editing the manuscript. AM and AMM provided expertise in sample analysis on the flow cytometer, review of those results and of the manuscript. TRL, DD and EM were instrumental in study design, supervising the research and its outcomes and providing key editorial assistance. All authors had editorial license to review and re-draft the manuscript, and all contributors had to approve the final edit. All authors are accountable for the accuracy and scientific integrity of this work.
Ethics approval and consent to participate
Our research involved whole blood samples from paediatric patients. Ethical approval was received in advance from the research and ethics committees of Our Lady's Children's Hospital, Crumlin (Ref: GEN 565/17), and the SJH.AMNCH ethics committee (Ref: 2017-05). Approval letters are available on request. All participants' parents received an information leaflet and full written consent was obtained prior to enrolment in the study.
Competing interests
The authors declare that they have no competing interests.
|
v3-fos-license
|
2017-06-25T09:08:58.010Z
|
2011-12-22T00:00:00.000
|
667914
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/1824-7288-37-59",
"pdf_hash": "44095023508ba3892fb08d63bf4edbb2a15e2ccb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:545",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "bbca5e34443a69314b0df1f103f52c029b37f5cc",
"year": 2011
}
|
pes2o/s2orc
|
Quality of life measures in Italian children with atopic dermatitis and their families
Background The impact of atopic dermatitis (AD) on children's quality of life (QoL) in US and European countries is relatively well known, though rarely evaluated in the Italian population. Moreover, the association between child age and QoL has not been sufficiently investigated, even though a few studies have detected a worse QoL in the youngest AD children. The aim of the study was to evaluate the QoL in an Italian sample of atopic children and their families, also exploring a possible association with child age. Methods 60 AD children aged 1-12 years and their mothers completed specific QoL questionnaires (IDQoL/CDLQI, DFI) and a clinician completed a measure of AD severity (SCORAD). Results AD severity (Objective SCORAD) significantly correlated with QoL measures. Severe AD children showed higher IDQoL/CDLQI and DFI scores compared to mild and moderate AD groups (P = 0.006 and P < 0.0005, respectively), but only DFI scores differed in these last two conditions (P = 0.014). DFI scores negatively correlated with children's age (P = 0.046), but did not differ when considering child age ranges. Multiple linear regression analyses revealed a significant association between Objective SCORAD and QoL measures. Conclusions A strong association between severe AD and poor QoL, both in children and mothers, was found in the Italian sample, in line with the international literature. Family QoL scores were more markedly related to AD severity than the child's QoL, emphasising that the disease has a deep impact on the family. A significant association between age and QoL was only partially found and needs further investigation.
Background
Atopic dermatitis (AD), also known as atopic eczema, is one of the most common chronic inflammatory skin disease which occurs during childhood, affecting 10-20% of children in Europe [1], and 17% of children in the United States [2]. Time trends in atopy have shown a substantial increase since the early 1960s and there is evidence of consistent associations of these disorders with a Western lifestyle [3].
Childhood AD usually begins in the first five years of life, with about 60% of cases appearing within the first year, frequently between 0 and 6 months [4].
Recently, the detection of these adverse consequences has led to efforts to estimate the impact of AD on young patients' Quality of Life (QoL) [12,15-19], showing how a worse skin condition is frequently associated with a poorer QoL.
QoL is a wide concept, influenced by physical health, psychological state, level of independence and social relations. The World Health Organization (WHO) has tried to give quality of life (QOL) an exact definition: "The individual's perception of his position in life in the context of culture and value systems in which he lives and in relation to his goals, expectations, standards and concerns" [20].
In medicine, attempts to construct methods for measuring QOL have primarily focused on individuals with chronic diseases with elevated costs of care and treatment, in order to better evaluate the impact of the disease on patients. A recent study [6] found that atopic children had a worse QoL than children affected by other chronic diseases (like, for example, naevi, acne, alopecia, diabetes, psoriasis, asthma, cystic fibrosis, generalised eczema), except cerebral palsy. Moreover, some studies evaluated the association between child's age and QoL, where a major impact was observed in children of youngest age [21,22] and their families [21].
While several studies have explored prevalence of AD and related QoL in childhood, in USA and European countries, to our knowledge few published studies have analysed these aspects in Italian children [8,23,24].
The present study is a cross-sectional analysis, the aim of which was to investigate the impact of AD on the QoL of an Italian sample of atopic children and their families. The expected result was an association between a worse skin condition and a poorer QoL in children and their parents. Secondly, the study aimed at exploring a possible association between child's age and child's and family's QoL.
Study Subjects
During the period March 2006-March 2007, all consecutive AD patients (aged 1-12 years), attending the Dermatology Unit of M. Bufalini Hospital, Cesena (Italy), and their mothers were considered eligible for the study and therefore asked to participate.
AD patients were enrolled in the study during a routine check-up in the Day-Hospital by a dermatologist, who registered the clinical profile and calculated the objective SCORAD, following the diagnostic criteria proposed by Hanifin and Rajka [25]. A psychologist then presented the research project to the mothers and their children; all subjects who were asked to participate consented to take part in the study and received the informed consent form and a range of questionnaires to complete: Infants Dermatitis Quality of Life Index (IDQoL), Children's Dermatology Life Quality Index (CDLQI), Dermatitis Family Impact (DFI). All questionnaires were completed entirely. For children aged 5-12 years, all the mothers completed the DFI, while their children completed the CDLQI; younger children's mothers helped them in understanding and filling in the questionnaires. For the children aged 1-4 years old, all mothers completed the DFI and the IDQoL.
Ethical approval (n°PR 887026) for the study was obtained from the Hospital Ethics Committee.
Instruments
The instruments used in the study were: The SCORAD [26], an index aimed at assessing AD disease severity, where the higher the value the worse the skin condition. Given that in literature [27] it has been demonstrated that subjective symptoms do not always correlate with disease severity and objective assessment, and in relation to the recommendations by Oranje et al. [27], we used the objective SCORAD, instead of the SCORAD index, including extent and intensity of the lesions, divided into three levels: < 15 mild, 15-40 moderate, > 40 severe [27].
A questionnaire including items on the main sociodemographic characteristics.
The Infants Dermatitis Quality of Life Index (IDQoL [28]; Italian validated version [29]), a disease-specific measure for AD children between 0-4 years old, consisting of 10 items concerning physical and social functioning and completed by the caregiver. A total score is calculated (range 0-30), where higher scores represent a poorer quality of life.
The Children's Dermatology Life Quality Index (CDLQI), a skin-related measure for children between 5-16 years old [30], consisting of 10 items on physical and social functioning. The total score is between 0 and 30, where higher scores represent a poorer quality of life.
The Dermatitis Family Impact questionnaire (DFI [13]; Italian validated version [29]), a 10-item scale measuring the impact of AD on families with a child affected by the disease, validated on a parents' sample of AD children aged 6 months-10 years old; the total score is calculated and ranges from 0 to 30, where, again, the higher the scores the poorer the quality of life.
Statistical analysis
Disease severity groups (mild, moderate, severe) were defined in function of the objective SCORAD ranges; patients were also classified in respect to specific age groups (1-4, 5-7 and 8-12 years), which were chosen considering main developmental stages [31,32].
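The severity banding used for grouping (objective SCORAD < 15 mild, 15-40 moderate, > 40 severe) amounts to a simple threshold rule. A minimal sketch, for illustration only:

```python
def scorad_band(objective_scorad: float) -> str:
    """Assign the severity group used in the study from the objective SCORAD."""
    if objective_scorad < 15:
        return "mild"
    if objective_scorad <= 40:
        return "moderate"
    return "severe"

print(scorad_band(10), scorad_band(25), scorad_band(47))  # mild moderate severe
```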
Qualitative variables were compared using the Chi Square test. Quantitative variables were analyzed first of all using correlations (Pearson's coefficient) and then using ANOVA; specifically, HRQoL scores of participants in different groups (according to AD severity levels and age groups) were compared using ANOVA and performing post hoc analyses with Tukey's Honestly Significant Difference (HSD) test. A multiple regression was performed in order to verify how much each variable contributed to an explanation of the HRQoL scores. All the following variables were included as predictors: child's gender, child's age, Objective SCORAD, presence of other allergies, caregiver's age, marital status and socio-economic level. Considering the low number of subjects, the Enter method was chosen as the safest one for the regression.
Statistical analyses were performed using the SPSS Package for Windows software, version 17.0.
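The analyses were run in SPSS 17.0; purely as an illustrative sketch of the same pipeline (correlation, chi-square, one-way ANOVA with Tukey HSD post hoc tests, and an "Enter"-style multiple regression), an equivalent could be written in Python with SciPy and statsmodels. All data below are fabricated for the example and do not reflect the study's results.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Fabricated records: severity group, objective SCORAD, age and a QoL score.
df = pd.DataFrame({
    "severity":   ["mild"] * 5 + ["moderate"] * 5 + ["severe"] * 5,
    "obj_scorad": [10, 12, 8, 14, 11, 20, 25, 30, 35, 22, 45, 50, 42, 55, 48],
    "age":        [3, 5, 7, 2, 9, 4, 6, 8, 10, 3, 5, 7, 2, 9, 11],
    "qol":        [3, 4, 2, 5, 3, 6, 8, 7, 9, 6, 12, 14, 11, 16, 13],
})

# Pearson correlation between objective SCORAD and QoL.
r, p_corr = stats.pearsonr(df["obj_scorad"], df["qol"])

# Chi-square test on a (made-up) severity x age-group contingency table.
chi2, p_chi, dof, _ = stats.chi2_contingency([[5, 6, 5], [10, 10, 10], [5, 4, 5]])

# One-way ANOVA across severity groups, then Tukey HSD post hoc comparisons.
groups = [g["qol"].values for _, g in df.groupby("severity")]
f_stat, p_anova = stats.f_oneway(*groups)
tukey = pairwise_tukeyhsd(endog=df["qol"], groups=df["severity"], alpha=0.05)

# Multiple linear regression with all predictors entered simultaneously ("Enter" method).
X = sm.add_constant(df[["obj_scorad", "age"]])
model = sm.OLS(df["qol"], X).fit()

print(r, p_corr, chi2, p_chi, f_stat, p_anova, model.rsquared_adj)
print(tukey.summary())
```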
The majority of mothers were married (89.6%); most of them had a permanent job (78.3%): 48.3% of them were employees, 10% were self-employed and 20% worked in other fields. According to Hollingshead criteria for socioeconomic status [33], women were classified as belonging to one of three levels: low (27.1%), medium (60.4%) and high (12.5%).
According to the categories of the objective SCORAD, 26.7% (16) of children showed a mild AD, 50% (30) a moderate AD and 23.3% (14) a severe AD. The distribution of children across AD severity levels and age ranges did not show any significant difference (χ2 = 0.559, df = 2, P = 0.97; Table 1).
Also, the distribution of women in function of AD severity and levels of socio-demographic variables did not show significant differences (marital status: P = 0.25; job: P = 0.34; socioeconomic status: P = 0.64).
Quality of life in relation to disease severity and child's age: comparisons among patient groups
IDQoL/CDLQI mean score in the total sample was 7.0 ± 5.21 (range 0-19). According to AD severity (Objective SCORAD categories), IDQoL/CDLQI mean scores appeared to be statistically different (P = 0.006): post hoc analyses showed that the group with severe AD got a higher score compared to the group with mild (P < 0.0005) and moderate AD (P = 0.038), but these last two were not statistically different (P = 0.063; Table 2). When considering child's age range, differences in IDQoL/CDLQI mean scores did not emerge among groups (P = 0.28; Table 2), not even when looking at the interaction between AD severity and age range (P = 0.87).
DFI mean score in the total sample was 7.95 ± 6.21 (range 0-23). DFI mean scores as well differed in relation to AD severity levels (P < 0.0005): the group with severe AD showed a higher score compared to mild (P < 0.0005) and moderate AD groups (P = 0.024) and the latter was significantly higher than the mild group (P = 0.014; Table 2). Further significant differences did not emerge when considering the child's age range variable (P = 0.074; Table 2), nor when looking at "AD severity x child's age" (P = 0.99).
Multiple linear regression of variables associated with QoL measures
Using the Enter method, a significant model emerged (P < 0.037), in which 22% (Adjusted R2 = 0.217) of the variance in IDQoL/CDLQI scores was explained by Objective SCORAD (Beta coefficient = 0.479, P = 0.003), while all the other considered variables (sex, child's age, food allergies, mother's age, marital status, socio-economic level) did not significantly contribute to the variance (Table 3).
Considering DFI scores as the criterion variable, a significant model emerged too (P < 0.008); again, Objective SCORAD (Beta coefficient = 0.558, P < 0.0005) appeared to be the only significant predictor.
Discussion
To our knowledge, the current study is one of the few published investigating QoL in an Italian sample of children with atopic dermatitis. Globally, the results showed a strong association between QoL and disease severity, measured by Objective SCORAD, as already shown by international literature [16,17,19]. In fact, the study found that a bad skin condition was significantly associated with a poorer child QoL and with a significant worsening of family's QoL, evidencing how AD tends to affect the whole family system and not only the individual patient. Compared to a similar Italian study by Ricci et al. [8], including 45 AD children aged between 3 months-7 years old, our children and families' QoL scores were noticeably lower, but we must consider that a higher percentage of severe AD was present in that sample (44.4% severe AD children vs 23.3% in our study).
Looking into the details of our results and considering both individual patients' and families' QoL scores, we found some similarities and differences between the two.
First of all, both QoL scores showed a moderate correlation with Objective SCORAD, reporting very similar coefficients; the scores were also significantly higher in severe AD children, compared to moderate and mild AD conditions. That is, poorer QoL was strictly associated with a more serious AD condition.
At the same time, differences between patients' and families' perspectives emerged in relation to the following aspects: while IDQOL/CDLQI scores were almost equal in moderate and mild AD children, DFI scores showed significant differences across all the 3 levels of AD severity. This means that, in our sample, families' QoL appeared to be noticeably affected by the intensity and severity of the child disease, more than the individual QoL. This aspect emphasises the main role played by the family in the management of the disease and should be more frequently kept in mind by clinicians. Another difference that emerged between IDQOL/ CDLQI and DFI measures was that only the latter appeared to be associated with child's age. In fact, DFI scores showed a negative correlation, even if weak, with the child's age: the parents of younger AD children seemed to experience a poorer QoL, as already suggested in the study by Ganemo et al. [21]. Anyway, given the small size of our sample, the non significant P value detected when considering DFI scores in relation to different age groups (P = 0.074) and the non-relevance of the child's age as predictor variable in regression analysis, more investigations are needed.
The study has nevertheless some limitations. First of all, difficulties in recruiting the subjects have resulted in a limited sample size and a reduced number of considered variables and, because of this, further analyses on wider Italian samples are needed in future to obtain more statistically and clinically relevant data. Secondly, we had no control group and this did not allow the comparison between our results and the QoL of healthy children or children affected by other dermatological diseases. In future studies, therefore, it would be relevant to investigate whether in the Italian population the trends shown in literature, where AD subjects exhibit poorer QoL compared to control groups or other clinical groups, are confirmed [6,19].
Conclusions
However, globally our data support the main clinical implications derived from recent international studies on QoL in atopic children; as underlined by Brenninkmeijer et al. [34], "in cases of severe AD, dermatologists should not only be attentive to the physical aspects but also to the psychological and social aspects of AD. In conclusion, we believe that in clinical care a systematic evaluation of physical and psychosocial consequences in patients with AD is warranted". Understanding HRQoL in the patient and his/her family might help professional operators to improve their relationship with them and to facilitate the management of treatment regimes. While early investigations tended to focus on socio-demographic and clinical variables, recent studies have shown that there are multiple factors which determine adherence, such as personal perception of health status, coping style, worries, burden, social reasons, motivation and side effects of treatments [6,35]. Warschburger et al. [36] have underlined how parental disease management is predicted by familial situation, personal well-being and severity of child disease.
For these reasons, it would be useful in future studies to focus on psychosocial and psychological variables that could affect families' QoL, such as analysing whether there is a substantial difference in mother and father's perceptions, as investigated by Holm et al. [37] and Chernishov [38], or if comorbidity with other parents' diseases can have some influence on their ability to provide the child with adequate care. At the same time, studies examining the effects of education programs on parental support should be promoted [39].
|
v3-fos-license
|
2019-07-24T13:55:15.682Z
|
2019-07-08T00:00:00.000
|
198177181
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://uwe-repository.worktribe.com/preview/850711/Scylla%20minor%20revisions%20final.docx.%2028129.pdf",
"pdf_hash": "37b3f4a4d3b0930bfcdc737463184e7e76766db2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:546",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "baf9029b9e6e21aa6cfcfca7b7ba37147c488245",
"year": 2019
}
|
pes2o/s2orc
|
Between Scylla and Charybdis: Losing balance in an age of extremes
This article charts a course from the literature of the classical world through to a select number of classic texts in sociology by Marx, Weber and Bauman, and some more recent psychoanalytically informed studies, to offer a metaphor for the vicissitudes of structure and fluidity in this time of turbulence and extremes. The episode from Homer’s Odyssey featuring the passage of Odysseus and his men between the twin monsters of Scylla and Charybdis is seen as offering an image of the two tendencies of our time towards extreme forms of fluidity in the form of the liberalisation of markets and marketisation of most private and public sectors on the one hand, and a proliferation and mutation of bureaucratic practices seen as an aspect of structural conditions on the other. The dysfunctional relation and polarisation of these tendencies and their consequences are analysed in turn as leading first to a culture of narcissism (Lasch, 1978), then to perversion (Long, 2008; Hoggett, 2010) at a social as well as a socially constructed individual level. The article makes use of free association and amplification in working with images and metaphors.
Introduction
We live in turbulent, confused and confusing times. Depending on one's location and context, one's experience may go from one extreme to another in terms of how much and what kind of control and bureaucracy one might be subjected to, or how much speed and enterprise be expected and demanded of one. In commercial and private business practices the tendency is towards flat structures (see GOOGLE for instance), less faith in a Panopticon (Foucault, 1977) style of command and control, or at least the emergence of doubt in their suitability, while in the arrangements of public institutions bureaucratic, procedural top down practices seem to be proliferating, at least in UK and other European contexts. These extremes are what the title alludes to, while also being cognisant of Hobsbawm's (1994) historical work. Yet these extremes do not exist in isolation from each other; it is the nature of their relation, how they manifest in different guises and combinations in public service institutions, and what they produce that is the subject of this article.
A simple way of characterising this is given by the example of UK Home Office practices that surfaced recently in what has become known as the Windrush scandal, whereby the purposeful creation of a hostile bureaucratic environment meant the deportation and stripping of right to work, pensions and healthcare for people who were originally invited from the Caribbean British colonies to work in the UK. They were already British citizens at the time, yet many do not now have the right documents to prove such despite a lifetime in the UK.
Bureaucratic target driven measures of immigration control were and are being followed to the letter, without hesitation or deviation, to curb immigrant flows, totally ignoring the historical nature of the situation or its ethical social justice dimensions.
The main argument put forward in this paper is that underlying these dynamics there is a reactive and perverse relationship between what Bauman (2000) termed the liquidity of late modernity and top down procedural bureaucratic measures in most public service institutions.
Perversely and paradoxically, while there is a relation between them, this does not offer containment, rather it is dysfunctional, polarised, often based on reactive and reactionary notions, founded on projections and leading to an increasing disconnection between tasks and processes and a fragmented and persecutory environment.
The purpose of the paper is to track this dysfunctional dynamic between two tendencies, loosely characterised in terms of structure and fluidity, at the hand of images and classic texts. The work can be seen as strongly associational and it is heavily influenced methodologically by: a) psychoanalytic work based on free associations and amplification most often used to work with image based content, and b) Bergson and Deleuze's philosophical orientation (see also Manley, 2018). What these frameworks indicate is that an important step towards understanding may be given by discerning and acknowledging some ubiquitous processes and tendencies in social dynamics, also subject to different types of inter-relatedness. What a Deleuzian perspective offers is also a way of thinking that follows the 'becomings' of such tendencies, in other words their transformations and torsions in specific settings, which in my view is best achieved through free association and amplification processes. Thus it identifies basic differences in kind, in this case structure and fluidity as elements of analysis, while acknowledging that in reality 'things…are always mixtures' and 'only tendencies are simple and pure' (Deleuze, 1999a: 45, his italics). The mixtures vary, but one can identify these tendencies in different guises as torsions of the same, topological transformations (Massumi, 2002: ch. 8), rather than totally different in kind, while at the same time some disguises are such that one pretends to be the other, as in the way quality of experience is supposed to be measurable by tick box menus of choices.
The tendencies are mostly seen in terms of structure and fluidity, but also, following a Bergsonian model, in their association to different manifestations, such as quality and quantity, also subject to a particular kind of dysfunctional relatedness in contemporary social and political dynamics. In keeping with its associational methodology the article assumes that images encapsulate complex messages, attempting therefore, at least partially, to unpack them, while allowing the images to stand as metaphors for the dynamics illustrated. It does this by drawing on some classic images: starting with the episode from Homer's Odyssey of Odysseus navigating between Scylla and Charybdis, the twin monsters standing in for the dangerous aspects of extreme fluidity and petrified structures; moving then to Bauman's 'liquid modernity' and Weber's 'iron cage of bureaucracy' from his classic work on The Protestant Ethic and the Spirit of Capitalism, which are then explored and elaborated with reference to Lasch's (1978) Culture of Narcissism and other more recent psychoanalytically informed perspectives. Instrumentalisation is seen as a central feature of the neoliberal practices in the public and commercial sectors; narcissism as a consequence of instrumentalisation and fluidity, and perversion as an additional and more corrosive consequence of neoliberalism than narcissism.
The key orientation of the paper is psychosocial, drawing from key sociological literature and allying this to psychoanalytically derived concepts and methodologies. Its central tenet is that there is a growing tendency towards a reactive, doer/done to (Benjamin, 2004) relation between structure and fluidity, quality and quantity, which is most often unrelated and inimical to the task of institutions or organisations. As Mark Fisher (2009) also pointed out, neoliberal capitalism's ideological concern with profit has not liberated us from bureaucratic control. My point is that a quantitative focus on performance is at the expense of qualitative aspects, such as the increasing corrosion of character (Sennett, 1998) and mental health in human lived experience. Where the focus on efficiency and productivity contends with aspects of ethical significance, effort is seemingly expended in trying to subdue the excess fluidity, such as staff turnover for example, by procedural and/or bureaucratic means, increased hierarchical and regulatory controls, rather than professional ethics. However this does not address the conditions of turbulence it is trying to manage; rather it fosters an emptying out of values, a culture of shallowness, narcissism, increased turbulence and acceleration and, paradoxically, a drifting apart while coming closer of the two tendencies. While the paper draws from classics in literature and sociology, psychoanalytic concepts such as narcissism and perversion give some purchase to our understanding of the lived experience that results from these dynamics.
Between Scylla and Charybdis
The episode from Homer's Odyssey provides a metaphor for our time, in spite of its ancient provenance. Odysseus is a figure of interest in his own right. Horkheimer and Adorno in their Dialectic of Enlightenment (1947: Excursus I) present him as the cunning hero of the Odyssey, itself an epic of individualism, as opposed to the Iliad, an epic of civilisations (see also Crociani-Windland, 2011: 116-122 for more on cunning). On a reflexive note my own upbringing in Southern Europe and previous training in the Classics may have influenced my choice in terms of metaphors, yet I am not alone in seeing this epic as the origin of Enlightenment and Western culture, beyond the difference between North and South that Weber (1904-05/1992) pointed to as foundational to the emergence of capitalism.
Being between Scylla and Charybdis, managing not to fall prey to either monster, has come to be part of common English language. This refers to the episode of the Odyssey (Homer, Odyssey 12. 231 ff), when, having left Circe, who had turned some of his men to swine and back to humans, Odysseus had to navigate what Virgil (Aeneid 3. 418 ff, trans. Day-Lewis) and other classical authors thought to be the Straits of Messina. Being in dire straits is of course another proverbial expression for being beset by danger. The narrative in brief tells us that Odysseus and his men had to pass between Scylla, a six-headed monster, and Charybdis, a whirlpool on the opposite shore. The one imprisoned, chewed and devoured, while the other swallowed whole. On the sorceress Circe's advice Odysseus clung to Scylla's side to avoid Charybdis, being told that thus he would forfeit six men, but save the rest. He had told the men about Charybdis, but not Scylla, lest fear impair their rowing. As they approached the strait, they could hear the roar of Charybdis and as they neared it Odysseus was enthralled by its boiling turbulence and did not see Scylla's long necks stretching to swoop his men away. Watching them being devoured, Odysseus claims, 'was the most sickening sight I saw in all my voyages' (Homer, Odyssey, book XII, trans. Butler).
Overall, the image speaks of the turbulence created when land constricts flow, as happens in the geographical feature of straits and generally when water flow is constricted, as when it goes down the plug-hole; in other words when structure overly restricts flow. Without delving into the science of this in detail, it may suffice to leave this as an image for the purposes of this paper. Homer gives more geographical detail though. Scylla is in a cave in a rock so high, you cannot see its peak, shrouded in mist at all seasons. The rock cannot be climbed and the cave is so high arrows cannot reach it. How many people these days feel they cannot see where the pressure comes from or get to where the power or responsibility lies? Interestingly Scylla is described as having six heads, each with three rows of teeth, twelve misshapen feet, yet her canine yelp could be thought to be that of a pup. In other words her nature is multiple and most fearful, but sounds innocuous. Nonetheless the sorceress Circe had advised this to be the lesser evil. Her advice was instrumental and utilitarian: sacrifice six men to save yourself and the rest of the crew. The six men Scylla devoured were the strongest and most able. How many restructuring processes and redundancies have the same effect? Charybdis is not described in the same direct way: all we are given is a description of her belching and swallowing water and that she lies between low rocks with a fig tree growing above. It is indeed far harder to describe and understand turbulence and fluidity, as scientists, philosophers and social scientists who have struggled to get a grip on such matters would attest. Nonetheless, as stated earlier, where land constricts flow turbulence is produced. Scylla's cave and rocky cliffs stand here metaphorically for imprisonment and demise by structure, and Charybdis for obliteration by extreme fluidity, the turbulent whirlpool able to swallow and drown us. Bauman's (2000) idea of Liquid Modernity seems to be apt in relating to turbulence.
Liquid or Fluid Modernity - All that is solid melts into the air… (Marx)
Bauman's (2000) book which bears the title of Liquid Modernity already spoke of the turbulence of our time, characterised by constant mobility and change in relationships, identities, and global economics. The reality of this is part of everyone's experience and it is part of an accelerating trajectory hard to capture in any definitive way. There are signs of what Bauman characterises as a move from "pilgrims" in search of deeper meaning to "tourists" in search of multiple but fleeting social experiences. Certainly in terms of Western values we have gone from a monotheistic outlook, one God, one truth, to a pick and mix approach to belief (Luckman, 1991), yet we also see renewed searches for meaning and identity beginning to emerge and be commented on within social science and management literature (Gabriel, 2012). In terms of modernity we have moved from organised to disorganised modernity (Lash and Urry, 1987), from hardware to software to 'everyware' or ubiquitous computing (Greenfield, 2006). Boundaries of time and space have been eroded (Giddens, 1990) and most of us experience information overload. Plenty of information actually results in very little knowledge, as our minds stop being able to cope with the inputs.
Identity is tied to consumption and choice.Choice both defines us and torments us, it is hard to choose in the midst of so much information and so little knowledge, meaning a qualitative aspect, ie knowledge as understanding and critical capacity, is being replaced by a quantity of facts parading as knowledge.'Compare' sites, Linkedin and other professional network based systems are cashing in on the need to sort out the wheat from the chaff in choosing who to associate with or buy from, yet the trustworthiness of such systems has to be constantly questioned: for example a recent item of news was that product reviews are written by paid people in the most varied locations, who may never have seen the goods they offer five star reviews for.And then there are fake news… Anthony Bryant extends some of Bauman's ideas in an article (2007) titled Liquid Modernity, Complexity and Turbulence.He begins by linking Bauman's main argument to quotes from Marx's Communist Manifesto, he sees particularly this quote as at the core of Bauman's work: 'All fixed, fast frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify.'I beg to agree and differ, if that can be a defensible position.It is partly true and partly untrue, in that it reflects the speed of change and yet what may be on the increase now is a changing of the same, old wine in new bottles, as the revival of populist fear driven and fear driving narratives attests.Bryant has a positive take on the turbulence and liquidity pointing out that complexity theory offers a little solace in telling us that order tends to emerge from chaos.
However little order seems to be emerging as yet and liquidity is only one side of the coin.
The other is the insidious structural aspect of bureaucratic management practices and how they might shape those living with them.There is a genealogy to this.For that let's go further back to one of the founding fathers of Sociology.
Weber's Iron Cage of Bureaucracy (the rocks-the cave-Scylla)
No one knows who will live in this cage (Gehäuse) in the future, or whether at the end of this tremendous development entirely new prophets will arise, or there will be a great rebirth of old ideas and ideals, or, if neither, mechanized petrification, embellished with a sort of convulsive self-importance. For the "last man" (letzten Menschen) of this cultural development, it might well be truly said: "Specialist without spirit, sensualist without heart; this nullity imagines that it has attained a level of humanity (Menschentums) never before achieved" [Weber 1904-05/1992: 182, translation altered]. Weber is famous for having characterised modernity in terms of increasing bureaucratic control and instrumental rationality. Parsons' translation of the original German term Gehäuse as the 'iron cage' of bureaucracy has entered the sociological stock of terms. This is linked by Weber to rationalising processes of modernity that create particular structures and conditions, resulting in a sense of petrification and imprisonment on one side, and on the other not only a fragmentation of values, but a change in quality whereby values, rather than moral, become aesthetic, leaving people empty while thinking themselves superior, a clear feature of narcissistic personalities.
Weber's image of the iron cage has been around for a long time as a potent metaphor, particularly in organisational studies. Many have used it, though since the turn of the century there has been a sense that it needed updating. Yiannis Gabriel (2005) reconceptualised it as a glass cage, to both retain the sense of imprisonment, while also conveying the aspect of surveillance so prevalent in our time. It is also suggestive of being on show, hence as he puts it '…a medium perfectly suited to a society of spectacle, just as steel was perfectly suited to a society of mechanism' (Gabriel 2008: 314). Clegg and Baumeler (2010: 3) give a brief overview of other reconfigurations of the cage: 'a mental cage (Courpasson 2000), perhaps even reconceived as made of velvet or rubber: velvet metaphorically promises subjects the fulfilment of dreams while rubber is capable of being "stretched to allow adequate means for escape" (Ritzer 1996: 177)'. Clegg and Baumeler (ibid) add their own take of 'transparent liquidity'. This is based heavily on Bauman's work, but their focus differs in that it shifts from consumption back to production and what happens to people at work, rather than in shopping centres or increasingly in online shopping. The increasing casualization of work, de-regulation of markets and the need for agility and flexibility of workers, able to manage uncertainty in the face of endless re-structuring, precariousness of contracts and global conditions is indeed more likely to be captured by metaphors of transparent liquidity. Yet to quote Clegg and Baumeler (ibid: 16): 'Some central questions emerge concerning the key metaphors deployed in the field. It is evident that liquidity is not everywhere; it is equally evident that iron cages are still to be found, as are glass cages'. In other words we need to think of the two tendencies in different combinations and relationships. The fluidity in the commercial sector is driving some interesting examples of perverse mixtures, whereby regulation tries to stop particular types of fluidity, such as employees' absence, by imposing fines, while retaining the right to extreme flexibility by giving workers no security of employment, either by only offering zero hour contracts or by restructuring, thereby getting round workers' protection regulations. The latest reaction to unethical data leaks and use has seen a more conventional government response: Facebook's Mark Zuckerberg was called to account by the US government and Cambridge Analytica's Alexander Nix by the UK government over meddling in recent voting, respectively the US elections and the UK Brexit referendum. The General Data Protection Regulation appears to be in this case a more conventional regulatory response aimed to curb immoral, but very lucrative and power-wielding, fluidity of practice. While a response to big players' malpractice, it now includes everyone in more bureaucratic practices, while those bent on bending the rules may yet find fluid 'enterprising' escapes. In other words, while in this case this may be seen as a containing move to curb excess fluidity and unethical practices, it is also a rigid, regulatory, procedural approach, which may hamper some perfectly harmless data sharing, while encouraging those with enough resources to find cunning ways to get out of its grasp. 'Power is measured by the speed with which responsibilities can be escaped' (Bauman and Tester, 2001: 95, also cited in Clegg and Baumeler, 2010: 4). Or, as the Italian saying roughly translated goes: when a law is made, a loophole can be found. Cunning can be seen as another face of fluidity, encouraged by regulatory attempts that have little purchase on and relation to the actual nature of the crime and culprits. This, combined with the reluctance of free market neoliberal ideology to impose rules, can drive the speed of flow to unprecedented turbulence, also of course driven by the speed of technological changes forcing the rules into a continuous race to catch up.
The advantage of the Homeric metaphor is to speak to a duality of processes and their interaction.There were rocks, cave and a multi-headed monster in Homer, mechanized petrification and empty self-importance in Weber.In Weber's time there were no decentralising restructuring of functions or globalising processes, so it would have been hard to think of companies in terms of multi-headed monsters, but now we could be excused for making that association.In terms of organizational practices, some have argued that we are moving towards a post bureaucratic form of management (Hoggett, 2007), but, as Paul Hoggett (ibid) also argued, in the British public sector this is only a part of the picture: a move to decentralization has produced operationally decentralized units, while increasing centralised control over the monitoring of their operations (multi-headed Scylla in the cave).
The kind of control exercised is key here: rather than ethically based quality control, the quantitative logic of key performance indicators, outputs and competition, in other words the logic, and values, of the market has taken over from the logic of public services and the values associated with them, i.e. to provide a service rather than generate profit.The situation created by these two tendencies in combination is what I think the metaphor I propose is able to characterise.Living with Scylla one can be devoured, if not literally through redundancy, certainly in terms of one's soul and values.As Layton and Redman (2014) remind us: 'neoliberalism… is …a form of governing the soul (Rose, 1990)' from which there is little escape.The cave is not a cage, in that it has an opening, but the fall would be precipitous if one tried to escape and if one were to survive the fall, most likely one would end up swallowed by the whirlpool.The only escape offered in the Odyssey is through either instrumentality (see earlier section) or individual cunning.In his second lone crossing of the straits Odysseus chose to sail close to Charybdis.He avoided being swallowed by holding onto the fig tree growing above the deadly whirlpool, while his raft was swallowed up and then spat out, allowing him, by wisely judging his timing, to get back on it.It is maybe no coincidence that the tree is a fig tree.As most children growing up in Mediterranean countries learn, this is not a weight-bearing tree.The solution here is short term: it uses a pretty flimsy tree structure, available to Odysseus on the basis of being alone: a single, cunning, or to use the preferred neoliberal terms, 'enterprising', supposedly selfsufficient (see Glynos, 2014) individual.
'Specialists without spirit'
To return to Weber, in his time progress was made of iron and steel, cog based machinery and, as Gabriel (2012: 2) also reminds us, a near contemporary of Weber with an obsessive compulsion for control, Frederick Winslow Taylor (1856-1915) made a virtue of this by advocating using a deskilled workforce, managed from above by 'rules, laws and formulae' derived from tabulated traditional knowledge (Taylor 1911, cited in Gabriel 2012: 2).Thus we saw the birth of command and control management and hierarchically structured fragmented processes that make employee skill irrelevant at best, inconvenient or dangerous at worst.Workers as cogs are more compliant and predictable, their outcomes can be measured, and they are accountable as well as controlled.Rational calculation and instrumental rationality were the predominant ways by which capitalism proceeded.This mostly operated in such extremes in factory production processes, but those of us working in public services see a similar procedural erosion of professional competence in target driven neo-liberal regimes of accountability.We may be living in a post-Fordist time, yet it seems that some principles, such as the breakup of functions and deskilled compliance, have mutated and migrated (another aspect of fluidity), rather than being superseded.Policies are now often co-terminus with procedures and protocols, in other words workers are no longer given a principle to guide their actions, rather they are told what to do and made fearful of using their own judgement, lest they fall foul of some regulation or another.More and more the letter of the law or rule, rather than its spirit is what is made to count, counting being the only way to judge performance in neo-liberal management.A recent UK example illustrates this: the House of Lords Economics Affairs Committee has recently reported that HMRC, the UK government tax collector, is failing to 'discriminate effectively' between the different kinds of activities it classes as tax avoidance, pointing out that 'there is a clear difference between deliberate and contrived tax avoidance by sophisticated, high income individuals, and uninformed or naïve decisions by unrepresented taxpayers' (Morrison, 2018: 1).In this as in the earlier Windrush scandal examples, what is being pointed out could be seen as a failure to appreciate qualitative differences, differences in kind (Deleuze, 1991), by workers compelled to follow rules, while driven by quantitative KPI's.More important though is that in such a situation it is to the workers' detriment to exercise judgment and differentiate even if they were trained or allowed to use their judgement.This in turn leads to a further proliferation of rules and procedures to offset this, whereby additional administrative functions and specialists are needed.To link to earlier GDPR example at our university we now have administrators with specialist knowledge of GDPR and administrators outweigh academics in terms of numbers, to the point that even keeping up with where to find advice or who is supposed to do what has become a task in itself.The more staff turns over and regulatory regimes change or proliferate, the more all this is required.Whereas in the 'Windrush' case there was a deliberate creation of a hostile environment, most of the time there appears to be a denial that the conditions created by neoliberalism are the drivers of malpractice and difficult work and living conditions.Aside from eroding professional capacity and creating specialists without spirit, 
Weber thought this could lead to an emptying out of character or to use Sennett's (1998) term its 'corrosion'.
'sensualists without heart'-Narcissism as the psychopathology of our time
Weber's quote speaks extremely well to the nature of narcissism and to its social construction.He points to both an inflated self-aggrandising self-obsession and an inner emptiness -'this nullity'.
What Weber seemed to prophetically identify was later taken up more fully in the light of psychoanalysis by Lasch (1978) in The Culture of Narcissism. Narcissism in Lasch's view is brought about by social conditions of (liquid) modernity. In turn there is also in Lasch a critique of the rise in a therapeutic expert psychologising of the 'social conditions of the suffering', reducing problems to an individual level. The emptying out of patriarchal authority and fragmentation of families and communities are seen as part of the problem. Families, localised communities and non-expert, traditionally communicated values are seen as possible ways of regenerating old ideas and ideals. This led to him being criticised as conservative and nostalgic. The point that got missed, according to De Vos (2010), is that Lasch was telling us about the deep inter-relation of the social and the psychological. Barry Richards (2018: 19) also re-evaluates Lasch's work as offering a 'very deeply psychosocial model' that offers a 'perhaps unique attempt to capture the full psychosocial arc of change, from the unconscious of the adult parent through the primitive terrors and defences of the infantile psyche to the unconscious of the next adult generation' linked to changing practices and values in social and cultural milieus. In terms of my Homeric metaphor, Lasch was telling us about the dangers of the extreme fluidity of late modernity and, in terms of Weber's quote, proposing a renewal of culture and values, one of the possibilities Weber mused about. In conditions of modernity, adaptation (another aspect of fluidity) and appearance become values for a fragile identity, plagued not by guilt, but by anxiety, shame and depression. The link between anxiety and poor internal foundations for security giving rise to narcissism is present in Lasch (1978), but the lack of boundaries and the concern with surface and appearance are the main elements that link to my current analysis. Students' mental health problems are increasing (see among others the IPPR report: Thorley, 2017; Storrie et al., 2010). I experience first-hand this worrying increase in mental health issues among university students in my own work with undergraduates, many struggling with at least some of the features of narcissism. So many are plagued by anxiety about how others might judge them. There are selfie obsessions and increasing addiction to online social networking, where one's standing is numerically determined by the number of Facebook friends and 'likes' received, another aspect of the confusion between qualitative and quantitative aspects, whereby numbers take the place of qualitative aspects of relatedness. The fragility of identity, its instability and need for external validation, its continuous need for a curated presentation of self could also be seen as another face of fluidity, fuelling in turn the attachment to the many forms of identification available to the postmodern self as an expression of the need for some anchorage. Gender and sexuality are the topics my first year students are most interested in. These issues are well covered in academic work and debate, but what is interesting in relation to my argument is the quality and relation of structure and fluidity in these identity issues. What I have noticed is that gender fluidity goes hand-in-hand with 'I identify as…' statements, an interesting substitution for the more conventional 'I am'. This is more about categorisation than being; it is a flimsy structure to cling to, in the context of a wished-for dissolution of standard binaries of gender and sexuality categories. As the standard categories that characterised organised modernity have been increasingly questioned, in the hope that differences could be embraced along a spectrum without discrimination, the need for some structure has resurfaced, along with fraught debates as to who is included and excluded by the various initials associated with such: LGBT, LGBTQ, LGBTQI etc. This is in my view another aspect of the tension between fluidity and structure in terms of identity and its relation to individual and social aspects. The tendency to depression is another aspect
of the fragility of identity and linked to narcissism (Bleichmar, 1991; Rosenfeld, 1971; Morrison, 1989; Anastasopoulos, 2007). Narcissism can be seen, following Britton's (2004) summary and simplification of the extensive literature on the subject, to be either libidinal/defensive or hostile/destructive. Mostly what I see in students appears to be libidinal in nature, briefly sketched here, before turning later to more destructive aspects.
Mark Fisher (2009: 22) talking of his experience of UK Further and Higher Education students spoke of 'hedonic' depression.This he characterised as 'constituted not by an inability to get pleasure, but by an inability to do anything else except pursue pleasure.There is a sense that 'something is missing'but no appreciation that this mysterious, missing enjoyment can only be accessed beyond the pleasure principle (ibid, his italics).'In our university some of the cure being peddled for students' mental illness is a culture of positivity (the initiative overall has adopted the neologism of 'mental wealth'!) spread through a campaign of post-it notes ('you can do it'…) scattered across campus, as if there were no reasons to be depressed.Is this not an organisational level invitation to denial and further narcissistic omnipotence, rather than feeling the pain and acting to seek and denounce its actual source?As members of the institution UK academics are swimming in the same soup.While individually attempting to do our best by the students, survival fears often make us accomplices in the system, whether willing or not.
While the fragility of identity and the rise in narcissism in greater portions of populations is troubling, it is not usually as directly and immediately socially destructive as some of the more extreme manifestations to which I turn next.An extreme example of this are the disturbing and unprecedented recent atrocities perpetrated by young men who self-identified as 'Involuntarily Celibates' or INCELs.
Their strong self-identification as victims and their envy of socially more competent peers in relationships drove them to commit atrocities: for example, Elliott Rodger killed 7 and injured 14 people, many of them college students, at the University of California, Santa Barbara in 2014, and in 2018 Alek Minassian killed 10 people and injured many more in Toronto, most of them women. Rodger was a heavy Internet user who, prior to the killings, posted YouTube videos and a manifesto where he portrayed his actions as beautiful just retribution (Blommaert, 2017). The killing was given an aesthetic value. Appearance here trumped ethical aspects of judgement. There are few boundaries to acceptable behaviours, and an important aspect of narcissism is the refusal of boundaries and limitations, creating a real difficulty in terms of the ability of rules and regulations to contain destructive tendencies.
While there is evidence of destructive narcissism in such events, allied to misogynist and racist attitudes, my focus is not on individual pathology, but on the social conditions that exacerbated these traits.As Blommaert (2017) points out his relationships online and offline over time offered a structure for validation of his most violent fantasies, accelerated and validated his descent into an extreme projective dynamic, a doer/done to mentality, where it became legitimised to go from victim to aggressor.The lack of boundaries of the online environment offered a structuring whereby there could be a movement of positioning to the only place of agency available within a paranoid doer/done to view of the world.While not wishing to digress too much, I will close on this example by pointing out that this phenomenon has burst the boundaries of the dark recesses of the internet and the more extreme, mercifully still few murderous events perpetrated by INCELs: recently Nathan Larson, a self-identified INCEL paedophile ran for Congress in Virginia (Squirrel, 2018).
It appears our time is witnessing not just the engendering, but also the exacerbation and normalisation of pathology.Yet as Richards (2018: 21) points out, this last statement is itself controversial.Using both the work of Hofstadter (1964) and the example of the deliberations of the psychiatric panel commissioned to assess Anders Breivik's state of mind following the murder of 77 people in Norway in 2011, Richards points to the difficulty of pathologising 'normality': the commission declined to see Anders as insane, given that he could be seen as belonging to a subculture sharing his ideological and political perceptions (Thorissen andAspass, 2012: Sec.21.5.2 cited in Richards, 2018: 21).Mad or bad?The boundaries are hard to draw, if we cannot accept that madness can affect social and political dynamics.As Richards (ibid) puts it: '…understanding the cultural supports of destructive movements or toxic leadership must involve accepting the pathological dimensions of everyday normality.'Richards (ibid: 24) avoids the issue however on whether it is right or possible to see society as such as pathological, preferring to limit his analysis to the case of Donald Trump.The more important point I want to take up from Richards (2018) however is that narcissism is not necessarily a destructive force in its own right.According to Hofstadter, Richards states, and more generally the concepts of malignant or negative narcissism in psychoanalytic literature (Kernberg, 1970;Rosenfeld, 1971, cited in Richards 2018) narcissism has to be allied to paranoia to be destructive, along with a perverse idealisation of destructive parts of the self.In Rodger's case there is an extreme sense of victimhood and persecution in conjunction with clear processes of splitting and projection towards 'others', perceived as hostile.Though I cannot be sure without having known the individual involved, it seems that an element of perverse idealisation of destructive parts of self could be gleaned in what I pointed out earlier as Rodger's description of his murderous actions as 'beautiful just retribution'.This section has looked at the kinds of people neoliberal capitalism is producing, from the increasingly common anxiety of students and links to narcissism to more extreme and violent forms of what could be seen as malignant cases of narcissism acting out atrocities.So far my argument overall is that the structures the neoliberal environment offers have taken away any sense of containment and are having a range of difficult effects on individuals and organisations.Nothing new in this argument, Weber, Lasch and more recently Harvey (2005), Layton (2010) and in this journal many others (2014, issue 19, vol. 
1 and 2) have made the case that the neoliberal ideology has eroded the social welfare safety nets that were available particularly in the UK.This has given an atomised society, based on individuals, continuously assessed and self-assessing, increasingly anxious and insecure.Those who can suffer the pain, helplessness and rage are in a better place than those who disavow it in favour of grandiose omnipotent fantasies.What I am adding to the literature is that neoliberalism's corrosive impact often results from the interplay of fluidity and structure, their increased polarisation and reactive dysfunctional relatedness between them.As a consequence quality itself has become a confused notion, perverted in its nature by its equation with quantity and an obsession with surface and appearance.Available structures are flimsy, inconstant and quantitatively understood, yet at the same time rigidly applied and proliferating in some sectors: paradox and complexity abound.All kinds of distinctions have become blurred, students are consumers, yet paradoxically also outputs of my activity in terms of KPIs, as measured by numbers related to retention, progression and employment market ready graduates.
From narcissism to perversion
The previous section focused on individuals and touched on the element of perversion.Susan Long (2008) has pointed to this in relation to organisational life.In her book The Perverse Organisation and its Deadly Sins she points to 'evidence of a movement from a culture of narcissism towards elements of a perverse culture' (2008:1).Paul Hoggett (2010: 58) in agreement with Long summarises 'shifts in the psychic economy of capitalism, from instinctual repression (Freud 1929), to repressive desublimation (Marcuse 1966) and narcissism (Lasch 1978), and to a culture of perversion'.Long's analysis of perversion centres on case studies of Enron and Long-Term Capital Management as symptoms of wider social dynamics and Hoggett applies her work to political, social and governmental aspects.Five dimensions of perversion are given (also cited in Hoggett 2010: 57).
In Long's (2008: 15) view perversion "has to do with individual pleasure at the expense of a more general good…(it) acknowledges reality, but at the same time, denies it…(it) engages others as accomplices to perversion…(it) may flourish where instrumental relations have dominance in society…perversion begets perversion." In relation to Long's first point about individual pleasure, Hoggett (2010: 58) points out that 'narcissism and perversion are adjacent and overlapping states of mind', citing the work of Davar (2004), Waddell and Williams (1991), Rosenfeld (1985) and continues by making 'the spread of collusion and organised self-deception… the hallmark of a distinctively perverse as opposed to a straightforwardly narcissistic culture' (ibid), this being the central feature of neo-liberal forms of capitalism and governance.What Hoggett identified as the structural support for the perverse social defence identified by Long is the blurring of real and false, a confusion between image and reality, fostered by both computer technology and the culture of performance indicators, where the indicators skew behaviour and managers act 'as if' they were reality in themselves.Ten years have passed since Long's book, eight since Hoggett's article.The idea of fake news and a president communicating via tweets was not around then, though it was a direction his article was presaging.Cunning and deception have become part of the everyday right through to the highest corridors of power.Their genesis is in the lack of containment that the reactive relation between structure and fluidity produces, in the resulting blurring of differences in kind and the emptying out of quality and meaning.The recent political events in Britain are characterised by this.Does 'Brexit means Brexit', Mrs May's original slogan as she took charge from David Cameron, not speak of meaninglessness in its obvious, yet never admitted tautology?Brexit was peddled on the basis of lies, and going back on one's words is part of how the most important decision of recent times for the UK is being managed.The behaviour of UK politicians is becoming characterised by extreme brinkmanship on all sides.
Uncertainty and anxiety are rife as a result.In an increasingly uncontained and uncontainable environment growing turbulence and polarisation are the main features, with a deleterious effect on the economic growth it is meant to foster.It is hard to find anchorage or orientation, which could offer a modicum of containment.There is more to be explored here in relation to whether individual pleasure at the expense of the common good, as Long suggests, gives an accurate or sufficient answer to the issue of perversion.It seems to be more complex, with survival anxiety, whether exaggerated or realistically assessed, being at least a component of what may be going on.This is a rich seam for further research that requires more space for a fuller elaboration.
Conclusion
To conclude: the article has attempted to outline how an ancient tale can offer a new metaphor for our time of extremes.It has charted this in relation to some of the sociological metaphors and imaginations that have exerted enduring influence and been taken up by many for further elaboration and/or updating, before tracking how the relationship of structure and fluidity might be characterised as one of reactive and dysfunctional relating.Structures and systems are necessary aspects of life as is the capacity to be flexible.It is their extreme and perverted manifestations that become monstrous.
The metaphor of Scylla and Charybdis contains important indications: it is the tension between fluidity and structural geographical features, i.e. constriction of water caused by land features positioning that creates the deadly whirlpool of Charybdis; land is no safer in these conditions as there lies the danger of cave and multi-headed Scylla.It is not land or water, structure and fluidity per se that constitute danger or salvation: it is their polarised dichotomy, their form, position and relationship.That is in itself a cultural and political matter that should lead us to question what containment might mean in terms of culture and politics.Squeezing and controlling, mistrust and dumbing down certainly are not that.
What I have tried to characterise is the increasingly problematic relation and polarisation of processes in tension with each other.On the one hand there is a separation between areas where bureaucratic control is seen as paramount in saving us from chaos and corrupt practices, for example in the conceptualisation of the state's function as a brake (Mazzucato, 2018), rather than bureaucracy as an enabling function to accomplish tasks, while on the other there has been an equally misguided faith in fluidity per se, for instance the power of the economy to regulate itself.How we make a living does indeed affect how we feel and think, what culture we produce.The trouble is that the culture being produced is now often empty and out of balance, both in terms of inequalities and social justice and in relation to individual wellbeing.
Having compliant, non-questioning citizens is not only cheaper, but easier to manage in the short term at least.What is lost is a workforce and organisations that can think through problems, hold responsibility in the face of continuous change, manage themselves in complex situations and above all remember that our labour should be to the benefit of humankind, which of course include our environment, the planet that makes our lives possible.Being mindful of this higher ethical principle could create a thirdness that might allow us to lift our heads from the turmoil of competitive individualistic anxieties.It might give us the possibility of acknowledging both the reality of human suffering as well as the human potential for ingenuity, which I use here in contrast to cunning.What is gained by continuing in the neoliberal trajectory instead is an increase in real human suffering, including the destruction of our planet, while believing we have 'attained a level of humanity never before achieved'.
| v3-fos-license | 2019-03-09T06:03:12.462Z | 2015-07-08T00:00:00.000 | 56376519 | {"extfieldsofstudy": ["Medicine"], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://periodicos.ufsc.br/index.php/rbcdh/article/download/1980-0037.2015v17n4p460/29638", "pdf_hash": "f93cd1af4f81cf1a21fbf9a18c6bfb6c8edbc805", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:548", "s2fieldsofstudy": ["Medicine"], "sha1": "f93cd1af4f81cf1a21fbf9a18c6bfb6c8edbc805", "year": 2015} | pes2o/s2orc |
Prevalence and factors associated with sarcopenia in elderly women living in the community
The objective of this study was to identify the prevalence of sarcopenia and factors associated with it in a population of elderly women living in the community in the Northeast of Brazil. This was a cross-sectional study of 173 women aged 60 years or older living in the urban zone of the municipality of Lafaiete Coutinho, Bahia, Brazil. Associations between sarcopenia (defined as reduced muscle mass plus reduced muscle strength and/or reduced muscle performance) and independent variables including sociodemographic characteristics, behavioral variables and health status were tested using logistic regression techniques. The significance level was set at 5%. The prevalence of sarcopenia in the study population was 17.8%. The logistic regression technique only identified the variables advanced age (p = 0.005) and hospital admission during the previous 12 months (p = 0.009) as statistically significant. It was concluded that there was a significant prevalence of sarcopenia among elderly women resident in a community with unfavorable health conditions and the findings showed that the strongest associations were with age over 80 years and hospital admission during the previous 12 months.
INTRODUCTION
The aging process triggers changes to body composition that are reflected in increased fat mass and reduced muscle mass 1 .The resulting progressive and generalized loss of skeletal musculature, strength and physical performance characterize sarcopenia 2 , which is a syndrome that predisposes people to adverse consequences such as declining functional capacity 3 and even increased risk of death 4 .In view of this, aging-induced sarcopenia is considered a risk factor for fragility and reduced functionality and its high prevalence means it is seen as an important public health problem.
There are a variety of methods for inferring the degree of sarcopenia in elderly people, ranging from imaging exams to tests of motor performance and the resulting estimates of prevalence vary depending on the criterion employed 5 .A recently-published study conducted to estimate the prevalence of sarcopenia in the 60 to 70-year-old population of the United Kingdom reported that 6.8% had sarcopenia 5 .A cross-sectional study undertaken in the United States by Melton et al. reported rates varying from 6% to 15% among participants aged 65 or older and showed that the rate was dependent on the parameter employed to diagnose sarcopenia 6 .Studies conducted in Brazil, also with elderly people, found that 34% 7 and 15.9% 8 of these individuals had sarcopenia.Against this background, the consensus on definition and diagnosis criteria developed by the European Working Group on Sarcopenia 2 stands out as offering a definition of practical clinical criteria based on just three elements, muscle mass, muscle strength and performance, thus facilitating its application to epidemiological studies.
Considering the high prevalence of sarcopenia, the important implications of the pathology for elderly individuals 9 and the high cost of its consequences, studies of this type should help to monitor the phenomenon in communities living in unfavorable conditions for health and for quality of life.To date, there have been few investigations of the prevalence of sarcopenia and its determinants in Latin American countries that have employed the criteria laid out in the European Consensus 2 .The results of a literature search indicate that just two studies have been conducted employing representative samples from countries in the region 10,11 .The first is a study conducted by Arango-Lopera et al. 10 , using data from a Mexican population to estimate the prevalence of sarcopenia in elderly people aged 70 or older.The second was carried out by Alexandre et al. 11 using data from the Health, Wellbeing and Aging study (SABE -Saúde, Bem-Estar e Envelhecimento) to estimate the prevalence of sarcopenia and factors associated with it in elderly people from a large city in Brazil.
In view of the above, this study is intended to contribute to expanding knowledge on the subject, thereby facilitating preventative actions, by identifying which of these people's characteristics are associated with sarcopenia and, as such, should help to improve health.The objective of this study was to identify the prevalence of sarcopenia and factors associated with it in elderly women living in the community in a town in the Northeast of Brazil.
METHODOLOGICAL PROCEDURES
This was an observational analytical study with a cross-sectional design analyzing data from an epidemiological population study with data collection by home visits that investigated the nutritional status, risk behaviors and health status of the elderly population in Lafaiete Coutinho, BA, Brazil (Estado nutricional, comportamentos de risco e condições de saúde dos idosos de Lafaiete Coutinho-BA). Details of the location, study population and data collection methodology have been published elsewhere 12 . Briefly, the study population comprised all women aged ≥ 60 years residing in the urban zone of the municipality (n = 195). Of this population of 195 elderly women, 173 (88.7%) took part in the study, 10 refused (5.1%) and 12 (6.2%) were not located after three home visits, on alternate days, and were considered lost to the sample.
The study was designed and conducted in compliance with the World Medical Association's Helsinki Declaration and was approved by the Human Research Ethics Committee at the Universidade Estadual do Sudoeste da Bahia (protocol 064/2010).
Sarcopenia (dependent variable)
Sarcopenia was estimated using the criteria set out in the European consensus on definition and diagnosis 2 , which recommends using three elements: muscle mass, muscle strength and physical performance.In this study, muscle mass was estimated using an anthropometric equation; muscle strength was assessed using the handgrip strength test (HST); and physical performance was measured by gait velocity.
Muscle mass component: total muscle mass (TMM) was estimated using an equation originally proposed by Lee et al. 13 and later validated for use with elderly Brazilians 14 : TMM (kg) = (0.244 x body mass) + (7.8 x height) - (0.098 x age) + (6.6 x sex) + ethnicity - 3.3. The variables are represented by the following values: sex: 1 = male and 0 = female; ethnicity: 0 = white (white, indigenous, and mixed race white with indigenous), -1.2 = Asian and 1.4 = African Brazilian (black and mixed race including black). Ethnicity was self-reported, and the procedures and the instruments employed to measure body mass (kg) and height (m) have been described elsewhere 12 .
The TMM result was used to calculate a muscle mass index (MMI = TMM / height²), which was then classified according to the cutoff points proposed by Janssen et al. 15 : MMI ≤ 5.75 kg/m² = high risk; 5.76 < MMI ≤ 6.75 kg/m² = moderate risk; MMI > 6.75 kg/m² = low risk. For the purposes of analysis, MMI was recategorized as a dichotomous variable: MMI ≤ 6.75 kg/m² = insufficient muscle mass; MMI > 6.75 kg/m² = adequate muscle mass.
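To make the muscle mass step easier to follow, the short Python sketch below implements the anthropometric equation and the MMI dichotomization described above. It is an illustration only, not code from the study; the function names and the example participant values are assumptions.

```python
def estimate_tmm(body_mass_kg, height_m, age_yr, sex, ethnicity):
    """Total muscle mass (kg) from the anthropometric equation described above.
    sex: 1 = male, 0 = female; ethnicity: 0 = white/indigenous,
    -1.2 = Asian, 1.4 = African Brazilian."""
    return (0.244 * body_mass_kg + 7.8 * height_m - 0.098 * age_yr
            + 6.6 * sex + ethnicity - 3.3)


def muscle_mass_status(tmm_kg, height_m):
    """Dichotomize the muscle mass index (MMI = TMM / height^2, in kg/m^2)
    using the 6.75 kg/m^2 cutoff adopted in the study."""
    mmi = tmm_kg / height_m ** 2
    return ("insufficient" if mmi <= 6.75 else "adequate"), mmi


# Hypothetical participant, for illustration only.
tmm = estimate_tmm(body_mass_kg=58.0, height_m=1.49, age_yr=74, sex=0, ethnicity=1.4)
status, mmi = muscle_mass_status(tmm, height_m=1.49)
print(f"TMM = {tmm:.1f} kg, MMI = {mmi:.2f} kg/m2 -> {status} muscle mass")
```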
Muscle strength component: Muscle strength was assessed using a handgrip strength test, the instruments and procedures for which have been described elsewhere 16 . Weakness was defined according to the body mass index [BMI = body mass (kg) / height² (m²)], using a criterion adapted from work by Fried et al. 17 . First, BMI was classified into three categories 18 : < 22 kg/m² = underweight; 22.0 ≤ BMI ≤ 27 kg/m² = healthy weight; > 27 kg/m² = overweight. For each category, the HST cutoff point (in kg) for weakness was fixed at the 25th percentile, as follows: underweight, HST = 11 kg; healthy weight, HST = 21 kg; and overweight, HST = 14 kg. Individuals who met the weakness criterion and those who were unable to perform the test because of physical limitations were considered to have insufficient muscle strength.
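A minimal sketch of the weakness criterion, assuming the BMI bands and the 25th-percentile handgrip cutoffs (11, 21 and 14 kg) reported above; the text does not state how values exactly at the cutoff were handled, so treating them as weak is an assumption.

```python
def hst_cutoff_kg(bmi):
    """Handgrip strength cutoff (kg) by BMI category, as reported in the text."""
    if bmi < 22.0:
        return 11.0   # underweight
    if bmi <= 27.0:
        return 21.0   # healthy weight
    return 14.0       # overweight


def muscle_strength_status(hst_kg, bmi, unable_to_perform=False):
    """'insufficient' if the participant meets the weakness criterion or could
    not perform the handgrip test; 'adequate' otherwise."""
    if unable_to_perform or hst_kg <= hst_cutoff_kg(bmi):
        return "insufficient"
    return "adequate"
```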
Physical performance component: Physical performance was assessed using a 2.44 m walking test, the procedures for which have been described elsewhere 16 . Poor performance was defined according to height, which was classified into one of two categories using a criterion adapted from work by Guralnik et al. 19 . First, the sample was divided according to the median height of 1.49 m (the 50th percentile, i.e. ≤ 1.49 m = less than or equal to the median; > 1.49 m = greater than the median). Next, the 75th percentile of the distribution of gait results (the third quartile) was calculated for each category and used as the cutoff point, as follows: ≤ median height = 6 seconds; > median height = 4 seconds. Individuals who met the criterion for poor performance and those who were unable to perform the tests because of physical limitations were defined as having insufficient physical performance.
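The same idea applies to the gait test. The hedged sketch below stratifies the cutoff time by the 1.49 m median height and codes walking times at or above the height-specific cutoff, or inability to perform the test, as insufficient performance; handling of times exactly at the cutoff is an assumption.

```python
def gait_cutoff_s(height_m, median_height_m=1.49):
    """Cutoff time (s) for the 2.44 m walk, by height stratum, as in the text."""
    return 6.0 if height_m <= median_height_m else 4.0


def physical_performance_status(walk_time_s, height_m, unable_to_perform=False):
    """'insufficient' for poor performance or inability to perform the test."""
    if unable_to_perform or walk_time_s >= gait_cutoff_s(height_m):
        return "insufficient"
    return "adequate"
```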
Outcome: After each of the three components had been measured, the elderly women were initially classified as follows 2 : free from sarcopenia = adequate muscle mass, adequate muscle strength and adequate physical performance; pre-sarcopenia = insufficient muscle mass, but adequate muscle strength and adequate physical performance; sarcopenia = insufficient muscle mass plus either insufficient muscle strength or insufficient physical performance; and severe sarcopenia = insufficient muscle mass plus both insufficient muscle strength and insufficient physical performance.For the purposes of analysis, sarcopenia was then recategorized as a dichotomous variable: free from sarcopenia + pre-sarcopenia = no sarcopenia; sarcopenia + severe sarcopenia = sarcopenia.
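Putting the three components together, the sketch below reproduces the staging and the dichotomized outcome described above. Combinations with adequate muscle mass but reduced strength or performance are not staged explicitly in the text, so treating them as 'no sarcopenia' in the binary outcome is an assumption consistent with a definition that requires reduced muscle mass.

```python
def classify_sarcopenia(mass, strength, performance):
    """Stage sarcopenia from the three dichotomized components, each given as
    'adequate' or 'insufficient'."""
    if mass == "adequate":
        # Reduced muscle mass is required for any sarcopenia stage.
        return "free from sarcopenia"
    deficits = [strength, performance].count("insufficient")
    if deficits == 0:
        return "pre-sarcopenia"
    if deficits == 1:
        return "sarcopenia"
    return "severe sarcopenia"


def dichotomize(stage):
    """Collapse the four stages into the binary outcome used in the analyses."""
    return "sarcopenia" if stage in ("sarcopenia", "severe sarcopenia") else "no sarcopenia"


# Example: insufficient mass and strength, adequate gait speed -> 'sarcopenia'.
print(dichotomize(classify_sarcopenia("insufficient", "insufficient", "adequate")))
```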
Independent variables
The sociodemographic characteristics collected included age (60-69, 70-79 and ≥ 80 years), literacy sufficient to read and write a message (yes or no), marital status (has partner or single) and participation in religious activities (yes or no).
Behavioral characteristics included consumption of alcoholic beverages (≤ 1 day/week or > 1 day/week), smoking (never smoked, ex-smoker or smoker) and habitual physical activity level, assessed by the International Physical Activity Questionnaire (IPAQ), long form 20 (≥ 150 minutes of moderate or vigorous physical activity per week = active and < 150 minutes per week = insufficiently active).
Health status was assessed using the following variables: self-reported number of chronic diseases (none, one, or two or more), including hypertension, diabetes, cancer (except tumors of the skin), chronic pulmonary disease, heart disease, circulatory diseases, rheumatic diseases and osteoporosis; hospital admissions during the previous year (none or one or more); number of medications currently being taken (one or none versus two or more); depressive symptoms, assessed using the short-form, 15-item Geriatric Depression Scale (GDS) 21 (≤ 5 points = free from depressive symptoms and > 5 points = presence of depressive symptoms); falling episodes during the previous year (yes or no); and functional capacity, measured using the Katz et al. 22 scale, which assesses activities of daily living (ADLs) related to self-care such as feeding, washing and dressing oneself, grooming and toilet hygiene. The variable ADLs was dichotomized 23 , using the cutoff point 4/5, so that elderly people were defined as dependent in terms of ADLs if they scored four points or less and independent if they scored more than four points.
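As an illustration of how some of these covariates could be recoded into the categories used in the analysis, the sketch below shows one possible mapping; it covers only a subset of the variables, and the field names are hypothetical rather than taken from the study's database.

```python
def recode_covariates(age, gds_score, katz_score, admissions_last_year, weekly_mvpa_min):
    """Recode selected independent variables into the study's categories."""
    return {
        "age_group": "60-69" if age < 70 else ("70-79" if age < 80 else ">=80"),
        "depressive_symptoms": gds_score > 5,            # GDS-15, cutoff 5/6
        "adl_dependent": katz_score <= 4,                # Katz scale, cutoff 4/5
        "hospitalized_last_year": admissions_last_year >= 1,
        "physically_active": weekly_mvpa_min >= 150,     # IPAQ, long form
    }
```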
Statistical procedures
Variables were initially subjected to descriptive analysis. Associations between sarcopenia and the explanatory (independent) variables were tested by calculating crude and adjusted odds ratios, with point estimates and 95% confidence intervals (95%CI), using logistic regression modeling. In the crude analyses, the prevalence of sarcopenia was calculated for each category of the explanatory variables and significance was tested using the Wald test of heterogeneity. Variables that reached statistical significance at the 20% level (p ≤ 0.20) in the crude analyses were included in the adjusted analysis, following the sequence of a hierarchical model for determination of the outcome (Figure 1). In this model, the higher level (distal) variables interact with and determine the lower level (proximal) variables. The effect of each explanatory variable on the outcome was controlled for the other variables at the same level and at higher levels in the model. The statistical criterion for retention in the model was p ≤ 0.20. The significance level adopted for the study was 5% (α = 0.05). Data were tabulated and analyzed using IBM SPSS Statistics for Windows (IBM SPSS 21.0, 2012, Armonk, NY: IBM Corp.).
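The original analyses were run in SPSS; the Python sketch below, assuming pandas and statsmodels are available and a hypothetical data frame `df` with a 0/1 outcome, only illustrates the logic of the strategy: crude models screened at p ≤ 0.20, followed by an adjusted logistic model reporting odds ratios and 95% confidence intervals. The per-dummy minimum p-value is a simplification of the Wald heterogeneity test, and the hierarchical (distal-to-proximal) block entry is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def crude_screen(df, outcome, covariates, alpha_in=0.20):
    """Fit one crude logistic model per covariate; keep those with p <= alpha_in."""
    kept = []
    for var in covariates:
        fit = smf.logit(f"{outcome} ~ C({var})", data=df).fit(disp=False)
        p = fit.pvalues.drop("Intercept").min()  # simplification of the Wald test
        if p <= alpha_in:
            kept.append(var)
    return kept


def adjusted_model(df, outcome, kept):
    """Adjusted model with the screened covariates entered together; report ORs."""
    formula = f"{outcome} ~ " + " + ".join(f"C({v})" for v in kept)
    fit = smf.logit(formula, data=df).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI95_low": np.exp(ci[0]),
        "CI95_high": np.exp(ci[1]),
        "p": fit.pvalues,
    })
```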
RESULTS
A total of 173 women with a mean age of 74.8 ± 9.9 years (range: 60 to 103 years) took part in the study. Of these women, 49.7% did not have healthy weight, 47.3% were sedentary and more than 60% were dependent for at least one activity of daily living. The other characteristics of the study population are shown in Table 1. Figure 2 illustrates the prevalence of elderly women with and without sarcopenia. The analyses of sarcopenia prevalence were conducted with the results for 146 elderly women (84.4% of the sample), the number of participants for whom all of the information needed to determine sarcopenia status was available. It will be observed that there was a 17.8% prevalence of sarcopenia among these elderly women. Table 2 shows the prevalence of sarcopenia according to the explanatory variables investigated. It will be observed that elderly women aged ≥ 80 years, those who were insufficiently active, those who had been admitted to hospital at least once and those who exhibited functional dependence were all more likely to be sarcopenic.
The results of the crude regression analysis show that just four variables (age group, physical activity, hospital admissions and functional capacity) attained sufficient statistical significance (p ≤ 0.20) to be included in the multiple model.
After intralevel and interlevel adjustments (Table 3) according to the hierarchical model, the variables physical activity and functional capacity were not retained in the final model because they did not meet the criterion for significance (p ≤ 0.05).There was a positive relationship between sarcopenia and both advanced age and hospital admissions during the previous twelve months.Elderly women aged ≥ 80 years exhibited 4.5 times greater chance of sarcopenia than women in the age group 60 to 69 years (p = 0.005), and women who had been admitted to hospital at least once during the previous year exhibited a 3.49 times greater chance of sarcopenia than those who had not been admitted to hospital within 12 months (p = 0.009).
DISCUSSION
The results of this study show that sarcopenia was present in approximately 18% of the elderly women studied, and indicate that it was positively associated with age greater than or equal to 80 years and hospital admission during the previous 12 months.Other studies that have been conducted to estimate the prevalence of sarcopenia using the EWGSOP 2 criteria (the same criteria used in the present study) have reported prevalence rates that contrast with those observed here.A study conducted by Patel et al. 5 used skin folds to estimate body composition and estimated a prevalence of approximately 8% among elderly women, while Arango-Lopera et al. 10 found a prevalence of 48.5% using calf circumference.The differences in prevalence may be because an MMI was used in the present study to estimate body composition.A similar prevalence of sarcopenia was reported in the results of a study by Alexandre et al. 11 where prevalence was approximately 16%.In that study the same criteria were used, but the sample was larger and the design included intervention.
Prevalence rates vary from 33% among elderly Spanish women 24 , through 19.84% in elderly Italian women 4 to 7.9% among women from England 5 .Depending on the technique employed in the different studies and the cutoff values chosen, the proportion of muscle mass can vary considerably.As a result, comparison of prevalence rates is problematic because of the lack of consensus and because of population variations and methodological differences in the criteria used to diagnose sarcopenia 25 .
Another very important point is that the study population has a low Human Development Index (which was 0.599 in 2010 26 ), high mortality rates and low educational levels, which distinguishes it from the studies mentioned above that investigated populations in developed countries, particularly in terms of the unfavorable conditions of health and quality of life.
Sarcopenia was positively associated with age ≥ 80 years, which is in line with results that can be found in the literature, such as those from a study by Iannuzzi-Sucich et al. 27 , who found that sarcopenia prevalence increased from 22.6% to 31.0% among women over 80 years of age. There is also a similarity with a study by Alexandre et al. 11 , since age was associated with sarcopenia, but in that study in São Paulo the likelihood of sarcopenia increased from 70 years of age onwards, whereas in Lafaiete Coutinho it increased from 80 years of age onwards.
The association between sarcopenia and advanced age brings with it discussions related to the process of muscle degeneration as a consequence of senescence.During aging the muscle structure becomes disorganized and there is a substantial loss of lean mass, in terms of both number and size of muscle fibers 15 .This process takes place because skeletal muscle loses a large proportion of its fibers from 65 years of age onwards, reducing its mass, strength and contractile force 28 .
Analysis of the relationship between sarcopenia and hospital admissions detected a positive association.Alva et al. 29 conducted a study in which 27.2% of women classified as malnourished had been admitted to hospital more than three times during the previous year, whereas those with normal weight had not been admitted.Although the total length of hospital stays was not analyzed, these results suggest that bedridden people are at greater risk of the syndrome.There are few studies in the literature that show this association.However, a study published by Sayer explains that the link between inactivity and the consequent loss of muscle mass is predictive of sarcopenia 30 .
This study is subject to certain limitations that should be mentioned.These include the cross-sectional design, which means that measurement of these people's status in terms of exposure and effects over the long term was not possible.Additionally, while the methodology employing equations to accomplish measurements facilitates diagnosis it does not offer the same degree of accuracy as imaging exams.
CONCLUSIONS
Based on the results of this study, it can be concluded that: (i) the prevalence of sarcopenia among elderly women resident in the community in a town in Northeast Brazil was 17.8%; (ii) age ≥ 80 years and hospital admission during the previous 12 months appear to be the most important determinants of sarcopenia in this population.
As such, the findings of this study provide a guide for identification of subgroups at risk of sarcopenia by means of analysis of the factors associated with it and offer a foundation for planning measures to prevent functional limitations or to help reverse them in elderly women, thereby helping to provide integrated care for these people.
I-ADL -Instrumental Activities of Daily Living; B-ADL -Basic Activities of Daily Living.
Table 2 .
Prevalence of sarcopenia and its relationship to the explanatory variables investigated.Lafaiete Coutinho, Brazil, 2011.
Table 3 .
Hierarchical logistic regression model of the relationship between sarcopenia and the explanatory variables investigated.Lafaiete Coutinho, Brazil, 2011.
| v3-fos-license | 2020-09-22T13:04:41.652Z | 2020-09-18T00:00:00.000 | 221882884 | {"extfieldsofstudy": ["Biology", "Medicine"], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/09/20/2020.09.18.302901.full.pdf", "pdf_hash": "49c3034fe1a0437345c71e0da05492cbc3691345", "pdf_src": "BioRxiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:550", "s2fieldsofstudy": ["Medicine", "Biology"], "sha1": "701adbfa538db457386c2f689ab9c7397a1f6b68", "year": 2020} | pes2o/s2orc |
SARS-CoV-2 Nsp1 suppresses host but not viral translation through a bipartite mechanism
SUMMARY The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a highly contagious virus that underlies the current COVID-19 pandemic. SARS-CoV-2 is thought to disable various features of host immunity and cellular defense. The SARS-CoV-2 nonstructural protein 1 (Nsp1) is known to inhibit host protein translation and could be a target for antiviral therapy against COVID-19. However, how SARS-CoV-2 circumvents this translational blockage for the production of its own proteins is an open question. Here, we report a bipartite mechanism of SARS-CoV-2 Nsp1 which operates by: (1) hijacking the host ribosome via direct interaction of its C-terminal domain (CT) with the 40S ribosomal subunit and (2) specifically lifting this inhibition for SARS-CoV-2 via a direct interaction of its N-terminal domain (NT) with the 5’ untranslated region (5’ UTR) of SARS-CoV-2 mRNA. We show that while Nsp1-CT is sufficient for binding to 40S and inhibition of host protein translation, the 5’ UTR of SARS-CoV-2 mRNA removes this inhibition by binding to Nsp1-NT, suggesting that the Nsp1-NT-UTR interaction is incompatible with the Nsp1-CT-40S interaction. Indeed, lengthening the linker between Nsp1-NT and Nsp1-CT of Nsp1 progressively reduced the ability of SARS-CoV-2 5’ UTR to escape the translational inhibition, supporting that the incompatibility is likely steric in nature. The short SL1 region of the 5’ UTR is required for viral mRNA translation in the presence of Nsp1. Thus, our data provide a comprehensive view on how Nsp1 switches infected cells from host mRNA translation to SARS-CoV-2 mRNA translation, and that Nsp1 and 5’ UTR may be targeted for anti-COVID-19 therapeutics.
INTRODUCTION
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the causative agent of the newly emerged infectious disease COVID-19, is a highly contagious and deadly virus with fast person-to-person transmission and potent pathogenicity (Chan et al., 2020b). It is an enveloped, single-stranded betacoronavirus that contains a positive-sense RNA genome of about 29.9 kb (Wang et al., 2020; Wu et al., 2020; Zhu et al., 2020). The SARS-CoV-2 genome codes for two large overlapping open reading frames in gene 1 (ORF1a and ORF1b) and a variety of structural and nonstructural accessory proteins (Zhou et al., 2020). Upon infection, SARS-CoV-2 hijacks host cell translation machinery to synthesize ORF1a and ORF1b polyproteins that are subsequently proteolytically cleaved into 16 mature non-structural proteins, namely Nsp1 to Nsp16 (Chan et al., 2020a; Hartenian et al., 2020; Zhou et al., 2020).
Nsp1 is a critical virulence factor of coronaviruses and plays key roles in suppressing host gene expression, which facilitates viral replication and immune evasion (Jimenez-Guardeno et al., 2015;Kamitani et al., 2009;Tanaka et al., 2012). It has been shown that Nsp1 effectively suppresses the translation of host mRNAs by directly binding to the 40S small ribosomal subunit (Narayanan et al., 2008;Tohya et al., 2009). The recent cryo-electron microscopy (cryo-EM) structures indeed revealed the binding of the C-terminal domain (CT) of Nsp1 to the mRNA entry channel of the 40S (Schubert et al., 2020;Thoms et al., 2020), blocking translation. However, how SARS-CoV-2 overcomes the Nsp1-mediated translation suppression for its own replication remains an open question.
Previous studies on SARS-CoV suggest that the 5' untranslated region (5' UTR) of coronavirus mRNA protects the virus against Nsp1-mediated mRNA translation inhibition (Huang et al., 2011; Kamitani et al., 2009; Thoms et al., 2020). Here, we investigated the mechanism of how Nsp1 suppresses translation and how SARS-CoV-2 escapes this suppression to effectively switch the translational machinery from synthesizing host proteins to making viral proteins during its infection.
SARS-CoV-2 Nsp1 Suppresses Translation
To investigate the function of SARS-CoV-2 Nsp1 in inhibiting mRNA translation, we cotransfected an mScarlet reporter construct with MBP-tagged Nsp1 or MBP alone in HeLa cells and imaged mScarlet fluorescence and anti-MBP immunofluorescence (Figure 1A). The mScarlet reporter used an expression vector that contains the cytomegalovirus (CMV) promoter and 5' UTR (referred to as the control 5' UTR) and is commonly employed for mammalian cell expression. Compared to cells transfected with the MBP control, MBP-Nsp1-transfected cells showed markedly reduced mScarlet expression. Quantification of mScarlet reporter intensities showed that MBP-Nsp1 decreased mScarlet expression by over 4-fold (p < 0.001) (Figure 1B). These data demonstrate that SARS-CoV-2 Nsp1 potently inhibits protein translation in cells, recapitulating previous reports (Narayanan et al., 2008; Schubert et al., 2020; Thoms et al., 2020; Tohya et al., 2009).
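As a rough illustration of this kind of per-cell comparison (a minimal sketch, not the authors' actual image-analysis pipeline; the intensity values and the choice of Welch's t-test are assumptions for demonstration only):

```python
# Minimal sketch: compare hypothetical per-cell mScarlet intensities between
# MBP control and MBP-Nsp1 transfected cells; not the authors' pipeline.
import numpy as np
from scipy import stats

mbp_control = np.array([980, 1020, 1100, 950, 1005, 990])  # arbitrary units
mbp_nsp1 = np.array([240, 210, 260, 230, 250, 220])

fold_change = mbp_control.mean() / mbp_nsp1.mean()
t_stat, p_value = stats.ttest_ind(mbp_control, mbp_nsp1, equal_var=False)

print(f"fold reduction: {fold_change:.1f}x, Welch t-test p = {p_value:.2g}")
```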
The C-terminal Helical Hairpin of Nsp1 Binds to the mRNA Channel of 40S Ribosome
To elucidate how Nsp1 inhibits protein translation, we set out to determine the structure of an Nsp1-40S complex using cryo-EM. We first expressed and purified His-MBP-tagged Nsp1 by Ni-NTA affinity and gel filtration chromatography ( Figure 1C). We then purified human ribosomes from Expi293 cells using an established protocol (Khatter et al., 2014), dissociated the large and small subunits (60S and 40S) by puromycin treatment, and further purified 60S and 40S away from mRNAs and other contaminants by sucrose cushion. Mixing the ribosome preparation with MBP-Nsp1 followed by amylose affinity and gel filtration chromatography yielded the Nsp1-40S complex ( Figure 1C -D). The sample displayed homogenous particles on cryo-EM micrographs ( Figure 1E).
We determined the cryo-EM structure of the Nsp1-40S complex at a resolution of at least 2.9 Å for both the body and the head regions of the 40S (Figure 2A, S1). The 40S structure was rigidly fit using the PDB entry 6g5h (Ameismeier et al., 2018). The Nsp1-CT could be traced in the cryo-EM map, which revealed a tight helical hairpin (Figure 2A-C). However, the MBP tag and the N-terminal domain (Nsp1-NT) are invisible in the cryo-EM map, suggesting that they are flexibly linked to the CT. Consistently, there is a predicted ~20 residue linker between Nsp1-NT and Nsp1-CT. Nsp1 interacts with the 40S at the mRNA channel located at the interface between the head and body of the 40S (Figure 2B). As such, Nsp1 binding blocks mRNA entry to the ribosome to inhibit protein translation, in agreement with recent structural studies (Schubert et al., 2020; Thoms et al., 2020).
In more detail, Nsp1 forms interactions with three 40S protein subunits, uS3, uS5 and eS30, as well as the 18S rRNA (Figure 2B-C). It is worth noting that three out of the four binding interfaces are between Nsp1 and the body of the 40S (uS5, eS30, 18S); only one interface, at the most proximal region of the Nsp1-CT, is with the head of the 40S (uS3) (Figure 2D). The Nsp1-uS3 interface has a buried surface area of 283 Å² with mostly charge-charge interactions, and is mediated by Nsp1 residues including D152, E155, E159 and D156 (Figure 2E). In contrast, the Nsp1-uS5 interface has a larger buried surface area of 440 Å², with mainly hydrophobic interactions (Figure 2E). At the interface with the 18S rRNA, Nsp1 residues H165 and K164 stack with the bases of U607, U630, and U631, while R171 and R175 form charge-charge interactions with the phosphate backbone of A604, G606 and U607 (Figure 2E). For the Nsp1-eS30 interaction, negatively charged residues E172 and E176 of Nsp1 interact with positively charged residues K52 and K53 at one extended loop of eS30 (Figure 2E). Overall, Nsp1 is highly buried, with close to 60% of its total surface area of 2,400 Å² interacting with the 40S.
We hypothesized that the single interface of Nsp1 with the 40S head likely cannot restrict the rotation and the intrinsic flexibility of the 40S head relative to the 40S body, which is known to be necessary during protein translation (Hussain et al., 2014; Lomakin and Steitz, 2013). Consistent with this hypothesis, we observed certain degrees of movement between the head and body of the 40S during 3D classification and reconstruction; as a result, high-resolution cryo-EM maps were obtained using focused refinement (Figure S1). We postulate that the flexibility of the head, as well as its limited interaction with Nsp1-CT, may be important for the ability of the SARS-CoV-2 5' UTR to dislodge the Nsp1-40S interaction to evade the translation block (see DISCUSSION).
The 5' UTR of SARS-CoV-2 mRNA Bypasses the Translation Inhibition by Nsp1
To analyze the effect of SARS-CoV-2 Nsp1 on the translation of SARS-CoV-2 mRNA, we replaced the control 5' UTR in the mScarlet reporter construct with the SARS-CoV-2 5' UTR (Figure 3A), and co-expressed it with MBP-tagged Nsp1 or MBP alone in HeLa cells (Figure 3B-C). In contrast to the control 5' UTR, expression of the mScarlet reporter downstream of the SARS-CoV-2 5' UTR in MBP-Nsp1-transfected cells was not significantly different from that in MBP-transfected control cells. As expected, MBP-Nsp1-NT, which does not interact with the 40S, also did not affect mScarlet reporter expression. By contrast, a severe reduction in mScarlet expression was observed with MBP-Nsp1-CT (p < 0.001). Co-expression of MBP-Nsp1-NT did not remove the inhibitory effect of MBP-Nsp1-CT on mScarlet expression, suggesting that only covalently linked NT and CT could rescue mScarlet expression downstream of the SARS-CoV-2 5' UTR. These results demonstrate that full-length Nsp1 selectively suppresses host but not SARS-CoV-2 protein translation, while the Nsp1-CT alone is a general inhibitor of both host and viral protein synthesis.
Because covalently linking Nsp1-NT and CT is required for the evasion of Nsp1-mediated translation inhibition by the SARS-CoV-2 5' UTR, we wondered whether the length of the linker between NT and CT in Nsp1 has any functional effect. To address this question, we inserted an additional 20 residues (linker1) or 40 residues (linker2) at the Nsp1 linker (Figure 3A). Remarkably, the linker extension dramatically reduced the ability of the SARS-CoV-2 5' UTR to evade Nsp1-mediated translation inhibition as measured by mScarlet expression (p < 0.05) (Figure 3C, E). These data were further validated in the SARS-CoV-2 5' UTR luciferase assay. Interestingly, compared with the 20-residue linker, the 40-residue linker was more effective in suppressing SARS-CoV-2 5' UTR-mediated evasion of translation inhibition (Figure 3D).
SARS-CoV-2 5' UTR Directly Interacts with Nsp1-NT
Since Nsp1-CT is a general inhibitor of mRNA translation, clarifying the explicit function of Nsp1-NT is important for understanding how SARS-CoV-2 5' UTR evades inhibition mediated by Nsp1. We investigated whether SARS-CoV-2 5' UTR directly interacts with Nsp1-NT. For this purpose, we generated a construct with strep-tagged Nsp1-NT downstream of SARS-CoV-2 5' UTR and expressed it in Expi293F cells ( Figure 4A). Upon affinity purification using the strep tag followed by gel filtration chromatography, we immunoblotted the peak fractions using anti-strep to detect Nsp1-NT, and performed RT-PCR on the same fractions to detect SARS-CoV-2 5' UTR. Notably, Nsp1-NT and SARS-CoV-2 5' UTR co-eluted in the same peak ( Figure 4B), suggesting a direct interaction between them.
The SL1 Stem-Loop of the 5' UTR Is Required for the Translation of SARS-CoV-2 mRNA
The 5' UTR of coronaviruses comprises at least four stem-loop structures, among which stem-loop 1 (SL1) has been shown to play critical roles in driving viral replication (Li et al., 2008). Thus, we generated SARS-CoV-2 SL1 and ΔSL1 5' UTR mScarlet reporters to elucidate the function of SL1. Compared with mScarlet translation in control cells co-transfected with MBP, the translation of the SARS-CoV-2 ΔSL1 5' UTR mScarlet reporter was markedly reduced in MBP-Nsp1-transfected cells (p < 0.001), suggesting that SL1 is required for evasion of Nsp1-mediated translation suppression (Figure 4C-D). In comparison with SARS-CoV-2 ΔSL1 5' UTR, the 5' UTR containing only the SL1 region retained more mScarlet expression, suggesting that SL1 is partially active against the translation block by Nsp1; however, SL1-mediated mScarlet expression is significantly less in Nsp1-transfected cells than in MBP-transfected cells, suggesting that SL1 alone is not sufficient to evade the translation suppression (Figure 4C-D).
The role of SL1 was further confirmed by high-throughput analysis of mScarlet-expressing cells using an ImageXpress high-content microscope. In contrast to the intact SARS-CoV-2 5' UTR, which showed no change in reporter activity in the presence or absence of Nsp1, cells expressing the SARS-CoV-2 ΔSL1 5' UTR reporter showed dramatically reduced mScarlet mean fluorescence intensity upon Nsp1 co-expression (Figure 4E). We also generated SARS-CoV-2 SL1 or ΔSL1 5' UTR luciferase reporters and found that SL1 conferred partial antagonism to translation suppression by Nsp1, while ΔSL1 lost the ability to escape the translation suppression (Figure 4F). Together, these data support the role of SL1 in the SARS-CoV-2 5' UTR as an important element in the evasion of translation inhibition. Curiously, a previous study failed to show SARS-CoV-2 5' UTR-mediated evasion of Nsp1-mediated translation inhibition (Schubert et al., 2020). However, judging from the primer used, the 5' UTR in that experiment did not contain SL1, consistent with our data on SARS-CoV-2 ΔSL1 5' UTR.
DISCUSSION
SARS-CoV-2 Nsp1 is a major virulence factor which suppresses host gene expression and immune defense (Jimenez-Guardeno et al., 2015;Kamitani et al., 2009;Tanaka et al., 2012). Consistent with recently published cryo-EM structures (Schubert et al., 2020;Thoms et al., 2020), our data showed that the helical hairpin at the CT of SARS-CoV-2 Nsp1 interacts with the 40S subunit of the ribosome to block mRNA entry ( Figure 5), confirming that Nsp1-CT is a general inhibitor of the cellular translation machinery. More importantly, we revealed that Nsp1-NT directly interacts with SARS-CoV-2 5' UTR, and that this interaction relieves translational inhibition imposed by Nsp1-CT to allow successful translation of the SARS-CoV-2 mRNA ( Figure 5). Thus, coronaviruses like SARS-CoV-2 have evolved a clever strategy to selectively hijack the host translation machinery to support their own replication while counteracting the host immune response.
We hypothesize that the Nsp1-CT-40S interaction is incompatible with the interaction between SARS-CoV-2 5' UTR and Nsp1-NT. When SARS-CoV-2 5' UTR is bound to Nsp1-NT, the covalently linked Nsp1-CT cannot bind 40S likely due to a steric factor in which Nsp1-CT is unable to reach 40S for a productive interaction. This in turn allows loading of the SARS-CoV-2 mRNA for translation initiation. Consistent with this hypothesis, increasing the linker length between Nsp1-NT and Nsp1-CT greatly reduced SARS-CoV-2 5' UTR-mediated translation in the presence of Nsp1 ( Figure 5). Even when an Nsp1 molecule is already bound to a ribosome, we postulate that the SARS-CoV-2 5' UTR can interact with Nsp1-NT to strip off the Nsp1-CT-40S interaction to proceed with viral mRNA translation. One possible scenario is that the binding of SARS-CoV-2 5' UTR to Nsp1-NT exerts a pulling force via the short linker to the N-terminus of Nsp1-CT, which is weakly bound at the flexible head of the 40S, to dislodge Nsp1-CT from the 40S. This mechanism may be analogous to how newly synthesized IκBα can remove the NF-κB transcription factor from the bound DNA (Oeckinghaus and Ghosh, 2009).
Thus, there is a tug of war between Nsp1-NT and Nsp1-CT, pulled by their respective binding partners, SARS-CoV-2 5' UTR and the 40S subunit of the ribosome. Our data reveal the importance of the SL1 stem-loop at the beginning of SARS-CoV-2 5' UTR in overcoming the inhibition of translation by Nsp1 and highlight a new potential therapeutic target for limiting SARS-CoV-2 replication. Blocking the 5' UTR-Nsp1 interaction, be it by a small molecule, an anti-sense RNA or other mechanisms, may resolve this tug of war to the detriment of the SARS-CoV-2 virus to help treat the devastating COVID-19.
Cell Culture and Transfection
HEK293T, Expi293F and HeLa cells were purchased from the American Type Culture Collection (ATCC), and cultured in Dulbecco's Modified Eagle medium (DMEM) or Expi293 Expression Medium (Gibco) supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin (Gibco) at 37 °C with 5% CO2. Cells were transiently transfected with the indicated plasmids using FuGENE Transfection Reagent (Promega) or Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer's instructions.
Preparation of Nsp1, Human Ribosome and the Nsp1-40S Complex
SARS-CoV-2 full-length Nsp1 (1-180 aa), Nsp1-NT (1-127 aa) and Nsp1-CT (128-180 aa) constructs carrying an N-terminal His-MBP tag were transformed into E. coli BL21-CodonPlus (DE3)-RIPL, and single colonies were picked and grown in LB medium supplemented with 50 μg/ml kanamycin at 37 °C. Protein expression was induced for 16 h at 18 °C with 0.5 mM IPTG after OD600 of the culture reached 0.8. Cells were harvested by centrifugation, resuspended in lysis buffer (20 mM Tris pH 7.4, 150 mM NaCl, 1 mM TCEP and protease inhibitors) and lysed using ultrasonic homogenizer (Constant Systems Ltd). The proteins were first purified by affinity chromatography using TALON metal affinity resin (TaKaRa), and further purified via gel filtration chromatography on a Superdex 200 column (GE Healthcare) in gel filtration buffer (20 mM Tris pH 7.4, 150 mM NaCl and 1 mM TCEP). The purified proteins were flash-frozen in liquid nitrogen, and stored until further use at -80 °C.
Human 80S ribosome was purified as described (Khatter et al., 2014), followed by incubation with 1 mM puromycin for 30 min at 4 °C in buffer R containing 20 mM Tris pH 7.5, 2 mM MgCl2, 150 mM KCl, and 1 mM TCEP. The mixture containing 40S and 60S subunits was loaded onto a 30% sucrose cushion and centrifuged to remove mRNAs and other contaminants. The pellets were resuspended in buffer R, flash-frozen in liquid nitrogen, and stored at -80 °C. Nsp1-ribosome complex formation was carried out by incubating recombinant His-MBP-Nsp1 protein with the ribosome preparation containing 40S and 60S subunits for 30 min. Nsp1-40S complex was then affinity-purified using amylose resin (NEB), followed by size-exclusion chromatography. The peak corresponding to the Nsp1-40S complex was collected, concentrated to 2 OD280, and subjected to cryo-EM analysis.
Cryo-EM Data Collection and Processing
A 3 μl drop of the Nsp1-40S complex was applied to glow-discharged copper grids with lacey carbon support and a 3 nm continuous carbon film (Ted Pella, Inc). The grids were blotted for 4 s at 100% humidity and 4 °C, and plunge-frozen using an FEI Vitrobot Mark IV. Cryo-EM data collection was performed on a 300 keV Titan Krios microscope (FEI) with a K3 direct electron detector (Gatan) at the National Cancer Institute's National Cryo-EM Facility. 16,119 movies were collected in counting mode, with 40 total frames per movie, 3.2 s exposure time, 50 electrons per Å² accumulated dose, and 1.08 Å physical pixel size. Movies were motion-corrected and dose-weighted using MotionCor2 (Zheng et al., 2017). Patch contrast transfer function (CTF) estimation was performed using cryoSPARC (Punjani et al., 2017). Particle picking was initially carried out using the blob picker in cryoSPARC, followed by neural network-based particle picking using Topaz (Bepler et al., 2019). Particle extraction was carried out with a box size of 360 pixels, followed by 2D classification in cryoSPARC. Classes were manually selected. Particles from selected classes were then 3D classified into three classes using the three reference maps generated from ab initio 3D reconstruction, resulting in one class with good orientation distribution. Particles from the selected 3D class were "polished" through a Bayesian polishing process in Relion (Zivanov et al., 2019). "Polished" particles were imported into cryoSPARC to perform non-uniform 3D refinement (Punjani et al., 2017), which gave a map at 2.86 Å resolution. Focused refinements of the head and body regions of the 40S were performed using local refinement in cryoSPARC, resulting in a 2.86 Å map of the body region and a 2.89 Å map of the head region. The focused-refined maps were used to generate a composite map of the Nsp1-40S complex in UCSF Chimera (Pettersen et al., 2004).
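As a quick orientation, the acquisition parameters quoted above fix the per-frame dose and the Nyquist sampling limit; the short sketch below is an illustrative arithmetic check, not part of the actual processing pipeline:

```python
# Illustrative arithmetic from the stated acquisition parameters.
total_dose_e_per_A2 = 50.0   # accumulated dose (e-/Å^2)
n_frames = 40                # frames per movie
pixel_size_A = 1.08          # physical pixel size (Å)

dose_per_frame = total_dose_e_per_A2 / n_frames   # 1.25 e-/Å^2 per frame
nyquist_resolution_A = 2.0 * pixel_size_A         # 2.16 Å sampling limit

print(f"dose/frame = {dose_per_frame:.2f} e-/Å^2, Nyquist = {nyquist_resolution_A:.2f} Å")
```

The reported 2.86-2.89 Å maps therefore lie comfortably within the 2.16 Å Nyquist limit set by the 1.08 Å pixel size.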
The 40S subunit was modeled by rigid-body fitting of a human 40S model (PDB ID: 6g5h) (Ameismeier et al., 2018) into the cryo-EM maps using Chimera (Pettersen et al., 2004). Nsp1 was built de novo based on the cryo-EM map. Inspection, model building, and manual adjustment were carried out in Coot (Emsley and Cowtan, 2004). Real-space refinement was performed using PHENIX (Adams et al., 2010). All representations of densities and structural models were generated using Chimera, ChimeraX (Goddard et al., 2018), and PyMOL (DeLano, 2002).
Semi-Quantitative Reverse Transcription-PCR and Western Blot Analysis
Reverse transcription-PCR (RT-PCR) was performed using the PrimeScript RT Reagent Kit (TaKaRa). Quantification of mRNA level was performed in a 20 μl mixture consisting of 10 μl Q5 High-Fidelity 2X Master Mix (NEB), 0.2 μl RT-PCR product, 1 μl primer set mix at a concentration of 5 pmol/ml for each primer and 8.8 μl sterile water. The forward primer of SARS-CoV-2 5′ UTR is 5' ATTAAAGGTTTATACCTTCCCAG 3', and the reverse primer is 5' CTTACCTTTCGGTCACAC 3'. Cells were collected and lysed in RIPA lysis buffer (Thermo Scientific) with complete protease inhibitor cocktail (Roche Applied Science). The protein levels were determined using Western blot analysis. Equal portions of the cell lysate were run on a 15% SDS-PAGE gel and blotted onto a PVDF membrane, which was subsequently probed with HRP Anti-Strep tag antibody (Abcam) and developed with an ECL substrate (Amersham Biosciences).
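For bench work, the 20 μl recipe above scales linearly with the number of reactions; the helper below is a hypothetical convenience script (the 10% pipetting excess is an illustrative assumption, not part of the published protocol):

```python
# Hypothetical helper: scale the 20 µl PCR recipe quoted above to n reactions.
# Note: in practice the template (RT-PCR product) is usually added per reaction
# rather than to the master mix; it is included here only for the arithmetic.
RECIPE_UL = {
    "Q5 High-Fidelity 2X Master Mix": 10.0,
    "RT-PCR product": 0.2,
    "primer set mix": 1.0,
    "sterile water": 8.8,
}  # sums to 20 µl per reaction

def master_mix(n_reactions: int, excess: float = 0.10) -> dict:
    """Return component volumes (µl) for n reactions plus pipetting excess."""
    factor = n_reactions * (1.0 + excess)
    return {name: round(vol * factor, 1) for name, vol in RECIPE_UL.items()}

if __name__ == "__main__":
    for component, vol in master_mix(24).items():
        print(f"{component}: {vol} µl")
```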
Statistics
All of the experiments were independently performed in triplicate. The data were presented as mean ± SEM, except where noted otherwise. All graphs were plotted and analyzed with GraphPad Prism 5 software. p > 0.05 was considered statistically not significant, and the following denotations were used: **p < 0.001 and *p < 0.05.
Data Availability
The cryo-EM maps included in this study have been deposited in the Electron Microscopy Data Bank with the accession code EMD-22681. The atomic coordinates have been deposited in the Protein Data Bank with the accession code 7K5I.
ACKNOWLEDGMENTS
The cryo-EM data collection was carried out by Thomas J. Edwards at the National Cancer Institute's National Cryo-EM Facility, Frederick National Laboratory for Cancer Research under contract HSSN261200800001E. L.W. was supported by funding from an NIH T32 grant (5T32AI007512-34). T.-M.F. was supported by funding from an NIH T32 grant (5T32HL066987-18 to L.E.S.) and by start-up funds from the Ohio State University Comprehensive Cancer Center.
Figure 5. Model of a Bipartite Mechanism for Nsp1-Mediated Translation Inhibition and Evasion by SARS-CoV-2 5' UTR
A schematic model depicting the bipartite roles of SARS-CoV-2 Nsp1 during infection. First, Nsp1 blocks host mRNA from binding to the 40S ribosomal subunit due to physical occlusion by the bound Nsp1-CT. Second, Nsp1 supports viral mRNA translation by interacting with SARS-CoV-2 5' UTR using Nsp1-NT, which results in dissociation of the Nsp1-CT-40S complex to overcome inhibition. This mechanism of evasion of Nsp1-mediated translation inhibition is illustrated by the failure of linker-lengthened Nsp1 to support viral mRNA translation. With the longer linker, the Nsp1-NT-5' UTR complex can co-exist with the Nsp1-CT-40S complex.
Compassion Is Not a Benzo: Distinctive Associations of Heart Rate Variability With Its Empathic and Action Components
Recent studies have linked compassion with higher vagally mediated heart rate variability (vmHRV), a measure of parasympathetic activity, and meta-analytic evidence confirmed significant and positive associations. Compassion, however, is not to be confused with soothing positive emotions: in order to engage in actions aimed to alleviate (self or others) suffering, the pain should resonate, and empathic sensitivity should be experienced first. The present study examined the association between vmHRV and the empathic sensitivity and action components of trait and state compassion. To do so, several dispositional questionnaires were administered and two videos inducing empathic sensitivity (video 1) and compassionate actions (video 2) were shown, while the ECG was continuously recorded, and momentary affect was assessed. Results showed that (i) scores on subscales assessing the empathic component of trait compassion were inversely related to resting vmHRV; (ii) vmHRV decreased after video 1 but significantly increased after video 2. As to momentary affect, video 1 was accompanied with an increase in sadness and a decrease in positive affect, whereas video 2 was characterized by an increase in anger, a parallel decrease in sadness, and an increase (although non-significant) in positive affect. Overall, present findings support the notion that it is simplistic to link compassion with higher vmHRV. Compassion encompasses increased sensitivity to emotional pain, which is naturally associated with lower vmHRV, and action to alleviate others’ suffering, which is ultimately associated with increased vmHRV. The importance of adopting a nuanced perspective on the complex physiological regulation that underlies compassionate responding to suffering is discussed.
INTRODUCTION
Over the past two decades, studies have shown the health-promoting influences of compassion (Pace et al., 2009, 2013; Seppala et al., 2012; Slavich and Cole, 2013; Yarnell and Neff, 2013; Zessin et al., 2015), fostering the development of different compassion-focused interventions and scientific research on the nature of the therapeutic process (Kirby, 2016; Ferrari et al., 2019; Gilbert, 2019). The aim of the present study was to investigate the physiological signature (measured by vagally mediated heart rate variability; vmHRV) of the specific components of trait and state compassion.
Compassion has been defined as a sensitivity to suffering in self and others with a commitment to try to alleviate and prevent it (Gilbert, 2019); it does not refer to a positive emotional experience but to a suite of concrete prosocial behaviors, and may include positive emotional states such as kindness, empathy, generosity, and acceptance (Weidman and Tracy, 2020). Conversely, kindness is intended to create the conditions for happiness and prosperity; it does not require any sensitivity to and analysis of suffering.
As an evolved prosocial motivation, compassion requires complex social processing systems and evolved competencies which allow approach and engagement with distress signals to help alleviate the distress, such as a sensitive awareness of suffering, empathic awareness, distress tolerance, a nonjudgmental attitude, and the willingness to develop specific skills to enact the motive (knowing intentionality) (Gilbert, 2017a, 2019). This set of competencies does not translate compassion into an automatic response to suffering, but into a complex motivational process which guides the individual to be sensitive and receptive to signals of suffering, as opposed to trying to avoid or suppress them, and to understand what is the most helpful thing to do in the specific circumstance.
Over the past decade, the psychophysiological perspective has contributed significantly to our understanding of compassion in an effort to provide unique insights into its nature and processes. A specific role of the 10th cranial nerve, namely the vagus, has been highlighted in compassion-related processes (Thayer et al., 2012; Porges, 2017; Petrocchi and Cheli, 2019). Efferent vagal fibers exert parasympathetic control of the heart. An indirect and non-invasive measure of vagal modulation of the heart is vmHRV, a measure of the variability of the time periods between adjacent heartbeats, resulting from the dynamic interplay between the parasympathetic and the sympathetic nervous systems (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996). In particular, high tonic vmHRV is a measure of robust parasympathetic control on the heart and appears to reflect the degree to which the prefrontal cortex provides context-appropriate control over the periphery. On the other hand, phasic vmHRV suppression represents the withdrawal of cardiac vagal control and the activation of the defensive systems (Thayer et al., 2009, 2012). Recent studies (Svendsen et al., 2016, 2020; Luo et al., 2018) and meta-analytic evidence (Di Bello et al., 2020) showed that both state (i.e., induced) and trait (i.e., dispositional) compassion, toward oneself and others, are related to higher vmHRV (Di Bello et al., 2020). Additionally, compassion-focused practices can improve vmHRV (Rockliff et al., 2008; Arch et al., 2014; Matos et al., 2017; Petrocchi et al., 2017).
Previous studies, however, ignored a crucial ingredient of compassion, that is, whether the ability to pay sensitive attention to suffering and effectively engage with it is accompanied by the intention and capacity to start the response process (an appropriate action) to alleviate the suffering (Gilbert, 2017b; Kirby et al., 2019; Gilbert, 2020). If both these components are taken into account, a non-linear association between compassion and vmHRV is expected (Miller, 2018). For example, attentional sensitivity and tolerance to others' distress may be associated with a decrease in vmHRV. Indeed, individuals with lower vmHRV show a reduced ability to inhibit attention to affectively significant cues (Park et al., 2013), and the anticipation of social stress has been shown to reduce vmHRV in individuals with high self-compassion (Luo et al., 2018). Additionally, in response to observation of others' emotional expressions, lower resting vmHRV was associated with higher activation in the mirror neuron system (Miller et al., 2019).
Indeed, a phasic decrease in vmHRV could be considered a signature of empathic engagement with others' suffering and is the prerequisite for being touched by others' suffering and subsequently acting to alleviate it (Miller, 2018; Gilbert, 2019; Steffen et al., 2020). Heart rate (and not vmHRV) has been shown to increase more during compassion meditation than during neutral states, and HR increases are correlated with BOLD signal in the right middle insula (Lutz et al., 2008), suggesting that compassion enhances the emotional and somatosensory brain representations of others' emotions, amplifying the saliency of emotional stimuli.
Based on such evidence, compassion is expected to be associated first with reduced vmHRV, reflecting empathic engagement with suffering, and then with increased vmHRV when the appropriate helpful action is performed. Indeed, in a pilot uncontrolled study, self-compassionate writing was associated with a significant decrease in vmHRV during the task and a significant increase in vmHRV during recovery (Steffen et al., 2020).
To date, the physiological signatures of each of the different components of compassion have not been thoroughly investigated, neither when dispositional (trait) nor when induced (state) compassion was examined. The present study aimed to fill this gap, assessing resting vmHRV, the specific components of trait compassion, and vmHRV responses to and recovery from videos evoking the different components of compassion (i.e., empathic sensitivity and compassionate action). We hypothesized: (i) that the association between tonic vmHRV and trait compassion would be negative for the empathic component and positive for the engagement component of compassion; and (ii) that vmHRV would decrease in response to a video eliciting empathic sensitivity toward others' suffering, and increase in response to a video depicting intentional actions of giving help. We expected vmHRV reactivity to and recovery from such videos to be moderated by self- and other-oriented trait compassion.
Participants
The sample consisted of 45 students of an American university based in Rome [88% female, mean age = 20.5 (1.05) years], all Caucasian and English-speaking. Three subjects were excluded from the analyses due to unreliable physiological measures (N = 42, 88% female). Exclusion criteria were self-reported major psychiatric or cognitive problems, organic illnesses, substance abuse, cardiovascular disease, use of drugs/medications affecting cardiovascular function, body mass index > 30 kg/m², menopause, use of oral contraceptives during the previous 6 months, and pregnancy or childbirth within the last 12 months. The protocol was approved by the local Ethics Committee.
Procedure and Design
The experiment was conducted as a repeated-measures within-subjects experimental design (Figure 1). Written informed consent was provided. Participants were asked to refrain from drinking caffeinated or alcoholic beverages or exercising in the 2 h prior to the session. Data on socio-demographic information, medical background and currently prescribed medications were first collected. Then, several self-report trait measures of compassion were administered. Participants were asked to rest for a 5-min baseline ECG assessment, during which they rated their momentary affect on visual analog scales (VAS, baseline). Next, a 2.30-min video representing an "empathic sensitivity" condition was shown. This was followed by a post-video recovery assessment (2 min), during which the VAS were administered again (time 1). To collect a second baseline, participants were then instructed to relax for another 1.25 min. Subsequently, a second video representing a "compassionate action" condition was broadcast for 2.30 min, followed by a post-video recovery assessment (2 min), during which participants rated their current affect again by VAS (time 2). The videos are available from the last author on request.
Figure 1. Flowchart illustrating the experimental procedure.
Stimulus Materials
Instructions were displayed on a computer monitor, while a guide voice signaled participants to relax at baseline, gave indications on how to rate the experienced feelings on the VAS, and explained how to watch the videos. A pilot study was conducted to make sure that each video elicited one of the two specific components of compassion: empathic sensitivity for video 1 and the intention to perform helpful actions for video 2. In video 1, scenes of others' suffering were presented along with brief sentences describing the thoughts of each suffering individual, aimed to facilitate empathic attunement with them. Video 2 depicted scenes of a person actively engaged in giving help and support to suffering others. Participants had to pay attention, allowing themselves to experience thoughts and feelings, while taking the perspective of the person who gives help.
Socio-Demographic and Personal Information
Socio-demographic and personal information included sex, age, height, weight, and physical activity habits.
Compassionate Engagement and Action Scales
The Compassionate Engagement and Action Scales (CEAS; Gilbert et al., 2017) encompass subscales assessing three "flows" of compassion (for others, from others, and self-compassion). Respondents are asked to think about distressing situations and rate how each sentence applies to them. For each scale, a total score and two subscale scores were calculated: Engagement and Actions. For the Compassion for Self scale, two subdimensions were analyzed within the Engagement subscale: Sensitivity to Suffering and Engagement with Suffering.
Compassionate Love for Humanity Scale
The Compassionate Love for Humanity Scale (Sprecher and Fehr, 2005) is a 21-item scale designed to measure compassionate attitudes toward strangers when they are most in need.
Compassionate and Self-Image Goals Scale
Compassionate and Self-Image Goals Scale (Crocker and Canevello, 2008) consists of 13 items. Seven items assess compassionate goals, and six items assess self-image goals. The average for each subscale was calculated, with higher scores indicating higher interpersonal goals.
Visual Analog Scales -VAS
At baseline and after each video (time 1 and time 2), participants rated their current levels of affective states (sad, angry, happy, anxious, calm, strong, weak, content, relieved, self-critical) on several 5-point VAS.
Physiological Measures
Interbeat intervals were continuously recorded throughout the experimental session using the Firstbeat Bodyguard 2 with a standard electrode configuration. Time-domain (root mean square of successive differences; RMSSD) and frequency-domain (high-frequency HRV; HF-HRV) vmHRV measures were calculated. VmHRV analysis was performed using Kubios HRV software (Tarvainen et al., 2014). Artifacts and ectopic beats were corrected using a threshold-based correction.
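As an illustration of the time-domain index, RMSSD follows directly from the artifact-corrected interbeat-interval series; the sketch below is a minimal example with hypothetical RR values, not the Kubios implementation:

```python
# Minimal sketch of the time-domain vmHRV index (RMSSD) from interbeat intervals.
# RR values are hypothetical and in milliseconds.
import numpy as np

rr_ms = np.array([812, 790, 805, 830, 815, 798, 820, 808])  # artifact-corrected RR series

successive_diffs = np.diff(rr_ms)                # differences between adjacent intervals
rmssd = np.sqrt(np.mean(successive_diffs ** 2))  # root mean square of successive differences

print(f"RMSSD = {rmssd:.1f} ms")
# HF-HRV would additionally require resampling the RR series to an evenly spaced
# signal and integrating its power spectrum over the 0.15-0.40 Hz band.
```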
Data Analyses
The data were analyzed using IBM SPSS Statistics version 25 and Mplus 5.1. To evaluate the effects of dispositional variables (socio-demographic factors and trait measures) on the dependent variables, Pearson correlations were computed between BMI, physical activity, trait questionnaires, and vmHRV. Sex differences were evaluated by Student's t-test. Variables that were significantly associated with vmHRV measures were included in the subsequent analyses as covariates.
Following existing recommendations (Laborde et al., 2018), to determine whether the two videos induced different physiological responses, we computed reactivity and recovery scores by subtracting the baseline from each video phase and the video phase from each recovery phase, respectively. A 2 × 2 mixed-model ANOVA was conducted on vmHRV with time (Reactivity, Recovery) as the within-subjects variable and video (video 1, video 2) as the between-subjects factor. Post hoc comparisons were then used to identify significant differences between means (Reactivity to videos 1 and 2, Recovery from videos 1 and 2).
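For concreteness, the difference scores could be formed per participant as in the sketch below (the phase means are hypothetical, not study data):

```python
# Hypothetical per-participant HF-HRV phase means (ms^2) illustrating the
# reactivity (video - baseline) and recovery (recovery - video) difference scores.
phases = {"baseline": 520.0, "video": 455.0, "recovery": 610.0}

reactivity = phases["video"] - phases["baseline"]   # negative = vmHRV suppression
recovery = phases["recovery"] - phases["video"]     # positive = vagal rebound

print(f"reactivity = {reactivity:+.1f}, recovery = {recovery:+.1f}")
```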
Moderation analyses were executed using the Hayes (2012) PROCESS macro version 3.5 to test conditional effects of trait variables on physiological responses. Specifically, the Self-compassion and Compassion to Others scales of the CEAS were tested as moderators of the size of the effect of the videos on physiological responses (Recovery and Reactivity). Both subscales (Engagement and Action) were tested for their moderating effects.
An exploratory factor analysis was then conducted on the VAS to identify associations among the self-reported momentary affect variables. We used Principal Component Analysis (PCA) and Varimax orthogonal rotation as methods to extract the factors. Cattell's (1966) scree test was used as the decision rule for identifying the number of factors to retain. Then, a structural equation model (SEM) was run using Mplus 5.1 (Muthén and Muthén, 2008) to conduct a set of confirmatory analyses assessing the relative fit of the three-factor model that emerged from the exploratory factor analysis (Positive Affect, low arousal Negative Affect, high arousal Negative Affect). A non-significant chi-square test was considered evidence of good fit. Following the recommendations of Kline (1998), multiple indices were used to evaluate the goodness of fit of the model. These included the comparative fit index (CFI), Tucker-Lewis index (TLI), and standardized root-mean-square residual (SRMR). Acceptable fit was defined as CFI and TLI values of 0.90 or greater and SRMR of 0.05 or less.
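For the factor-retention step, the scree test only requires the eigenvalues of the item correlation matrix; the sketch below is a generic illustration with random stand-in ratings, not the SPSS/Mplus analysis of the study data:

```python
# Generic scree-test sketch: eigenvalues of the correlation matrix of VAS items.
# Random integers stand in for the study's ratings.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(42, 10)).astype(float)  # 42 participants x 10 VAS items

corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

explained = eigenvalues / eigenvalues.sum()
print("eigenvalues:", np.round(eigenvalues, 2))
print("cumulative variance explained:", np.round(np.cumsum(explained), 2))
```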
To determine whether the two videos induced different emotional responses, a series of repeated-measures analyses of variance (ANOVA) was performed on the factor scores with time (baseline, video 1, video 2) as the within-subjects factor. Effect sizes were calculated using partial eta squared (η²p), with η²p = 0.01 referring to a small effect size, 0.06 to a medium effect size, and 0.14 to a large effect size (Tabachnick and Fidell, 2013).
In line with our main hypotheses, the mixed-model ANOVA revealed a main effect of Video for HF-HRV [F(1,42) = 7.465; p = 0.009; η²p = 0.151] and a significant Time × Video interaction [F(1,42) = 6.733; p = 0.013; η²p = 0.138] (Figure 2). Reactivity did not differ between the two videos (t = 0.148; p = 0.883); however, recovery from video 2 showed a significant increase in HF-HRV compared to recovery from video 1 (t = 2.553; p = 0.003). Given the lack of differences in reactivity to the two videos, moderation analyses were performed only on recovery values.
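The reported effect sizes can be reproduced from the F statistics and degrees of freedom with the standard relation η²p = (F × df_effect) / (F × df_effect + df_error); the sketch below simply checks the two values quoted above:

```python
# Arithmetic check of the reported partial eta squared values,
# using eta_p^2 = (F * df_effect) / (F * df_effect + df_error).
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    return (f_value * df_effect) / (f_value * df_effect + df_error)

print(round(partial_eta_squared(7.465, 1, 42), 3))  # main effect of Video -> ~0.151
print(round(partial_eta_squared(6.733, 1, 42), 3))  # Time x Video interaction -> ~0.138
```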
The scree plot based on the exploratory factor analysis of VAS scores at baseline revealed that the first 3 factors explained 69% of the variance in the data. The items exhibited loading values of ≥ 0.5, suggesting significant contributions. Thus, we selected the three-factor solution for further analyses. A three-factor model composed of (a) POSITIVE AFFECT (happy, calm, strong, content, relieved); (b) low arousal NEGATIVE AFFECT (sad, weak, anxious); and (c) high arousal NEGATIVE AFFECT (angry, self-critical; Table 1) exhibited good fit. Coherently with the modification indices, and according to the theoretical framework behind the present study (Gilbert, 2020), we opted for a model including the following three factors: (a) POSITIVE AFFECT (happy, calm, strong, content, relieved); (b) low arousal NEGATIVE AFFECT (sad, weak); and (c) high arousal NEGATIVE AFFECT (angry, self-critical, anxious).
Figure 2. Significant Time by Video interaction showing that, while reactivity to the two videos did not differ, high-frequency heart rate variability (HF-HRV) increased during recovery from Video 2 (action component of compassion) but decreased during recovery from Video 1 (empathic sensitivity component of compassion).
DISCUSSION
The present investigation aimed to examine the association between vmHRV and the specific components of trait and state (induced) compassion, namely empathic sensitivity and compassionate action. When the dispositional tendency to engage in compassion was examined, only the Compassionate Engagement and Action Scales (CEAS) showed associations with vmHRV. Notably, significant negative associations emerged between resting HF-HRV and dispositional compassion. This finding is not completely unexpected. Indeed, most of the studies that investigated the connection between compassion and vmHRV used different measurement tools, which reflect different definitions of compassion and do not take into account the subdivision of engagement and action as fundamental components of compassion motivation (Gilbert, 2020). In line with our view that distress sensitivity and awareness involve specific physiological competencies enabling emotional resonance with emotional pain, HF-HRV showed distinct negative associations with the element of empathic sensitivity (engagement) for both self- and other-oriented compassion. Notably, sensitivity to suffering is an essential attribute of compassion that involves being responsive to one's own suffering or to other people's emotions (rather than activating defense mechanisms and avoidance), and perceiving when they need help (Gilbert, 2019). Consistently, recent findings show that a more efficient shifting of attention from affective to non-affective aspects of negative information was related to lower resting vmHRV (Grol and De Raedt, 2020). In a compassionate approach, this may facilitate awareness of emotional pain and subsequent decisions for helpful actions. Thereby, the individual is able to learn that negative information does not always translate into an aversive outcome (Borkovec et al., 2004) when the compassionate motivation is active (Gilbert, 2020).
In line with our hypothesis, the data evidenced a significantly greater HF-HRV increase after watching the video depicting intentional actions of giving help, compared to the video eliciting empathic sensitivity toward others' suffering. Interestingly, most of the participants (88%) labeled the second video as more compassionate. This result replicates previous findings which identified increased vmHRV when individuals effectively engage in compassion interventions (Kim et al., 2020b) or improve their self-compassion competencies (Steffen et al., 2020). However, it also echoes recent evidence that both self-critical and self-compassionate writing were associated with a significant decrease in vmHRV during the task, but that only the self-compassionate task produced a significant increase in vmHRV during recovery (Steffen et al., 2020).
The present data highlight that engagement in compassion enables an appropriate autonomic response after seeing compassionate actions, indicative of how efficiently self-regulatory resources have been mobilized and used to overcome the emotional challenge and then return to resting levels (Laborde et al., 2018). This quick return to parasympathetic response, called "vagal rebound" (Nederend et al., 2016), is crucial for therapy because, across time, it promotes an expansion of the personal potential to self-regulate and react effectively, loosening resistances and blocks that in turn induce helplessness or shutdown states (Gilbert, 2020).
Our current findings suggest that compassion at first magnifies the saliency of emotional stimuli, consistent with the traditional function of this meditation (The Dalai Lama, 2001). Indeed, one aim of compassion-focused training is to increase one's sensitivity to the painful emotional experience of oneself and others, along with the courage and commitment to try to alleviate it (Gilbert, 2020). In fact, personal distress is supposed to surface during compassion-focused training; that is why a key part of the training is to provide individuals with a grounding and soothing body routine (breathing and posture) and a psychoeducation that helps them develop a de-shaming, non-judgmental, and self-reassuring stance toward suffering and one's habitual patterns of emotional reactivity.
As to the moderation analyses, a lower sensitivity to and motivation to engage with others' suffering moderated the association between compassion for others and vmHRV recovery from video 1 (empathic stress condition), whereas this moderating role did not emerge for recovery from video 2 (compassionate action condition). Specifically, in the first condition, compassion toward others was positively associated with HF-HRV recovery via lower self-compassionate engagement and action.
We explored changes in different affective states, distinguishing between positive affect, low arousal negative affect, and high arousal negative affect. Results revealed a significant increase in low arousal negative affect and a parallel decrease in positive affect in response to video 1, whereas an increase in high arousal negative affect and a parallel decrease in low arousal negative affect emerged in response to video 2. Recently, Gilbert and collaborators highlighted that kindness and compassion are associated with different emotions. Whereas kindness is generally associated with positive feelings, engaging in compassionate actions can give rise to a different emotional experience and affective states, mostly associated with anxiety, sadness, disgust, and anger. Consistently, the automatic analysis of spontaneous facial expressions in response to a short video eliciting compassion showed that anger, disgust, sadness, and surprise occurred more often than fear, happiness, and contempt. In line with our results, anger occurred more often during compassion compared to baseline (Kanovský et al., 2020).
Several limitations must be considered when interpreting the current results. First, as this is the first study to explicitly investigate vmHRV in association with the two subcomponents of compassion, it was intended to be preliminary and was therefore conducted on a relatively small sample. Replication with a larger and more diverse sample is warranted before any conclusion can be drawn on this issue. Moreover, there was an unequal sex distribution in our sample, which comprised mostly females, and this may have biased the results. Indeed, sex differences in resting vmHRV have been well documented (Koenig and Thayer, 2016), with females showing greater vagal activity despite lower RR intervals. Likewise, the negative relationship between resting vmHRV and empathic concern toward another in pain is stronger in women than in men (Tracy and Giummarra, 2017). This disparity may be the consequence of different evolutionary selective pressures on females, fostering the evolution of a mutual physiological connection between oxytocin and vagal functioning, consistent with models of parental investment (Carter, 2014). However, recent meta-analytic results argue against the role of sex as a moderator of the association between (self- and other-oriented) compassion and vmHRV (Di Bello et al., 2020).
Lastly, the two videos were not randomly presented. In order to exclude the role of carry-over effects, we statistically compared resting 1 with resting 2 and we found that full recovery occurred before the beginning of video 2. However, we are aware that this is a serious methodological limitation that future studies should avoid.
Limitations notwithstanding, the current results have potential clinical importance, contributing to our comprehension of the processes that are active when one engages in compassion and, although preliminary, highlighting the importance of adopting a nuanced perspective on the complex physiological regulation that underlies compassionate responding to suffering. An encouraging direction is that this issue could be fruitfully explored by a time-series analysis of the vmHRV signal, to disentangle how it fluctuates over the course of the empathic sensitivity condition and across the course of recovery (Kim et al., 2020a).
To conclude, compassion should not be seen as an antidote for negative affect, as it requires a dosage of personal suffering and pain before reaching its emotional and health benefits.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the IRB of Sapienza University of Rome. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
NP conceptualized and conducted the study. MD analyzed the data and wrote the initial draft of the manuscript. All authors contributed to the interpretation of the results, provided critical feedback, helped shape the analysis and manuscript, and approved the submitted manuscript.
Development and validation of a clinical rule for the diagnosis of chikungunya fever in a dengue-endemic area
Rio de Janeiro is a dengue-endemic city that experienced Zika and chikungunya epidemics between 2015 and 2019. Differential diagnosis is crucial for indicating adequate treatment and assessing prognosis and risk of death. This study aims to derive and validate a clinical rule for diagnosing chikungunya based on 3,214 suspected cases consecutively treated at primary and secondary health units of the sentinel surveillance system (up to 7 days from onset of symptoms) in Rio de Janeiro, Brazil. Of the total sample, 624 were chikungunya, 88 Zika, 51 dengue, and 2,451 were negative for all these arboviruses according to real-time polymerase chain reaction (RT-qPCR). The derived rule included fever (1 point), exanthema (1 point), myalgia (2 points), arthralgia or arthritis (2 points), and joint edema (2 points), providing an AUC (area under the receiver operator curve) = 0.695 (95% CI: 0.662–0.725). Scores of 4 points or more (validation sample) showed 74.3% sensitivity (69.0% - 79.2%) and 51.5% specificity (48.8% - 54.3%). Adding more symptoms improved the specificity at the expense of a lower sensitivity compared to definitions proposed by government agencies based on fever alone (European Center for Disease Control) or in combination with arthralgia (World Health Organization) or arthritis (Pan American Health Organization, Brazilian Ministry of Health). The proposed clinical rule offers a rapid, low-cost, easy-to-apply strategy to differentiate chikungunya fever from other arbovirus infections during epidemics.
Introduction
Chikungunya fever is a neglected arbovirus infection that continues to spread throughout the world, affecting up to 1 billion people [1]. The clinical presentation of chikungunya is similar to that of other arbovirus infections such as dengue or Zika [2,3]. Symptomatic chikungunya (CHIK) infections present mostly with high fever, headache, exanthema, myalgia, and severe joint pain [4-6]. The disease can evolve in three phases, namely acute, subacute, and chronic [7], the latter accounting for 59% of cases [8]. The burden of chikungunya fever ranges from 427 to 1,407 years lived with disability, and 385,835 to 429,058 individuals can develop chronic inflammatory rheumatism after CHIK infection in endemic areas of Latin American countries [7,9]. Chikungunya virus was detected in Latin America in 2013 [10,11] and has predominated mainly in urban areas of dengue-endemic countries such as Brazil, including Rio de Janeiro, the second-largest Brazilian city [12-15]. Brazil reported more than 1.3 million probable cases from 2016 to 2019, most of them in the Southeast, with incidences of 511.5 and 104.6 per 100 thousand inhabitants in 2016 and 2019, respectively [16,17].
The reported cumulative annual incidence rates for chikungunya in Rio de Janeiro state, in Southeast Brazil, were 105.1/100,000 in 2016 and 492.8 in 2019 [15], mainly in the state capital [12]. Spatial overlap between dengue, Zika, and chikungunya [12], detected in the city of Rio de Janeiro between 2015 and 2019, poses a challenge for differential diagnosis, especially during outbreaks. Such differential diagnosis is crucial for promptly determining adequate clinical management and prognosis as well as for monitoring the effectiveness of potential preventive and therapeutic interventions [18]. Clinical prediction rules based on two or more clinical or unspecific laboratory predictors are useful for guiding daily decisions by health professionals [19,20], thereby improving prognosis. The rules can also be used as a diagnostic tool to detect cases promptly for surveillance purposes.
The Brazilian Health Surveillance Guidelines of 2017 [21] proposed a differential clinical diagnosis between chikungunya, dengue, and Zika to orient health professionals. However, although many studies proposed clinical rules for diagnosing dengue [22][23][24][25][26][27] and Zika [23,28], almost none investigated chikungunya fever [29]. The current study thus aimed to derive and validate a clinical rule for chikungunya diagnosis based on a large sample of outpatients seen in the public healthcare system in the city of Rio de Janeiro, where dengue and Zika are endemic.
Materials and methods
This was a cross-sectional diagnostic study of all adult patients consecutively seen for arbovirus infections in healthcare units of the RT-qPCR sentinel surveillance system in the city of Rio de Janeiro. The study followed the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Statement for clinical prediction models [19], complemented by the Standards for Reporting of Diagnostic Accuracy Studies (STARD) [30]. These guidelines propose a checklist to improve the transparency of reports on prediction and accuracy studies, allowing an appraisal of risks of bias and applicability of study results [19,30].
From January 2016 to September 2019, trained nurses recruited patients with clinical suspicion of arbovirus infections consecutively seen at 23 public healthcare units (primary and urgent/emergency care) in the sentinel surveillance system. Cases were eligible if they reported or presented fever (axillary temperature > 38ºC) or exanthema within 7 days of onset, with at least two of the following symptoms: headache, retro-orbital pain, myalgia, arthralgia, prostration, conjunctivitis, nausea, vomiting, and limb edema, after ruling out bacterial infections such as tonsillitis, sinusitis, or pneumonia. After evaluation by a physician, patients provided urine and blood samples, the latter centrifuged and stored at 2˚C to 8˚C at the local level. Biological samples were referred within 24 hours to the reference laboratories of two institutions, the Evandro Chagas National Institute of Infectious Diseases of the Oswaldo Cruz Foundation (2016-2018) and the Noel Nutels Central Laboratory of Rio de Janeiro State (2018-2019). One-step real-time polymerase chain reaction (RT-PCR) was the gold standard for defining chikungunya, dengue (serotypes DENV-1 to DENV-4), Zika (in serum and urine), or negative status, following the manufacturer's instructions (ZDC Molecular Kit, BioManguinhos, Fiocruz) [31].
Individual data were reported to the Rio de Janeiro Information System on Diseases of Notification (SINAN-Rio), a database available upon formal authorization and ethical approval. Clinical and sociodemographic data were collected. New variables were generated based on the case definitions of the following agencies: a) World Health Organization (WHO, 2015) [32]: fever and arthralgia; b) Pan American Health Organization/Centers for Disease Control (PAHO/CDC, 2011) [7]: fever and severe arthralgia or arthritis; c) Brazilian Ministry of Health (2017) [21]: fever and arthralgia or arthritis; and d) European Centre for Disease Prevention and Control (ECDC, 2018) [33]: fever in persons living in or traveling to endemic regions.
A complete case data analysis was performed in R software, version 3.6.1 [34]. Descriptive statistics were absolute and relative frequencies (with the respective 95% confidence intervals for proportions) of categorical variables according to chikungunya status. The study used random split-half samples. In the first random half (sample 1), single covariate and multiple binary logistic regression models were performed to derive a clinical rule for diagnosing chikungunya. The first multiple regression model included all clinical predictors with p < 0.2 in the simple regressions. The final multiple regression model included variables with p < 0.05 in the Wald test statistic, adjusted by days since onset of symptoms (≤ 3 and 3-7 days). Crude and adjusted odds ratios (OR) with 95% confidence intervals (95% CI) were reported. Goodness-of-fit of the final logistic regression models was assessed with the Hosmer and Lemeshow test, the regression influence plot of studentized residuals, and hat values (Cook's distance) [35,36], besides the log-likelihood ratios between the full and null models. The score of the derived rule ("Rio rule") was the weighted sum based on the beta coefficients (β) of the predictors included in the final model, rounded to the nearest upper integer value. We calculated the area under the receiver operating characteristic (AUC) curve (95% CI), and the optimal cut-off point was defined by the Youden index.
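To make the score-building step concrete, the minimal sketch below (in Python, although the analysis itself was done in R) derives integer item weights from fitted logistic coefficients and selects the Youden-index cut-off on the ROC curve. The file name, variable names, and the 0/1 coding of the predictors are assumptions for illustration only.

```python
# Hypothetical illustration of deriving an integer-weighted score and its cut-off.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score, roc_curve

sample1 = pd.read_csv("sample1.csv")  # first random split-half; predictors coded 0/1

fit = smf.logit(
    "chikungunya ~ fever + exanthema + myalgia + joint_edema + arthralgia"
    " + onset_3_7_days",  # adjustment for days since onset (hypothetical name)
    data=sample1,
).fit()

# Item weights: beta coefficients rounded up to the nearest integer
score_items = ["fever", "exanthema", "myalgia", "joint_edema", "arthralgia"]
weights = np.ceil(fit.params[score_items])
sample1["score"] = sample1[score_items].mul(weights).sum(axis=1)

# ROC curve, AUC, and the Youden index J = sensitivity + specificity - 1
fpr, tpr, thresholds = roc_curve(sample1["chikungunya"], sample1["score"])
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {roc_auc_score(sample1['chikungunya'], sample1['score']):.3f}, "
      f"optimal cut-off = score >= {cutoff:.0f}")
```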
In the second random split-half sample (sample 2), we validated the derived clinical rule using the following accuracy parameters (95% CI): sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and diagnostic odds ratios. Additionally, we compared the accuracy parameters of the validated clinical rule (Rio rule) with the parameters of the four case definitions mentioned previously (WHO 2015 [32], PAHO/CDC 2011 [7], ECDC 2018 [33] and Brazil 2017 [21]).
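A minimal sketch of the validation step follows: it computes the listed accuracy parameters from the 2×2 table of rule-positive status against RT-qPCR results in the second split-half sample. Confidence intervals are omitted for brevity, and the file name, column names, and the cut-off of 4 are placeholders rather than the study's actual code.

```python
# Hypothetical illustration of the accuracy parameters used for validation.
import numpy as np
import pandas as pd

def accuracy_parameters(y_true, y_pred):
    """Sensitivity, specificity, predictive values, likelihood ratios, DOR."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
        "DOR": (tp * tn) / (fp * fn),
    }

sample2 = pd.read_csv("sample2.csv")                  # second random split-half
rule_positive = (sample2["score"] >= 4).astype(int)   # Rio rule at cut-off >= 4
print(accuracy_parameters(sample2["chikungunya"], rule_positive))
```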
Ethics statement
The study was approved by the Institutional Review Board (IRB) of the Brazilian National School of Public Health, Oswaldo Cruz Foundation, and authorized by the Rio de Janeiro Municipal Health Department (CAAE nº 16646719.6.3001.5279). This observational retrospective study used routinely collected surveillance data, fully anonymized before analysis. The IRB waived the need for informed consent.
Results
Of 4,406 patients with suspected arbovirus infections seen at the healthcare units, 3,242 (73.6%) met the eligibility criteria and provided serum or urine samples. This sample had a similar age and gender distribution but a higher percentage of confirmed chikungunya cases (19.2% versus 14.2%) compared to the initial patient population. After excluding 28 (0.9%) patients with missing data for clinical predictors, the final study sample included 3,214 patients (Fig 1), most of whom were living in the city of Rio de Janeiro, with black or brown race/skin color, low schooling, and up to 3 days since onset of symptoms (Table 1).
The main clinical symptoms were fever, headache, arthralgia, and myalgia. Confirmed chikungunya cases were older on average than patients with the other arbovirus infections and other febrile illnesses (OFI) (Table 1). Chikungunya was the most frequent arbovirus infection, but most patients had other febrile illnesses. More than three-fourths of the chikungunya patients arrived at sentinel health units within three days of the onset of the disease with myalgia and arthralgia, while two-thirds of Zika cases arrived in the same time frame. Exanthema, myalgia, arthralgia, joint edema, and limb edema were more frequent in chikungunya cases than in other arbovirus infections or other febrile illnesses. Although common in chikungunya, the frequency of exanthema differed by less than 10% compared to dengue and other febrile illnesses (Fig 2 and S1 Table). Aphtha, lymphadenopathy, and neurological manifestations were rare in our sample (≤ 2%) (S1 Table).
The random split-half samples showed similar distribution of clinical predictors (Fig 3 and S2 Table). In the first random split-half sample (n = 1,608), the first multiple regression model included all clinical predictors with p < 0.2 in the single covariate regressions. The final multiple model included five predictors: fever, exanthema, myalgia, joint edema, and arthralgia or arthritis, adjusted by days since onset of symptoms. Item weights varied from 1 to 2, and the score was obtained by weighted sum (Table 2). The Hosmer and Lemeshow test did not show lack of fit for the final multiple model, and Cook's distance did not show influence from observations.
The area under the ROC curve of the derived clinical rule (Rio rule) was 69.5% (95% CI: 66.5−72.5), with an optimal cut-off point of 4 or higher, with 79.9% sensitivity and 51.0% specificity. The random split-half sample 2 (n = 1,606) showed 74.3% sensitivity and 51.5% specificity. Compared to estimates of previous clinical rules proposed by public agencies, the Rio rule included more symptoms and had higher specificity and positive likelihood ratio but lower sensitivity. The negative predictive value and negative likelihood ratio were similar to the estimates of the probable case definition adopted by the Brazilian Ministry of Health [21] and PAHO/CDC 2011 [7] (Table 3).
Discussion
This study developed and validated a clinical rule for diagnosing chikungunya in a complex epidemiological scenario. Rio de Janeiro is the second largest Brazilian city and the fourth largest in Latin America, with 20% of its inhabitants living in slums with inadequate housing and sanitation. The city has a tropical climate (annual average temperature of 23.7˚C) conducive to the proliferation of Aedes aegypti, with simultaneous circulation of dengue and Zika.
The best clinical criteria for diagnosing chikungunya include the presence of fever, exanthema, myalgia, arthralgia or arthritis, and joint edema. According to this rule, the presence of two joint symptoms suffices for clinical diagnosis of chikungunya, with a lower false-positive rate compared to the definitions proposed by WHO 2015 [32], PAHO/CDC 2011 [7], BRAZIL 2017 [21], and ECDC 2018 [33]. Adding more symptoms to the Rio rule improved specificity and positive likelihood ratio at the expense of lower sensitivity compared to definitions based on fever [33] or the combination of fever with arthralgia or arthritis [7,21,32]. The negative predictive value was around 90%, similar to the other definitions. Fever and arthralgia were the most frequent symptoms in chikungunya cases in our study. Consistent with our findings, both predictors were included in a clinical rule derived and validated in a sample of patients 65 years or older (n = 687) from Martinique [37]. The ECDC definition [33] would not be helpful in scenarios of arbovirus cocirculation since it is based exclusively on fever and could lead to high false-positive rates [23,38].
Arthralgia and joint edema were the best predictors of chikungunya, consistent with other studies [6,[38][39][40][41]. The case definitions that include arthralgia, more subjective than joint edema, had the best sensitivity and were adequate to rule out chikungunya, with a negative predictive value of approximately 90%. This parameter was better than that obtained in a sample of 200 suspected cases in Jamaica (2014), of which 137 were serologically confirmed as chikungunya, showing a negative predictive value of 76.2% [42]. A study conducted in Southeast Africa also found 84% sensitivity with the WHO definition, with more promising specificity than ours [43].
In our study, myalgia and exanthema were more frequent in chikungunya cases compared to dengue and Zika. Although not included in national and international case definitions [7,21,32,33], these symptoms were also statistically associated with chikungunya diagnosis in other studies in the Caribbean [44] and Brazil, the latter conducted in a proven scenario of dengue and Zika cocirculation [45].
Exanthema occurred in about one in three chikungunya cases, compared to one in five for dengue and one in six for Zika. This finding may be related to the fact that one-third of Zika cases sought health care after the third day since onset of symptoms. In Puerto Rico, where dengue is endemic, skin rash was also more frequent in adults with chikungunya compared to other febrile illnesses [40].
To our knowledge, this is the first study that derived and validated a clinical rule for chikungunya diagnosis in a large consecutive sample (3,214 patients seen in 23 primary and secondary healthcare facilities). The methodology followed the recommendations for validation studies [19,30,46].
In a sample of 687 patients admitted to acute healthcare services in Martinique, the derived clinical score included fever (3 points), ankle pain (2 points), lymphopenia (6 points), and absence of neutrophilia (10 points), where a score of 12 points or higher showed 87% sensitivity (83-90%) and 70% specificity (63-76%) [37]. However, the rule used non-specific laboratory parameters, which can hinder its use in resource-scarce settings. In a case-control derivation study [29] comparing 168 chikungunya and 452 dengue patients from French Guyana, joint (+5) and back pain (+1) were independently associated with chikungunya, while headache (-1), myalgia (-2), nausea/vomiting (-1), diarrhea (-1), and bleeding (-3) were associated with dengue. Another strength of this study was the RT-qPCR gold standard applied to all suspected cases, with most serum samples collected within three days of onset of symptoms and after clinical evaluation. The RT-qPCR of the ZDC Molecular kit has shown 100% sensitivity and specificity [31], similar to the Trioplex kit of CDC [47]. Consistent with our results, joint pain, joint edema, skin rash, and muscle, bone, or back pain were significant predictors of chikungunya when compared to dengue or other acute febrile illnesses in a large sample from Puerto Rico using the same gold standard [40].
The study's limitations include the sentinel surveillance data obtained from the Rio de Janeiro Municipal Health Department. A previous seroprevalence study estimated that the number of chikungunya cases could be at least 45 times higher than those reported to the surveillance system [13]. Although trained health professionals collected the data using standardized forms, they did not record bleeding manifestations or hematologic laboratory parameters, which are important for differential diagnosis with dengue. To deal with potential errors in medical evaluation, we combined arthritis and arthralgia in the analysis.
This study used the split-half internal validation approach. The large sample size allowed us to derive and validate a clinical rule for diagnosing chikungunya in healthcare services normally used by patients with suspected arbovirus infections, such as primary care and urgent/emergency services. The samples did not show substantial imbalances in predictors or outcome distributions.
Our findings suggest that the best diagnostic clinical rule for acute-phase chikungunya diagnosis includes not only fever and joint symptoms such as pain and edema, but also exanthema and myalgia. This rule may lead the physician to order a confirmatory RT-qPCR for chikungunya diagnosis, which can be helpful in arbovirus surveillance in urban areas of dengue-endemic countries. Further studies should confirm the proposed diagnostic rule's performance in other urban settings and evaluate bleeding as well as relevant hematologic parameters for differential diagnosis with dengue.
Supporting information S1
|
v3-fos-license
|
2020-03-12T10:23:45.096Z
|
2020-03-10T00:00:00.000
|
214707864
|
{
"extfieldsofstudy": [
"Medicine",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/ijfs/2020/9816204.pdf",
"pdf_hash": "cf3be41f34d9a6f51bf89bcfd5960817fc8e214d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:555",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "aa7c07838172c1b01b02dfe96050fa9f3cb50dd4",
"year": 2020
}
|
pes2o/s2orc
|
Mass Transfer and Colour Analysis during Vacuum Frying of Colombian Coastal Carimañola
This study is aimed at analysing the effect of vacuum frying on the kinetic parameters of mass transfer and the CIE L∗a∗b∗ colour parameters of the Carimañola. For the kinetic analysis, the moisture and oil content were measured by means of an experimental design consisting of two factors: frying time with seven levels (60, 120, 180, 240, 300, 420, and 540 s) and frying temperature with three levels (120, 130, and 140°C). The diffusivity coefficient, the moisture transfer rate, and the oil adsorption rate, with their respective activation energies, were calculated. For the colour analysis, the reflectance technique was used to determine the colour coordinates of the CIE L∗a∗b∗ space, and the general colour change (ΔE) was calculated. Concerning the kinetics, the increase in temperature and frying time reduced the moisture content, while the oil content decreased with increasing temperature and increased with frying time. The diffusivity ranged from 1.238 × 10−6 m2/s at 120°C to 2.84 × 10−6 m2/s at 140°C. The mass transfer coefficients for moisture ranged from 2 × 10−4 m/s at 120°C to 4 × 10−4 m/s at 140°C. The values of the oil uptake rate were from 0.0022 s−1 at 120°C to 0.0018 s−1 at 140°C. Finally, the luminosity parameter decreased with increasing temperature, although it rose during the first 240 s and then began to decrease. Vacuum frying allowed Carimañolas to be obtained with a lower oil and moisture content, with an appropriate colouring, eye-catching and visually attractive to consumers.
Introduction
Cassava (Manihot esculenta Crantz) is one of the main foods of the family basket, and in turn, its cultivation is of great importance to the socioeconomic level since this plant can be developed in different climatological conditions. The root is the part of the cassava plant that is mostly consumed by humans, taking advantage of its high nutritional value; in many cases, it is used as raw material for the development of new products for food [1,2]. However, in many communities, it is only prepared in a homemade way, either boiled, toasted, fried, or converted into intermediate prod-ucts such as flour and starch; that is to say, a transformation process is not applied that allows obtaining higher added value [3]. Carimañola is a fried by-product made mainly from cassava dough stuffed with meat or cheese, which is cooked in hot oil, giving it sensory characteristics that are attractive to consumers.
Vacuum frying is a healthier frying technique, due to the fact that apart from preserving the sensory qualities of the food, it also reduces the amount of oil uptake [4]. In this operation, the product is placed in a completely closed system and subjected to reduced pressure (subatmospheric), reducing the boiling point of the water and, therefore, the frying temperature. Thus, the water contained in the food is quickly removed when the oil reaches the boiling temperature of the water, so this is a process that requires less time [5].
Transformation processes, such as frying, cause different changes in the food that determine the final characteristics of the product, some of them influenced by the simultaneous heat and mass transfer mechanisms involved in frying [6].
When food is submerged in hot oil, heat transfer occurs by two mechanisms: convection from the oil to the food surface and conduction from the surface to the interior of the food [3]. This heat transfer evaporates part of the water present in the food, which escapes to the surface due to concentration and pressure gradients. Moisture vaporization influences physical and chemical changes, such as starch gelatinization, protein denaturation, darkening, pore and crust formation, and shrinkage/swelling [7,8].
On the other hand, the absorption of oil during frying depends on a number of factors such as the quality and composition of the oil, the frying time and temperature, the shape and composition of the food, moisture content, surface roughness, and porosity (Manjunatha, Mathews, and Patki, 2019). The absorption of oil in food consists of three parts: the internal oil, which is the oil absorbed by the food during frying; the surface oil absorbed, which is the oil absorbed by the food immediately after removing it from the hot oil; and the surface oil, which is the oil that adheres to the food surface during the cooling stage [9]. It has been proven that during the cooling stage, the food absorbs the greatest amount of oil, because cooling leads to condensation of water vapour in the pores, generating under pressure, and, therefore, oil suction in the pores of the food [10].
The main sensory characteristics of foods are texture, aroma, flavour, and colour, which determine consumer acceptance and perception of quality. Colour is the first attribute evaluated by the consumer, through the sense of sight [11]. The colour of the fried product is an important parameter and must be controlled during the transformation process and is the result of moisture loss, oil migration, and Maillard reaction, which depends on the amount of reducing sugar and amino acids on the surface, temperature, and frying time [12]. The common problem with the processing of Carimañola frying is the formation of dark colour; therefore, during frying, it must be properly controlled.
The traditional Caribbean Carimañola in the frying process undergoes different changes in its sensory, physicochemical, and structural characteristics. The objective of this work was to study the mass transfer through the kinetic parameters and the changes on the chromatic coordinates in the colour space CIE L * a * b * of the Coastal Carimañola during vacuum frying at different time intervals and temperatures.
Materials and Methods
2.1. Raw Material. The Carimañolas were made with precooked cassava dough of the variety MCOL 2215; this was supplied by a point of sale of fried products located in the municipality of Turbaco (Bolivar) on 14 February 2019; prior to processing the doughs, these were kept refrigerated at a temperature of 12°C. The ground beef, vegetables, and vegetable oil mixture were purchased at a local supermarket in the city of Cartagena de Indias (Colombia) on 13 February 2019. Ground beef and vegetables were refrigerated at 7°C, while oil was stored at room temperature.
2.2. Preparation of the Product. Carimañolas were prepared following the indications given by local fried-food sellers of the municipality of Turbaco (Bolivar). For each product, 60 g of dough and 10 g of ground meat were used. The portions were kneaded into circular forms; then, they were pressed in the centre and filled with the meat. Finally, the edges were joined towards the centre to obtain the Carimañolas.
Vacuum Frying Conditions.
The cooking process of the Carimañolas was carried out in the GASTROVAC® equipment, whose measures are 40 * 26 * 46 cm. The maximum capacity of this equipment is 10.5 L with a voltage of 220 V. For the process, a maximum vacuum pressure of 30 kPa was considered, pressure at which the boiling temperature of the water is considered to be 70°C. The temperatures of the frying process were defined using three deltas ΔT 1 = 50°C, ΔT 2 = 60°C, and ΔT 3 = 70°C. Therefore, the oil temperatures used were 120°C, 130°C, and 140°C, and the frying times used ranged from 180 to 300 s, which were established by preliminary tests. To start the process, Carimañolas were placed in the metal basket that is attached to the lid of the fryer. The fryer was closed, and the equipment was expected to reach the desired pressure. The knob was lowered (submerging the basket in the hot oil). The frying process began; after the frying time, the basket was removed and held with the knob; then, the vacuum chamber was opened, and the hose was disconnected to balance the atmospheric pressure; after this stage, the product was removed from the equipment.
Moisture Loss Kinetics.
To analyse the kinetics of moisture loss, an experimental design was made that consisted of two factors: frying time with seven (7) levels (60, 120, 180, 240, 300, 420, and 540 s) and frying temperature with three (3) levels (120, 130, and 140°C) obtaining a total of twenty-one (21) basic combinations. For each one of the treatments after the frying process, moisture was determined at 105°C up to constant weight [13]. Each of these determinations was carried out in triplicate, and the data were expressed as averages on a moisture basis.
Calculation of the Diffusion Coefficients (D a ) of Moisture.
A first-order kinetic model was chosen for the determination of the diffusivity coefficients of the Carimañolas, considering one-dimensional moisture diffusion described by Fick's second law and taking into account the following assumptions: (I) the initial moisture content is uniform, (II) the geometry is kept constant during the frying process, and (III) the frying temperature is kept constant. The geometry of the Carimañola was assumed to be a homogeneous cylinder of initial moisture (M o ). Fick's second law was solved with the initial condition at t = 0 and the contour conditions at x = 0 and at the external surface in contact with the oil, x = +r, where D a is the diffusion coefficient of moisture, r is the radius of the cylinder, and t is the frying time (s). The dimensionless moisture ratio was expressed in terms of M t , the moisture content at time t, M o , the initial moisture content, and M e , the equilibrium moisture content. The analytical solution of the differential equation, obtained by the method of separation of variables for cylindrical bodies, was applied using the first term of the solution series, in a similar way to that carried out by [14] in carrots and [15] in peas. It was assumed that the moisture content is insignificant when equilibrium is reached in the frying process of the Carimañola, so M e = 0. To determine the moisture diffusion coefficient (D a ), the resulting expression was linearized by applying logarithms and adjusting it to a straight line y = mt + b: from the graph of Ln(M t /M o ) vs. t, D a was estimated from the slope value for the three frying temperatures (120, 130, and 140°C).
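As a concrete illustration of this slope-based estimate, the short Python sketch below fits Ln(M t /M o ) against t and converts the slope into D a using a one-term cylinder solution with λ1 ≈ 2.405 (the first root of the Bessel function J0). The prefactor, moisture ratios, and radius are assumptions for illustration only, since the exact series term used by the authors is not reproduced here.

```python
# Hypothetical illustration: diffusivity from the slope of ln(M_t/M_o) vs. t,
# assuming a one-term cylinder solution MR ≈ (4/λ1²)·exp(−λ1²·D_a·t/r²).
import numpy as np

t = np.array([60, 120, 180, 240, 300, 420, 540], dtype=float)   # frying time, s
MR = np.array([0.95, 0.90, 0.86, 0.83, 0.81, 0.79, 0.78])        # M_t / M_o (hypothetical)
r = 0.015                                                        # radius, m (assumed)
lambda1 = 2.405                                                  # first root of J0

slope, intercept = np.polyfit(t, np.log(MR), 1)   # ln(MR) = m·t + b
D_a = -slope * r**2 / lambda1**2
print(f"D_a ≈ {D_a:.2e} m^2/s")
```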
The activation energy of the process was then calculated from the diffusion coefficients, which follow the Arrhenius equation, D a = D o exp(−E a /(R T)), because of their dependence on temperature [16], where D o is a preexponential factor, E a is the activation energy (kJ/mol), R is the gas constant (8.314 J/mol K), and T is the absolute temperature (K). By linearizing this equation and graphing Ln D a vs. 1/T, the value of the slope was obtained, from which the value of E a was estimated.
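A minimal sketch of this Arrhenius step follows. It uses the diffusivity values later reported in Table 2, so the printed E a is only what those three points imply under the standard slope relation and is not a value stated in the paper.

```python
# Illustration: activation energy from the Arrhenius plot ln(D_a) vs. 1/T,
# whose slope equals -E_a/R.
import numpy as np

T = np.array([120.0, 130.0, 140.0]) + 273.15       # absolute temperature, K
D_a = np.array([1.238e-6, 2.099e-6, 2.84e-6])       # m^2/s (values reported in Table 2)
R = 8.314                                           # J/(mol K)

slope, intercept = np.polyfit(1.0 / T, np.log(D_a), 1)
E_a = -slope * R / 1000.0                           # kJ/mol
print(f"E_a ≈ {E_a:.1f} kJ/mol, D_o ≈ {np.exp(intercept):.2e} m^2/s")
```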
Calculation of Mass Transfer Coefficient (k c ) or Moisture Loss Rate
In order to determine the mass transfer coefficient of the Coastal Carimañola, the first-order Lewis model was used, as described by [17][18][19][20], where MR is the dimensionless moisture ratio defined above, k c is the moisture loss rate constant (m/s), and t is the moisture loss time in frying (s). MR was then substituted, and the equation was linearized to obtain the k c value from the slope of the fitted line, m.
3.1. Oil Uptake Kinetics. An experimental factorial design of two factors was implemented: temperature (120, 130, and 140°C) and frying time (60, 120, 180, 240, 300, 420, and 540 s), obtaining 21 base treatments. To get the most accurate results, the measurements were performed in triplicate for each treatment. The data collected were presented as the average percentage of the gain of dry-basis oil with the respective standard deviations.
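For the Lewis model mentioned above (before the oil-uptake design), a minimal sketch is given below: with MR = exp(−k c t), k c is minus the slope of Ln(MR) against t. The moisture ratios reuse the hypothetical series from the diffusivity sketch, and the unit attached to k c simply follows the units in which MR and t are expressed.

```python
# Illustration of the first-order Lewis fit: ln(MR) = -k_c * t.
import numpy as np

t = np.array([60, 120, 180, 240, 300, 420, 540], dtype=float)   # s
MR = np.array([0.95, 0.90, 0.86, 0.83, 0.81, 0.79, 0.78])        # hypothetical M_t/M_o

k_c = -np.polyfit(t, np.log(MR), 1)[0]
print(f"k_c ≈ {k_c:.2e} (per unit of frying time)")
```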
Calculation of Oil Uptake Rate (k).
A first-order kinetic model was used to determine the rate of oil uptake [11], where O * is the oil content at time t (dry basis) and O eq is the oil content at equilibrium (dry basis) when t = ∞. Considering that the oil content data were obtained experimentally for each frying time, the model was adjusted to a straight line, and by plotting Ln(1 − (O * /O eq )) vs. t, the slope of the linear section was obtained, necessary to calculate the k value, which represents the specific oil uptake velocity of the Carimañola (s -1 ). The relationship of the equilibrium oil content, O eq , with frying temperature T was evaluated using an Arrhenius-type relationship to obtain the activation energy [21], where A eq is the model parameter evaluated at each temperature, A 0 is the preexponential factor, E a is the activation energy, R is the universal gas constant (8.314 J/mol K), and T is the absolute temperature. With the linearization of this relationship and the Ln A eq vs. 1/T graph, the linear slope was obtained, which allowed the activation energy to be calculated. The parameters of the equations were estimated by nonlinear regression, with the quality of fit expressed through the root mean square (%RMS), where N is the number of data points, V exp is the experimental value, and V fitted is the calculated value. Models based on Fick's law were expected to conform to the experimental data [22].
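A minimal sketch of both fits follows: the linearised first-order uptake model for k, and an Arrhenius plot of the fitted equilibrium parameter for E a. All numerical inputs are hypothetical, and sign conventions for the Arrhenius slope vary between sources, which is why the sketch returns a signed E a directly.

```python
# Illustration: oil-uptake rate k from ln(1 - O*/O_eq) = -k*t, then an Arrhenius
# fit of the equilibrium parameter. All values below are hypothetical.
import numpy as np

t = np.array([60, 120, 180, 240, 300, 420, 540], dtype=float)   # s
oil = np.array([2.1, 3.5, 4.6, 5.4, 6.0, 6.9, 7.4])              # O*, % dry basis
O_eq = 8.5                                                        # assumed equilibrium oil content

k = -np.polyfit(t, np.log(1.0 - oil / O_eq), 1)[0]
print(f"k ≈ {k:.4f} s^-1")

# Arrhenius step: slope of ln(A_eq) vs. 1/T equals -E_a/R
T = np.array([120.0, 130.0, 140.0]) + 273.15                      # K
A_eq = np.array([9.1, 8.5, 8.0])                                  # hypothetical fitted values
E_a = -np.polyfit(1.0 / T, np.log(A_eq), 1)[0] * 8.314 / 1000.0   # kJ/mol (negative if A_eq falls with T)
print(f"E_a ≈ {E_a:.2f} kJ/mol")
```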
Colour Analysis.
For colour analysis, the vacuum frying process of the Carimañolas was performed using a nonrandomized rotatable central composite design (DCC-R) consisting of four factorial points, four axial points, and five central points, for a total of 13 experimental treatments (see Table 1). The colour changes on the outside of the Carimañolas were analysed using a CR-5 laboratory reflectance colourimeter (Konica Minolta Sensing), with D65 illuminant and a 10° observer angle. The parameters were evaluated in CIE L * a * b * space, in terms of luminosity L * (light 100 and dark 0) and chromaticity a * (red (+) and green (-)) and b * (yellow (+) and blue (-)), where L * a * b * are the values of each treatment and L, a, b correspond to the product before frying. The general colour change (ΔE) was also calculated with respect to a standard or commercial product using the Euclidean distance, ΔE = [(ΔL * ) 2 + (Δa * ) 2 + (Δb * ) 2 ] 1/2 , where ΔL * = difference in light and dark (+ = lighter, - = darker), Δa * = difference in red and green (+ = redder, - = greener), and Δb * = difference in yellow and blue (+ = more yellow, - = more blue) [23].
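A minimal sketch of this ΔE calculation is shown below, assuming the colourimeter readings are already expressed in CIE L*a*b*; the numerical readings are hypothetical.

```python
# Illustration: total colour difference ΔE as the Euclidean distance in CIE L*a*b*.
import math

def delta_E(lab_sample, lab_reference):
    """ΔE = sqrt(ΔL*^2 + Δa*^2 + Δb*^2)."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(lab_sample, lab_reference)))

fried = (62.3, 8.1, 27.4)       # L*, a*, b* of a fried Carimañola (hypothetical)
reference = (71.0, 4.2, 22.8)   # L, a, b of the product before frying (hypothetical)
print(f"ΔE = {delta_E(fried, reference):.2f}")
```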
Statistical Analyses.
A two-way ANOVA and Tukey's HSD multiple comparisons test with a significance level of 5% were used to find statistical differences between the response variable data. A correlation was also made from the r Pearson test, with a significance level of 0.01. The data were processed in Statgraphics Centurion 16 (Keygen, U.S.A.).
Results and Discussion
4.1. Moisture Loss Kinetics. Figure 1 shows the variation in the moisture content of the Carimañola (as dimensionless moisture M t /M 0 ) for 120, 130, and 140°C as a function of the frying time under vacuum conditions. The kinetic behaviour shows that moisture loss increases with both temperature and frying time: at 540 s, the final moisture contents for the three temperatures (in ascending order) were 57.23% (±0.79), 55.5% (±0.56), and 51.14% (±0.86); that is to say, at 140°C the Carimañolas presented the greatest moisture loss.
This behaviour was also reported by Hase and Linares [3] in the analysis of the water loss kinetics of fried cassava snacks, who state that the increase in temperature produces an increase in the diffusion of moisture from the inside of the food to reach equilibrium after a long frying period. In addition, Figure 1 also shows that high temperatures require less time for the moisture content of the food to evaporate, because the low pressures applied to the process can reduce the boiling point of water and thus frying temperatures. Therefore, when the temperature increases, the water is quickly transformed into steam and then leaves the food through the pores of the same, as indicated by [24] in the vacuum frying of arepas with egg using the same pressure and frying temperatures as in this study. On the other hand, [25] analysed the loss of moisture content of vacuum crisps for a temperature of 125°C reporting a decrease in moisture over time, coinciding with the behaviour analysed for the Coastal Carimañola.
In the frying process, there are other parameters that influence the loss of moisture, within which the shape of the food is highlighted, and the relationship between the size of the product and the surface exposed to the surrounding medium, because if the thickness of the food is greater, there is a smaller specific area and therefore a smaller relative area available to lose water, in addition to the fact that the path that the water particles have to travel towards the external part of the food is longer, so more heat is required to evaporate the water present [26,27].
Calculation of Diffusion Coefficients (D a ) and Mass Transfer Coefficients (k c ) or Moisture Loss Rate
The moisture diffusion coefficient (D a ) was calculated using Fick's second law for the geometry of a cylinder, and D a was estimated from the value of the slopes for temperatures of 120, 130, and 140°C, as shown in Figure 2. The figure allows the rate of moisture loss of the Carimañolas to be compared for each temperature through the inclination of each slope: the line for 140°C has the greatest inclination and therefore the greatest rate of moisture loss, followed by 130 and 120°C; that is to say, increasing the temperature and the frying time increases the rate of moisture loss. Table 2 shows the moisture diffusivity values for the Coastal Carimañola at 120, 130, and 140°C: 1.238 × 10 −6 , 2.099 × 10 −6 , and 2.84 × 10 −6 m 2 /s, respectively. The experimental results show that the D a values are higher at higher temperatures.
The effective diffusivity values found for fried Carimañola in vacuum conditions exceed the general range of 10 −8 to 10 −11 m 2 /s reported for food dehydration, according to Osorio et al. [28], who, comparing diffusivity values at different pressures, found greater diffusivity values at decreased pressures and elevated temperatures.
Ortega and Montes [29], in a similar study of fried cassava slices under atmospheric conditions, reported diffusivity coefficients of 10.44 × 10 −9 , 17.02 × 10 −9 , and 27.62 × 10 −9 m 2 /s at temperatures of 140, 160, and 180°C, respectively, evidencing a linear behaviour with the frying temperature, in the same way as observed in the present study for the Coastal Carimañola, although the magnitudes differ from the values obtained here. This can be explained by the difference in pressures used in each study and by the shape of the food: the cassava slices were considered as a flat plate, whereas the Carimañolas were treated as a cylinder.
On the other hand, Alvis et al. [30] reported diffusivity values similar to those of the Colombian Coastal Carimañola, of 0.92 × 10 −6 , 1.07 × 10 −6 , and 1.39 × 10 −6 m 2 /s in pieces of sweet potatoes fried by immersion at temperatures of 150, 170, and 190°C; the authors state that these results exceed the intervals considered by other authors for dehydrated foods, attributing these differences to the nature of the product, the temperature, and the methods of determination of the frying process.
In starchy foods, the frying process tends to increase diffusivity with porosity and initial moisture content, where the latter two factors establish a relationship with each other, so that food products with a higher degree of porosity have a higher rate of moisture evaporation and therefore a higher coefficient of diffusivity [31]. Carimañolas presented an initial moisture content, before frying, of 65.60% (data not shown) which can be considered an intermediate moisture food, and therefore, the water can be transported by capillary flow and vapour diffusion, which represents an increase in the effective values of diffusivity, facilitating the transport of water through the pores and channels of the food [32].
Vacuum frying causes a hydrodynamic gradient in foods that directly affects their microstructure and consequently their transport properties, as described by Troncoso and Pedreschi [33] in the analysis of the diffusivity of moisture of vacuum-fried potatoes, considering different parameters that may influence it. In addition, they explain that the formation of the external crust or crust in fried food, due to the dehydration process involved in frying, causes a porosity in the crust determined by the frying time, which together with the gelatinization of starch has an effect on the transport properties of water.
The values of k c for the Carimañolas are likewise shown in Table 2 and increase with temperature and frying time, being 2 × 10 −4 (120°C), 3 × 10 −4 (130°C), and 4 × 10 −4 (140°C) m/s, with R 2 of 0.93, 0.86, and 0.95, respectively, which evidences a good fit of the Lewis kinetic model used to describe the convective coefficient of mass transfer for moisture. The authors of [22] determined the moisture loss coefficients of yellow pulp cassava slices fried under vacuum and atmospheric conditions using a first-order kinetic model; they reported values for vacuum conditions of 0.3177 × 10 −1 , 0.4147 × 10 −1 , and 0.2981 × 10 −1 m/s for temperatures of 108, 118, and 128°C, respectively, in contrast to the data for atmospheric conditions at 160, 170, and 180°C, which were 0.2703 × 10 −1 , 0.2785 × 10 −1 , and 0.2860 × 10 −1 m/s correspondingly. This shows that vacuum frying allows the food to be processed to obtain the desired characteristics in a period of time similar to that obtained in atmospheric frying. On the other hand, these results obtained under vacuum conditions are greater than those obtained for the Coastal Carimañola.
Oil Uptake Kinetics.
Oil uptake is a complex phenomenon that has been studied in frying processes. Uptake occurs mainly when the product is removed from the hot oil [33]. Figure 3 shows the oil uptake of Carimañolas fried in vacuum conditions (expressed as the average percentage of oil obtained from repetitions) at temperatures of 120, 130, and 140°C, with respect to a time t, which is taken from zero time to 540 s, considering a Carimañola without frying at zero time. In the graph, it is observed that at the same time of frying, as the temperature increases, the fat content decreases; that is to say, the high frying temperatures lead to a lower oil uptake, with results similar to those of [35]. It is also observed that parallel to the increase in time, the uptake of oil increases; the same trend was observed by [32] for vacuum-fried potato slices. This result is in accordance with what was reported by [4], in whose study they evaluated the behaviour of cassava chips in vacuum frying processes. The results found by the authors indicate that the highest fat uptake occurs at the lowest temperature (100°C).
There are several factors that justify the correlation between frying temperature and fat uptake, among which the higher the temperature, the faster the starches gelatinize, and the percentage of free water in the product decreases, creating a barrier for the escape of steam that causes an abrupt expansion in the capillary pores, which as their size increases decrease capillary pressure and oil uptake [36,37]; it is important that the boiling point of the water is equal to or higher than the gelatinization temperature of Carimañola starch, which has a range of 55-65°C; otherwise, the uptake of oil would not be reduced [22].
When mentioning the expansion in the pores (a phenomenon that occurs mainly in the pressurization stage, i.e., the vacuum is eliminated and the pressure in the pores increases rapidly until it reaches the atmospheric stage), it is normal to think that there is more room for the Carimañola to absorb more oil; however, due to the fact that there is a decrease in pressure, the air spreads much more quickly, which avoids the passage of oil and reduces its uptake [4,38].
4.4. Oil Uptake Rate and Activation Energy. Figure 4 shows the oil uptake behaviour of the Carimañolas during the vacuum frying process. From the slopes obtained from the linear sections represented in this graph, the oil uptake rate for each temperature was determined, being m = −k. Table 3 shows the parameters describing the oil uptake rate and activation energy. The values of the root mean square (%RMS) and the determination coefficient R 2 (close to 1) indicated that there is a good fit of the regression model with the values observed, i.e., that the regression line was significantly close to these values; these results are similar to those reported by [16], in whose study the R 2 corresponding to the temperature of 140°C exceeds the value of 0.95.
In the table, k-values indicate a tendency for the oil uptake rate to decrease with increasing temperatures. This trend is consistent with that reported by [22], who studied the kinetics of the uptake of oil from fried cassava slices. The values of this research are comparatively higher than those found in Carimañolas, a difference that could be attributed to the fact that Carimañola is not composed only of cassava and to its thickness. The same trend was also observed in the study by [39] about the determination of the coefficients of thermal diffusivity of moisture and oil in breadfruit, whose values were 0.001, 0.001, and 0.0009 s -1 at temperatures of 120, 130, and 140°C. It should be noted that, although there is a tendency for the k parameter to vary with temperature, it does not have a significant effect on the uptake of oil [33].
The values of A eq represent the maximum amount of oil that the Carimañolas could absorb; in Table 3, it can be observed that this parameter increases as the frying temperature is reduced; these results agree with those reported by [35,39].
Thermodynamically, the activation energy is the energy required by water molecules for their migration or movement within a product; it is considered that this comes initially from the thermal energy supplied by the oil and is independent of moisture content. E a was calculated using the slope indicated in Table 3 (m = 1439.2), being E a = m * R ("m" as the slope and "R" as the universal gas constant); in Table 3, it is observed that the value obtained is -11.96 kJ/mol. This result is similar to the one reported by [39], who pointed out that for temperatures of 120, 130, and 140°C, the value of E a was -24.78 kJ/mol; the authors stressed that the negative value is due to the fact that the oil uptake index decreases with the increase in temperature, as occurs in the present study. According to [28], another factor that justifies the value of E a being so low is that it depends on the vacuum pressure of the medium, since the greater the vacuum, the less E a is required to start the process. Table 4 shows that time had a greater effect than temperature on parameter L * , showing that it decreased as the temperature increased; on the other hand, when time is analysed, the luminosity increased during the first 240 s and decreased linearly thereafter. The L * parameter did not differ significantly at the lower temperatures (115.85 and 120°C) or when this factor increased. Parameter a * did not show significant differences at shorter frying times; the highest values were observed at temperatures of 130°C. Mariscal and Bouchon [40] reported that there were no significant differences in the luminosity of the precooked and vacuum-fried apple slices at 10 minutes of frying. They also reported that the rate of reduction of values for parameter L * at higher frying times (i.e., 10 to 12 min) was slower compared to the rate observed at shorter times. Since luminosity is a very important colour quality, lower frying temperatures (especially under vacuum conditions), with a lower boiling point of water, are preferable to preserve the luminosity and therefore the attractiveness of fried products. In contrast, redness is an undesirable quality factor in fried foods [41]; the increase in redness shows an increase in crust development, resulting in lower acceptability. Increased redness for all frying treatments may mean that all treatments (e.g., potatoes) experience an increase in golden colour with increasing frying temperature and time. This could be due to the Maillard reaction resulting from the use of available reducing sugars.
Colour Analysis.
The colour development of the product during frying depends on the drying rate (moisture loss), oil uptake, and heat transfer coefficient in the different frying stages. The brightness value is a critical parameter in the frying process and is considered a primary quality factor evaluated by the consumer. Low luminance values correspond to dark colours and are formed due to nonenzymatic darkening reactions. Brightness values for potato chips greater than 60 were termed as excellent, 56-60 as acceptable, and less than 50-55 as slightly acceptable [41]. The authors of [42] confirmed for Gethi frying that redness values a * increased significantly (p < 0.05) with increasing frying time and temperature. The redness value was 2.48, and after 15 minutes of frying, it increased to 5.77, 7.52, 12.71, and 18.53 at 120, 140, 160, and 180°C frying temperature, respectively. The increase in Hunter redness (a) value may be due to moisture loss, oil impact, and the formation of Maillard reaction products during frying of Gethi strips. Similar results were reported in the case of frying potato discs by immersion [43]. Redness values were significantly higher for atmospheric pressure chips than for vacuum chips due to a marked increase in Maillard reaction products [38].
The highest colour differences during the frying of Carimañolas were evident at lower temperatures (115.858°C) and 180 s. High temperatures and longer times yielded smaller differences with respect to the standard, which could be considered an option to recommend for frying the Carimañola under these conditions. The increase in the magnitude of total colour difference values could be attributed to high temperature and low moisture content, which initiated nonenzymatic browning, such as the Maillard reaction and sugar caramelization. Mariscal and Bouchon [40] observed that vacuum potato chips had the smallest overall colour change compared to atmospheric potato chips. This implies that the colour difference between these two types of frying may be due to the fact that the processing conditions are also a reflection of the degree of degradation of total carotene, which could further establish that vacuum frying has the highest levels of total carotene retention.
Conclusions
The vacuum frying process of the Colombian Caribbean Carimañola allows us to obtain a product with excellent bromatological and nutritional characteristics, as well as a lower oil content. Kinetic and transfer parameters showed that the temperature and frying time factors have a significant effect on Carimañola characteristics and composition. The increase in temperature and frying time indicates a reduction in moisture content, while oil uptake decreases with increasing temperature and increases with frying time. In addition, high temperatures also cause darker colourations. Therefore, vacuum frying is a viable alternative for the processing of Colombian Coastal Carimañola.
Data Availability
The data used as references in this study have been indicated in the part of the reference repository (name of authors, year of publication, title of the document, journal and number of pages).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
We thank the IDAA research group and the University of Cartagena for their support and facilities. This study was supported by the Eighth Call for Research Projects, For Visible Research Groups (Categorized or Recognized) in the Scienti Platform of the Administrative Department of
|
v3-fos-license
|
2024-04-21T05:06:12.286Z
|
2024-04-19T00:00:00.000
|
269246076
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "22403ccefa8bdd2705f35b910cb6cfd982e8dcb4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:557",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "22403ccefa8bdd2705f35b910cb6cfd982e8dcb4",
"year": 2024
}
|
pes2o/s2orc
|
Racial/ethnic differences in the associations between trust in the U.S. healthcare system and willingness to test for and vaccinate against COVID-19
Background: Trust in the healthcare system may impact adherence to recommended healthcare practices, including willingness to test for and vaccinate against COVID-19. This study examined racial/ethnic differences in the associations between trust in the U.S. healthcare system and willingness to test for and vaccinate against COVID-19 during the first year of the pandemic.
Methods: This cross-sectional study used data from the REACH-US study, a nationally representative online survey conducted among a diverse sample of U.S. adults from January 26, 2021-March 3, 2021 (N = 5,121). Multivariable logistic regression estimated the associations between trust in the U.S. healthcare system (measured as "Always", "Most of the time", "Sometimes/Almost Never", and "Never") and willingness to test for COVID-19, and willingness to receive the COVID-19 vaccine. Racial/ethnic differences in these associations were examined using interaction terms and multigroup analyses.
Results: Always trusting the U.S. healthcare system was highest among Hispanic/Latino Spanish Language Preference (24.9%) and Asian (16.7%) adults and lowest among Multiracial (8.7%) and Black/African American (10.7%) adults. Always trusting the U.S. healthcare system, compared to never, was associated with greater willingness to test for COVID-19 (AOR: 3.20, 95% CI: 2.38–4.30) and greater willingness to receive the COVID-19 vaccine (AOR: 2.68, 95% CI: 1.97–3.65).
Conclusions: Trust in the U.S. healthcare system was associated with greater willingness to test for COVID-19 and receive the COVID-19 vaccine; however, trust in the U.S. healthcare system was lower among most marginalized racial/ethnic groups. Efforts to establish a more equitable healthcare system that increases trust may encourage COVID-19 preventive behaviors.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-024-18526-6.
Barriers to COVID-19 testing and vaccination disparities in the first year of the COVID-19 pandemic included both structural (i.e., disparities in access and distribution to COVID-19 testing and vaccination sites) and attitudinal (i.e., beliefs/perceptions that reduced individuals' willingness to seek out COVID-19 testing and vaccination) barriers [6][7][8]. Trust in the healthcare system was identified as a particularly important attitudinal barrier of racial/ethnic disparities in COVID-19 testing and vaccination [8,9]. Both COVID-19 testing and vaccination required engagement with the healthcare system (i.e., COVID-19 testing was only available at health clinics prior to April 2021 [10]), and trust in the healthcare system was associated with COVID-19 testing and vaccination [11][12][13][14][15].
There is longstanding evidence that trust in the healthcare system varies between racial/ethnic groups. Prior to the pandemic, medical mistrust and distrust in the healthcare system (e.g., a cautious attitude towards the healthcare system or believing the healthcare system to be untrustworthy [16]) were more prevalent among marginalized racial/ethnic groups (e.g., medical mistrust and physician distrust were higher among Non-Hispanic Black and Hispanic adults compared to Non-Hispanic White adults [17,18]). Non-Latino Black adults also reported higher general medical mistrust compared to Asian, Non-Latino White, and White-Latino adults [17][18][19]. These instances of higher medical mistrust and distrust in the healthcare system reflect historical and ongoing injustices experienced by marginalized racial/ethnic groups, including systemic racism and discrimination [13,17,19,20].
Researchers suggest that trust in the healthcare system may partially explain racial/ethnic disparities in COVID-19 testing and vaccination rates [21,22]. However, the literature largely focuses on COVID-19 vaccination [23]. Trust in the U.S. healthcare system, its association with COVID-19 testing, and whether associations differ across racial/ethnic groups remain underexplored. COVID-19 testing and vaccination require different degrees of invasiveness and engagement with the healthcare system, and trust in the healthcare system may differentially impact these practices. Further, invasive procedures may raise concerns related to historical medical mistreatment that may uniquely influence COVID-19 vaccination behaviors [24]. Yet to our knowledge, no studies examined COVID-19 testing and vaccination in the same study.
The present study addressed these gaps by examining the association between trust in the U.S. healthcare system and willingness to test for COVID-19 and receive the COVID-19 vaccine, and whether these associations varied across racial/ethnic groups in a large, nationally representative study.
Data source and study population
The Race-Related Experiences Associated with COVID-19 and Health in the United States (REACH-US) study is an online, nationally representative survey of U.S. adults that was conducted between January 26, 2021 and March 3, 2021. After informed consent was obtained from participants, the survey was administered by the nonpartisan YouGov, Inc. research firm, which uses an existing opt-in participant panel to conduct nationally representative surveys. Panel members were recruited and received panel rewards/incentives for their participation.
The REACH-US study included 5,500 adults from seven racial/ethnic subgroups (i.e., 500 American Indian/Alaska Native, 1000 Asian, 1000 Black/African American, 1000 Hispanic/Latino, 500 Multiracial, 500 Native Hawaiian/Pacific Islander, and 1000 White adults) living in the U.S. YouGov panel members were proximity matched to a target sample of U.S. adults generated from the 2018 American Community Survey 1-year data. Additional details on the recruitment methods, the stratified sampling approach to include a diverse set of racial/ethnic groups, the matching process, and propensity scoring to generate sample weights and nationally representative estimates within racial/ethnic groups are previously published [25].
Participants who already received at least one dose of the COVID-19 vaccine (n = 419) as well as those who did not provide responses for the trust item and/or the sociodemographic covariates were excluded from the present study analysis. The final sample size included 5,121 (weighted) participants (5,054 unweighted) (Figure S1). De-identified data were provided to the research team and the study was considered exempt, non-human subjects research by the Institutional Review Board of the National Institutes of Health.
Measures
The study outcomes, willingness to test for COVID-19 and receive the COVID-19 vaccine, were captured by two separate items. At the time of data collection (January-March 2021), COVID-19 tests were largely available whereas COVID-19 vaccines were not yet available to all U.S. adults [26,27]. Therefore, the items asked participants whether they had or would plan to get tested (if they developed symptoms) and whether they had or planned to receive the COVID-19 vaccine (once it was available to them). To capture willingness to receive the COVID-19 vaccine, participants who had already received at least one dose of the COVID-19 vaccine were not included in the analysis. Behavioral intentions were conceptualized as willingness to test for COVID-19 or receive the COVID-19 vaccine as described below.
Willingness to test for COVID-19
Participants were asked "Did you get tested for COVID-19?" Response options included: (1) Yes, I tested positive, (2) Yes, I tested negative, (3) Yes, I don't know the results, (4) No, but I plan on getting tested soon, (5) No, but I would get tested in the future if I develop symptoms or come into contact with someone who has tested positive for COVID-19, and (6) No, and I don't plan on getting tested now or in the future. Willingness to test for COVID-19 was dichotomized into whether participants had been tested or were willing to test for COVID-19 (responses 1 to 5 coded as 0) versus unwilling to test for COVID-19 (response 6 coded as 1).
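The dichotomization described above amounts to a single comparison; a minimal sketch with hypothetical raw answers is shown below.

```python
# Illustration: responses 1-5 (tested or willing to test) -> 0, response 6 (unwilling) -> 1.
import pandas as pd

answers = pd.Series([1, 2, 5, 6, 4, 6, 3])        # hypothetical raw responses
unwilling_to_test = (answers == 6).astype(int)
print(unwilling_to_test.tolist())                  # [0, 0, 0, 1, 0, 1, 0]
```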
Willingness to receive the COVID-19 vaccine
Participants were asked "Do you plan to get the COVID-19 vaccine once it becomes available?" Response options included (1) Definitely not, (2) Probably not, (3) Probably yes, (4) Definitely yes, (5) I have received one dose of the COVID-19 vaccine, and (6) I have received two doses of the COVID-19 vaccine. Willingness to receive the COVID-19 vaccine was treated as an ordinal variable: (1) "Definitely not", (2) "Probably not", (3) "Probably yes", and (4) "Definitely yes". Given that willingness to receive the COVID-19 vaccine was captured as plans to receive the COVID-19 vaccine when it became available, participants who had already received at least one dose of the COVID-19 vaccine were excluded.
Race/ethnicity and sociodemographic covariates
Racial/ethnic group membership was self-identified by participants and included American Indian/Alaska Native, Asian, Black/African American, Hispanic/Latino, Multiracial, Native Hawaiian/Pacific Islander, and White. Hispanic/Latino participants were further stratified into English Language Preference (ELP) and Spanish Language Preference (SLP) depending on their survey language.
Statistical analyses
Descriptive analyses were used to assess the distribution of sociodemographic characteristics and trust in the U.S. healthcare system, overall and stratified by race/ethnicity, as well as willingness to test for and vaccinate against COVID-19, overall and stratified by trust in the U.S. healthcare system.
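One standard way to compute a 95% confidence interval for such a proportion is the normal approximation sketched below; the study's survey weighting would modify the estimate, and the counts shown are hypothetical.

```python
# Illustration: 95% confidence interval for a proportion (normal approximation).
import math

def proportion_ci(successes, n, z=1.96):
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# e.g. hypothetical share of respondents reporting "Always" trusting the system
print(proportion_ci(successes=128, n=512))
```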
In the overall models using adjusted logistic regression, willingness to test for COVID-19 and willingness to receive the COVID-19 vaccine were each regressed on trust in the U.S. healthcare system (in two separate models) to produce adjusted odds ratio estimates (AORs). Adjusted models included race/ethnicity, age, gender, annual household income, educational attainment, health insurance, political ideology, and high-risk chronic health condition.
Racial/ethnic differences in the associations between trust in the U.S. healthcare system and willingness to test for and vaccinate against COVID-19 were each assessed using an interaction model and multigroup analyses (see conceptual model, Figure S1). An interaction term for racial/ethnic group and trust in the U.S. healthcare system was added to the fully adjusted logistic regression models for willingness to test for and vaccinate against COVID-19. Multigroup analyses were used to further examine the interaction models by generating estimates of the associations between trust in the U.S. healthcare system and each of the outcomes within each racial/ethnic group, adjusting for the covariates across racial/ethnic groups.
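A minimal Python sketch of this modeling strategy follows (the study itself used R and Mplus): a weighted logistic model with a trust-by-race/ethnicity interaction, followed by the same adjusted model fitted within each group, which only approximates the multigroup estimation performed in Mplus. The file name, variable names, and the weight column are hypothetical, and survey weights are passed as frequency weights purely for illustration.

```python
# Hypothetical illustration of the interaction and group-stratified models.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("reach_us.csv")
covariates = ("age + gender + income + education + insurance"
              " + political_ideology + chronic_condition")

# Overall model with interaction terms (binary outcome: willing to test, coded 0/1)
interaction = smf.glm(
    f"willing_to_test ~ trust * race_ethnicity + {covariates}",
    data=df, family=sm.families.Binomial(),
    freq_weights=df["survey_weight"],   # illustrative; not a full survey-design adjustment
).fit()
print(interaction.pvalues.filter(like=":"))  # p-values of the interaction terms

# "Multigroup"-style step: trust coefficients (as odds ratios) within each group
for group, sub in df.groupby("race_ethnicity"):
    fit = smf.glm(
        f"willing_to_test ~ trust + {covariates}",
        data=sub, family=sm.families.Binomial(),
        freq_weights=sub["survey_weight"],
    ).fit()
    print(group, np.exp(fit.params.filter(like="trust")).round(2).to_dict())
```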
Regression analyses were conducted using R Version 4.1.2 and multigroup analyses were conducted using Mplus Version 8.6 [29]. All analyses were weighted to be nationally representative within each racial/ethnic group.
The direction of the associations between trust in the U.S. healthcare system and willingness to test for COVID-19 remained consistent across racial/ethnic groups (i.e., those who reported higher levels of trust versus "Never" trusting the U.S. healthcare system were more willing to test for COVID-19), however the magnitude of associations varied across racial/ethnic groups (global significance test in overall interaction model < 0.01) (see Table S2 for interactions between trust in the U.S. healthcare system and racial/ethnic group). The AORs for "Always" trusting the U.S. healthcare system (versus "Never") and willingness to test for COVID-19 were particularly high among Hispanic/Latino SLP (AOR: 16.28, 95% CI: 6.24-42.47) and Hispanic/Latino ELP adults (AOR: 12.25, 95% CI: 4.05-37.07), compared to the other racial/ethnic groups (AORs: 1.04-3.10) (Table 3; unadjusted ORs in Table S3).
The AORs for trusting the U.S. healthcare system "Most of the time" versus "Never" were also high among Hispanic/Latino SLP (AOR: 7.46, 95% CI: 3.64-15.30) and Native Hawaiian/Pacific Islander adults (AOR: 5.01, 95% CI: 1.98-12.67). There was some variation in the confidence intervals estimated across racial/ethnic groups, with wide confidence intervals observed for some of the estimated AORs (Table 3). The associations between trust in the U.S. healthcare system and willingness to receive the COVID-19 vaccine also varied across racial/ethnic groups (global significance test in the overall interaction model, p < 0.01) (see Table S4 for interactions between trust in the U.S. healthcare system and racial/ethnic group). Similarly, the AORs for "Always" trusting the U.S. healthcare system (versus "Never") and willingness to receive the COVID-19 vaccine were high among Hispanic/Latino SLP (AOR: 5.63, 95% CI: 3.39-9.36) and Hispanic/Latino ELP adults (AOR: 3.89, 95% CI: 2.13-7.13), compared to the other racial/ethnic groups (AORs: 1.33-2.21) (Table 4; unadjusted ORs in Table S5). However, the associations between trust in the U.S. healthcare system and willingness to receive the COVID-19 vaccine were less consistent across racial/ethnic groups compared to willingness to test for COVID-19 (i.e., wide confidence intervals for many estimated AORs).
Discussion
Trust in the U.S. healthcare system was associated with greater willingness to test for and receive the COVID-19 vaccine in the overall study population. Those who trusted the U.S. healthcare system were two to three times as likely to be willing to test for COVID-19 and up to two times as likely to be willing to receive the COVID-19 vaccine. The associations between trust in the U.S. healthcare system and willingness to test for and receive the COVID-19 vaccine were consistent when assessing different levels of trust (i.e., comparing always, most of the time, and sometimes/almost never versus never trusting the U.S. healthcare system). These findings generally reflect prior literature on trust in the U.S. healthcare system and other preventive health behaviors (e.g., breast, cervical, and prostate cancer screening services [30,31]).
Multigroup analysis revealed two important racial/ethnic differences in the associations between trust in the U.S. healthcare system and willingness to test for and receive the COVID-19 vaccine. The associations between trust in the U.S. healthcare system and willingness to test for and receive the COVID-19 vaccine were high among Hispanic/Latino adults. This is consistent with previous literature that found higher levels of trust in medical providers were significantly associated with higher levels of healthcare utilization among Hispanic/Latino adults [32]. The authors suggested that Latino cultural values may facilitate relationship building and influence how Hispanic/Latino adults interact with the healthcare system. In addition, a recent study among Hispanic Americans found most (56%) reported positive ratings for the quality of their recent healthcare and most (51%) thought health outcomes for Hispanic people have improved in the past 20 years [33]. These positive attitudes toward the U.S. healthcare system may partially explain the strong relationship between trust in the U.S. healthcare system and willingness to test for and receive the COVID-19 vaccine among Hispanic/Latino adults.
A second important finding from the multigroup analysis revealed that for American Indian/Alaska Native and Black/African American adults, trust in the U.S. healthcare system did not impact willingness to receive the COVID-19 vaccine.Among the remaining racial/ethnic groups in the present study, participants were more willing to receive the COVID-19 vaccine when they had higher levels of trust (always or most of the time) in the U.S. healthcare system.These findings may be attributed to historical injustices by the U.S. government and healthcare system toward American Indian/Alaska Native and Black/African American adults [34][35][36] that may complicate the relationship between trust in the healthcare system and preventive health behaviors.
Several limitations should be noted.The study population was matched and weighted to obtain a nationally representative sample, yet selection bias may still exist.The survey was conducted online, which may have created technology barriers for low-income individuals and individuals that live in rural places with broadband connectivity issues.The survey was also only conducted in English and Spanish (Hispanic/Latino only).Given that limited English proficiency has been associated with lower education and income [37,38], excluding adults that do not speak English may have resulted in a sample population with a higher level of education, higher income, and better access to healthcare.Given that less education has been associated with greater healthcare distrust and COVID-19 vaccine hesitancy, the associations between trust in the U.S. healthcare system and willingness to test for and receive the COVID-19 vaccine may have been stronger in the present study sample than in the full U.S. population.This limitation may have especially impacted results for Asian participants given that an estimated 31.9% of Asian adults in the U.S. have limited English proficiency [39].Additionally, the study was cross-sectional which limited the ability to make causal inferences as well as examine possible changes in behavioral intentions throughout the pandemic.For example, this survey was administered shortly after the Pfizer and Moderna COVID-19 vaccines were approved for emergency use authorization.Willingness to vaccinate may have increased over the study period as more people were allowed to receive the vaccine, people saw others get vaccinated, or vaccination became mandated.Moreover, trust in the U.S. healthcare system may not have been fully captured as it was measured using a single survey question.Future studies could consider additional measures of trust (e.g., trust in healthcare providers and specific COVID-19 health services).
Furthermore, because inferring causality was not feasible, several obstacles impede the determination of causal relationships. Amidst the societal and political conflicts arising from divergent perspectives on COVID-19, COVID-19 testing, and COVID-19 vaccination, it is challenging to discern whether trust in the U.S. healthcare system, as measured in the REACH-US study ("How often do you trust the healthcare system (e.g., doctors, nurses)?"), was cultivated or diminished before or after the pandemic. In essence, determining whether respondents began distrusting the healthcare system due to the pandemic introduces the potential obstacle of reverse causation.
Despite the complexities in causal inference, it is crucial to report associations for several reasons. Associations between trust in the U.S. healthcare system and willingness to test for and vaccinate against COVID-19 offer valuable insights that can inform decision-making across various domains, including healthcare, education, and public policy. Moreover, these associations can serve as a foundation for formulating hypotheses that can be tested to explore potential causal relationships between the variables. Furthermore, the identified associations in our study can pinpoint areas warranting deeper investigation, guiding researchers to study our specific variables more closely to establish causation. Given the significance of the COVID-19 topic and its potential implications for racial/ethnic inequities, reporting cross-sectional associations contributes to public awareness and communicates findings to a wide audience.
Despite these limitations, this study had several strengths. This study included a nationally representative study population of racially/ethnically diverse participants in the U.S. In addition, this study explored both the association between trust in the U.S. healthcare system and COVID-19 testing and the association with COVID-19 vaccination, which allowed for assessing trends for two types of COVID-19 preventive behaviors. Trust in the U.S. healthcare system may play a more important role in COVID-19 testing behaviors compared to receiving the COVID-19 vaccine. These findings are valuable to mitigation efforts of the COVID-19 pandemic, since testing and vaccination are two COVID-19-related preventive behaviors that differ in the degree of engagement in the healthcare system and degree of invasiveness (i.e., a nasal swab/saliva test versus injections). Furthermore, this study contributes to a growing body of literature on trust in the U.S. healthcare system and COVID-19 preventive behaviors, for which findings have been mixed [19,40].
This study has important implications for COVID-19 health services disparities.Lower trust in the U.S. healthcare system among many marginalized racial/ ethnic groups is likely attributed to systemic inequities related to COVID-19 outcomes, spread of misinformation, and changing guidelines by public health officials.Increasing trust in the U.S. healthcare system may be needed to encourage COVID-19 testing and vaccination among U.S. adults from marginalized racial/ethnic groups and may be especially impactful among Hispanic/ Latino adults.In addition, the relationship between trust in the U.S. healthcare system and willingness to receive the COVID-19 vaccine among American Indian/Alaska Native and Black/African American adults may require further investigation.Future studies should examine whether increasing trust in the U.S. healthcare system increases the likelihood of testing for and vaccinating against COVID-19 over time, thus impacting the spread and health burden of COVID-19.
Table 1
Study population characteristics across racial/ethnic groups (weighted)
Table 2
Trust in the U.S. healthcare system and willingness to test for COVID-19 and receive the COVID-19 vaccine
Notes: AI/AN = American Indian/Alaska Native; Black/AA = Black/African American; Hispanic/Latino ELP = Hispanic/Latino English Language Preference; Hispanic/Latino SLP = Hispanic/Latino Spanish Language Preference; NH/PI = Native Hawaiian/Pacific Islander. The high-risk chronic health condition covariate was created based on the February 2021 Centers for Disease Control (CDC) list of medical conditions that increase risk of severe illness due to COVID-19 (Kates, J., L. Dawson, and J. Tolbert. The Next Phase of Vaccine Distribution: High-Risk Medical Conditions. 2021; Available from: https://www.kff.org/policy-watch/the-next-phase-of-vaccine-distribution-high-risk-medicalconditions/). High-risk chronic health condition was coded as 1 if participants reported at least one medical condition that was considered high-risk according to the CDC. All categories of sociodemographic characteristics varied significantly between racial/ethnic groups (p < 0.05). Estimates weighted to be nationally representative within each racial/ethnic group. AOR = Adjusted Odds Ratio, adjusted for race/ethnicity, age, gender, annual household income, education level, political ideology, health insurance coverage, and high-risk chronic health condition; reference group = never trusting the U.S. healthcare system.
Table 3
Trust in the U.S. healthcare system and willingness to test for COVID-19 across racial/ethnic groups (multigroup analysis)
Table 4
Trust in the U.S. healthcare system and willingness to receive the COVID-19 vaccine across racial/ethnic groups (multigroup analysis)
|
v3-fos-license
|
2015-03-20T15:25:33.000Z
|
2007-06-28T00:00:00.000
|
11695889
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://jnrbm.biomedcentral.com/track/pdf/10.1186/1477-5751-6-7",
"pdf_hash": "1fa23d3763d9ceeefd2b9b607f18359f4d704c84",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:558",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "e04bcf0f67466b02445ca4b87184ac00f4a0e636",
"year": 2007
}
|
pes2o/s2orc
|
Vitamin D and oestrogen receptor polymorphisms in developmental dysplasia of the hip and primary protrusio acetabuli – A preliminary study
We investigated the association of developmental dysplasia of the hip (DDH) and primary protrusion acetabuli (PPA) with Vitamin D receptor polymorphisms Taq I and Fok I and oestrogen receptor polymorphisms Pvu II and Xba I. 45 patients with DDH and 20 patients with PPA were included in the study. Healthy controls (n = 101) aged 18–60 years were recruited from the same geographical area. The control subjects had a normal acetabular morphology based on a recent pelvic radiograph performed for an unrelated cause. DNA was obtained from all the subjects from peripheral blood. Genotype frequencies were compared in the three groups. The relationship between the genotype and morphology of the hip joint, severity of the disease, age at onset of disease and gender were examined. The oestrogen receptor Xba I wild-type genotype (XX, compared with Xx and xx combined) was more common in the DDH group (55.8%) than controls (37.9%), though this just failed to achieve statistical significance (p = 0.053, odds ratio = 2.1, 95% CI = 0.9–4.6). In the DDH group, homozygosity for the mutant Taq I Vitamin D receptor t allele was associated with higher acetabular index (Mann-Whitney U-test, p = 0.03). Pvu II pp oestrogen receptor genotype was associated with low centre edge angle (p = 0.07). This study suggests a possible correlation between gene polymorphism in the oestrogen and vitamin D receptors and susceptibility to, and severity of DDH. The Taq I vitamin D receptor polymorphisms may be associated with abnormal acetabular morphology leading to DDH while the Xba I oestrogen receptor XX genotype may be associated with increased risk of developing DDH. No such correlations were found in the group with PPA.
Background
Developmental Dysplasia of the hip (DDH) and Primary Protrusio Acetabuli (PPA) encompass the spectrum of acetabular development from a shallow acetabulum in DDH to a deep acetabulum in PPA. Both have, as yet, an indeterminate aetiology, variable clinical presentation, and result in early onset osteoarthritis of the hip [1]. A genetic aetiology has been proposed in DDH [2], while the aetiology of PPA is widely debated. Eppinger believed that PPA results from a failure of normal ossification of the tri-radiate cartilage [3]. The possible genetic nature of transmission of this disorder was noted by D'Arcy et al [4]. Idiopathic PPA may represent a hitherto unidentified metabolic defect.
Hypothetically, a small number of genomic polymorphisms may affect acetabular morphology and capsular laxity, producing a spectrum of morphology across the population, with major dysmorphism leading to clinically apparent disease.
Genetic variation in hormone-related genes may represent a possible significant determinant of risk or severity, especially when considering the proposed effect of joint laxity on DDH [5,6]. The human ESR1 gene is located on chromosome 6q25. It comprises eight exons separated by seven intronic regions and spans more than 140 kilobases. The most widely studied polymorphic regions are the Pvu II and Xba I restriction fragment length polymorphisms in intron 1 and the (TA) n variable number of tandem repeats (VNTR) within the promoter region of the gene. The ESR1 is a ligand-activated transcription factor composed of several domains important for hormone binding, DNA binding, and activation of transcription. Alternative splicing results in several ESR1 mRNA transcripts, which differ primarily in their 5' untranslated regions.
The VDR and the ESR1 genes are interesting because they encode proteins that are important transcription factors, acting as key players in the respective signal transduction pathways. Several interactions between the vitamin D and oestrogen endocrine systems have been described. 1,25-dihydroxyvitamin D3 (1,25-(OH)2D3) and 17β-estradiol (E2) have a mutual effect on their biosynthesis [19,20] and receptor expression [21]. Also, some genetic studies found an interaction between ESR1 and VDR genotypes with respect to bone density [22]. Suarez et al. found an interactive effect of ESR1 and VDR gene polymorphisms on growth in infants [23]. Although oestrogen receptor polymorphisms have been studied in relation to DDH [24], there is not yet definitive evidence of their role in causation. Also, to our knowledge this approach has not been used in the study of PPA. We therefore performed a study to identify possible associations between genetic polymorphisms at these loci and the presence of DDH and PPA. We also explored the effect of genotype on acetabular morphology and severity of the conditions.
Patients and methods
We recruited 45 patients with DDH and 20 patients with PPA. In all patients, a diagnosis of DDH or PPA was made on the basis of clinical and radiographic examination. A control group of 101 subjects (aged 18-60) was recruited from the same geographical region and the same ethnic group from the hospital radiology database. These subjects had normal hip joints on the basis of recently performed pelvic radiographs for unrelated causes. All patients and control subjects were Caucasian. The demographic data on the study population are shown in Table 1. All participants in this investigation were interviewed and examined to obtain clinical history, family history, and a peripheral blood sample through venipuncture. Informed written consent was obtained from all the subjects prior to their participation in the study. Ethics approval was obtained from the Local Research Ethics Committee.
Radiographic measurements
Pelvic radiographs were obtained, and radiographical variables (acetabular index and centre edge angle) of the hip joint were measured by a single individual using a uniform technique (BK). The researcher had been specially trained in radiographic measurement techniques, and reached an intra-observer variation of less than two degrees.
The centre-edge angle was measured as described by Wiberg, as the angle between a vertical line through the centre of the femoral head and a line from the centre of the head to the lateral edge of the acetabulum on antero-posterior radiographs of the pelvis [25]. The acetabular index was measured as described by Sharp, as the angle between the inter-tear drop line and the weight-bearing dome [26]. The radiographic data of the study population are presented in Table 2.
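In the study the angles were measured manually on radiographs by a trained observer. Purely as an illustration of the geometry, a hypothetical sketch of Wiberg's centre-edge angle computed from two digitised landmark points is shown below; the landmark names and coordinates are assumptions.

```python
# Hypothetical sketch: Wiberg's centre-edge angle from two landmark points digitised on an
# antero-posterior pelvic radiograph (image coordinates in pixels; names are illustrative).
import numpy as np

def centre_edge_angle(head_centre, lateral_edge):
    """Angle between the vertical through the femoral head centre and the line from the
    head centre to the lateral edge of the acetabulum, in degrees."""
    v_edge = np.asarray(lateral_edge, dtype=float) - np.asarray(head_centre, dtype=float)
    v_up = np.array([0.0, -1.0])  # image y-axis points down, so "up" is -y
    cos_a = np.dot(v_edge, v_up) / np.linalg.norm(v_edge)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: lateral acetabular edge slightly lateral and superior to the head centre
print(round(centre_edge_angle((100.0, 200.0), (130.0, 140.0)), 1))  # ~26.6 degrees
```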
Genotype assays
All genotype assays were performed at the Human Genomics Research Group, University Hospital of North Staffordshire by two individuals (BK, CD), and the results validated by an independent, blinded observer examining the agarose gels (AF). 10% of the assays were repeated and analysed by an independent observer. The PCR assays were performed with at least one known DNA genotype (positive control), one negative control (no DNA), and known molecular weight markers. At least 15% of the samples were re-assayed, and the relevant genotype confirmed. DNA was extracted from the peripheral blood samples collected into EDTA using the phenol-chloroform extraction method. PCR RFLP-based assays were performed to identify alleles containing Xba I and Pvu II polymorphisms on the oestrogen receptor 1 (ESR1). Taq I and Fok I polymorphisms on the vitamin D receptor gene or VDR were also analysed. The PCR products were then digested with their respective restriction enzymes and were examined after electrophoresis on 2% agarose gels. The wild-type X, P, T, F and mutant x, p, t and f alleles were identified by the expected fragment sizes following restriction enzyme digestion.
Statistical analysis
Data were analysed using Stata software (version 8, StataCorp, Texas, US). Differences in genotype frequencies of the ESR1 (Xba I, Pvu II) and VDR (Taq I, Fok I) polymorphisms were examined in the three groups using chi-square tests. Association of genotypes with acetabular index and centre edge angles was assessed using the Mann-Whitney U test.
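A minimal sketch of these comparisons, using illustrative counts and values rather than the study data, might look like the following.

```python
# Minimal sketch (illustrative numbers only): chi-square test of genotype frequencies across
# groups and a Mann-Whitney U test of acetabular index by genotype.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Rows: DDH, PPA, controls; columns: XX, Xx, xx genotype counts (made-up numbers)
table = np.array([
    [25, 15, 5],
    [8, 9, 3],
    [38, 45, 18],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Acetabular index (degrees) in tt homozygotes versus carriers of the wild-type T allele
tt_group = [46, 44, 48, 47, 45]
T_carriers = [41, 42, 40, 43, 39, 44]
u, p_mw = mannwhitneyu(tt_group, T_carriers, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_mw:.3f}")
```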
Results
Table 3 shows the genotype distribution for the four polymorphic sites. The oestrogen receptor Xba I wild-type genotype (XX, compared with Xx and xx combined) was more common in the DDH group (55.8%) than controls (37.9%), though this failed to achieve statistical significance after Bonferroni correction (p = 0.106). Similarly, the VDR Fok I ff genotype (compared with FF and Ff combined) was more common in DDH patients than controls but was not statistically significant (p = 0.18 after Bonferroni correction). No other significant associations were identified.
The most relevant radiographic variables of both these conditions (i.e. centre edge angle and acetabular index) on the affected and non-affected sides were also compared with genotype. In the DDH group, homozygosity for the Taq I Vitamin D receptor t allele was associated with a higher acetabular index on the affected side, as demonstrated using the Mann-Whitney U-test; however, this again was not statistically significant following Bonferroni correction (p = 0.06). In this group, the Pvu II pp oestrogen receptor genotype was associated with a low centre edge angle on the affected side, though this did not achieve statistical significance (p = 0.14). No other significant or near-significant associations were identified.
Discussion
Developmental dysplasia of the hip and primary protrusio acetabuli are two common developmental disorders of the hip joint, with significant associated morbidity. Efforts have been made over the last 40 years toward early identification of developmental hip dysplasia, as early correction of the anatomy produces a hip with a greater chance of lasting into late adulthood without major reconstructive surgery [27]. If detected early, secondary osteoarthritis can be partially prevented or at least delayed. Currently, screening for DDH in the UK is performed by clinical examination and ultrasound scanning of patients at risk. Blanket ultrasound screening has been proposed, but is not significantly better than at-risk or selective screening [28][29][30]. Therefore, it would be highly desirable to identify predictors of a high-risk population.
Aetiological factors for both DDH and PPA are obscure. There are definite pointers towards a genetic basis, but no concrete evidence to support it. Some recent studies have investigated the genetic basis of DDH. Granchi et al found an association between osteoarthritis secondary to developmental hip dysplasia and the vitamin D receptor polymorphism Bsm I. To our knowledge, there are no similar studies performed for PPA. PPA and DDH may actually represent two ends of a spectrum in the phenotypic outcome of a genotypic variation. This formed the basis of studying the same gene polymorphisms for these two conditions.
A detailed study of 589 index patients and 1897 first-degree relatives in the 1960s established a familial transmission in non-syndromic hip dysplasia [31]. There was significant shallowing of the acetabulum in parents of children with DDH, and a higher proportion of children with DDH and their first-degree relatives were lax-jointed. Based on this study, Wynne-Davies proposed two different gene systems, one affecting joint laxity and the other affecting the shape of the acetabulum, to be responsible for the causation of DDH. Carter and Wilkinson reported an increased incidence of joint laxity with DDH in 1964 [32].
With the recent advances in genetic techniques, there has been renewed interest in exploring the inheritance of this disorder. Solazzo et al performed a complex segregational analysis on 171 pedigrees collected through probands affected by non-syndromic DDH and reiterated a two-locus theory [33], against the previous hypothesis that disease inheritance in familial non-syndromic DDH is polygenic.
The oestrogen receptor Xba I wild-type genotype (XX, compared with Xx and xx combined) was more common in the DDH group (55.8%) than controls (37.9%).
Though this failed to achieve significance, it may warrant further investigation. In the DDH group, homozygosity for the mutant Taq I Vitamin D receptor t allele was associated with a higher acetabular index. This may represent an important aetiological association with DDH. Similarly, the Pvu II pp oestrogen receptor genotype was associated with a low centre edge angle. Though this did not reach significance, it may represent an association with the severity of DDH. Our results are indicative, rather than conclusive, of an association between developmental dysplasia of the hip and oestrogen and vitamin D receptor polymorphisms in the studied population groups. Our study population was kept homogeneous and is representative of the Caucasian population in a well-defined region of the UK. A disadvantage of this approach was that the total number of patients recruited was low. However, the total number of cases compares well with other recent genetic studies on DDH. We are aware of much larger case series of patients with DDH from tertiary referral centres; however, these have not been characterised in genetic studies. Indeed, population homogeneity would be an issue in larger series due to the current known prevalence of DDH and PPA in our population.
The present study has shown possible genetic associations between DDH and vitamin D and oestrogen receptor polymorphisms. Further work with a larger series of patients and possibly more candidate gene polymorphisms may well shed more light on these associations. We hope that the genetic associations identified in the present study may lead to more accurate means to identify at-risk populations. The associations, whether positive or negative, may help us to understand the mode of transmission of this condition.
|
v3-fos-license
|
2023-11-08T05:05:31.818Z
|
2023-11-01T00:00:00.000
|
265040341
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "833af628d1c392953eca5e82460fa52d74cd9199",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:561",
"s2fieldsofstudy": [
"Computer Science",
"Education",
"Medicine"
],
"sha1": "833af628d1c392953eca5e82460fa52d74cd9199",
"year": 2023
}
|
pes2o/s2orc
|
Flipped Classroom With Artificial Intelligence: Educational Effectiveness of Combining Voice-Over Presentations and AI
Background: Most theorists and medical educators agree that a curriculum rich in active learning (AL) strategies, such as a flipped classroom, is superior to passive listening for promoting better retention and application of new knowledge. Although AL multimodal teaching strategies have been considered the most effective, including online virtual teaching, voice-over pre-recorded lectures, and, more recently, the addition of artificial intelligence (AI), data on the effectiveness of these methods in medical education are scarce. The present educational research study examined the effectiveness of voice-over-style lectures and AI in facilitating learning outcomes as assessed by test scores after participating in basic science lectures in a medical school setting. Methods: Participating students were divided equally into two educational strategy groups: slide decks only in the traditional way (PPT) or PPT plus AI (PPT+AI) platform (edYOU; Los Angeles, CA, USA). The PPT+AI group comprised the PPT with narration and real-time interaction with a personalized AI being, which leverages natural language processing to tailor customized conversations to each student's current knowledge. Students in the two groups were asked to participate in a formative quiz (not reflective of their academic evaluations) to answer questions relevant to the voice-over lectures (PPT and PPT+AI). The statistical strategy for conducting quiz item analysis included item difficulty, item discrimination, and point-biserial correlation R. A Student's t-test was conducted to compare the two strategies' effectiveness via test scores. A priori, an alpha level of 0.05 was considered significant. Results: Data are presented as mean ± s.e.m. and Cohen's d. A total of 42 (n=21 in each group) students participated in the study. Students using PPT+AI obtained statistically significantly (P < 0.043; d = .54) higher quiz scores on challenging questions and spent less time in lectures (54.1 ± 14.3 min; P < 0.001; d = 1.17) compared with the PPT group. Conclusions: The PPT+AI strategy could be the difference between a pass and a fail, as the PPT+AI strategy is particularly efficient in improving test scores on difficult questions. At the same time, students may learn the material in less time (efficiency). Research on the application of AI as part of educational strategies for improving standardized test scores, including boards, is warranted. The present study is part of the necessary early steps to better understand the impact of AI as an educational strategy for improving educational outcomes.
Introduction
Although multimodal teaching strategies have been considered the most effective, most theorists and medical educators agree that a curriculum rich in active learning (AL) strategies is superior to passively listening to the information in lectures in promoting better retention and application of new knowledge [1]. AL comprises various learning activities such as flipped classroom, think-pair-share, turn and talk, and bulleted breaks during lectures requiring learners to construct, understand, and comprehend the knowledge derived from their educational experience "while simultaneously improving knowledge gain and recall abilities" [1,2]. First introduced in the late 90s, the flipped classroom educational strategy has been accentuated since, owing to its proven effectiveness in educational settings. Characterized by using allocated didactic time for active learning, the flipped classroom is ideal for focusing on developing the application of material, a better understanding of concepts, and improving standardized testing scores. For this reason, it has been recently recognized that adopting a multimodal approach when selecting instructional methods is the best strategy to increase the chances of success of the educational activity [3,4]. Hence, educational research suggests that the best modern approach to teaching is to combine multiple pedagogical resources that complement one another, since students learn more effectively when multimodal and system-based approaches are integrated or supplement each other [5]. Although artificial intelligence (AI) was first developed in 1955, the use of AI applications has increased exponentially over the past few years, a revolution that is posited to add tools to enhance medical education [6]. Educational research suggests that AI's primary uses in medical education are learning support, assessment of students' learning, and, to a minimal degree, curriculum review [6]. Providing constant (24/7) feedback and a guided learning pathway might be possible by adding AI technologies as part of the educational tools. Subgroup analysis revealed that medical undergraduates are one of the primary target audiences for AI use. However, educational research to assess the usefulness of this technology is lacking. Accordingly, the goal of the present educational research study was to test the effectiveness of both voice-over style lectures and AI technology to facilitate learning. It was hypothesized that adding AI to the flipped classroom would improve academic performance.
Study population
The
Design
Participating students were randomly assigned to two educational strategy groups: slide decks only in the traditional way (PPT) or PPT plus AI (PPT+AI) platforms (edYOU; Los Angeles, CA, USA). Randomization and matching were performed by someone not associated with assessing the students, using a computer-generated random number table (with a 20% random element) and an allocation ratio of 1:1, as previously described [7]. The PPT+AI group comprised the PPT with narration and real-time interaction with an AI being. Students in the two groups were asked to participate in a formative quiz (a 15-item multiple-choice instrument not reflective of their academic evaluations) and answer usability-related questions relevant to the voice-over lectures (PPT and PPT+AI). In addition, surveys were administered electronically to gauge feedback and general satisfaction or dissatisfaction with the PPT and PPT+AI sessions. The survey included questions to account for confounding variables, such as attention paid while using voice-overs or use of outside resources for the content in question. The surveys were administered after the students had experienced PPT or PPT+AI. In the present study, the learning outcomes were divided into theoretical (quiz scores) and satisfaction (usability of the learning platform) contexts.
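For illustration only, a basic permuted 1:1 assignment is sketched below; this is not the exact allocation procedure (e.g., the matching and 20% random element) described in [7].

```python
# Simplified sketch: permuted 1:1 random assignment of 42 students to the two strategies.
import random

random.seed(42)
students = [f"S{i:02d}" for i in range(1, 43)]  # 42 hypothetical participant IDs

arms = ["PPT"] * 21 + ["PPT+AI"] * 21
random.shuffle(arms)
allocation = dict(zip(students, arms))
print(sum(arm == "PPT+AI" for arm in allocation.values()), "assigned to PPT+AI")
```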
Artificial intelligence platform
The AI learning experience was delivered by proprietary technologies designed to be personal, ethical, and educationally effective (edYOU; Los Angeles, CA, USA). Using a personalized ingestion engine (PIE), the platform curates diverse learning materials from expert sources worldwide. A personalized AI (PAI) then leverages natural language processing to tailor customized conversations to each student's current knowledge. In addition, an intelligent curation engine ensures that the AI interacts safely using techniques like content flagging, toxicity blocking, and data verification. This combination enables AI tutors on the platform to build long-term mentoring relationships by adapting to each learner's evolving needs.
Quiz item analysis
The statistical strategy for conducting item analysis included item difficulty, item discrimination (ID), and point-biserial correlation R (PBI) [7]. Briefly, item difficulty was calculated as the proportion of all learners answering the item correctly; the target for this metric was 50%-70% (acceptably difficult questions). Item discrimination was the difference between the proportion of the upper quartile (by total score) answering the item correctly and the proportion of the lower quartile answering it correctly; the main goal was to avoid negative scores (invalid questions). Lastly, the PBI was computed as the correlation between item and total test scores; the desired target was over 0.20. Items not meeting the criteria described above were considered non-valid and hence nullified.
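A minimal sketch of these item metrics, computed on a made-up response matrix, is shown below.

```python
# Illustrative sketch of the item metrics described above, computed from a small made-up
# response matrix (rows = students, columns = items; 1 = correct, 0 = incorrect).
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(42, 15))
total = responses.sum(axis=1)

for j in range(responses.shape[1]):
    item = responses[:, j]

    # Item difficulty: proportion answering correctly (target 50%-70%)
    difficulty = item.mean()

    # Item discrimination: upper-quartile correct rate minus lower-quartile correct rate
    upper = item[total >= np.percentile(total, 75)].mean()
    lower = item[total <= np.percentile(total, 25)].mean()
    discrimination = upper - lower

    # Point-biserial correlation between item score and total score (target > 0.20)
    pbi, _ = pointbiserialr(item, total)

    valid = (0.5 <= difficulty <= 0.7) and (discrimination > 0) and (pbi > 0.20)
    print(f"item {j + 1:2d}: P={difficulty:.2f}  D={discrimination:+.2f}  r_pb={pbi:.2f}  valid={valid}")
```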
Statistical analysis
Student's t-test and Cohen's d ([M2 - M1]/SDpooled) were used to compare the effectiveness of the different educational strategies (PPT vs. PPT+AI) and to estimate the effect size, respectively. A priori, an alpha level of 0.05 was considered significant. The interpretation of effect sizes as small (d = 0.2), medium (d = 0.5), and large (d = 0.8) was based on benchmarks suggested by Cohen [8].
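For example, the t-test and pooled-SD Cohen's d can be computed as follows (made-up scores, not the study data).

```python
# Minimal sketch: independent-samples t-test and Cohen's d with the pooled standard
# deviation, mirroring the comparison of quiz scores between the two strategies.
import numpy as np
from scipy.stats import ttest_ind

ppt = np.array([62, 70, 65, 74, 68, 71, 66, 73, 69, 64])
ppt_ai = np.array([75, 72, 80, 78, 74, 83, 77, 79, 76, 81])

t, p = ttest_ind(ppt_ai, ppt)

sd_pooled = np.sqrt(((len(ppt) - 1) * ppt.std(ddof=1) ** 2 +
                     (len(ppt_ai) - 1) * ppt_ai.std(ddof=1) ** 2) /
                    (len(ppt) + len(ppt_ai) - 2))
d = (ppt_ai.mean() - ppt.mean()) / sd_pooled  # (M2 - M1) / SD_pooled

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```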
Results
Usability surveys
All of the students in the PPT+AI group were naive to using an AI educational platform. Most of the students in the PPT+AI group found that the application was helpful, with interactions generally accurate in the context of their questions (~60%). User satisfaction and usability were 4.11 and 4.33, respectively, on a scale from 1 (low) to 7 (high).
Quiz scores
Data are presented as mean ± s.e.m. and Cohen's d. A total of 42 (n=21 in each group) students aged 18-28 participated in the study. Students using PPT+AI obtained statistically significantly (P < 0.043; d = .54) higher quiz scores (13.6 ± 7.7%; Figure 1) on challenging questions, and spent statistically significantly (P < 0.001; d = 1.17) less time in lectures (54.1 ± 14.3 min; Figure 2) compared with the PPT group. While the students in the PPT+AI group, on average, scored a passing grade on the quiz (75.2 ± 3.3%), there was no statistically significant difference (P < 0.062; d = .48) compared with the PPT group (68.3 ± 2.9%) (Figure 1). A similar trend in score differences was observed when only the valid questions per item analysis were considered, such that the PPT+AI score was not statistically significantly higher (7.1 ± 6.2%; P < 0.127; d = .36).
FIGURE 1: Test scores comparison between the two educational strategies.
Note: PPT, slide decks only, is the traditional way; AI, artificial intelligence platform.
FIGURE 2: Time spent in lectures.
Note: PPT, slide decks only, is the traditional way; AI, artificial intelligence platform.
Discussion
The present educational research examined the effectiveness of voice-over-style lectures combined with AI technology to facilitate learning in basic science courses. The PPT+AI strategy was superior to the traditional PPT for improving academic performance in medical students undertaking basic sciences courses. AI applications may provide the means to promote a more engaging teaching and educational environment conducive to improving academic performance.
Research suggests that AI's primary uses in medical education are learning support, assessment of students' learning, and, to a minimal degree, curriculum review [6].The use of artificial intelligence (AI) in medical education has the potential to facilitate what would be complicated tasks and improve overall teaching efficiency [9].For example, AI could help automate written response assessment, or provide reliable feedback on medical management plans and imaging, provide constant (24/7) feedback, and provide an improved guided learning pathway, among other applications.In addition, AI education-driven platforms may prepare learners one-on-one in a simulated classroom or clinical setting to study and prepare them for tests, clinical encounters, and specific clinical skills.
The effectiveness of AI as part of educational strategies to improve academic performance has been recently highlighted in a systematic review conducted by Varma et al. (2023) [10]. While AI applications in medical education are categorized into teaching, assessing, and trend spotting, the review demonstrated that AI is a feasible supplement to undergraduate medical curricula. Interestingly, studies directly comparing AI to current teaching methods have documented favorable results [11,12]. Moreover, AI-enhanced lectures have increased positive feedback by 18%-21% in pre-clinical microbiology [11]. Similarly, a computer tutoring program (CIRCSIM-Tutor) designed to conduct a natural language dialogue with a medical student demonstrated that significant learning occurs during a 1-hour interaction with the program [12]. In their study, Michael et al. used a pretest/post-test assessment strategy to test the program's educational efficacy, while students indicated considerable satisfaction with CIRCSIM-Tutor [12]. Taken together, the results of these prior studies suggest that interactive strategies, including AI, can seamlessly be added to teaching and lecture delivery to improve academic performance and satisfaction, which is in line with the present study's main observations.
As with any study, the present educational research work is not exempt from limitations. First, the limited sample size affected statistical power, as total test scores and valid-question scores were not statistically different despite moderate effect sizes (d = .36-.48). It is worth mentioning that the students in the PPT+AI group scored passing grades. Another limitation is that the study did not use a pretest-posttest paradigm, which could be a superior design to demonstrate educational strategy efficacy. Additionally, the lectures were limited to basic science and not clinical content, limiting generalizability. Lastly, in this proof-of-concept study, the machine learning capabilities of the AI program were ultimately not used, as this was the first cohort of students using the system. Hence, prospective studies and future cohorts of students would benefit from faster reaction times from the AI, better interactions, and more direct answers to users' doubts or questions about the main concepts studied.
Conclusions
In sum, multimodal educational strategies involving AL with the addition of AI were superior to the traditional PPT for improving academic performance in medical students. Many methodological improvements in teaching could be obtained through AI adoption, in both the medical profession and medical education. AI applications may provide the means to promote a more engaging teaching and educational environment as part of AL strategies. Educational research aimed at introducing AI into the medical school curriculum, so that medical professionals better understand AI algorithms and maximize their use in medical education, is warranted.
Disclosures: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Intellectual property info: The AI learning experience was delivered by proprietary technologies designed to be personal, ethical, and educationally effective (edYOU; Los Angeles, CA, USA). Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
v3-fos-license
|
2018-03-09T13:33:02.989Z
|
2014-07-15T00:00:00.000
|
3740346
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ccsenet.org/journal/index.php/jas/article/download/33581/21451",
"pdf_hash": "2d73ad115615dc7a115cd80e7f9cce9a3ed66790",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:572",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "f419bf8de915492aa76a374f327eaa5f1671eab2",
"year": 2014
}
|
pes2o/s2orc
|
Effects of Sample Storage on Spectral Reflectance Changes in Corn Leaves Excised From the Field
Characterization of leaf spectra is useful for estimating spectra at the canopy level when viewed from airborne or space-borne sensors. When excising and transporting leaves to the laboratory, care must be taken so that degradation does not take place and alter the spectral signature. We compared the effect of elapsed time on leaf reflectance when excised corn leaves were stored inside and outside a cooler. Hyperspectral measurements were obtained 15 minutes after excision, then again 1, 2, 3, 4, 5, 6, and 24 hours after excision. Each hyperspectral band was modeled independently using a piecewise function with a linear portion for the first hour after plant excision and exponential portion after the first hour. Results showed that for the first hour, storage technique did not influence the signature. After the first hour, the leaves stored outside the cooler showed less change than leaves stored in the cooler. Furthermore, after approximately 30 minutes a large percentage of hyperspectral bands drifted beyond the level of significance (as determined by the mean plus or minus two standard deviations). These findings are valuable for developing methods for storage and analysis to support field studies and collection of ground-truth data to support remote sensing.
Introduction
Precision agriculture seeks to reduce costs by precisely controlling the agricultural inputs at subfield spatial resolution.This requires techniques that can efficiently determine the needs (water, fertilizers, and pesticides) of plants and precisely provide for them (Yao et al., 2012).Often, remotely sensing the needs of the plants is accomplished using optical sensors because of their ability to collect useful data on several plants within a relatively short period.
Hyperspectral sensors work by subdividing the optical portion of the electromagnetic spectrum, which includes ultraviolet, visible, and infrared, into many contiguous narrow bandwidth channels, or "bands."Since molecules and atoms absorb and reflect light at particular wavelengths based on their chemical formula, hyperspectral sensors can theoretically differentiate the structural and chemical composition of surfaces within their field of view.The capabilities of hyperspectral sensors degrade when they are used in remote sensing mode because of interference caused by the atmosphere, which the light interacts with as it traverses between the source (typically, the sun in remote sensing), object, and sensor (Gao et al., 2009).To minimize this interference, it may be useful to move the object to a laboratory setting, where the light source and sensor are closer to the object.In the case of plants, this typically involves excising samples such as leaves, stalks, branches, or stems from the plants because it is not feasible to transport intact plants to the laboratory.
Once a sample (such as a leaf or stalk) is excised from a plant, the sample experiences progressively increasing stress.The stress is present because the source of nutrition and moisture is cut off from the sample.Stress typically causes detectable changes in the reflectance spectrum of the samples, and these changes are often used to diagnose stress in crops caused by water, nitrogen, herbicides, or pesticides (Barnes et al., 2000).The presence of detectable stress in the samples could lead to incorrect conclusions about crops in the field.Thus, it is important to know how much time can be allowed to elapse after excision before the stress levels distort readings and thus prevent the samples from properly characterizing crops in the field.
Previous studies have attempted to determine how much time should be allowed to elapse between excision of samples and measurement of their hyperspectral reflectance.However, most studies that the authors are aware of focused on common commercially grown plants.Thomasson and Sui (2009) modeled the spectrum of wilting cotton leaves using the SAS PROC MIXED procedure (SAS/STAT 9.2 User's Guide, Online), using a linear model.The authors rejected the hypothesis that the hyperspectral curve was significantly correlated with time.However if the spectral change is non-linear, it may be highly correlated with time.Lee et al. (2014) evaluated cotton and soybean leaves for two storage techniques: (1) storage of leaves inside paper bags placed inside a cooler with ice and (2) storage of the leaves inside paper, closable bags left outside a cooler.The study showed that leaves inside the cooler were preserved slightly better over long periods than the leaves outside the cooler.Over a short period, however, preservation was similar to leaves left outside the cooler.In both techniques, enough change to prevent the leaf samples from characterizing the crops was observed, so the long term advantage was not useful.Foley et al. (2006) preserved the leaves of common guava (Psidium guajava), purple guava (Psidium littorale), weeping fig (Ficus benjamina), floss silk (Chorisia speciosa), and coffee (Coffea arabica) by wrapping moist gauze around the petiole.The sample size used for each plant was one leaf for control and one leaf for treatment, but the study showed that response of leaves from different plant species varied significantly.The leaves also retained their general spectral curve when preserved with a moist paper towel wrapped around the petiole and placed in plastic bags better than when no preservation technique was used.Summy et al. (2011) examined many different storage combinations of giant reed (Arundo donax) involving different types of bags and refrigeration.The authors concluded that for most field campaigns where hyperspectral measurements are typically collected 24-48 hours after excision, storage within closable containers inside cooled ice chests is suitable for preserving leaf samples.However, the threshold of significant change was extremely wide at approximately four standard deviations.Such a threshold of significance may not be appropriate in difficult classification problems.Combined with the studies from Lee et al. (2014) and Foley et al. (2006), the research conducted by Summy et al. (2011) further reinforces the need to study spectral signatures of samples after they are excised, since every plant species has a different response.The objective of this research is to model the spectrum of corn leaves as a function of storage duration to provide guidance for field sample studies performed on corn crops.
Experiment and Leaf Samples
In the experiment, ten corn leaves were excised from plants in the farm of the USDA-ARS Crop Production Systems Research Unit located in Stoneville, MS, USA, and transported to a nearby lab.Five of the leaves were placed in paper bags and stored in a cooler at a measured temperature of 17.2 degrees Celsius.The cooler was kept cold using ice, with several layers of masking paper separating the leaf samples from the ice.The remaining five leaves were placed in similar paper bags not stored in a cooler.The ambient temperature outside the cooler was 22.9 degrees Celsius.The leaves were imaged using a hyperspectral camera (described herein) 15 minutes, 1 hour, 2 hours, 3 hours, 4 hours, 5 hours, 6 hours, and 24 hours after excision.After being imaged in the lab, the leaves were immediately placed back in their respective storage.
Hyperspectral Imaging
The leaves were imaged using a custom hyperspectral camera with an effective spectral range of 400-900 nm and 240 spectral bands. The camera is a 14-bit PCO1600 CCD (charge-coupled device) high resolution camera (Cooke Corporation, Romulus, MI, USA) that was integrated with an ImSpector V10E spectrograph (Spectral Imaging Ltd., Oulu, Finland) with a 30 µm entrance slit and 23 mm Schneider lens. White and dark references were measured before data were collected so that reflectance images could be computed from images of the leaves. The white reference was measured using a Spectralon panel (Labsphere Inc., North Sutton, NH, USA), and the dark reference was measured by placing the lens cap on the camera. Reflectance (R) was computed for each pixel using the formula below:

R(λ) = (DN_P(λ) - DC(λ)) / (DN_R(λ) - DC(λ))

The central wavelength of the hyperspectral band is λ, the digital numbers for the pixel and white reference are represented by DN_P and DN_R, respectively, and the dark current is DC. The dark current and white references were collected once per day.
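A minimal sketch of this per-pixel calibration, using synthetic arrays in place of the actual raw, white-reference, and dark-current frames, is shown below.

```python
# Sketch of the per-pixel reflectance calibration (synthetic arrays stand in for the raw
# hyperspectral frame, the Spectralon white reference, and the dark-current frame).
import numpy as np

rows, cols, bands = 100, 100, 240
rng = np.random.default_rng(0)

dark = rng.normal(100, 2, size=(rows, cols, bands))      # DC: lens cap on
white = rng.normal(3500, 20, size=(rows, cols, bands))   # DN_R: Spectralon panel
frame = rng.normal(1800, 50, size=(rows, cols, bands))   # DN_P: leaf image

# R(lambda) = (DN_P - DC) / (DN_R - DC), computed band by band for every pixel
reflectance = (frame - dark) / (white - dark)
print(reflectance.mean().round(3))  # roughly 0.5 for these synthetic numbers
```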
Image Processing
In order to isolate spectral curves of the corn leaves, the leaves were first segmented from the background. Segmentation was accomplished by thresholding the Normalized Difference Vegetation Index (NDVI), a widely used parameter to represent plant vigor from remotely sensed data (Rouse et al., 1974). This produced a rough segmentation, which was cleaned up by visually comparing the segmentation to the image and manually correcting mislabeled pixels. The segmentation result was then used to identify pixels that were part of the leaf, which were then used to compute a mean spectral signature for the leaf. The spectral curves were then normalized by dividing by the value at 450 nm (Thomasson & Sui, 2009). The mean spectral curves for the corn leaves stored in the cooler and outside the cooler are shown in Figure 1.
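A simplified sketch of this segmentation and spectrum-extraction pipeline is shown below; the band indices for the red, near-infrared, and 450 nm channels, the NDVI threshold, and the synthetic cube are assumptions for illustration.

```python
# Sketch of the leaf segmentation and mean-spectrum extraction described above.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.uniform(0.05, 0.9, size=(100, 100, 240))  # calibrated reflectance cube

RED, NIR = 129, 191   # hypothetical band indices near 670 nm and 800 nm
ndvi = (cube[:, :, NIR] - cube[:, :, RED]) / (cube[:, :, NIR] + cube[:, :, RED] + 1e-9)

leaf_mask = ndvi > 0.3                 # rough threshold; refined manually in the study
leaf_pixels = cube[leaf_mask]          # shape (n_leaf_pixels, 240)
mean_spectrum = leaf_pixels.mean(axis=0)

BAND_450 = 24                          # hypothetical index of the 450 nm band
normalized = mean_spectrum / mean_spectrum[BAND_450]
print(normalized.shape, round(normalized[BAND_450], 2))  # (240,) 1.0
```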
Figure 1. Mean normalized reflectance signatures (n=5) of corn leaves after various storage intervals: (A) stored inside a cooler; (B) stored in a room outside the cooler. The signatures were normalized by dividing by the value at 450 nm.
Data Modeling and Analysis
The goal of the analysis was to estimate the time needed for each hyperspectral band to drift two standard deviations from its mean at first measurement (15 minutes after excision). When two classes have identical standard deviations and the means of both classes are separated by two standard deviations, the band will have an area under the receiver operating characteristic of 0.922 (Green & Swets, 1966), a Bhattacharyya distance of 0.500 (Bhattacharyya, 1943), and result in a classification accuracy of 84% using nearest mean classification (Duda et al., 2001) without any additional bands or features. Based on experience, classification problems with bands that have less separation are often difficult and require advanced classifiers (such as artificial neural networks and support vector machines) (Duda et al., 2001), while problems with bands of equal or better separation often produce very good classification results using simple techniques.
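These separation figures can be checked directly for two equal-variance normal classes whose means differ by two standard deviations, for example:

```python
# Quick check of the separation figures quoted above for two equal-variance normal classes
# whose means differ by two standard deviations.
from scipy.stats import norm

delta = 2.0  # mean separation in units of the common standard deviation

auc = norm.cdf(delta / 2 ** 0.5)      # area under the ROC curve
bhattacharyya = delta ** 2 / 8.0      # Bhattacharyya distance (equal covariances)
accuracy = norm.cdf(delta / 2.0)      # nearest-mean classification accuracy

print(f"AUC = {auc:.3f}, B-distance = {bhattacharyya:.3f}, accuracy = {accuracy:.2%}")
# AUC ≈ 0.921, B-distance = 0.500, accuracy ≈ 84%
```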
After normalized spectral signatures were obtained for each leaf at every time interval, analysis began by fitting the change in each hyperspectral band with respect to time to an appropriate model.Exponential models accurately described the change in each band for cotton and soybean leaves in a previous study (Lee et al., 2014).
The corn leaves, however, consistently showed a spike in change occurring approximately one hour after excision, dissipating by two hours after excision (Figure 2). This spike prevented the change from fitting an exponential model well for most of the hyperspectral bands (as determined using a T-test) (Montgomery & Runger, 2003). However, when the data obtained one hour after excision are omitted, the remaining data do fit an exponential model. This situation led to the use of both a linear model (to model the change from 15 minutes to 1 hour) and an exponential model (to model the remaining time).

Figure 2. Plot of average reflectance with respect to time for the hyperspectral band centered at 468.5 nm. The average reflectance at 15 min., 2 hr., 3 hr., 4 hr., 5 hr., 6 hr., and 24 hr. is a good fit to the exponential curve (blue line), but the average reflectance at 1 hr. does not fit well. The two standard deviation range is between the black dashed lines.

The linear model was fit to the data points recorded 15 minutes and 1 hour after excision. Since only two measurement times were considered here, fitting the linear model only requires fitting a line through the means of both measurements. This is described by the equation

NR(t) = NR_0.25 + ((NR_1 - NR_0.25) / 0.75)(t - 0.25), for 0.25 ≤ t ≤ 1,

where t is the time in hours, and NR_1 and NR_0.25 are the average normalized reflectance at 1 hour and 15 minutes (0.25 hours). Each normalized hyperspectral band (NR) was modeled independently of other bands. In the analysis, we were only concerned with the amount of time needed to drift outside the plus or minus two standard deviation range. This means we did not need to model the "back end" of the spike. It would be safe to assume that the spike dissipates linearly or in any overall decreasing manner.
The exponential model is a three-parameter exponential function of time, where NR is the normalized reflectance for a single hyperspectral band, t is the time in hours, and c1, c2, and c3 are the model parameters. The parameters were chosen with non-linear least squares using data from 15 minutes, 2 hours, 3 hours, 4 hours, 5 hours, 6 hours, and 24 hours (1 hour was omitted). The accuracy of model parameters was evaluated using an F-test (Montgomery & Runger, 2003).
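A sketch of the band-wise fitting and threshold-crossing calculation is shown below; the specific exponential form NR(t) = c1 + c2·exp(c3·t), the band values, and the standard deviation are illustrative assumptions rather than the paper's exact parameterisation and data.

```python
# Sketch of band-wise model fitting and critical-time estimation, assuming an exponential of
# the form NR(t) = c1 + c2*exp(c3*t); values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, c1, c2, c3):
    return c1 + c2 * np.exp(c3 * t)

# Times in hours (the 1-hour point is omitted from the exponential fit, as described above)
t_fit = np.array([0.25, 2, 3, 4, 5, 6, 24], dtype=float)
nr_fit = np.array([1.00, 0.97, 0.95, 0.94, 0.93, 0.92, 0.88])  # made-up normalized band values

params, _ = curve_fit(exp_model, t_fit, nr_fit, p0=[0.9, 0.1, -0.5], maxfev=10000)

# Time at which the fitted curve first drifts two standard deviations from the 15-minute mean
sd_15min = 0.02                                  # illustrative standard deviation
lower, upper = nr_fit[0] - 2 * sd_15min, nr_fit[0] + 2 * sd_15min
t_grid = np.linspace(0.25, 24, 2000)
outside = (exp_model(t_grid, *params) < lower) | (exp_model(t_grid, *params) > upper)
critical_time = t_grid[outside][0] if outside.any() else np.inf
print(f"fitted c1, c2, c3 = {np.round(params, 3)}; critical time ≈ {critical_time:.2f} h")
```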
The time required for the sample to drift outside the two standard deviation range was estimated by finding the intersection point between the model and the values determined by the 15 minute mean plus or minus two standard deviations.The critical time was determined by the amount of time elapsed between excision and the first time the model crossed this threshold.There are two equations in the model (the linear model for the interval from 15 minutes to 1 hour and the exponential model from 15 minutes to 24 hours without 1 hour).If the linear equation drifts outside the critical range within the first hour, the time the exponential equation drifts outside the normal range is insignificant since the linear equation will typically reach the threshold sooner.There is a possibility of bands deviating more than two standard deviations only to later return to the critical range.This scenario is not considered by the analysis since it is unlikely that all the bands will return to critical range at the same time.It is important to note that the model will only cross one critical value because the equations for the models are monotonic.
Results and Discussion
The data show that corn leaves cannot be described by an exponential model alone.This is apparent when examining the mean plots of the hyperspectral data for each time (Figure 1) because at 1 hour, the spectral signature is visually different in shape from any other time and because the T-test for the exponential model fails at 1 hour (Figure 3).Initially instrument error was the suspected cause, but this is not likely to have occurred because data collected after and before 1 hour after excision fits the exponential model also shown to model cotton and soybeans well (Lee et al., 2014).For the problem to be instrument error, the error would have to be present only for the 10 measurements at 1 hour, and no other time.This seems improbable since the anomaly was present in only part of the spectrum, laboratory conditions were consistent throughout the data collection process, and the same calibration was used at each imaging time.Furthermore, the F-test shows that the exponential model fits the data well if 1-hour data is removed (Figure 4).
Figure 3. T-test for the exponential model using data collected at 1 hour after excision. The test uses a two-sided 95% confidence level. Corn leaves in panel A were stored inside a cooler, while the corn leaves in panel B were stored in a room outside the cooler.

Figure 4. F-test for the exponential model using all of the corn data except that collected at 1 hour after excision. The confidence level for this test was 97.5%. The leaves in panel A were stored inside a cooler, while the leaves in panel B were stored in a room outside the cooler.

Out of a total of 175 bands, the number of bands that eventually drifted outside the normal range between 15 minutes and 24 hours was 148 for leaves stored in the cooler and 114 for leaves stored outside the cooler. Among the bands that drifted outside the normal range when stored in the cooler, 95 drifted outside the normal range within the first hour. For the leaves stored outside the cooler, 96 bands drifted outside the normal range within the first hour. Thus, both storage techniques performed virtually identically within the first hour. This observation is further emphasized by the plot of the drift time per band (Figure 5) and the cumulative results (Figure 6). Most of the hyperspectral bands drift outside their normal ranges more than 30 minutes after excision. After the first hour there is an advantage for leaves preserved outside the cooler. The majority of the difference is accounted for in the spectral range between 530–710 nm. This region of the spectrum is affected by photosynthesis in the plant, which may explain why placing leaves in a dark cooler affects reflectance more than leaving the leaves outside the cooler, where light is present.
Conclusion
This study shows that the exponential model used in previous research for soybean and cotton leaves does not fit the spectral decay of corn leaves well, because the spectrum between 15 minutes and 1 hour after excision exhibits a spike in change that dissipates by 2 hours after excision. Corn leaves can instead be modeled using a combination of a linear model and an exponential model. The results in Figure 5 reveal no advantage to storing corn leaves in a refrigerated cooler across the entire spectrum, and they imply that leaves should be processed within 30 minutes of excision. This does not mean that field collection campaigns must be limited to 30 minutes; it simply means that steps must be taken to limit the time between excision and measurement for each leaf. Lab equipment might need to be located close to the field, and small quantities of leaves could be processed immediately. Figure 5 shows a window near both 600 nm and 900 nm where the spectrum of corn leaves is preserved very well. If these portions of the spectrum are the only portions of concern for a study, then the 30-minute limit does not apply.
Future work should entail repeating this experiment with a larger sample size and measuring the effects of storage duration on other agriculturally important plants. The present findings are limited to the spectral range of the camera system used (about 400–900 nm). It may also be useful to study the decay rate of the spectrum more closely to determine whether (and how accurately) the original spectrum can be reconstructed based on the time elapsed between excision and measurement, within the measured spectrum.
Figure 5. Plot of the time required for spectral bands to drift outside the normal range for corn leaves. The blue and red lines represent corn leaves stored inside a cooler and in the room outside the cooler, respectively.
|
v3-fos-license
|
2019-10-30T17:01:27.490Z
|
2019-10-29T00:00:00.000
|
204939929
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/s12964-019-0437-0",
"pdf_hash": "068dafe4dddf05e4f38f0106142e6c2f88594c10",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:574",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "1d811a5e14d0d220f270c9865f1f5be2167ab681",
"year": 2019
}
|
pes2o/s2orc
|
Flipping the dogma – phosphatidylserine in non-apoptotic cell death
Abstract The exposure of phosphatidylserine (PS) on the outer plasma membrane has long been considered a unique feature of apoptotic cells. Together with other “eat me” signals, it enables the recognition and phagocytosis of dying cells (efferocytosis), helping to explain the immunologically-silent nature of apoptosis. Recently, however, PS exposure has also been reported in non-apoptotic forms of regulated inflammatory cell death, such as necroptosis, challenging previous dogma. In this review, we outline the evidence for PS exposure in non-apoptotic cells and extracellular vesicles (EVs), and discuss possible mechanisms based on our knowledge of apoptotic-PS exposure. In addition, we examine the outcomes of non-apoptotic PS exposure, including the reversibility of cell death, efferocytosis, and consequent inflammation. By examining PS biology, we challenge the established approach of distinguishing apoptosis from other cell death pathways by AnnexinV staining of PS externalization. Finally, we re-evaluate how PS exposure is thought to define apoptosis as an immunologically silent process distinct from other non-apoptotic and inflammatory cell death pathways. Ultimately, we suggest that a complete understanding of how regulated cell death processes affect the immune system is far from being fully elucidated. Graphical abstract
For a long time, it has been considered that when cells are programmed to die via a mechanism known as apoptosis, they alert neighboring cells using "eat me" signals to facilitate their clearance from our body. Recently, it has been reported that even when cells die via a regulated but non-apoptotic pathway (termed necroptosis) they still present "eat me" signals similar to those of apoptotic cells. In this review, we outline the evidence for these "eat me" signals in non-apoptotic cell death, and discuss the possible mechanisms and implications of such signals.
Background
Cell death is central to physiological homeostasis; the balance between cellular differentiation, proliferation, and death underpins all aspects of biology, including embryogenesis, organ function, immune responsivity, and tumorigenesis [1]. Originally, cell death was divided into two basic forms, termed apoptosis (programmed cell death) and necrosis (accidental cell death), which were distinguished primarily by their morphology as observed by pathologists. In the last two decades, however, the cell death field has expanded to include upwards of 10 distinct, although sometimes overlapping, pathways [2].
Apoptosis
Defined in 1972, apoptosis was the first form of regulated cell death (RCD) to be discovered [3]. Apoptosis is executed either by intrinsic or extrinsic pathways, which ultimately lead to the activation of a family of cysteine-dependent aspartate-specific proteases called caspases [4][5][6]. In the extrinsic pathway, ligation of death ligands (e.g., TNF-related apoptosis-inducing ligand (TRAIL) [7], tumor necrosis factor (TNF) [8], or Fas ligand (FASL) [9]) to their respective death receptors recruits and activates the initiator caspases-8 and -10 in an interaction mediated by death domain-containing adaptor proteins, e.g., Fas-associated protein with death domain, FADD [10]. In the intrinsic, or mitochondrial, pathway, cellular stress modifies the balance between pro- and anti-apoptotic B-cell lymphoma-2 (Bcl-2) family members, releasing pro-apoptotic BAX and BAK to induce mitochondrial outer membrane permeabilization (MOMP). Cytochrome-c release following mitochondrial damage activates the initiator caspase-9 [11,12], which then cleaves the effector caspases-3, -6, and -7 to execute apoptosis [13,14]. Hallmarks of apoptotic cell death are cell shrinkage, chromatin condensation (pyknosis) [15], DNA fragmentation [16], plasma membrane blebbing [17], and the shedding of apoptotic bodies [18][19][20]. Another main feature is the exposure of phosphatidylserine (PS) on the outer plasma membrane, which, among other "eat me" signals, results in the phagocytosis and clearance of apoptotic cells and bodies without the release of pro-inflammatory molecules [21]. Hence, apoptosis has always been classified as an immunologically silent form of cell death [22].
Necrosis
The term necrosis was originally used by Rudolf Virchow to describe tissue breakdown while configuration was conserved [23]. Necrosis is now considered to be a traumainduced form of accidental cell death (ACD) [2]. Morphologically, necrosis is characterized by the swelling of the cell (oncosis) and its organelles, as well as by permeabilization of the plasma membrane that releases cellular contents into the extracellular space to trigger inflammation [20]. While originally considered to be unprogrammed, necrosis is now understood to also be a regulated process that can be genetically and chemically manipulated. Many pathways of regulated necrosis have now been discovered, including necroptosis, pyroptosis, mitochondrial permeability transition (MPT)-driven necrosis, ferroptosis, parthanatos, and NETosis [2]. While these pathways represent a huge and ongoing field of investigation, this review will focus primarily on necroptosis within the context of PS biology.
Cell death and inflammation
While the Roman Cornelius Celsus defined the four cardinal signs of inflammation (heat, redness, swelling, and pain) in the first century AD, it was not until the nineteenth century that advances in histopathology enabled Rudolf Virchow to describe the association between inflammation and tissue damage seen in necrosis. Developing technologies have now shed light on the underlying mechanism, involving cytokine and chemokine secretion, immune cell recruitment, and increased blood vessel permeability [76][77][78]. Inflammation is now understood to facilitate pathogen elimination and wound healing [79]. However, when not properly controlled, an excessive immune response may result in inflammatory pathology and tissue damage [80].
The inflammation-provoking agent may be either foreign or endogenous. Foreign agents are usually non-self molecules associated with a pathogen and are referred to as pathogen associated molecular patterns (PAMPs). In contrast, endogenous agents are intracellular molecules released by damaged cells and are thus referred to as danger associated molecular patterns (DAMPs). Polly Matzinger challenged the long-lived self/non-self model of immunity by proposing that the immune system is context specific, recognizing and responding to danger, rather than pathogens alone [28,80]. Cell death and the release of cellular contents are now known to be major drivers of inflammation [81][82][83].

Fig. 1 Necroptosis molecular pathway. Necroptotic cell death can be triggered by numerous factors, including death receptors, TLRs, and intracellular receptors. The ligation of TNF to its receptor (TNFR1) recruits TNFR type 1-associated via death domain (TRADD) and RIPK1 via their death domain (DD) (pink ellipse). TRADD recruits TNF receptor associated factor 2 (TRAF2) and cellular inhibitors of apoptosis (cIAPs) to collectively form complex I, together with the linear ubiquitin chain assembly complex (LUBAC). In complex I, RIPK1 is ubiquitylated to induce nuclear factor kappa-light-chain enhancer of activated B cells (NF-kB) nuclear translocation and signaling. This signaling results in the expression of inflammatory cytokines and pro-survival proteins, such as c-FLIP. When complex I activity is impaired, or following TNFR1 endocytosis, the assembly of a RIPK1/caspase-8/FADD/c-FLIP cytosolic complex, complex II, can occur. Caspase-8, in complex with c-FLIP, cleaves and inactivates RIPK1 and RIPK3. When caspase-8 activity is blocked, phosphorylation and oligomerization of RIPK3 leads to necroptosis by inducing phosphorylation of MLKL followed by its translocation to the cell membrane. The cellular contents released from necroptotic cells can serve as DAMPs to further induce inflammation. Similarly, when caspase-8 activity is blocked, necroptosis can also be induced by interferons (IFNs) (green ellipse), TLRs (blue ellipse), and DNA-dependent activator of IFN-regulatory factors (DAI) (purple ellipse). IFNs stimulate Janus kinase (JAK)-signal transducer and activator of transcription (STAT) signaling upon ligation of IFN receptors (IFNRs), resulting in RIPK1 and/or RIPK3 activation. TLRs can recruit RIPK3 via TIR-domain-containing adaptor-inducing interferon-β (TRIF) upon ligation by lipopolysaccharides (LPS) (for TLR4) or dsRNA (for TLR3). DAI directly interacts with RIPK3 via a RHIM-RHIM interaction upon sensing of dsDNA.
Non-apoptotic PS exposure
The plasma membrane of viable cells exhibits phospholipid asymmetry, as phosphatidylcholine and sphingomyelin are predominantly on the outer leaflet and most phosphatidylethanolamine (PE) and phosphatidylserine (PS) are in the inner leaflet [84]. The exposure of PS on the outer leaflet of early apoptotic cells was reported back in 1992 [21]. As it was already known that the anticoagulant AnnexinV binds to negatively charged phospholipids like PS [85], it became a tool for the detection of PS-exposing apoptosing cells [86][87][88][89][90][91]. Today, it is still used as a marker for early apoptosis and is commercially distributed as a definitive tool to distinguish apoptotic from necrotic cells, mainly by flow cytometry [92][93][94][95][96].
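To make the canonical flow cytometry readout concrete, the sketch below illustrates the conventional AnnexinV/propidium iodide (PI) quadrant logic in Python. It is a purely hypothetical example, not taken from any cited study; the thresholds and labels are placeholders, and, as discussed next, AnnexinV positivity alone cannot reliably distinguish apoptotic from non-apoptotic dying cells.

```python
# Hypothetical illustration of the classical AnnexinV/PI gating rule; the
# intensity thresholds are arbitrary placeholders, not recommended values.
def classify_event(annexin_v: float, pi: float,
                   annexin_thresh: float = 1e3, pi_thresh: float = 1e3) -> str:
    """Classify a single flow cytometry event by the classical quadrant scheme."""
    if annexin_v < annexin_thresh and pi < pi_thresh:
        return "viable"                      # AnnexinV- / PI-
    if annexin_v >= annexin_thresh and pi < pi_thresh:
        return "early apoptotic (by dogma)"  # AnnexinV+ / PI-; may in fact be
                                             # necroptotic or pyroptotic
    if annexin_v >= annexin_thresh and pi >= pi_thresh:
        return "late apoptotic / necrotic"   # AnnexinV+ / PI+
    return "damaged / PI-only"               # AnnexinV- / PI+

print(classify_event(5e3, 2e2))  # -> "early apoptotic (by dogma)"
```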
Relying on this method to define apoptotic cells is problematic, however, as many groups have now also reported PS exposure in non-apoptotic cells. Krysko et al. have used immunogold labeling to detect PS on the outer plasma membrane during oncosis, the early stage of primary necrosis in which cells swell [97], while Ferraro-Peyret et al. have reported that apoptotic peripheral blood lymphocytes can expose PS in a caspaseindependent manner [98]. In support, Sawai and Domae have shown that the pan-caspase inhibitor, z-VAD-fmk (zVAD), does not prevent AnnexinV staining and cell death in U937 cells treated with the apoptotic stimuli, TNF-α and the protein translation inhibitor cycloheximide. Together, these reports indicate that necrotic cells cannot be distinguished from apoptotic cells using AnnexinV staining alone [99].
With advancements in our understanding of caspaseindependent RCD, many of these models might now be recognized as regulated necroptosis, rather than simple necrosis. For example, Krysko et al. induced death by treating a caspase-8-deficient, bcl-2 overexpressing cell line with dsRNA. Ferraro-Peyret et al. also used zVAD prior to adding an intrinsic apoptotic stimulus, either etoposide, staurosporine, or IL-2 withdrawal. Sawai and Domae added the RIPK1 inhibitor necrostatin-1 to block PS exposure and cell death in the zVAD-, TNF-α-, and cycloheximide-treated U937 cells, strongly implying RIPK1 involvement. Consistent with this, Brouckaert et al. showed that TNF-α treated-, i.e., necrotic, L929 cells are also phagocytosed in a PS-dependent manner [100], while in the nematode Caenorhabditis elegans, necrotic touch neurons have also been shown to expose PS [101].
Recently, we and others have demonstrated and characterized PS exposure in well-established models of necroptosis that are currently in use. Gong et al. used either RIPK3 or MLKL fused into the binding domain of FKBP-12 (Fv). These dimerizable proteins rapidly aggregate upon addition of a dimerizer, resulting in a coordinate activation and necroptosis without the need for caspase inhibition. Using this system in NIH 3T3 cells and mouse embryonic fibroblasts (MEFs), they have shown that necroptotic PS externalization occurs prior to the loss of plasma membrane integrity [102]. In our lab, we induce necroptosis in L929, HaCaT, and U937 cells using a combination of TNF-α, a second mitochondriaderived activator of caspases (SMAC) mimetic and zVAD (denoted here as TSZ) and observe the same phenomenon [103]. PS exposure has also been observed shortly before plasma membrane rupture during pyroptosis, an inflammasome−/gasdermin-D-dependent RCD that results in the cleavage and release of IL-1β and IL-18 [104]. In agreement, Jurkat cells were recently shown to expose PS and be phagocytosed following death by either Fas-induced apoptosis, TNF-α-induced necroptosis, or RSL3 (a glutathione peroxidase 4, GPX4, inhibitor)-induced ferroptosis [105]. In addition, it was very recently reported that necroptosis induction by IFN-γ in caspase-8 deficient MEFs also resulted in a long-term PS exposure before cell death execution [106]. Overall, these findings challenge the canonical approach of distinguishing apoptosis from other cell death pathways by AnnexinV staining of PS externalization before membrane rupture [107].
Machinery of apoptotic vs non-apoptotic PS exposure
While the externalization of PS during apoptosis has long been known, the underlying molecular mechanism was elucidated only in the last decade. In a healthy cell, plasma membrane asymmetry is maintained by ATP-dependent aminophospholipid translocases or flippases that transport PS and PE to the inner leaflet of the lipid bilayer against a concentration gradient. Among various candidates, the type IV P-type ATPase (P4-ATPase) family members ATP11C and ATP11A, and their chaperone CDC50A, were found to be important for this flip [108]. While ATP11A and ATP11C deficiency decreased flippase activity without abolishing the asymmetry, CDC50A-deficient cells continually expose PS, suggesting that other molecules might also contribute. Given the established asymmetry, flippase inactivation is inadequate for rapid PS exposure, as passive translocation is too slow. Specific molecules, including transmembrane protein 16F (TMEM16F) and XK-related protein 8 (XKR8), have been found to nonspecifically transport phospholipids between the two leaflets of the lipid bilayer, and are therefore defined as phospholipid scramblases [109,110].
PS exposure is blocked in the presence of a caspase inhibitor in anti-FAS-treated Jurkat cells, indicating PS externalization during apoptosis is caspase-dependent in these cells [111]. Indeed, the phospholipid scramblase, XKR8, is cleaved by caspase-3 during apoptosis, resulting in its dimerization and irreversible activation [112]. Cells that express caspase-resistant XKR8, or totally lack it, do not expose PS during apoptosis. Interestingly, the flippases, ATP11A and ATP11C, also contain caspase recognition sites. Cells with caspase-resistant ATP11A/C do not expose PS during apoptosis, indicating a requirement for their irreversible inactivation by caspases [108].
In contrast, TMEM16F scramblase activity is calciumdependent, and is dispensable for lipid scrambling during apoptosis [113]. Activated platelets and lymphocytes expose PS in a Ca 2+ -dependent manner, for which TMEM16F is also essential. High Ca 2+ levels inhibit P4-ATPase, hence flippase inhibition might also contribute in this setting [114]. Taken together, these findings distinguish the caspase-dependent mechanism of apoptotic PS exposure in which ATP11A/C are inactivated and XKR8 is activated, from PS-exposure mediated by Ca 2+ influx.
The key players in PS exposure during necroptosis have not yet been elucidated. Using the dimerizable RIPK3 and MLKL systems described above, Gong et al. have shown that MLKL activation leads to PS exposure independently of RIPK3 and caspase activity [102]. In support of this, blocking the translocation of human pMLKL to the plasma membrane using necrosulfonamide (NSA) prevents necroptotic-PS exposure and cell death [103]. Necroptosis induces a minor and transient oscillatory rise in intracellular Ca 2+ that is accompanied by a rectifying Cl − efflux downstream of TMEM16F activation. However, neither TMEM16F knockdown, nor inhibition, affect necroptotic cell death [115]. The elevation in intracellular Ca 2+ levels was shown to be a consequence, rather than a requirement, of MLKL activation. Although PS exposure follows the MLKL-dependent Ca 2+ influx, it is not prevented in the absence of extracellular Ca 2+ [116]. In addition, TMEM16F is not necessary for this PS exposure [102]. However, extracellular Ca 2+ depletion inhibits plasma membrane breakdown, suggesting that these cells are primed to die but are "trapped" without a concomitant increase in intracellular Ca 2+ . Interestingly, intracellular Ca 2+ levels also eventually increase when cells are cultured in Ca 2+ -free medium, suggesting that intracellular pools of Ca 2+ , in the endoplasmic reticulum (ER) for example, might ultimately supply the Ca 2+ ions. In support, although in some cell lines it appears that cell death is totally blocked in the absence of extracellular Ca 2+ within the time-frame examined, in others it is only delayed [116].
In agreement, Ousingsawat et al. have demonstrated that, during necroptosis, intracellular Ca 2+ influx originates from the ER, and is thus independent of extracellular Ca 2+ levels [115]. These data suggest that TMEM16F is activated by the increase in intracellular Ca 2+ during necroptosis and, hence, may have some redundant role in necroptotic PS exposure together with one, or more, as-yet unknown scramblases. However, this mechanism is not essential for subsequent cell death. Nevertheless, simultaneous staining with the Ca 2+ sensor, GCaMP3, and MFG-E8, which does not require Ca 2+ for PS staining, might confirm whether intracellular Ca 2+ is needed, or not, for necroptotic PS exposure. In addition, since PS exposure immediately follows MLKL activation and pMLKL is directly associated with the plasma membrane, MLKL might possess the ability to directly effect scramblase activity [102,117] (Fig. 2). In support, Mlkl D139V/D139V neonates, which carry a missense mutation resulting in spontaneously activated MLKL, were recently reported to demonstrate increased AnnexinV binding in some hematopoietic progenitor populations [118].
Of note, when cell death is induced by overexpression of gasdermin-D (the terminal, pore-forming executer of pyroptosis), knockdown of TMEM16F inhibits Ca 2+ -mediated PS exposure and cell death [119]. Similarly, in Caenorhabditis elegans, the nematode homolog of TMEM16F, anoctamin homolog-1 (ANOH-1), was found to be essential for PS exposure and phagocytosis of necrotic, but not apoptotic, cells. These results suggest a role for TMEM16F in non-apoptotic PS exposure. To add to the complexity, ANOH-1 acts in parallel to CED-7, a member of the ATPbinding cassette (ABC) transporter family, which is also required for PS exposure in apoptosis [101]. Taken together, these observations highlight that the role of Ca 2+ , caspases, flippases, and scramblases in PS exposure is specific to the type of cell death, and that new discoveries regarding the machinery and mechanism of non-apoptotic PS exposure are yet to come.
Not just the cells – PS-positive necroptotic extracellular vesicles
Focusing in on PS exposure during necroptosis, we and others have realized that this phenomenon is not restricted to necroptotic cells alone. As with apoptotic cells that form PS-exposing apoptotic bodies to facilitate their recognition and phagocytosis [95], necroptotic cells also release PSexposing extracellular vesicles (EVs), here referred to as "necroptotic bodies". Necroptotic bodies are smaller in size than their apoptotic counterparts (0.1-0.8 μm versus 0.5-2 μm, respectively), contain pMLKL, endosomal sorting complexes required for transport (ESCRT) family members and other proteins, and have less DNA content than the apoptotic bodies [103,120,121].
Using dimerizable RIPK3 and MLKL, the formation of AnnexinV+ necroptotic bodies has been reported to be rapid and dependent on MLKL activation. The fact that these bodies did not contain proteins, in this experimental system, might arise from the rapid and exogenous activation of necroptosis using the dimerizer, which bypasses the full molecular signaling pathway [102]. The ESCRT machinery comprises a group of proteins that assembles to facilitate the transportation of proteins in endosomes, multivesicular body formation, and budding [122]. The ESCRTIII components, CHMP2A and CHMP4B, translocate from the cytosol and colocalize with active MLKL near the plasma membrane during necroptosis, suggesting that they may have a role in the shedding of PS-exposing necroptotic bodies. In support, silencing of CHMP2A and CHMP4B reduced the formation and release of necroptotic bodies in both human and murine cells [102,116,121].
Commitment issues – are PS-exposing necroptotic cells committed to die?
As discussed above, PS exposure during apoptosis is caspase-dependent. With more than 500 substrates, activated effector caspases are responsible for nuclear and Golgi fragmentation, chromatin condensation, DNA cleavage and degradation, and plasma membrane blebbing, all of which together promote irreversible cell death [123,124]. Despite this, immortalized cells can be rescued from very late apoptosis, even though they expose PS [125]. This phenomenon is called anastasis, or apoptotic recovery [126]. Similarly, and perhaps even more readily given their caspase independence, PS-exposing necroptotic cells are also not obliged to die. For example, the addition of NSA to isolated PS-exposing necroptotic cells (sorted AnnexinV-single positive U937, Jurkat, or HT-29 cells) resulted in an increase in the live cell population (AnnexinV-) over 24 h [102,103].

Fig. 2 Mechanism of phosphatidylserine (PS) exposure during apoptosis and necroptosis. In live cells, the flippases, ATP11A and ATP11C, transport PS and phosphatidylethanolamine (PE) to the inner leaflet of the lipid bilayer against a concentration gradient. In apoptotic cells, active caspase-3 cleaves the phospholipid scramblase, XKR8, resulting in its dimerization and irreversible activation. In addition, caspase-3 cleaves ATP11A/C into an irreversibly inactive state. The mechanism of PS exposure during necroptosis has not been elucidated. We hypothesized that a pMLKL translocation-mediated increase in intracellular Ca 2+, from either the extracellular space or the endoplasmic reticulum (ER), activates the calcium-dependent scramblase, TMEM16F, and irreversibly inactivates the flippases, ATP11A/C. pMLKL, when directly associated with the plasma membrane, might also possess the ability to directly effect TMEM16F activity, as well as other yet unknown scramblases.
Facilitating study of this phenomenon, necroptosis induced in the dimerizable RIPK3-or MLKL-expressing cells can be rapidly deactivated by the addition of a competitive inhibitor, termed a "washout ligand". Isolated PS-exposing necroptotic cells in which RIPK3 or MLKL were inactivated by this method exhibit dephosphorylated MLKL, re-established PS asymmetry, basal intracellular Ca 2+ levels, normal morphology, culture surface reattachment, and robust growth. These recovered cells are as susceptible to a new necroptotic stimulus as their parent cells, but appear to have a unique pattern of gene regulation, with enrichment in the fibroblast growth factor receptor (FGFR) and Gap junction pathways [116,126].
The necroptosis survivors also show higher expression of several ESCRT components. The ESCRTIII machinery functions by shedding wounded membrane components as 'bubbles' in an intracellular Ca 2+ -dependent manner to maintain plasma membrane integrity [127][128][129], and is important for plasma membrane repair in response to diverse stimuli. Loss of ESCRT machinery components appears to compromise the recovery of PS-exposing necroptotic cells. For example, silencing of CHMP2A decreased the ability of resuscitated cells to form tumors when injected into mice. In addition, a specific clone of dimerizable RIPK3-expressing immortalized macrophages that was resistant to RIPK3 activation showed pMLKL and extensive formation of Annex-inV+ bubbles upon dimerizer treatment. Silencing of the ESCRTIII member, CHMP2A, drastically increased the susceptibility of these cells to necroptosis [102]. Overall, these data strongly indicate that the ESCRTIII machinery is essential for necroptosis recovery.
In support, bone marrow-derived dendritic cells (BMDCs) demonstrate slower and reduced cell death in response to RIPK3 activation in comparison to bone marrow-derived macrophages (BMDMs) and HT-29 cells. In alignment with the concept of shedding damaged membrane components to delay or prevent necroptosis, pMLKL under these conditions was detectable only in the secreted EVs, but not inside the BMDCs themselves. In addition, silencing of two proteins required for EVs release (Rab27a and Rab27b) increased the sensitivity of BMDCs to RIPK3mediated cell death [121]. Hence, MLKL-mediated Ca 2+ influx might promote PS exposure and recruit ESCRTIII, leading to the shedding of damaged PS-exposing membrane as bubbles and allowing the cell to change its fate [126].
Phagocytosis of non-apoptotic cells
Efferocytosis is defined as the engulfment and digestion of dying cells by phagocytes [130]. It has been shown that, while phagocytosis is PS-dependent in both apoptotic and necrotic cells, the latter are phagocytosed less quickly and efficiently [100]. Recently, our group has shown that AnnexinV+ necroptotic U937 cells are phagocytosed by BMDMs and peritoneal macrophages more efficiently than live cells [103]. In support, phagocytosis of necroptotic Jurkat cells was observed while their plasma membrane was still intact [116]. Budai et al. recently reported that apoptotic and necrotic cells are equally engulfed. The phagocytosis in both cases is still PS-dependent, as it was reduced by masking PS, or by deficiency in the PS-receptors: T-cell immunoglobulin mucin protein-4 (TIM4), Mer receptor tyrosine kinase (MerTK), integrin β3, and tissue transglutaminase (TG2) [131]. The type of engulfed and engulfing cells, as well as the molecular mechanisms or duration of PS exposure, might all contribute to these observations.
As mentioned above, CDC50A-deficient cells constitutively expose PS. These cells, although live, are engulfed by wild-type, but not MerTK-deficient, macrophages, indicating that PS is sufficient to induce phagocytosis. Interestingly, 3% of the engulfed live cells are released intact, a phenomenon that is not seen in apoptotic cells with active caspases [108]. In contrast, the same group has reported that live cells continually exposing PS due to constitutively active TMEM16F are not engulfed by macrophages, suggesting that the mechanism of PS exposure might influence the consequent phagocytosis [132].
A metabolically stressed cell uses classical autophagy, an evolutionarily conserved pathway, as a source of nutrients. MAP1LC3A (LC3), which has an essential role in the classical autophagy pathway, was found to have a key role in a similar, but distinct, pathway – LC3-associated phagocytosis, or LAP. Uptake of either apoptotic, necrotic, or necroptotic cells was shown to promote LAP, characterized by the translocation of LC3 to the phagosome. This consequently facilitates phagosome maturation and the degradation of the engulfed dead cells. LAP was mediated by PS recognition by the receptor TIM4, as TIM4-deficient macrophages failed to undergo LAP [133]. LAP-deficient mice exhibit normal engulfment, but defective degradation, of apoptotic cells. Upon repeated injection of apoptotic cells, these mice developed a systemic lupus erythematosus (SLE)-like disease, with increased levels of pro-inflammatory cytokines, such as IL-6, IL-1β, and IL-12, autoantibodies, and a decreased level of the anti-inflammatory cytokine, IL-10. These data are consistent with the notion that defects in the clearance of dying cells underlie the pathogenesis of SLE [134]. In addition, LAP-deficiency in tumor-associated macrophages (TAMs) triggers pro-inflammatory and stimulator of interferon genes (STING)-mediated type I interferon gene expression in response to phagocytosis of apoptotic cells, in contrast to the M2 phenotype seen in wild-type TAMs. In support, defects in LAP in the myeloid compartment induce a type I interferon response and suppression of tumor growth [135]. This suggests that phagocytosis can be regulated downstream of PS-mediated engulfment, leading to different effects. Taken together, these reports have implications for how we define apoptosis as an immunologically silent process in contrast to other non-apoptotic forms of cell death, and strongly suggest that our current model for PS exposure during cell death is overly simplistic. Overall, these studies highlight how much is yet to be uncovered regarding the contribution of PS to downstream signaling in cell death.
The role of PS-positive non-apoptotic cells and EVs
Given that non-apoptotic cells are known to expose PS and be phagocytosed, albeit via a not-yet-fully-defined mechanism, the immunological consequences for non-apoptotic cell death should be re-examined. As discussed, death of PS-exposing necroptotic cells can be leashed by the ESCRTIII-mediated shedding of PS-exposing bubbles to maintain plasma membrane integrity [102,103,116,120,121,126]. In support, during pyroptosis the ESCRT machinery, in association with gasdermin-D, is seen to be recruited to damaged membranes to induce the budding of AnnexinV+ vesicles and negatively regulate death [136]. Hence, the phase in which cells expose PS could be viewed as a 'window of opportunity' for the cell to manipulate inflammatory cell death pathways, and potentially control the release of pro-inflammatory DAMPs and cytokines, such as IL-1β in pyroptosis [137] and IL-33 in necroptosis [138]. Additional support for the immuno-regulatory role of PS exposure is that mice lacking the phospholipid scramblase, XKR8, exhibited reduced clearance of apoptotic lymphocytes and neutrophils, and an SLE-like autoimmune disease [139]. However, XKR8 activity is caspase-dependent and, thus, most likely inactive during necroptosis [140]. Deficiency of TMEM16F has not been reported to induce the same autoimmune disease, but does result in a mild bleeding disorder associated with the role of PS in activated platelets. This fits with a splice mutation in TMEM16F found in patients with a similar bleeding disorder, named Scott's syndrome [141,142]. Filling in the gaps in our understanding of the biology of PS exposure by non-apoptotic cells might reveal how this system is modulated under different conditions to fine-tune the downstream immune response.
The necroptotic factors, RIPK1, RIPK3, and MLKL, induce expression of inflammatory cytokines and chemokines [143][144][145][146][147][148]. PS-exposing necroptotic cells lacking ESCRTIII components have reduced expression and release of these cytokines and chemokines. In addition, while necroptotic cells potently induce cross-priming of CD8 + T cells via RIPK1 and NF-kB [149], this is reduced in ESCRTIII-deficient cells [102]. In support, Kearney et al. have reported that necroptotic death attenuates production of pro-inflammatory cytokines and chemokines by lipopolysaccharide (LPS) or TNF [150]. These results suggest that the ESCRT-driven delay in cell death execution, mediated by repair of PS-exposing membrane, enables a sustained time for inflammatory signaling. This highlights that the time interval associated with PS exposure, rather than the cell lysis itself, might be the inflammation-promoting arm of necroptosis.
Reports regarding the sequential events in the phagocytosis of dying cells are somewhat confusing. Phagocytosis of apoptotic cells by LPS-activated monocytes has been reported to increase IL-10 secretion, while reducing secretion of TNF-α, IL-1β, and IL-12 [151]. In addition to IL-4 and IL-13, recognition of apoptotic, but not necrotic, neutrophils by the PS-receptors MerTK and Axl is essential for induction of anti-inflammatory and repair programs in BMDMs [152]. We have also shown that phagocytosis of both PS-exposing apoptotic and necroptotic cells results in IL-6 secretion, while only phagocytosis of necroptotic cells leads to significantly elevated TNF-α and CCL2 secretion from macrophages [103]. Necroptotic cancer cells induce dendritic cell maturation in vitro, cross-priming of T cells in vivo, and antigen-specific IFN-γ production ex vivo. Vaccination with necroptotic cancer cells facilitates efficient antitumor immunity [153], and administration of mRNA coding for MLKL induces anti-tumor immunity [154,155]. Martinez et al. have reported that phagocytosis of either apoptotic, necroptotic, or necrotic cells is followed by the secretion of IL-10 (higher in apoptosis) and transforming growth factor (TGF)-β (slightly higher in necroptosis). LAP-deficient macrophages secrete elevated levels of IL-1β and IL-6, but show decreased IL-10 and TGF-β, in response to these dying cells [133]. This is consistent with the anti-tumor or autoimmunity seen when LAP is impaired, further implicating LAP in the regulation of the immune response [133][134][135].
As previously proposed in our model of the 'three waves of immunomodulatory effects during necroptosis', the PS-exposing bodies released during early necroptosis may serve as signaling vehicles that stimulate the microenvironment [120,126]. For example, EVs that are released from LPS-activated, caspase-8-deficient BMDMs in a MLKL-dependent manner, contain IL-1β [121]. In addition, the fact that phagocytosis of necroptotic, but not apoptotic, cells induces inflammation might be explained by the presence of necroptotic bodies, rather than a distinct effect of these PS-exposing engulfed cells.
Concluding remarks
Exposure of PS by non-apoptotic cells has long been disregarded, leading the role of PS exposure in mitigating inflammation during apoptosis to be overstated. Here, we have briefly outlined apoptotic and necroptotic RCD, and their respective roles in promoting inflammation. We have outlined the evidence for PS exposure in non-apoptotic cells and EVs, discussed possible mechanisms, and examined the effect of PS exposure on the reversibility of cell death, the phagocytosis of dead cells, and subsequent inflammation.
Recent reports challenging the idea that PS exposure is exclusive to apoptosis highlight that communication between RCD and the immune system is far from being fully understood. Even more fundamental, however, is the need to improve the classification of RCD pathways in published literature, as well as develop more definitive methods for their characterization. As non-apoptotic cells can also present "eat me" signals and be engulfed, phagocytosis should be considered as a kind of 'bridge' between a dying cell and the immune system. How dying cells affect signaling in phagocytes will be fascinating to examine in light of this new understanding. In this regard, studying the contents, uptake, and dissemination of PS-exposing vesicles may shed light on the immunological effects of non-apoptotic RCD. In addition, a better understanding of PS exposure and recognition of non-apoptotic cells by phagocytes might provide new therapeutic tools in the PS field. The evident involvement of the ESCRTIII machinery could be manipulated as a powerful tool to regulate cell death and inflammation. In examining PS biology, this review challenges the dichotomy typically thought to exist between apoptosis and other forms of RCD, and highlights the importance of understanding the inflammatory consequences of PS exposure in the context of all cell death modalities.
|
v3-fos-license
|
2023-07-12T07:27:52.964Z
|
2023-06-28T00:00:00.000
|
259738597
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "CLOSED",
"oa_url": "https://doi.org/10.5194/acp-23-7103-2023",
"pdf_hash": "32772560d8e1d1b855c485d7aa026354eccf7430",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:576",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "7fb095533b77be47555598ad65f7a41a5e0aeb6d",
"year": 2023
}
|
pes2o/s2orc
|
Photoaging of Phenolic Secondary Organic Aerosol in the Aqueous Phase: Evolution of Chemical and Optical Properties and Effects of Oxidants
Abstract. While gas-phase reactions are well established to have significant impacts on the mass concentration, chemical composition, and optical properties of secondary organic aerosol (SOA), the aqueous-phase aging of SOA remains poorly understood. In this study, we performed a series of long-duration photochemical aging experiments to investigate the evolution of the composition and light absorption of the aqueous SOA (aqSOA) from guaiacyl acetone (GA), a semivolatile phenolic carbonyl that is common in biomass burning smoke. The aqSOA was produced from reactions of GA with hydroxyl radical (•OH-aqSOA) or a triplet excited state of organic carbon ( 3 C*-aqSOA) and was then photoaged in water under conditions that simulate sunlight exposure in northern California for up to 48 hours. The effects of increasing aqueous-phase •OH or 3 C* concentration on the photoaging of the aqSOA were also studied. High resolution aerosol mass spectrometry (HR-AMS) and UV-vis spectroscopy were utilized to characterize the composition and the light absorptivity of the aqSOA and to track their changes during aging. Compared to •OH-aqSOA, the 3 C*-aqSOA is produced more rapidly and shows less oxidation, a greater abundance of oligomers, and higher light absorption. Prolonged photoaging promotes fragmentation and the formation of more volatile and less light-absorbing products. More than half of the initial aqSOA mass is lost and substantial photobleaching occurs after 10.5 hours of prolonged aging under simulated sunlight illumination for 3 C*-aqSOA and 48 hours for •OH-aqSOA. By performing positive matrix factorization (PMF) analysis of the combined HR-AMS and UV-vis spectral data, we resolved three generations of aqSOA with distinctly different chemical and optical properties. The first-generation aqSOA shows significant oligomer formation and enhanced light absorption at 340–400 nm. The second-generation aqSOA is enriched in functionalized GA species and has the highest mass absorption coefficients in 300–500 nm, while the third-generation aqSOA contains more fragmented products and is the least light-absorbing. These results suggest that intermediately-aged phenolic aqSOA is more light-absorbing than other generations, and that the light absorptivity of phenolic aqSOA results from a competition between brown carbon (BrC) formation and photobleaching, which is dependent on aging time. Although photoaging generally increases the oxidation of aqSOA, a slightly decreased O/C of the •OH-aqSOA is observed after 48 hours of prolonged
Despite extensive research on the formation of aqSOA from phenols, the aging and degradation of phenolic SOA in water remain poorly characterized. Atmospheric lifetimes of SOA range from hours to weeks (Wagstrom and Pandis, 2009), during which chemical reactions can occur, leading to continuous aging and evolution of SOA. Functionalization (i.e., the addition of functional groups to the molecules) and fragmentation (i.e., the breaking of bonds within the molecules to form smaller species) are critical mechanisms in the aging of SOA that can greatly change the chemical composition and loading of aerosols (Kroll et al., 2009; Leresche et al., 2021; Shrivastava et al., 2017). Chemical aging can also influence the optical properties of SOA, as some reactions increase the light absorptivity while others cause photobleaching by destroying chromophores (Lee et al., 2014). Furthermore, fragmentation can result in the formation of volatile and semivolatile products, causing a loss of SOA mass and photobleaching (Kroll et al., 2015). Yu et al. (2016) studied the aqueous-phase photooxidation of phenol and methoxyphenols and observed that, as aging progresses, fragmentation reactions become increasingly dominant in comparison to oligomerization and functionalization reactions. However, a portion of the aqSOA appears to be resistant to fragmentation and remains chemically unchanged even after prolonged exposure to simulated sunlight in the aqueous phase (Yu et al., 2016).
Similarly, in an environmental chamber study, O'Brien and Kroll (2019) reported that 70−90% of the α-pinene SOA mass remained in particles after an initial decay during photochemical aging.
The impacts of aging on the concentrations and properties of SOA in the atmosphere have been widely observed in biomass burning emissions (Brege et al., 2018; Chen et al., 2021; Garofalo et al., 2019; Kleinman et al., 2020; Zhou et al., 2017). For instance, aged wildfire plumes subjected to aqueous processing experience substantial losses in organic aerosol (OA) mass, increases in SOA oxidation, and changes in optical properties (Che et al., 2022; Farley et al., 2022; Sedlacek et al., 2022).
Aqueous-phase oxidation of organic molecules, including phenols, and the formation of SOA have been observed in residential wood burning smoke in both urban and rural environments as well (Brege et al., 2018; Kim et al., 2019; Stefania et al., 2016; Sun et al., 2010). In addition, in remote regions where aerosols are generally highly aged and have been subjected to more extensive aqueous-phase and heterogeneous processing, SOA is significantly more oxidized, less volatile, and more hygroscopic compared to that in urban areas (Jimenez et al., 2009; Morgan et al., 2010; Ng et al., 2011; Zhang et al., 2011; Zhou et al., 2019).
Understanding the chemical aging process of SOA in the aqueous phase is important for better predicting the concentration of SOA in ambient air and assessing its potential impacts on climate and human health. Sunlight-triggered aqueous-phase reactions, such as direct photolysis of organics, nitrate, nitrite, and hydrogen peroxide, as well as energy and charge-transfer reactions driven by 3 C*, significantly impact the chemical aging of SOA, leading to changes in particle composition and properties (Corral Arroyo et al., 2018; Ervens et al., 2011; Herrmann et al., 2015; Mabato et al., 2022). The extent of exposure of aqSOA to oxidants in atmospheric waters can vary widely, influenced by the concentration and residence time of oxidants.
For example, the steady-state concentration of •OH can vary from 10 -16 to 10 -12 M (Herrmann et al., 2010) while that of 3 C* can vary from 10 -14 to 10 -11 M (Kaur et al., 2019), depending on the solute concentration, which ranges from dilute fog/cloud droplets to highly concentrated solutions in particle water. Exposure to elevated levels of oxidants can promote the formation of highly oxygenated SOA (Daumit et al., 2016; Kang et al., 2011; Lambe et al., 2015; Ng et al., 2010), but can also decrease SOA mass and facilitate a shift from the functionalization-dominant regime to the fragmentation-dominant regime (Lambe et al., 2012).
This study investigates the long-timescale aqueous aging of the aqSOA formed from the photooxidation of guaiacyl acetone (GA). GA is a common component of biomass burning (BB) emissions and has been widely used as a model compound to study SOA formation in BB emissions. In previous work (Arciva et al., 2022; Jiang et al., 2021; Ma et al., 2021; Misovich et al., 2021; Smith et al., 2016), we examined the kinetics and mechanisms of aqSOA formation from the photoreactions of GA.
Here, we extend this research to investigate the aqueous-phase photoreactions of GA with •OH and 3 C* and to propose the corresponding reaction pathways. The focus of our investigation is the impact of prolonged aqueous aging on the chemical composition and optical properties of the aqSOA. Specifically, we examine the effects of •OH reactions and 3 C* reactions induced by simulated sunlight for up to 72 hours and 14 hours, respectively, which correspond to approximately 21 days and 4 days of winter-solstice sunlight exposure in northern California (George et al., 2015). Furthermore, we examine the effects of light and additional oxidant exposure on the aging of the aqSOA.
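The equivalence quoted above (72 h ≈ 21 days and 14 h ≈ 4 days of winter-solstice sunlight) implies a scaling of roughly 3.4 photoreactor hours per equivalent winter-solstice day. The snippet below is only a back-of-the-envelope restatement of that ratio, not the photon-flux-based actinometry of George et al. (2015).

```python
# Rough conversion between photoreactor illumination time and equivalent
# winter-solstice sunlight days, using the equivalences quoted in the text.
HOURS_PER_EQUIVALENT_DAY = 72.0 / 21.0   # ~3.4 h of photoreactor light per day

def equivalent_sunlight_days(photoreactor_hours: float) -> float:
    return photoreactor_hours / HOURS_PER_EQUIVALENT_DAY

for hours in (14, 48, 72):
    print(f"{hours} h in the photoreactor ~ "
          f"{equivalent_sunlight_days(hours):.1f} days of winter-solstice sunlight")
```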
Formation and Aging of Phenolic AqSOA
The initial reaction solution was prepared with 100 μM of guaiacyl acetone and either 100 µM of hydrogen peroxide (H2O2; as a source of •OH) or 5 µM of 3,4-dimethoxybenzaldehyde (3,4-DMB; as a source of 3 C*) in Milli-Q water. The pH of the solution was adjusted to 4.6 using sulfuric acid. These conditions were set to mimic wood burning-influenced cloud and fog waters (Jiang et al., 2021). The reaction solution was placed in a 400 mL Pyrex tube, continuously stirred, and illuminated inside an RPR-200 photoreactor system equipped with three different types of bulbs to roughly mimic sunlight (George et al., 2015). The steady-state concentration of •OH ([•OH]) is 2.6×10 -15 M in the •OH-mediated reaction, similar to the values observed in fog water (Kaur and Anastasio, 2017), and the [ 3 C*] is 1.1×10 -13 M in the 3 C*-mediated reaction, about 2 times higher than in fog water (Kaur and Anastasio, 2018) (see Section 3.1 for more details). When ~95% of the initial GA had reacted (i.e., after 24 h of irradiation for the •OH reaction and 3.5 h for the 3 C* reaction), the solution was separated into four aliquots and moved into separate 110 mL Pyrex tubes for further aging. This aging occurred under four different conditions: 1) aging in the dark (tube wrapped with aluminum foil); 2) continued illumination without the addition of extra oxidant; 3) photoaging with the addition of 100 µM of H2O2; and 4) photoaging with the addition of 5 µM of 3,4-DMB. Small aliquots of the solutions were then periodically taken from each tube to measure the chemical composition and optical properties.
During the photoreaction, the solutions were continuously stirred. The Pyrex tubes were capped but not hermetically sealed, and the caps were briefly removed during sample collection. Due to the presence of oxygen in the reaction system, secondary reactive oxygen species (ROS) such as singlet oxygen ( 1 O2*), superoxide/hydroperoxyl radicals (O2• -/HO2•), and •OH can be generated in the solution via energy transfer from 3 C* to dissolved O2 (Vione et al., 2014; Zepp et al., 1977), electron transfer from the intramolecular charge-transfer complex of DMB (DMB• +/ • -) to O2 (Dalrymple et al., 2010; Li et al., 2022b), and the reactions between the DMB ketyl radical and O2 (Anastasio et al., 1997). However, according to our previous studies (Smith et al., 2014), singlet oxygen is expected to contribute only minimally to the oxidation of GA in this reaction system. In addition, the negligible loss of GA and DMB in the dark controls suggests there was negligible evaporation of the precursor or the photosensitizer during the experiments.
Chemical and Optical Analyses
The concentrations of GA and 3,4-DMB were determined by a high-performance liquid chromatograph equipped with a diode array detector (HPLC-DAD, Agilent Technologies Inc.). A ZORBAX Eclipse XDB-C18 column (150×4.6 mm, 5 µm) was used with a mobile phase consisting of a mixture of acetonitrile and water (20:80), and the flow rate was 0.7 mL min -1 .
Both GA and DMB were detected at 280 nm, and their retention times were 8.349 and 15.396 min, respectively. The mass concentration and chemical composition of the aqSOA products were characterized using a high resolution time-of-flight aerosol mass spectrometer (HR-AMS; Aerodyne Res. Inc.). The liquid samples were atomized in argon (Ar, industrial grade, 99.997 %) followed by diffusion drying (Jiang et al., 2021). This process allowed volatile and semi-volatile products to evaporate, leaving only the low-volatility products in the particle phase, which were characterized by AMS. HR-AMS data were processed using standard toolkits (SQUIRREL v1.56D and PIKA 1.15D). Since Ar was used as the carrier gas, the CO + signal of aqSOA was quantified directly (Yu et al., 2016). While the organic H2O + signal (org-H2O + ) can also be directly determined for dry aerosols, it tends to be noisy due to high sulfate H2O + (SO4-H2O + ) signal interference.
Therefore, org-H2O + was parameterized as org-H2O + = 0.4×CO2 + , based on the linear regression between the determined org-H2O + signal (= measured-H2O + − SO4-H2O + ) and the measured organic CO2 + signal (Jiang et al., 2021). The other org-H2O + -related signals were parameterized as org-OH + = 0.25×org-H2O + and org-O + = 0.04×org-H2O + (Aiken et al., 2008). Atomic ratios of oxygen-to-carbon (O/C) and hydrogen-to-carbon (H/C), and organic mass-to-carbon (OM/OC) ratios, were subsequently determined (Aiken et al., 2008), and the average oxidation state of carbon (OSC) of the aqSOA was calculated as OSC = 2×O/C − H/C (Kroll et al., 2011). The aqSOA concentration in the solution ([Org]solution, µg mL -1 ) was calculated using sulfate as the internal standard:

[Org]solution = ([Org]AMS / [Sulfate]AMS) × [Sulfate]solution,

where [Org]AMS and [Sulfate]AMS are the AMS-measured concentrations (µg m -3 ) of aqSOA and sulfate in the aerosolized solution, and [Sulfate]solution is the spiked concentration (µg mL -1 ) of sulfate in the solution. The aqSOA mass yield (YSOA) after a given time of illumination (t) was calculated as:

YSOA = [Org]t / ([GA]0 − [GA]t),

where [GA]0 is the initial GA concentration (µg mL -1 ) in the solution, and [Org]t and [GA]t denote the concentrations of the aqSOA and GA, respectively, in the solution after a period of irradiation.
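As a concrete illustration of the quantification scheme above, the short sketch below strings together the org-H2O+ parameterization, the carbon oxidation state, the sulfate internal-standard normalization, and the SOA mass yield. It is a minimal example with placeholder input values, not the SQUIRREL/PIKA processing code used in the study.

```python
# Minimal sketch of the HR-AMS quantification described above; numerical
# inputs are placeholders except where noted.

def org_water_signals(co2_signal: float) -> dict:
    """Parameterized organic water-related ion signals (Jiang et al., 2021; Aiken et al., 2008)."""
    org_h2o = 0.4 * co2_signal                 # org-H2O+ = 0.4 x CO2+
    return {"org_H2O+": org_h2o,
            "org_OH+": 0.25 * org_h2o,         # org-OH+ = 0.25 x org-H2O+
            "org_O+": 0.04 * org_h2o}          # org-O+  = 0.04 x org-H2O+

def oxidation_state(o_to_c: float, h_to_c: float) -> float:
    """Average carbon oxidation state, OSc = 2*(O/C) - H/C (Kroll et al., 2011)."""
    return 2.0 * o_to_c - h_to_c

def aqsoa_in_solution(org_ams: float, sulfate_ams: float, sulfate_solution: float) -> float:
    """[Org]_solution (ug/mL) from AMS signals, using sulfate as internal standard."""
    return org_ams / sulfate_ams * sulfate_solution

def soa_mass_yield(org_t: float, ga_0: float, ga_t: float) -> float:
    """Y_SOA = [Org]_t / ([GA]_0 - [GA]_t), all concentrations in ug/mL."""
    return org_t / (ga_0 - ga_t)

# Example with hypothetical numbers:
print(org_water_signals(co2_signal=1.0))
print(oxidation_state(0.64, 1.38))                     # ~ -0.10, as for the OH-aqSOA
print(aqsoa_in_solution(org_ams=30.0, sulfate_ams=60.0, sulfate_solution=10.0))
print(soa_mass_yield(org_t=8.0, ga_0=18.0, ga_t=1.0))  # ~0.47, i.e. a ~47% yield
```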
The light absorbance of the reaction solution was measured using a UV-Vis spectrophotometer (UV-2501PC, Shimadzu).
The mass absorption coefficient, the absorption Ångström exponent, and the rate of sunlight absorption of the aqSOA were calculated (Section S1).
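The detailed absorption calculations are given in Section S1 (not reproduced here). A common formulation, shown below as an assumption rather than the authors' exact method, computes the solution-phase mass absorption coefficient as MAC(λ) = A(λ)·ln(10)/(b·C), where A is the base-10 absorbance, b the cell path length, and C the aqSOA mass concentration, and estimates the absorption Ångström exponent (AAE) from a power-law fit of MAC versus wavelength.

```python
# Hedged sketch of typical MAC and AAE calculations for solution absorbance
# data; this follows common practice in the brown carbon literature and is not
# necessarily the exact formulation of Section S1. Inputs are placeholders.
import numpy as np

def mass_absorption_coefficient(absorbance, path_length_m, conc_g_m3):
    """MAC (m^2/g) = A * ln(10) / (b * C), with A base-10 absorbance,
    b path length in m, and C aqSOA mass concentration in g/m^3."""
    return absorbance * np.log(10) / (path_length_m * conc_g_m3)

def angstrom_exponent(wavelengths_nm, mac_values):
    """AAE from the slope of log(MAC) versus log(wavelength)."""
    slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(mac_values), 1)
    return -slope

wl = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
absorbance = np.array([0.30, 0.18, 0.10, 0.055, 0.03])    # placeholder spectrum
mac = mass_absorption_coefficient(absorbance, path_length_m=0.01,
                                  conc_g_m3=10.0)          # 10 ug/mL = 10 g/m^3
print("MAC (m^2/g):", mac.round(3))
print("AAE:", round(angstrom_exponent(wl, mac), 2))
```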
Results and Discussion
3.1 Formation and Characteristics of the aqSOA from Photooxidation of GA by •OH and 3 C*

Figures 1a and 1h demonstrate that the loss of GA follows first-order kinetics in both •OH- and 3 C*-mediated photoreactions. The pseudo-first-order rate constants were determined to be 0.14 and 0.73 h -1 , respectively, under our experimental conditions. Based on a previous study (Smith et al., 2016), direct photolysis of GA is expected to be negligible in this study. The fact that the reaction of GA with 3 C* is much faster than with •OH is consistent with previously reported kinetics for other phenols (Smith et al., 2014; Yu et al., 2016) and can be attributed to the higher oxidant concentration in the 3 C*-mediated reaction. Based on the second-order rate constants for GA reacting with •OH (1.5×10 10 M -1 s -1 ) (Arciva et al., 2022) and with 3 C* (1.8×10 9 M -1 s -1 ) (Ma et al., 2021), these measured decay rates correspond to the steady-state oxidant concentrations reported above ([•OH] = 2.6×10 -15 M and [ 3 C*] = 1.1×10 -13 M; Sections S1 and S2).
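The steady-state oxidant concentrations follow directly from the ratio of the pseudo-first-order GA decay constant to the corresponding second-order rate constant ([Ox]ss = k_obs/k_2nd). The sketch below is an illustrative back-of-the-envelope check that reproduces the reported values; it is not the kinetic analysis of Sections S1 and S2.

```python
# Steady-state oxidant concentration from pseudo-first-order GA decay:
#   [Ox]_ss = k_obs / k_second_order
import math

def steady_state_conc(k_obs_per_hour: float, k2_M_per_s: float) -> float:
    k_obs_per_s = k_obs_per_hour / 3600.0
    return k_obs_per_s / k2_M_per_s

oh_ss = steady_state_conc(0.14, 1.5e10)   # -> ~2.6e-15 M, as reported
c3_ss = steady_state_conc(0.73, 1.8e9)    # -> ~1.1e-13 M, as reported
print(f"[OH]ss  = {oh_ss:.1e} M")
print(f"[3C*]ss = {c3_ss:.1e} M")

# The GA half-lives quoted in the text follow from the same rate constants:
print(f"t1/2(OH)  = {math.log(2) / 0.14:.1f} h")   # ~5 h (reported as 4.8 h)
print(f"t1/2(3C*) = {math.log(2) / 0.73:.2f} h")   # ~0.95 h
```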
As GA is transformed, the mass of the aqSOA increases (Figures 1c and 1j). For both •OH- and 3 C*-mediated reactions, the aqSOA formation rate relative to the GA decay rate is similar initially, giving a relatively constant mass yield of ~ 90% until one GA half-life (t1/2, which is 4.8 h for the •OH reaction and 0.95 h for the 3 C* reaction). However, in the •OH reaction, the formation of aqSOA slows down after t1/2, resulting in a reduction in SOA yield to as low as 46% when ~ 95% of the initial GA has reacted (Fig. 1d). In contrast, in the 3 C* reaction, the aqSOA yield stabilizes in the range of 85-90% until GA has been completely consumed (Figures 1d and 1k). These results suggest that the aqSOA reacts with •OH to produce volatile products, which leads to mass loss and slower mass growth. In addition, the results suggest that, upon photodegradation, the •OH-aqSOA of GA has a higher tendency than the 3 C*-aqSOA to form volatile and semi-volatile compounds that evaporate from the condensed phase. This finding is confirmed by prolonged photoaging experiments, which are presented in Sections 3.2 and 3.4.
The chemical composition of GA aqSOA changes continuously during photoreaction.In both the •OH and 3 C* reactions, the O/C, OM/OC, and OSC of the aqSOA increase, while H/C slightly decreases until all the GA has been consumed (Figures 1e-g and l-n).The HR-AMS spectra of the aqSOA (Figures 2a and 2b) show that when ~95% of the initial GA has reacted (at 24 h for the •OH reaction and 3.5 h for the 3 C* reaction), the •OH-aqSOA (O/C = 0.64 and OSC = −0.10) is more oxidized than the 3 C*-aqSOA (O/C = 0.56 and OSC = −0.29).In addition, compared with the •OH-aqSOA, the 3 C*-aqSOA spectrum shows a significantly greater abundance of high m/z ions (Figures 2c-e), including the marker ions of GA oligomers (e.g., C18H19O5 + and C20H22O6 + at m/z 315 and 358, respectively) (Jiang et al., 2021), suggesting a higher production of oligomers with 3 C*.This observation aligns with the trend we observed previously in the aqueous-phase oxidations of phenol and methoxyphenols, where more oligomerization occurred in photoreactions initiated by 3 C*, while •OH reactions promoted the breakdown of aromatic rings and formation of smaller organic acids (Sun et al., 2010;Yu et al., 2014).The compositional differences between the •OH-aqSOA and 3 C*-aqSOA can be attributed to their different reaction mechanisms (as discussed in section 3.2).In the •OH experiment, the reaction can start either by •OH-addition to the aromatic ring to generate OH-adducts or by H-atom abstraction from the hydroxyl group to generate a phenoxy radical.The subsequent coupling of phenoxy radicals leads to the formation of oligomers (Kobayashi and Higashimura, 2003).According to previous studies on phenol oxidation in the gas phase (Atkinson, 1986;Olariu et al., 2002), it has been observed that at room temperature, only ~10% of the phenol + •OH reaction involves H-atom abstraction that leads to the formation of phenoxy radical, whereas ~90% of the •OH reaction proceeds through OH addition.Moreover, modeling studies have indicated that in both gas-phase and aqueous-phase •OH oxidation of phenols, the OH addition pathways exhibit considerably lower activation energy than the H-abstraction pathway (Kılıç et al., 2007).As a result, it is highly likely that the primary products of the •OH reaction with phenols are hydroxyphenols.On the other hand, the 3 C* reaction primarily proceeds through electron transfer and/or H-atom abstraction which produces a phenoxy radical (Anastasio et al., 1997;Canonica et al., 2000;Yu et al., 2014).The more pronounced production of phenoxy radicals in the 3 C* reaction can lead to more prominent oligomerization.Although the reactions of phenols with 3 C* also produce •OH radical, the amount generated is relatively small (Anastasio et al., 1997;Smith et al., 2014) and the •OH addition pathway in the 3 C* reaction is expected to be less important than in the •OH reaction.
Figures 3d and 3j (and Figures S3a and S4a) show the mass absorption coefficient spectra of the •OH-aqSOA and the 3 C*-aqSOA. Both aqSOAs are more light-absorbing than the parent GA (Figure S5), which is likely due to the formation of GA oligomers and functionalized products containing conjugated structures. Phenolic dimers and higher oligomers formed through the coupling of phenoxyl radicals, as well as monomeric phenol derivatives formed through •OH and carbonyl addition to the aromatic ring, are effective light absorbers (Jiang et al., 2021; Misovich et al., 2021; Yu et al., 2014). In addition, the 3 C*-aqSOA exhibits greater light absorption than the •OH-aqSOA for a similar extent of GA decay, reflecting the fact that the 3 C*-aqSOA is generally enriched with more high-molecular-weight conjugated species.
Aqueous-phase Reaction Pathways of Guaiacyl Acetone
Drawing from the results of this study, our previous research (Jiang et al., 2021), and the existing literature, we present in Schemes 1 and 2 proposed chemical mechanisms for the aqueous-phase reactions of GA with •OH and 3 C*. As a phenolic carbonyl, GA has two reactive sites, i.e., the phenol functional group and the carbonyl functional group. Scheme 1 outlines the main reaction pathways triggered by the phenol functional group of GA. In •OH-mediated reactions, •OH can either abstract an H atom from the hydroxyl group to form a phenoxyl radical or add to the aromatic ring to form a dihydroxycyclohexadienyl radical (Olariu et al., 2002). The phenoxyl radical can couple to produce dimers and higher oligomers (Sun et al., 2010), or react with HO2• to produce quinonic and hydroxylated products (D'Alessandro et al., 2000). The dihydroxycyclohexadienyl radical reacts with O2 to form a peroxyl radical, which can subsequently eliminate HO2• to produce hydroxylated products (Barzaghi and Herrmann, 2002). Furthermore, the peroxyl radical can undergo further O2 addition and cyclization to generate a bicyclic peroxyl radical, leading to the cleavage of the aromatic ring via C-C and O-O bond scission and producing fragmented products such as small carboxylic acids, aldehydes, and ketones (Dong et al., 2021; Suh et al., 2003).
However, according to our previous studies (Smith et al., 2014;Yu et al., 2014), the amount of •OH and 1 O2* generated in the reaction of phenols with 3 C* is small and they are expected to be minor oxidants compared to 3 C*.
Scheme 1. Postulated aqueous-phase reaction pathways triggered by the phenol functional group of GA.
Scheme 2 outlines the reaction pathways that can be triggered by the carbonyl functional group of GA. The α-position of the ketone group of GA can undergo H-atom abstraction by •OH or 3 C*, which generates alkyl radicals (Talukdar et al., 2003; Wagner and Park, 2017). The alkyl radicals can react with O2 to produce peroxyl radicals, which can further react to form dicarbonyls (Kamath et al., 2018). These dicarbonyls can then undergo photo-dissociation or hydration in the aqueous phase, forming diols and tetrols, which can further react to produce oligomers and functionalized products (Lim et al., 2013; Parandaman et al., 2018; Tan et al., 2010; Zhang et al., 2022). In our previous study (Jiang et al., 2021), we observed that the majority of the GA aqSOA products were formed through reactions triggered by the phenol functional group. The importance of the reactions initiated by the carbonyl functional group may need to be evaluated in future work.
Scheme 2. Postulated aqueous-phase reaction pathways triggered by the ketone functional group of GA.
As shown in the proposed reaction pathways, dissolved O2 plays an important role in the aqueous-phase reactions of GA and can influence the reactions in several ways. Firstly, the presence of O2 is essential for the formation of the peroxyl radical, which serves as a crucial intermediate in hydroxylation and ring-opening pathways. Therefore, a high O2 concentration in the aqueous phase can lead to enhanced hydroxylation and fragmentation, while suppressing oligomer formation from the phenoxyl radical (Dong et al., 2021). Additionally, in 3 C*-mediated reactions, the involvement of O2 can generate secondary ROS (e.g., 1 O2*, •OH, and O2•-/HO2•) that can act as potential oxidants for GA. For instance, 1 O2* reacts with phenols mainly through a 1,4-cycloaddition route to produce quinonic products (Al-Nu'airat et al., 2019; García, 1994), whereas •OH and O2•-/HO2• are important contributors to the hydroxylation and ring-cleavage of phenols. Therefore, the presence of O2 is expected to facilitate functionalization and ring-opening pathways while inhibiting oligomerization in 3 C*-initiated reactions.
Photo-transformation of AqSOA and Influence of Prolonged Photoaging on SOA Yield and Composition
After ~95% of the initial GA has reacted, the aqSOA was subjected to additional aging under different conditions: 1) aging in the dark; 2) continued illumination without the addition of extra oxidant; and 3) continued illumination with the addition of an oxidant (•OH or 3 C*). As shown in Figures 1c-g, 1j-n, and S6, the mass concentration, elemental ratios, and HR-AMS spectra of the aqSOA remain unchanged during dark aging, indicating negligible dark chemical reactions. In contrast, continued exposure to simulated sunlight results in a 46% reduction in the mass of 3 C*-aqSOA over about 10.5 hours of prolonged aging (i.e., 14 hours of irradiation in total). More than 60% of the •OH-aqSOA mass is degraded after 48 hours of extended photoaging (i.e., 72 hours of irradiation in total). These observations indicate that phenolic aqSOA is susceptible to photodegradation and that fragmentation reactions and evaporation of volatile products likely play important roles in the photoaging process.
The fitted pseudo-first-order decay rate constant (k) is 0.073 h-1 for the 3 C*-aqSOA and 0.017 h-1 for the •OH-aqSOA (Figures 1c, 1j, and 4). The faster decay of the 3 C*-aqSOA is likely due to the higher oxidant concentration in the 3 C* reaction during aqSOA aging. Here, we assume that the steady-state concentrations of •OH and 3 C* at the onset of the prolonged photoaging are approximately the same as in the initial solutions, and thus the [3 C*] in the 3 C* reaction is about 40 times higher than the [•OH] in the •OH reaction during aqSOA aging. This assumption is supported by the first-order decay behavior of GA and the relatively stable 3,4-DMB concentration during the aqSOA formation period. Additionally, •OH production from 3 C* becomes increasingly important during the prolonged photoaging (Anastasio et al., 1997), which may lead to an increased oxidant concentration in the 3 C* solution. Another possible reason for the faster decay of the 3 C*-aqSOA compared to the •OH-aqSOA may be related to its higher light absorptivity, which can contribute to faster direct photodegradation. However, it is important to note that the rate of photodegradation also depends on the quantum yield of photodegradation (i.e., the ratio of the number of compounds destroyed to the number of photons absorbed) (Smith et al., 2016).
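As an illustration of how a pseudo-first-order decay constant of this kind can be extracted, the sketch below fits m(t) = m0·exp(−kt) to a hypothetical aqSOA mass time series with scipy; the data values are invented for the example and are not the measurements reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical aqSOA mass time series during prolonged photoaging (t in hours).
t = np.array([0, 2, 4, 6, 8, 10.5])
mass = np.array([14.5, 12.5, 10.9, 9.4, 8.3, 6.9])   # arbitrary mass units

def first_order(t, m0, k):
    """m(t) = m0 * exp(-k t): pseudo-first-order loss of aqSOA mass."""
    return m0 * np.exp(-k * t)

(m0_fit, k_fit), cov = curve_fit(first_order, t, mass, p0=(mass[0], 0.05))
k_err = np.sqrt(np.diag(cov))[1]
print(f"k = {k_fit:.3f} +/- {k_err:.3f} h^-1")
```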
Figure 4. Pseudo-first-order decay rate constants for the loss of mass of •OH-aqSOA and 3 C*-aqSOA under different photoaging conditions.
As depicted in Figures 1l-n and S7, the chemical composition of 3 C*-aqSOA evolves continuously during photoaging, with the O/C ratio increasing from 0.59 to 0.77, consistent with previous research demonstrating that SOA becomes more oxidized during chemical aging (Kroll et al., 2015; Yu et al., 2016). In contrast, the O/C and H/C ratios of •OH-aqSOA exhibit negligible changes (O/C = 0.67±0.008 and H/C = 1.36±0.008) during prolonged photoaging (Figures 1e-g and S7), even though the mass of aqSOA decreases significantly. This can be explained by the simultaneous evaporation of highly oxidized volatile compounds and the transformation of less oxidized species into more oxidized, low-volatility products, thereby maintaining relatively constant bulk elemental ratios. Additionally, as shown in Figures 5, S8, and S9, both •OH-aqSOA and 3 C*-aqSOA show increasing fCHO2+ (mass fraction of CHO2 + in the total organic signal, a tracer of carboxylic acids) and decreasing fC2H3O+ (mass fraction of C2H3O + , a tracer of non-acid carbonyls) during prolonged photoaging, indicating the importance of acid formation in the aqSOA. Furthermore, the continuous increase of fCHO2+ indicates a more pronounced production of carboxylic acids from the 3 C*-aqSOA compared to the •OH-aqSOA. However, acid formation is comparably important during the initial formation of •OH-aqSOA and 3 C*-aqSOA.

To further elucidate the chemical evolution of the aqSOA, we performed PMF analysis on the combined AMS and UV-vis absorption spectral data and successfully resolved three distinct factors for both •OH-aqSOA and 3 C*-aqSOA, each with different temporal profiles, mass spectra, and absorption spectra that represent different generations of aqSOA products (Figures 6 and 7). The formation and decay rate constants of different generations of the aqSOA products were determined by performing exponential fits (y = a(1 - e^(-bx)) + c and y = ae^(-bx) + c, respectively) to the time trends of the aqSOA factors (Figures 6d, f, h and 7d, f, h). The fitted parameter b (in units of h-1) represents the first-order rate constant for aqSOA formation or decay in the photoreactor.
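The two fitting forms quoted above can be implemented directly; the sketch below fits the rise-to-plateau and exponential-decay expressions to hypothetical factor time series (all values are made up for illustration) to recover the rate constant b.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, b, c):
    """Rise-to-plateau form used for factor formation: y = a*(1 - exp(-b*t)) + c."""
    return a * (1.0 - np.exp(-b * t)) + c

def decay(t, a, b, c):
    """Exponential-decay form used for factor loss: y = a*exp(-b*t) + c."""
    return a * np.exp(-b * t) + c

# Hypothetical time series (hours) for a factor that builds up and then declines.
t_rise = np.array([0.0, 0.5, 1.0, 1.5, 2.5, 3.5])
y_rise = np.array([0.0, 2.1, 3.6, 4.5, 5.6, 6.1])
t_fall = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.5])
y_fall = np.array([6.1, 4.4, 3.3, 1.9, 0.9, 0.4])

p_rise, _ = curve_fit(growth, t_rise, y_rise, p0=(6.0, 1.0, 0.0))
p_fall, _ = curve_fit(decay, t_fall, y_fall, p0=(6.0, 0.3, 0.0))
print(f"formation b = {p_rise[1]:.2f} h^-1, decay b = {p_fall[1]:.2f} h^-1")
```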
The first-generation 3 C*-aqSOA, which is the least oxidized (O/C = 0.49 and H/C = 1.48), shows enhanced ion signals corresponding to GA oligomers, such as C18H19O5 + and C20H22O6 + (Figures 7a, 7j, 7k and S11e-f). These products grow rapidly and peak within the first hour of 3 C*-aqSOA formation, but they subsequently decrease and disappear completely when GA is consumed. The second-generation factor (O/C = 0.59 and H/C = 1.42), in which the oligomer tracer ions are substantially reduced, shows enhanced signals of ions associated with functionalized GA monomers, such as C9H7O3 + and C15H11O4 + (Figures 7b, 7j, 7k and S11d). The second-generation products build up gradually, peak after GA is consumed, and degrade more slowly than the 1st-generation 3 C*-aqSOA (k = 0.36 h-1 vs. 1.8 h-1) during prolonged aging (Figure 7d). The third-generation factor, which is enriched in small, highly oxygenated ions, builds up during aging and shows only a slight decrease towards the end of prolonged photoaging. These findings agree with our previous studies, which demonstrate that oligomerization and functionalization play a more significant role in the initial formation of phenolic aqSOA, while fragmentation and ring-opening reactions to produce more oxidized compounds become more important later (Jiang et al., 2021; Yu et al., 2016). Further, the observed decay of the 3rd-generation aqSOA indicates that prolonged aging leads to the formation of volatile compounds that evaporate from the condensed phase, resulting in mass loss of aqSOA. This implies that photochemical aging can remove aqSOA from the atmosphere, in addition to wet and dry deposition (Hodzic et al., 2016).
The mass spectral features of the •OH-aqSOA factors (Figures 6a-c) are generally similar to those of the 3 C*-aqSOA factors (Figures 7a-c). However, in the •OH-aqSOA, we observed that the 2nd-generation factor (O/C = 0.73 and H/C = 1.36) is the most oxidized and shows strong correlations not only with the tracer ions representing functionalized GA monomers (e.g., C9H7O3 + and C15H11O4 + ) but also with a group of small, oxygenated ions (e.g., CHO2 + , CH2O2 + , and C4H5O3 + ) that are enriched in the 3rd-generation 3 C*-aqSOA. One possible reason for the observed difference is that the •OH reaction tends to form highly oxidized products that degrade over long aging times, whereas the 3 C* reaction can generate highly oxidized SOA products that are more resistant to degradation. Another possible explanation for the observed difference in the evolution of 3 C*-aqSOA and •OH-aqSOA is that the highly oxidized species in the aqSOA exhibit different reactivity with 3 C* and •OH due to their electron availability (Walling and Gibian, 1965). In general, 3 C* is known to be less reactive with electron-poor compounds, whereas •OH can rapidly react with a wide range of organic compounds in the aerosol at diffusion-controlled rates (Herrmann et al., 2010). As a result, electron-poor products may persist in the 3 C*-aqSOA, while •OH has the capability to further oxidize these products, eventually transforming them into volatile species. This explanation is supported by the more significant decay of the 3rd-generation 3 C*-aqSOA when extra H2O2 is added during prolonged photoaging (Figures 7d and 7f). In addition, compared to the 3 C*-aqSOA, the •OH-aqSOA exhibits much lower production of 1st-generation products but higher production of 2nd-generation products (Figures 7d-e vs. Figures 6d-e), suggesting that oligomerization is more pronounced in the 3 C*-aqSOA, while functionalization plays a more important role in the •OH-aqSOA.
Evolution of AqSOA Optical Properties during Prolonged Aging
Figures 3, S3, and S4 illustrate the evolution of the light absorption properties of the aqSOA during formation and aging.
The aqSOA experiences photobleaching during prolonged aging, with the MAC365nm value of the •OH-aqSOA decreasing from 0.41 (the maximum) to 0.14 m2 g-1, and that of the 3 C*-aqSOA decreasing from 0.62 (the maximum) to 0.13 m2 g-1. The rates of sunlight absorption, both normalized and un-normalized by aqSOA mass, also decrease during prolonged aging. Figure 8 displays the absorption Ångström exponent (AAE) of the aqSOA as a function of log10(MAC405) and an optical-based classification of BrC (Saleh, 2020; Zhai et al., 2022). As a result of prolonged photoaging, the GA aqSOA shifts from being classified as weak BrC to very weak BrC. The changes in the light absorption properties of the GA aqSOA are also influenced by elevated oxidant concentrations (see Section 3.4 for further discussion). The 1st-generation aqSOA factor exhibits a hump in the MAC spectra between 340 and 400 nm, a feature observed previously in phenolic aqSOA (Smith et al., 2016) and attributed to the high conjugation present in oligomeric products. For both •OH- and 3 C*-mediated reactions, the intermediate, 2nd-generation aqSOA is the most light-absorbing compared to the fresher (i.e., 1st-generation) and more aged (i.e., 3rd-generation) aqSOA. Nevertheless, the 2nd-generation •OH-aqSOA shows relatively lower MAC values (MAC365nm = 0.47 m2 g-1) than the 2nd-generation 3 C*-aqSOA (MAC365nm = 0.89 m2 g-1). This difference could be attributed to the more pronounced oxidative ring-opening reactions that destroy conjugation in the •OH-aqSOA, resulting in the breakdown of chromophores. The 3rd-generation aqSOA factors are the least absorbing (MAC365nm = 0.070 m2 g-1 for the •OH-aqSOA and 0.018 m2 g-1 for the 3 C*-aqSOA), consistent with the dominance of fragmented and ring-opening products in prolonged aging.
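As a rough sketch of the optical metrics used here, the code below computes a mass absorption coefficient spectrum from a solution absorbance spectrum (assuming Beer-Lambert behaviour, a known path length, and an aqSOA mass concentration) and derives the AAE from a power-law fit. The absorbance spectrum and concentration are hypothetical placeholders, not the measured values.

```python
import numpy as np

# Hypothetical inputs: base-10 absorbance spectrum of the aqSOA solution,
# a 1 cm path length, and an aqSOA mass concentration.
wl = np.arange(300, 501, 5)                       # wavelength, nm
A10 = 0.02 * (wl / 300.0) ** -6.0                 # illustrative absorbance spectrum
path_m = 0.01                                     # cuvette path length, m
c_gm3 = 10.0                                      # aqSOA mass concentration, g m^-3

# Mass absorption coefficient (m^2 g^-1): convert base-10 absorbance to base-e.
mac = A10 * np.log(10) / (path_m * c_gm3)

# Absorption Angstrom exponent from a power-law fit over 300-500 nm.
aae = -np.polyfit(np.log(wl), np.log(mac), 1)[0]

mac405 = np.interp(405, wl, mac)
print(f"AAE = {aae:.2f}, log10(MAC405) = {np.log10(mac405):.2f}")
```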
Effects of Additional Oxidant Exposure on AqSOA Aging
To investigate the effect of the concentrations of condensed-phase oxidants on the photoaging of phenolic aqSOA, we added either 100 µM H2O2 or 5 µM 3,4-DMB into the solution after the majority (~95%) of GA had reacted. Since GA decay follows first-order kinetics, we assumed that the steady-state concentration of oxidants remained constant during initial aqSOA formation. By introducing additional H2O2 or 3,4-DMB, we increased the •OH or 3 C* concentration, as well as the overall oxidant concentration in the solution during the photoaging of the aqSOA.
As shown in Figures 1c and 4 and Table S3, compared to continued photoaging without the addition of extra oxidant (k = 0.017 h-1), the photodegradation rates of •OH-aqSOA are substantially faster when extra •OH or 3 C* is introduced (k = 0.11 and 0.057 h-1, respectively). Likewise, the addition of extra •OH or 3 C* results in more extensive mass loss of the •OH-aqSOA, with reductions of 88% or 79% of the aqSOA mass observed at the end of the photoaging, respectively. These levels of mass loss are significantly higher than without extra oxidant (i.e., 62%). These findings suggest that the presence of additional •OH or 3 C* accelerates the photochemical aging process and leads to increased formation of volatile and semi-volatile products that subsequently evaporate. As shown in Figures 6d, 6f, and 6h, the decay of the 1st- and 2nd-generation •OH-aqSOA is increased and, concurrently, the formation of the 3rd-generation factor is accelerated when extra oxidants are introduced, suggesting a faster transformation from the 1st to the 2nd to the 3rd generation. In addition, in the later stage of photoaging, we observed a more significant decay of the 3rd-generation •OH-aqSOA when extra oxidants are added, which suggests that higher concentrations of oxidants also facilitate the ultimate breakdown of the 3rd-generation •OH-aqSOA. The O/C and the average oxidation state of carbon (OSC) of •OH-aqSOA exhibit a slightly faster increase upon the addition of extra •OH or 3 C*, but eventually decrease more significantly by the end of the photoaging (Figures 1f and 1g), indicating accelerated formation of highly oxidized species and enhanced production of volatile compounds under high oxidant concentrations.
Furthermore, increased oxidant concentrations also have significant impacts on the photobleaching of •OH-aqSOA. Specifically, the MAC values of the aqSOA decrease faster when extra oxidants are added (Figures 3c-f). It is noteworthy that the addition of 100 µM of H2O2, the source of •OH, has a greater impact on the degradation of •OH-aqSOA mass and light absorption than the addition of 5 µM of 3,4-DMB, the source of 3 C*, despite the fact that the GA reaction with 3 C* is much faster than that with •OH in this study. This result may reflect the reactivity differences between GA and the aqSOA towards the oxidants, i.e., while both oxidants react quickly with GA, for the electron-poor aqSOA products 3 C* may be much less reactive, whereas •OH can oxidize them rapidly. Another possible interpretation is that the addition of 3,4-DMB into the solution during aqSOA aging produces unique low-volatility, light-absorbing products which cannot be generated in •OH reactions. These products may counteract some of the mass and absorption loss due to fragmentation and evaporation.
Elevated oxidant concentrations also affect the photoaging of 3 C*-aqSOA. Unlike •OH-aqSOA, the addition of •OH during aging only slightly accelerates the decrease of mass and light absorption of the 3 C*-aqSOA (Figures 1j-k and 3g-l). One possible explanation is that the added •OH only accounts for a small fraction of the total oxidant amount in the 3 C*-initiated reaction system, and thus shows little impact on the aqSOA aging compared to the preexisting oxidants (e.g., 3 C* and •OH generated from 3 C*). Another possible explanation is that the added •OH reacts with 3,4-DMB in the solution, resulting in a decrease in the amount of the 3 C* source. Additionally, the reaction of •OH with 3,4-DMB may also generate low-volatility products that balance out the increased decay of the aqSOA. This interpretation is consistent with the fast decay of 3,4-DMB after the addition of extra •OH (Figure 1i). However, when extra 3 C* (i.e., 3,4-DMB) is added to the 3 C*-aqSOA solution during extended aging, it slows down the decay of 3 C*-aqSOA mass and light absorption due to the enhanced formation of 2nd-generation products (Figure 7h vs. Figure 7d). This suggests that an increase in 3 C* concentration during aging promotes the formation of low-volatility functionalized products.
Conclusions
This study investigates the evolution of the composition and optical properties of phenolic aqSOA during prolonged photoaging, including the effects of increased oxidant concentrations. The aqSOA was generated by reacting GA with •OH or 3 C*. The 3 C*-aqSOA contains a greater abundance of oligomers and other high-molecular-weight conjugated products, whereas the •OH-aqSOA is more oxidized and enriched in small, highly oxygenated species. Consistent with their compositional differences, the 3 C*-aqSOA is more light-absorbing than the •OH-aqSOA.
The chemical composition of the aqSOA evolves during photoaging, with oligomerization and functionalization being the dominant mechanisms during initial aqSOA formation, while fragmentation and volatile product formation become more important during prolonged aging. This leads to a loss of 62-88% of the •OH-aqSOA mass after 48 hours of prolonged aging under simulated sunlight, with or without added oxidants, while the 3 C*-aqSOA experienced a loss of 25-54% of its mass after 10.5 hours of extended photoaging. These results indicate that aqueous-phase photochemical aging can significantly reduce atmospheric aqSOA, in addition to wet and dry deposition. In this study, the rate of loss of phenolic aqSOA during photochemical aging was found to be in the range of 0.017-0.11 h-1 (i.e., 5-30 × 10-6 s-1). The photochemical kinetics in our RPR-200 photoreactor system were ~7 times faster than those experienced under ambient winter solstice sunlight in Northern California (George et al., 2015). Consequently, these observations indicate a photochemical lifetime of 3-17 days for phenolic aqSOA under ambient conditions. The deposition loss rate constant of submicron particles in the atmosphere, assuming wet deposition is the dominant process, is approximately 2 × 10-6 s-1 (resulting in a lifetime of approximately 5 days) (Henry and Donahue, 2012; Molina et al., 2004). These findings suggest that the contribution of photochemical aging to the removal of phenolic aqSOA can be comparable to that of wet deposition.
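The lifetime comparison in this paragraph can be reproduced with simple arithmetic, as sketched below; the only inputs are the fitted rate constants, the stated ~7x photon-flux scaling, and the quoted deposition rate constant.

```python
import numpy as np

k_reactor = np.array([0.017, 0.11])       # h^-1, fitted aqSOA mass-loss rate constants
k_ambient = k_reactor / 7.0               # scale to ambient winter-solstice sunlight (~7x slower)
tau_photo_days = 1.0 / k_ambient / 24.0   # e-folding lifetime in days

k_dep = 2e-6                              # s^-1, deposition loss rate constant
tau_dep_days = 1.0 / k_dep / 86400.0

print(f"photochemical lifetime: {tau_photo_days.min():.0f}-{tau_photo_days.max():.0f} days")
print(f"deposition lifetime: {tau_dep_days:.1f} days")
```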
The average oxidation state of the 3 C*-aqSOA increases continuously during photoaging, while that of the •OH-aqSOA exhibits a slight decrease towards the end of photoaging when additional oxidants are introduced; this is likely due to the formation and evaporation of highly oxidized volatile products. This finding indicates that photoaging does not necessarily increase the average oxidation state of condensed-phase organics, as the evaporation of highly oxidized products may decrease the average O/C of aqSOA. As photoaging continues, photobleaching becomes more pronounced, causing the aqSOA to shift from weakly absorbing BrC to very weakly absorbing BrC. We also observed through PMF analysis that the second-generation aqSOA, enriched in functionalized phenolic compounds, is the most light-absorbing. This suggests that intermediately aged phenolic aqSOA is more light-absorbing than other generations, and that the light absorptivity of phenolic aqSOA is the result of a competition between BrC formation and photobleaching. Elevated oxidant concentrations during photoaging promote fragmentation reactions over oligomerization and functionalization reactions and can ultimately promote the breakdown and evaporation of the aqSOA products, resulting in a faster decline in aqSOA mass and light absorption.
Figure 1 .
Figure 1. Overview of aqSOA formation and aging in •OH- and 3 C*-initiated photoreactions of GA. Decay of (a & h) GA and (b & i) 3,4-DMB in the solution. Trends of aqSOA (c & j) mass concentration, (d & k) mass yield, and (e & l) H/C, (f & m) O/C and (g & n) OSC determined by HR-ToF-AMS. These measured values are also shown in Tables S1 and S2.
Figure 2 .
Figure 2. HR-AMS mass spectra of (a) •OH-aqSOA and (b) 3 C*-aqSOA after nearly all the initial GA has reacted. Scatter plots that compare the mass spectra of the •OH-aqSOA with the 3 C*-aqSOA for (c) all ions and (d) ions with m/z > 80. (e) Relative abundances of the GA-oligomer tracer ions and high-mass ions (m/z > 180) in the HR-AMS spectra of the aqSOA.
Figure 3 .
Figure 3. Evolution of the optical properties of •OH-aqSOA and 3 C*-aqSOA during the course of the photoreactions: (a & g) rate of sunlight absorption.
Figure 5 .
Figure 5. Plots of fCO2+ vs. fC2H3O+ and fCO2+ vs. fCHO2+ illustrating the evolution of the •OH-aqSOA and 3 C*-aqSOA during the formation and prolonged photoaging periods without extra oxidant addition. The solid black markers represent the period of aqSOA formation, while the colored markers represent prolonged aqSOA aging (i.e., after ~95% of the initial GA is consumed).
Figure 6 .
Figure 6. Characteristics of the three generations of the •OH-aqSOA products resolved by PMF: (a-c) MS profiles; (d, f, and h) mass concentration time series; and (e, g, and i) fractional contribution time series of the PMF factors. (j) Mass fraction of selected AMS tracer ions attributed to each PMF factor. (k) Correlation between PMF factors and selected AMS tracer ions.
Figure 7 .
Figure 7. Characteristics of the three generations of the 3 C*-aqSOA products resolved by PMF: (a-c) MS profiles; (d, f, and h) mass concentration time series; and (e, g, and i) fractional contribution time series of the PMF factors. (j) Mass fraction of selected AMS tracer ions attributed to each PMF factor. (k) Correlation between PMF factors and selected AMS tracer ions.
Figure 8 .
Figure 8. The light absorption properties of (a) •OH-aqSOA and (b) 3 C*-aqSOA shown in the AAE vs. log10(MAC405) space. The shaded areas in each plot represent very weakly, weakly, moderately, and strongly absorbing BrC based on the optical-based BrC classification scheme (Saleh, 2020; Zhai et al., 2022). The numbers 1, 2, and 3 represent the different generations of the •OH-aqSOA and the 3 C*-aqSOA products obtained from PMF.
Figure 9 presents the mass absorption coefficient spectra resolved by PMF for the three generations of GA aqSOA resulting from •OH and 3 C* reactions.In general, the 3 C*-aqSOA factors are more light-absorbing than the •OH-aqSOA factors, which is consistent with the higher abundance of oligomers and conjugated high molecular weight products in the 3 C*-aqSOA.
Figure 9 .
Figure 9. Mass absorption coefficient spectra of the three PMF-resolved generations of the •OH-aqSOA and the 3 C*-aqSOA, compared with previously reported MAC values of SOA produced from aromatic precursors by 1 Liu et al., 2016, 2 Lambe et al., 2013, 3 Yu et al., 2014 and 4 Smith et al., 2016.
Behavioural response to the Covid-19 pandemic in South Africa
Background: Given the economic and social divide that exists in South Africa, it is critical to manage the health response of its residents to the Covid-19 pandemic within the different socio-economic contexts that define the lived realities of individuals.

Objective: The objective of this study is to analyse Covid-19 preventive behaviour and the socio-economic drivers behind the health-response behaviour.

Data: The study employs data from waves 1 and 2 of South Africa's nationally representative National Income Dynamics Study (NIDS)—Coronavirus Rapid Mobile Survey (CRAM). The nationally representative panel data has a sample of 7073 individuals in Wave 1 and 5676 individuals in Wave 2.

Methods: The study uses bivariate statistics, concentration indices and multivariate estimation techniques, ranging from a probit, a control-function approach and the special-regressor method to seemingly unrelated regression, to account for endogeneity while identifying the drivers of the response behaviour.

Findings: The findings indicate enhanced behavioural responsiveness to Covid-19. Preventive behaviour is evolving over time; the use of face masks has overtaken handwashing as the most utilised preventive measure. Other measures, like social distancing, avoiding close contact, avoiding big groups and staying at home, have declined between the two periods of the study. There is increased risk perception with significant concentration among the higher income groups, the educated and older respondents. Our findings validate the health-belief model, with perceived risk, self-efficacy, perceived awareness and barriers to preventive strategy adoption identified as significant drivers of health-response behaviour. Measures such as social distancing, avoiding close contact, and the use of sanitisers are practised more by the rich and educated, but not by the low-income respondents.

Conclusion: The respondents from lower socio-economic backgrounds are associated with optimism bias and face barriers to the adoption of preventive strategies. This requires targeted policy attention in order to make response behaviour effective.
Introduction
The impact of the corona virus pandemic on the South African economy and the health of its residents is evolving in real time. South Africa went into a hard lockdown (Level 5) quite early in the pandemic (March 2020). During the hard lockdown, the residents were mostly confined to their homes, leaving very little need for proactive decision-making by individuals. The complete lockdown, however, was untenable, even with the fiscal relief measures put in place by the government [1]. The economic implications of the nationwide shutdown made it unsustainable, with increasing levels of hunger, poverty and unemployment among the vulnerable sections of society [2].
Since then, the government has reduced the levels of lockdown restrictions in phases to permit the economy to function once again. The government declared the move to lockdown Level 4 from 1 May, Level 3 from 1 June and, Level 2 from 18 August. Behavioural restrictions have been lifted in a calibrated manner commensurate with the lockdown level, albeit with precautionary messaging from the government. While regulations have been passed making the wearing of face masks essential, the limited capacity for monitoring implies that it is largely left to the individuals to comply with the regulation and other precautionary measures to prevent Covid-19 infection. This means that the responsibility of managing the pandemic through restrained behaviour has essentially shifted to the residents of the country. With no immediate prospects of eradicating Covid-19, non-pharmaceutical interventions remain the most effective defence against the pandemic [3].
Therefore, the control of the pandemic depends on the behavioural response of individuals. Findings from NIDS-CRAM Wave 1, however, suggest that high-impact behaviour changes are not happening fast enough in South Africa, even though a large percentage of the population reported some form of change in behaviour [4]. In a country as economically and socially divided as South Africa [5,6], it would be unrealistic to expect a uniform response from its residents. The purpose of this policy paper is to explore the role of socio-economic contexts in determining the health-behaviour response of individuals. By taking socio-economic inequality in behavioural responses into account, the findings are intended to give policymakers the perspective needed for more nuanced and effective policy formulation.
The study seeks answers to the following questions: 1. How has preventive behaviour evolved over the ongoing Covid-19 pandemic?
2. What are the key drivers behind the health-response behaviour?
The study uses bivariate statistics to identify significant differences across binary variables like sex, race, age and geographical location. Concentration indices are used to estimate the income, education and age-related inequalities in behavioural response. Lastly, multivariate regression analysis is used to identify the drivers of the response behaviour.
Literature review
The study adopts the Health Belief Model (HBM) as its analytical framework. The Health Belief Model [7,8] highlights the role of people's awareness, perceived risk, self-efficacy (the confidence and belief that pro-health action can yield desirable outcomes), and the feasibility of precautionary/preventive action in explaining individual health-response behaviour. In a country like South Africa that is torn apart by dual realities (it is one of the most unequal countries in the world, with an income-related Gini coefficient of over 0.63 (Kollamparambil 2020a)), it is important not to assume a common behavioural response from all sections of society. The HBM is therefore well suited to understanding these differential responses.
The available literature on individual protective-response behaviour to a pandemic prior to Covid-19 is largely within the context of the H1N1 and SARS viruses. [9] provide an effective review of some of the most pertinent studies that look into the demographic and attitudinal determinants of protective behaviours during a pandemic. The review identified 26 papers that met the study inclusion criteria. The studies were of variable quality and most lacked an explicit theoretical framework. With the exception of [10], others were cross-sectional in design, with little predictive power over time. The research shows that there are demographic differences in behaviour: being older, female and more educated, or non-White, is associated with a higher chance of adopting the behaviours. There is evidence that greater levels of perceived susceptibility to and perceived severity of the diseases and greater belief in the effectiveness of recommended behaviours to protect against the disease are important predictors of behaviour. There is also evidence that greater levels of state anxiety and greater trust in authorities are associated with preventive behaviour. The findings are, however, questionable considering that the endogeneity of variables measuring risk perceptions has seldom been acknowledged, let alone accounted for, in empirical analyses. This is despite the simultaneity of the relationship between risk perception and health behaviours being acknowledged in other contexts ([11] for HIV/AIDS and sexual behaviour, [12] for drug safety warnings and preventive use, among others). In this article, we argue that risk perceptions have to be treated as endogenous to preventive behaviour in order to produce accurate and reliable measures of an individual's valuation of the pandemic risk.
Literature on the behavioural response to Covid-19 is fast emerging [13-18]. The limitations highlighted in the context of the pandemic literature prior to Covid-19 hold true for the more recent studies as well. The studies are mostly cross-sectional descriptive statistics analyses within developed-country contexts. In the absence of a multivariate analysis, the studies do not account for correlations between the various factors and therefore confuse mediating variables with the real drivers of behaviour.
[19] is one of the earliest studies undertaken in the context of Poland, reporting the role of dark personality traits as drivers of behaviour during the pandemic. The study, however, did not account for socio-economic and demographic characteristics. Another key limitation is the sample size of the study, which is below 900 for both waves. Further, the two waves were conducted within a gap of two weeks, reducing its predictive power over time.
The relevance of the Health Belief Model in explaining behavioural change within the context of the Covid-19 pandemic is also established by [20]. However, the small sample and limited geographical coverage of the cross-sectional survey, conducted within the Kerala State of India, provide little external validity. Further, although the study undertakes a logistic regression, it does not account for endogeneity. Similar limitations exist with the other attempts in other country contexts ([21] for South Korea and [22] for the US).
The current study is one of the first on the Covid-19 pandemic that is based on nationally representative data within the context of an African country and contributes to the Health Belief literature by building in an estimation strategy that accounts for endogeneity in the model.
Summary statistics
The analysis utilises the first and second waves of the National Income Dynamics Survey (NIDS)-Coronavirus Rapid Mobile Survey (CRAM) [23,24]. The NIDS-CRAM survey is a special follow up with a subsample of adults from households in Wave 5 of the National Income Dynamics Study (NIDS) run by SALDRU [25]. The NIDS-CRAM has a smaller sample by comparison, covering complete questionnaire information for 7073 individuals in Wave 1 and 5676 individuals in Wave 2. The survey is designed to be nationally representative and remains the best available source of quantitative information on a national scale to assess the socio-economic impact of the corona virus pandemic in South Africa.
The NIDS-CRAM sample is drawn using a stratified sampling design with "batch sampling", by which the sampled individuals were sent to the fieldwork team in batches of 2500 individuals [26]. The response rate in NIDS-CRAM was approximately 40%. The sampling process incorporated a non-response adjustment by oversampling strata where strata response rates in the initial batches were low. A further 8% of the selected respondents were classified as a refusal; they were contacted but refused to be interviewed. The non-response adjustment is undertaken following [27], whereby the design weight is multiplied by the inverse of the conditional probability of being interviewed. Further, trimming is used to adjust the weights with weights below the 1st percentile of all weight values set to the 1st percentile and those weights above the 99th percentile set to the 99th percentile [27]. Lastly, the issue of panel attrition between waves 1 and 2 is addressed by [28]. Using probit regression models to predict the determinants of attrition, the study found attrition to be random across all model specifications.
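As a hedged illustration of the weighting logic described above (not the actual NIDS-CRAM weighting code), the sketch below adjusts a design weight by the inverse of a modelled response probability and then trims the adjusted weights at the 1st and 99th percentiles. All variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical frame: design weights plus a response indicator and a few covariates
# used to model the probability of being interviewed.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "design_weight": rng.uniform(0.5, 3.0, n),
    "responded": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "urban": rng.integers(0, 2, n),
})

# Model the response probability and adjust the design weight by its inverse.
X = sm.add_constant(df[["age", "urban"]])
p_resp = sm.Logit(df["responded"], X).fit(disp=0).predict(X)
df["adj_weight"] = df["design_weight"] / p_resp

# Trim: cap weights at the 1st and 99th percentiles.
lo, hi = df["adj_weight"].quantile([0.01, 0.99])
df["final_weight"] = df["adj_weight"].clip(lower=lo, upper=hi)
```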
The first wave of the NIDS-CRAM survey was conducted over the months of May and June 2020, and the second wave was administered in July and August 2020. Therefore, about 25% of the first wave was under lockdown Level 4 conditions, while 75% was under Level 3 conditions. The second wave has been conducted in its entirety over lockdown Level 3 conditions. Where possible, the analysis is structured to explore the evolving behaviour using both waves for comparison. However, this is limited by the availability of information in both the survey waves. For example, certain variables like the sources of information about Covid were collected only in the first wave and therefore the analysis relies exclusively on the first wave for it.
In order to minimise the pressure to respond in a socially desirable way, all survey participants were verbally informed that their identity would be kept confidential and all the collected information would be anonymised. Further, the respondents were made aware that participation in the study was voluntary, and that they could stop the interview at any time. Despite all of these measures, it is hard to assert that there was no strategic bias. The study therefore acknowledges the limitation that the analysis is based on self-reported data and therefore susceptible to hypothetical and strategic bias [29]. A further limitation to be highlighted is the high proportion of missing information on household income. The analysis relating to household income therefore is restricted to 3599 individuals in Wave 1 and 3569 individuals in Wave 2. Despite these challenges, the NIDS-CRAM survey remains the best available source of data to analyse the nation's response to the pandemic. The descriptive statistics of the key socio-economic variables in the sample (Table 1) indicate resonance with national statistics. Table 1 shows that the two samples are fairly representative of the national population (based on the 2011 census) in terms of the important demographic variables such as race, sex, and education. As the sample is restricted to adults, the average sample age is higher than that of the population. The reduction in average income observed in the sample is in line with the economic devastation that occurred during the pandemic.
Health-response behaviour
It is reassuring to see that there has been significant improvement in the percentage of individuals reporting adopting some form of behavioural change in response to the threat of corona virus. While 92% reported changing their behaviour in Wave 1, this rose to 99.7% in Wave 2 (Fig 1).
There are significant changes in the preventive measures used between the two waves ( Fig 2). While in Wave 1, handwashing was the predominant measure, this has changed to the use of face masks in Wave 2. It is clear the individuals are responding to public messaging; in the initial phases handwashing was emphasised over face mask use. Subsequently, the expert views available to the public changed in favour of face masks and this is reflected in the surge of face mask use from under 50% to over 70%. While this is heartening, it needs to be noted that a significant proportion is still not utilising face masks despite the regulation making it mandatory.
The other major shift observed in preventive behaviour is the reduction in the practice of physical distancing (Fig 2). With the opening up of the economy, it is noticeable that staying at home has reduced substantially from just under 50% to well below 40%. More concerning is that those reporting social distancing, avoiding close contact and avoiding big groups have reduced significantly. While this seems to be compensated for through the increased use of face masks and hand sanitisers, the reduction in physical distancing measures remains a concern.
The socio-economic inequality in the use of preventive measures is revealing. Awareness of the appropriate measure is a necessary precondition, but the barriers to adopting it within an individual's living and livelihood conditions might make it infeasible. Therefore, socio-economic context is expected to play a key role in the nature of preventive measures adopted by individuals. While in Wave 1 the use of face masks was concentrated among the economically affluent, the concentration index is not statistically significant in Wave 2 (Table 2). This is an encouraging sign that face-mask use has spread across income groups. However, physical distancing practices like social distancing and avoiding close contact remain pro-rich; in other words, these practices are significantly more concentrated among the rich than the poor. This highlights the question of the feasibility of these preventive measures for respondents who live in crowded households and neighbourhoods and have no alternative to public transport. The lifting of capacity restrictions on public taxis has particular bearing for individuals from the lower economic strata of society, exposing them to higher risks compared to those with private transport. Staying home as a preventive measure is seen to be concentrated among the poor but is not significant in Wave 2.
Education-related inequality is visible along the lines of the income-related inequality in the use of preventive measures ( Table 3). The results indicate that social distancing, avoiding close contact and the use of sanitisers are practised more among the educated. The use of face masks was also pro-educated in Wave 1 but has become insignificant in Wave 2, indicating its popularity cutting across education lines.
Age related concentration indices indicate that the concentration in behavioural change amongst the young has declined, but still remains significant (Table 4). While the use of a flu vaccine is revealed as a strategy concentrated amongst the older individuals, social distancing is evident as a measure concentrated amongst the younger individuals.
Risk perception
Individual risk perception is an important pillar in the Health Belief Model. The process of individual risk perception determination is seldom entirely rational; both cognitive and emotional assessments contribute to the formulation of risk perception. While cognitive skills, through the logical weighing of evidence and reasoning, contribute to risk perception formulation, emotional appraisals, through the use of intuition and imagination, play an equally important role [30]. The literature has highlighted the role of optimism bias (the tendency to believe that one's own risk is less than that of others) in reducing health-protective behaviour or increasing risk-taking [31]. It is therefore important to identify the high risk-taking category for targeted policymaking.
The risk perception information in the study was obtained through the 'yes' or 'no' response to the question 'Do you think you are likely to get the corona virus?'. As indicated earlier, just over 75% of responses in Wave 1 were captured under lockdown Level 3 and the rest under lockdown Level 4 and 100% of Wave 2 responses were sought in Level 3 conditions. Despite this, the findings show that there was a significant increase in risk perception in Wave 2 relative to Wave 1. While 33% of individuals reported a risk of infection, this increased to 50% in Wave 2 (Fig 3).
Emerging research on the vulnerability to Covid-19 infection has highlighted the role of covariates associated with poverty. These include the non-availability of a private mode of transport, lack of access to information, lack of hygiene facilities at home like water and sanitation, over-crowded households, and multi-generational households [32]. Therefore, considering the race-based poverty-rate differences in South Africa [6], it is surprising to note that the South African black population group perceives a significantly lower risk (48% in Wave 2) compared to non-blacks (60% in Wave 2). Even though there is a significant increase in the risk perception of black Africans in the second wave, the non-black risk perception has also increased and therefore the race gap remains significant in both waves. The gap between the black and non-black categories could be interpreted as the result of a dual bias: an over-assessment of risk by non-blacks (with lower poverty rates) and optimism bias on the part of the black African population (with a higher poverty rate). Together, both biases compound to create a significant gap in risk perception between the black and non-black population groups.
A comparison of the perceived risk of corona virus infection across the demographic categories indicate that there are no significant differences across sex but significant differences exist across race and geographical locations (Fig 3). Respondents based in rural locations reported significantly lower risk than those in urban areas. Both locations report increased risk perceptions in the second wave and the difference in the risk perceptions across the geographical divide remains significant. The lower risk perception in rural areas can be attributed to a cognitive assessment based on the lower density of populations in relation to urban areas and lesser interaction with the outside world, which lowers the probability of acquiring this "imported" virus.
An analysis of risk perception levels across the income quintiles is further revealing (Fig 4). There are significant differences in risk perceptions across income groups, with higher income quintiles having significantly higher risk perceptions compared to the lower income quintiles. Explaining this rationally is difficult considering that higher income groups are in a better position to adopt protective measures. However, over-exposure to information, especially from social media and informal sources, can contribute to heightened risk perceptions. Further, an emotional response not entirely grounded in reality can contribute to higher risk perceptions [30]. Contrary to this, there is a distinct optimism bias among the lower income quintiles, although this reduced from Wave 1 to Wave 2.
Further, we use the concentration index to quantify the level of concentration of risk perception along key continuous variables like income, education and age. The concentration index is a measure of socio-economic inequality based on the ranking of individuals by some measure of socio-economic status. This paper uses household per capita income, education and age as the ranking variables so that the risk perception levels of individuals can be compared across the levels of these variables [33]. Given that the risk perception variable is binary, we estimate the Erreygers-corrected concentration index [34]. The results in Table 5 indicate significant pro-rich, pro-education and pro-age concentrations of risk perception. This implies that risk perception is concentrated more among the richer, more educated and older respondents.
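A minimal sketch of an Erreygers-corrected concentration index for a binary outcome is given below, using a weighted fractional rank; the data are simulated and the variable names are placeholders, but the formula (eight times the weighted covariance between the outcome and the rank, for a 0/1 outcome) follows the standard Erreygers normalisation.

```python
import numpy as np

def erreygers_ci(y, rank_var, w=None):
    """Erreygers-corrected concentration index for a bounded (here binary) outcome y,
    with individuals ranked by a socio-economic variable (e.g., per-capita income)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    w = w / w.sum()

    order = np.argsort(rank_var)
    y, w = y[order], w[order]
    # Weighted fractional rank.
    r = np.cumsum(w) - 0.5 * w

    mu = np.sum(w * y)
    cov = np.sum(w * (y - mu) * (r - np.sum(w * r)))
    return 8.0 * cov   # = (4*mu / (y_max - y_min)) * (2*cov/mu) for y in {0, 1}

# Illustrative use: risk perception (0/1) ranked by household per-capita income.
rng = np.random.default_rng(1)
income = rng.lognormal(7, 1, 2000)
risk = (rng.random(2000) < 0.3 + 0.3 * (income > np.median(income))).astype(int)
print(round(erreygers_ci(risk, income), 3))   # positive => concentrated among the rich
```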
Although age is considered to be a factor in the severity of symptoms, hospitalisation and fatality, it is not clear that infection itself can be differentiated along age lines among adults. According to the Centers for Disease Control and Prevention (CDC), the age group 18-27 is reported to be at higher risk of infection [35]. Studies further indicate that Covid-19 transmission through young adults is higher compared to other age groups [36]. Therefore, the inference is that the youth suffer from an optimism bias compared to the elderly. It is of concern that the concentration levels are high and are increasing along income, education and age factors across waves. The highest concentration of risk perception is along income lines, highlighting the social and health implications of the income divide of the country. Given that the black and rural populations in South Africa have lower average incomes [5,6], the lower risk perception identified earlier among the black and rural populations could be driven by income as the confounding factor. Isolating the impact of various variables therefore calls for a multivariate analysis, which is undertaken in Section 4.
Self-efficacy
The belief that positive health outcomes can be achieved through personal action (self-efficacy) is an important motivation for individual good health behaviour [37]. Self-efficacy is measured in NIDS-CRAM data through the 'yes' or 'no' response to the question, 'Can you avoid getting the corona virus?'. Self-efficacy, unlike risk perception, has remained unchanged over the two waves, at 87% (Fig 5). Self-efficacy is significantly higher among the majority black African population (accounting for 82% of the country's population), compared to the minority non-black population. The rural population that accounts for one-third of the country's population accounts for higher self-efficacy compared to their urban counterparts.
Multivariate analysis of health-response behaviour
Although the above bivariate analysis gives interesting insights, it does not control for confounding factors driving relationships, causing possible misinterpretations. Multivariate analysis allows one to address this issue and is undertaken in two steps. First, the drivers of behavioural response to the pandemic are estimated within the context of the health belief model. Multiple estimation techniques are utilised to account for possible limitations of available techniques in the context of possible simultaneity between risk perception and preventive behaviour, discussed in Section 2. Second, in order to identify the specific nature of behavioural change, a seemingly unrelated regression model is estimated between the most prominent behavioural change strategies, viz., the use of face masks, handwashing, sanitisers, social distancing and staying at home.
Behavioural change
Given the binary outcome variable on behavioural change (taking the value 1 for those who changed behaviour in some manner or other and 0 for those who have not changed behaviour), a probit regression estimation is an appropriate starting point. However, considering the possible endogeneity arising through simultaneity between risk perception and behavioural change, we introduce the control function approach proposed by [38] using the ivprobit module in STATA. The control function approach is estimated with neighbourhood conditions as instruments. The Wald test fails to reject the null of exogeneity (p-value: 0.6753). However, this result is not sufficient to revert to the baseline probit model because, although the control function approach accounts for the endogenous regressor, it is not appropriate for non-linear models and when the endogenous regressor is discrete [39]. Therefore, we proceed to use the special regressor method [40,41] to counter any possible bias due to the binary nature of both our dependent variable and the endogenous regressor. In order to estimate the special regressor model, a special regressor (V) is required that satisfies the following conditions: a) it is continuously distributed and has a large support; b) it is exogenous; and c) it is conditionally independent of the model error term. We chose the age of the individual as V since it is continuously distributed with large support (varying from 17 years to 101 years). However, a limitation of the special regressor method is that the large support condition is not testable [42].
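The analysis itself was run in Stata (ivprobit and sspecialreg); purely as an illustration of the control-function logic, the Python sketch below regresses the endogenous risk-perception variable on the instruments in a first stage and then includes the first-stage residual in a probit of behavioural change, where a significant residual term signals endogeneity. Data, variable names, and coefficients are hypothetical, and a linear first stage for a binary endogenous regressor is only an approximation (which is why the special regressor method is used in the paper).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data with placeholder variable names; not the NIDS-CRAM data.
rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "educ_years": rng.integers(0, 16, n),
    "employed": rng.integers(0, 2, n),
    "nbhd_drinking": rng.integers(0, 4, n),   # instrument 1: neighbourhood non-adherence
    "nbhd_stay_home": rng.integers(0, 4, n),  # instrument 2: neighbourhood adherence
})
df["risk_perception"] = (rng.random(n) < 0.2 + 0.1 * df["nbhd_drinking"] / 3).astype(int)
df["behaviour_change"] = (rng.random(n) < 0.6 + 0.3 * df["risk_perception"]).astype(int)

# Stage 1: regress the (endogenous) risk perception on instruments + exogenous covariates.
Z = sm.add_constant(df[["nbhd_drinking", "nbhd_stay_home", "educ_years", "employed"]])
stage1 = sm.OLS(df["risk_perception"], Z).fit()
df["cf_resid"] = stage1.resid

# Stage 2: probit of behaviour change including the first-stage residual;
# a significant residual coefficient signals endogeneity of risk perception.
X = sm.add_constant(df[["risk_perception", "educ_years", "employed", "cf_resid"]])
probit = sm.Probit(df["behaviour_change"], X).fit(disp=0)
print(probit.summary().tables[1])
```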
The special regressor-based regression is estimated using the sspecialreg module in STATA [43]. The classical tests of instrument validity can be applied at the final two-stage least squares regression [44]. In our estimation, risk perception is instrumented with two neighbourhood variables that capture adherence to government regulations (the variables are based on the survey questions "How many people in your neighbourhood, if any, went out and drank alcohol with their friends during lockdown?" and "How many people in your neighbourhood stayed home and did not go out for social activities or to see their family?", with the response options provided as "None, A few people, About half, Most people"). The Wu-Hausman test rejects the null of exogeneity (p-value: 0.0012), indicating possible bias in the baseline estimations. The Sargan-Hansen test of over-identification (p-value: 0.6574) was performed at the last stage of the special regressor model estimation and confirmed the validity of the two instruments.
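For completeness, a Sargan-type over-identification statistic of the kind reported above can be computed by hand as n·R² from a regression of the 2SLS residuals on the full instrument set; the sketch below does this on simulated data with two instruments and one endogenous regressor (all values are illustrative, not the NIDS-CRAM estimates).

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Simulated data: two valid instruments (z1, z2) and one endogenous regressor x.
rng = np.random.default_rng(3)
n = 3000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)                                    # structural error
x = 0.5 * z1 + 0.4 * z2 + 0.6 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 + 0.8 * x + u + rng.normal(size=n)                # outcome

Z = sm.add_constant(np.column_stack([z1, z2]))            # instrument matrix
X = sm.add_constant(x)                                    # structural regressors

# 2SLS by hand: first-stage fitted values, then OLS of y on the fitted values.
x_hat = sm.OLS(x, Z).fit().fittedvalues
beta = sm.OLS(y, sm.add_constant(x_hat)).fit().params
resid_2sls = y - X @ beta                                 # residuals using the ORIGINAL x

# Sargan statistic: n * R^2 from regressing the 2SLS residuals on all instruments.
r2 = sm.OLS(resid_2sls, Z).fit().rsquared
sargan = n * r2
p_val = stats.chi2.sf(sargan, df=1)                       # df = #instruments - #endogenous
print(f"Sargan = {sargan:.2f}, p = {p_val:.3f}")
```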
The results indicate significant upward bias among the baseline estimations in relation to the special regressor regression result ( Table 6). The health belief model is validated with both risk perception as well as self-efficacy variables being positive and significant predictors of behavioural change across the three estimations. The former has a stronger effect on behavioural change. Perceived awareness is also found to be an important correlate, with reporting no source of reliable information being negatively associated with behavioural change. Household income is a positive correlate of behavioural change, as expected, indicating the feasibility and barriers to adoption of behavioural change are an important driver as set out in the health belief model.
In addition, there are significant socio-economic drivers behind the health-response behaviour. It is clear through all the models that education plays a significant role in driving behaviour. As expected, those employed are positively correlated with some form of behavioural change. The majority black African population are more likely to have made some behavioural change compared to the minority population groups. Similarly, males are significantly more likely to have changed their behaviour compared to females. It is clear from the findings that, in addition to the Health Belief Model, socio-economic drivers are relevant in moulding the response behaviour in South Africa.
Considering that both the survey waves included in this study were undertaken before the pandemic peaked in the country, the increasing trend observed in behavioural-change adoption is in line with the growing awareness and anxiety among the population.
Preventive behaviour
While the findings in the earlier section validate the Health Belief Model and broadly match that of the earlier studies undertaken in the context of the SARS pandemic [9], a deeper understanding of the socio-economic correlates of the type of behavioural change is warranted. Behavioural change in the form of preventive strategies, as discussed in earlier sections, have primarily taken the form of the use of face masks, handwashing, use of sanitisers, social distancing and staying home. We next look at the socio-economic drivers of preventive behaviour for each of these preventive strategies. A seemingly unrelated regression [45] is the appropriate estimation method, considering the correlation across the error terms of the various preventive strategy regressions.
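One possible way to set up such a system in Python is with the SUR estimator in the linearmodels package, as sketched below for two of the five outcome equations using simulated data and placeholder variable names; the paper's own estimation was done elsewhere, so this is only illustrative.

```python
import numpy as np
import pandas as pd
from linearmodels.system import SUR

# Hypothetical data: two binary preventive-behaviour outcomes modelled as linear
# probability equations with correlated errors (the paper estimates five such equations).
rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "educ_years": rng.integers(0, 16, n),
    "log_pc_income": rng.normal(7.5, 1.0, n),
})
common_shock = rng.normal(size=n)   # induces correlation across equation errors
df["face_mask"] = (0.5 + 0.02 * df["educ_years"] + 0.3 * common_shock
                   + rng.normal(size=n) > 0.8).astype(int)
df["social_dist"] = (0.2 + 0.05 * df["log_pc_income"] + 0.3 * common_shock
                     + rng.normal(size=n) > 0.8).astype(int)

exog = df[["female", "educ_years", "log_pc_income"]].assign(const=1.0)
equations = {
    "face_mask":   {"dependent": df["face_mask"],   "exog": exog},
    "social_dist": {"dependent": df["social_dist"], "exog": exog},
}
res = SUR(equations).fit()
print(res.summary)
```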
It is most interesting to see a clear shift in preventive behaviour strategy over time (Table 7). There is increased use of face masks and hand-sanitisers, while other preventive strategies mark a significant decline. This is in keeping with the gradual restarting of economic activity, making staying home and social distancing difficult. Also, with a better understanding of the airborne nature of the infection, the public messaging focus from government and media shifted from staying home, social distancing and handwashing to the use of sanitisers and face masks.
Although face-mask use has increased significantly across the population spectrum, its use is positively and significantly associated with being educated, being employed and perceiving access to some source of reliable information. Age has a non-linear relationship with the use of face masks, handwashing and social distancing strategies. The middle-aged appear to be the most compliant with these practices, perhaps because the working-age group is most exposed to the risk of infection.
Handwashing has declined over time and is practised as a strategy more by females, black Africans, those living in larger households, and with perceived access to reliable information. The use of hand sanitisers has increased over time, with females, urban residents and persons with higher socio-economic status more likely to use hand sanitisers. With increased mobility outside of homes, the use of hand sanitisers seems to have replaced handwashing as a more practical mode of hand hygiene.
Gender is emerging as a key driver of preventive strategies. Males are less likely than females to adopt the stay-home, hand-sanitiser and handwashing strategies. However, they are more likely than females to use the social distancing strategy. These findings are in line with a study on the practice of social distancing in the Egyptian context [46], in which being male, working, and living in urban communities contributed positively and with high statistical significance to the practice of social distancing. In addition, our study indicates that household income and race are also contributing factors to the practice of social distancing.
Education, across the board, is positively associated with the use of preventive strategies, except for handwashing. Neighbourhood effects are strongest for social distancing, with non-adherence to government regulations having a negative effect on the practice of preventive strategies.
This is significant for social distancing, given the public-good nature of the practice: it requires cooperation from other individuals and cannot be implemented unilaterally by an individual. As expected, with the easing of lockdown conditions and the reopening of the economy, the employed have not been able to adopt a staying-home strategy. Use of face masks and hand sanitisers is positively associated with being employed. Per capita household income has a negative association with the staying-home strategy. Other strategies, except face masks, are significantly associated with income. This finding highlights the barriers against the adoption of certain strategies. For example, social distancing is not feasible for those without private vehicles and for those who rely on public transport. Similarly, the cost of sanitisers is a barrier for the poor. Face masks are used across the income spectrum, and as such we do not see a significant association between income and face-mask use.
The perceptions on sources of reliable information also throw light on the chosen preventive strategies. Those who perceived news, community leaders and health workers as sources of reliable information were more likely to adopt face masks, handwashing and social distancing strategies. Those who reported no source of reliable information were more likely to be staying home.
Discussion and conclusion
This study undertakes an in-depth look into the socio-economic inequality of behavioural responses towards the coronavirus pandemic. Despite the high income inequality and poverty in the country, the enhanced behavioural response is comparable to that of developed countries [13]. There has been an increase in enhanced behavioural responsiveness, with 99% of respondents (as against 92% earlier) reporting some form of change in behaviour as a preventive measure against infection. Preventive behaviour is evolving over time; the use of face masks has overtaken handwashing as the most utilised preventive measure. Over 70% of respondents in June indicated the use of face masks, an increase from under 50% in April. Handwashing featured as the second most popular measure in June. While there was increased use of hand sanitisers and home cleaning as preventive measures against infection in June as compared to April, other measures such as social distancing, avoiding close contact, avoiding big groups and staying at home declined between the two periods. The increasing adoption of face masks and the decline in social distancing are in line with the trends observed in developed countries [16]. This underlines the need for public messaging to emphasise the complementary nature of these measures, given that no single measure is sufficient on its own.
Our findings align with [15], who found that older and more educated adults had higher risk perceptions compared to younger and less-educated adults. In addition, we find that risk perception is significantly concentrated among the higher income groups. Despite higher vulnerability, there is an optimism bias among black South Africans, lower-income, less-educated and younger age groups. As Covid-19 vulnerability is observed to be associated with multidimensional poverty [32], the perceived differences in risk appear to be the result of two possible biases: optimism bias among the less affluent (the more vulnerable category) and an overestimation of risk among the affluent (less vulnerable) sections. This points to the continued perception of Covid-19 as a "rich man's disease" [47][48][49]. The self-efficacy rate has remained unchanged, with 87% of respondents reporting that Covid-19 can be avoided in both survey periods.
The multivariate model that accounts for endogeneity validates the Health Belief Model. Perceived risk, self-efficacy, perceived awareness and barriers to the adoption of preventive strategies are significant drivers of health-response behaviour. Socio-economic factors also play an important role in this regard. It is clear that, with the opening up of the economy and the return of individuals to employment, it has become harder for individuals, especially in the lower income categories, to observe physical distancing. There is significant income- and education-related inequality between the types of preventive measures adopted. Measures such as social distancing, avoiding close contact, and the use of sanitisers are practised more by the rich and educated. The low-income respondents are not able to maintain physical distancing measures as the economy opens up. This highlights the question of the feasibility of these preventive measures for the poor, who live in crowded households and neighbourhoods and have no alternative to public transport. The lifting of capacity restrictions in public mini-bus taxis has particular bearing for individuals from the lower economic strata of society, exposing them to higher risks compared to those with private transport. It is recommended that the government reintroduce capacity restrictions in public transport to protect the vulnerable who do not have access to private transportation.
This study highlights the need to consider the individual motivation and impediment factors as additional drivers of behavioural response. The feasibility of adopting a certain preventive measure by an individual is contingent on their living and livelihood circumstances. The awareness campaigns and policy recommendations therefore have to talk to the lived realities of individuals in different circumstances. Practical interventions, like making sanitisers freely available in public spaces where people tend to congregate, making free face masks available to the poorest of the poor, among others, are recommended. Moreover, the optimism bias recorded in the literature [50] can lead people to risky behaviour because they falsely believe that they are less at risk of negative events than are other people. The study has identified the categories of individuals more prone to optimism bias to enable more targeted awareness creation.
|
v3-fos-license
|
2018-04-03T05:03:53.434Z
|
2006-11-16T00:00:00.000
|
25026677
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2006/842753.pdf",
"pdf_hash": "f817309dd6a7c0f7f2f0d5399bdb93566792f710",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:581",
"s2fieldsofstudy": [
"Education"
],
"sha1": "f817309dd6a7c0f7f2f0d5399bdb93566792f710",
"year": 2006
}
|
pes2o/s2orc
|
Implementation of the Tier 1 Program of the Project P.A.T.H.S.: Interim Evaluation Findings
To understand the implementation quality of the Tier 1 Program of the Project P.A.T.H.S., 25 schools and three school social service units were randomly selected to participate in telephone interviews regarding the quality of the implementation process of the Tier 1 Program of the P.A.T.H.S. Project. In the telephone interviews, the participants described the responses of the students and the workers to the program, the perceived benefits of the program, their assessment of the positive and negative features of the program, as well as difficulties involved in the implementation process. Results showed that most workers perceived that the students had positive responses to the program and half of the workers had positive experiences about the program, although negative comments on the program design and difficulties in the implementation were also recorded. Nearly all workers (97.1%) regarded the program to be beneficial to the students and most of them (78.6%) had positive global evaluation of the project. In short, while the program implementers expressed concerns about the program design and the implementation process, they generally regarded the program as helpful to the students and they had positive global evaluation of the program.
INTRODUCTION
There are many researchers who argue that the development of developmental assets in adolescents is helpful to their development [1,2]. In addition, there are views that emphasize the importance of holistic development in adolescents [3]. With reference to the Chinese culture, there is a strong emphasis on academic excellence in adolescents and the importance of holistic adolescent development is not taken seriously by Chinese parents [4]. Furthermore, there are research findings showing that adolescents in Hong Kong faced high levels of stress in different psychosocial domains [5]. Obviously, how to promote holistic development in Chinese adolescents and help them to cope with life stresses are important issues to be considered.
To promote holistic adolescent development, The Hong Kong Jockey Club Charities Trust has approved HK$400 million for a positive youth development program entitled "P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme". The word "P.A.T.H.S." denotes Positive Adolescent Training through Holistic Social Programmes. In the Tier 1 Program of the project, 15 positive youth development constructs identified from the existing successful positive youth development programs [6] are covered in the developed curriculum. To enable colleagues in the field to get familiar with the program, 52 schools were included in the Experimental Implementation Phase of the project in the academic year of 2005-06.
Obviously, it is important to ask whether the program objectives were actually achieved. To answer this question, several evaluation mechanisms were carried out to evaluate the project. First, objective outcome evaluation data utilizing a one-group pretest-posttest design was used [7]. Second, qualitative evaluation data were collected based on focus group interviews [8]. Third, process evaluation data based on systematic observations of the program implementation were collected [9]. Finally, the workers and participants were invited to complete subjective outcome evaluation forms at the end of the program using Form A and Form B developed by the researchers.
In view of the pioneering nature of the project, it is argued that process evaluation is of great importance. Hence, besides process evaluation in terms of systematic observations of the implementation of the program [9], an interim evaluation was carried out during the program implementation process to gain more understanding of the reactions of the participants and workers to the program. According to Meyer et al. [10], the development of a feedback loop from the participants and workers regarding the program implementation is important for program refinement. In this study, through process evaluation via telephone interviews, information in the following areas was collected: (1) workers' perceptions of the responses of the participants to the program, (2) experiences of the workers delivering the program, (3) perceived helpfulness of the program, (4) positive aspects of the program, (5) aspects of the program that require improvement, (6) difficulties encountered during program implementation, and (7) overall evaluation of the program. In this study, the general principles of qualitative research (e.g., use of open-ended questions without preset answers for some of the questions, consciousness of how biases might influence data interpretations, encouragement of the informants to freely narrate their views) were maintained.
METHODS
Participants
In the Experimental Implementation Phase in 2005-06, 52 schools joined the project. In each school, the school social work service was operated by a non-governmental organization (NGO). Among these schools, 29 adopted the 20h full program and 23 adopted the 10h core program. Among these participating schools, 15 schools joining the full program (i.e., 20h program), 10 schools joining the core program (i.e., 10h program), and three NGOs providing school social work services were randomly selected to join this study. For each selected school or NGO providing school social work services, the relevant contact persons were invited to participate in telephone interviews on a voluntary basis. The participants included 25 teachers and three social workers, involving 27 schools joining the project. The number of schools that participated in this research can be regarded as respectable, as more than half of the participating schools of the project joined the interviews. Moreover, because the schools were randomly selected, the generalizability of the findings can be enhanced. These justifications satisfy Principle 2 in the implementation of qualitative evaluation research proposed by Shek et al. [11].
Procedures
According to Shek et al. [11], the procedures of qualitative research should be clearly presented (Principle 3). As such, the procedures for data collection are systematically described. The telephone interviews were conducted between late March and mid-April 2006. As the Experimental Implementation Phase took place from January 2006 to August 2006, late March to early April 2006 can be regarded as the midway point of the implementation process. While telephone interviews have the problems of psychological distance and the inability to observe the nonverbal cues of the informants, their major advantage is efficiency in collecting the data. This advantage is important because many schools might reject the idea of completing questionnaires and participating in face-to-face interviews during term time. In addition, follow-up calls could be arranged if there was a need to clarify the responses of the informants.
A self-constructed, semi-structured interview guide with seven questions was used to collect information on the program implementation process. These questions were:
1. What are the responses of the students to this program?
2. What are the experiences of the workers when they implement the program?
3. Do you think this program is beneficial to the students? If yes, what are the benefits?
4. What are the good aspects of the program?
5. Which areas of the program require improvement?
6. Have you encountered any difficulties during the program implementation process? If yes, what problems have you encountered?
7. Overall, what is your evaluation of the program?
For questions 1, 2, and 7, besides inviting the informants to freely narrate their experiences, the informants were asked to indicate whether their experiences were "positive", "negative", or "neutral", and to give examples to illustrate their answers.
Informed consent was obtained from the informants and they participated in the study in a voluntary manner. The interviews were conducted by the research assistants of the research team who were registered social workers with substantial working experience. The telephone interviews were conducted in Cantonese and the responses of the informants were jotted down during the interviews. The handwritten notes were then transcribed and analyzed.
Data Analyses
The data were analyzed using general qualitative analysis techniques [12]. There were three steps in the process. First, the transcribed interview materials were coded to differentiate semantically different information and group semantically close information. Relevant raw codes were developed for words, phrases, and/or sentences that formed meaningful units. Second, the codes were further combined to reflect higher-order attributes, such as perceived benefit in promoting holistic development in students in different domains. Finally, except for Question 6, all responses were categorized in terms of whether they were positive responses, negative responses, neutral responses, or responses that could not be determined (i.e., "undecided" responses).
The present codes were developed after several preliminary analyses of the transcripts by the researchers. In order to ensure the reliability of the coding, both intra- and inter-rater reliability were calculated. The researcher and another research assistant, a registered social worker not affiliated with this project, recoded 20 randomly selected responses for each question (except Question 6) without knowing the original codes given at the end of the scoring process.
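For readers who wish to reproduce the reliability figures reported below, the short sketch that follows computes the simple percentage agreement used in this study, with Cohen's kappa included as an optional chance-corrected complement; the coded labels in the example are hypothetical, not the actual study data.

```python
# Hedged sketch: simple percentage agreement between the original codes and
# the recoded responses, plus Cohen's kappa as an optional chance-corrected
# complement. The code labels below are hypothetical examples only.
from collections import Counter


def percent_agreement(codes_a, codes_b):
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)


def cohens_kappa(codes_a, codes_b):
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical recoding of five responses ("pos", "neu", "neg"):
original = ["pos", "pos", "neg", "neu", "pos"]
recoded = ["pos", "neg", "neg", "neu", "pos"]
print(percent_agreement(original, recoded))          # 80.0
print(round(cohens_kappa(original, recoded), 2))     # approx. 0.69
```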
Shek et al. [11] highlighted the importance of researchers being aware of their own biases and preoccupations (Principle 4) and of addressing such biases in a proper perspective (Principle 5). In this study, as the researchers were the program developers, they might have been inclined to believe that the implemented program was worthy and beneficial to the participants, and to look for positive evidence or overlook negative evidence. Therefore, several steps were taken to guard against these biases. First, the researchers were conscious of their researcher role and conducted this study in a disciplined manner. Second, intra- and inter-rater reliability checks on the coding were carried out (Principle 6). Third, multiple research assistants were involved in the data collection process (Principle 7). Finally, the procedures involved in data collection and data interpretation were systematically documented (Principle 9).
Responses of the Students to the Program
As shown in Table 1, a total of 18 informants (64.3%) reported that the students had positive responses towards the program, while 10 informants (35.7%) reported that the students' responses towards the program were neutral. With reference to the raw narratives, 28 responses were regarded as positive responses, two responses were coded as neutral responses, and nine responses were regarded as negative responses. The percentage of intra-rater agreement was 95% and the inter-rater agreement percentage between the rater and another research assistant was 90%.
Summary of Table 1 (responses of the students to the program):
Overall positive evaluation: 18 (64.3%); related positive responses included:
• Interactive format is attractive to students (e.g., "liked the games in the units", "liked the interactive curriculum", "liked the activity-based curriculum", "liked the flexible teaching format")
• High student involvement (e.g., "active", "involved", "eager to join the program", "willing to participate")
• Welcomed by students
• Other responses (e.g., "relaxed", "happy", "playful", "helpful")
Overall neutral evaluation: 10 (35.7%); related neutral responses (N = 2):
• "The Tier 1 Program is partially welcomed by students"
• "The classes with disciplinary problems partially accept the program"
Overall negative evaluation: 0 (0%)
Workers' Experiences about the Program Implementation
As shown in Table 2, 14 informants (50%) reported having positive experiences about the program implementation, nine informants (32.1%) reported having neutral feelings (i.e., a mixture of both positive and negative feelings), and five informants (17.9%) reported having negative feelings during the program implementation. Among the raw descriptions, there were 21 responses coded as positive responses, nine responses coded as neutral responses, and 20 responses coded as negative responses. Both the intra- and inter-rater agreement percentages calculated were 100%.
Summary of Table 2 (workers' experiences of the program implementation):
Overall positive evaluation: 14 (50%); related positive responses (N = 21) included:
• Happy
• Believe the program is helpful to the students
• Appreciate the curriculum (e.g., "rich curriculum, including wide content, interesting and various teaching methods"; "comprehensive teaching materials")
• Other responses (e.g., "it is meaningful, and I like students' sharing and enjoy participating in the students' growing process"; "can have a deeper understanding on and more interaction with students"; "smooth implementation")
Overall neutral evaluation: 9 (32.1%); related neutral responses (N = 9) included:
• "It is worthwhile and happy, but the implementation process is tight and confused"
• "It is helpful to the students, but it is busy"
Overall negative evaluation: 5 (17.9%); related negative responses (N = 20) included:
• Time constraints (e.g., "time is tight for running too many activities in each unit", "time is pressing from the confirmation to implementation")
• Activity design (e.g., "have too many student worksheets")
• Administrative difficulties (e.g., "the original administrative arrangement is affected", "have difficulties in coordination")
• Feeling uneasy and busy (e.g., "feeling hard", "it is difficult to control class discipline", "being busy and confused", "being exhausted")
Total: 28 (100%)
Perceived Benefits of the Program to the Students
As shown in Table 3, there were 34 raw descriptions on the perceived benefits of the program to the students. Except for one response that was coded as "undecided" (2.9%), the rest of the responses were coded as positive responses. These included seven responses (20.6%) indicating that the program could facilitate the holistic development of the students; five responses (14.7%) indicating that the program could build up students' peer relationships; 10 responses (29.4%) indicating that the program could strengthen students' behavioral, social, and cognitive competence; three responses (8.8%) indicating that the program could enhance students' moral values; and eight responses (23.5%) indicating that there were other benefits. Both the intra- and inter-rater agreement percentages calculated were 100%.
Summary of Table 3 (perceived benefits of the program), example quotes:
• Holistic development of students, 7 (20.6%): "The design matches students' developmental needs"; "It has comprehensive content that students need to know"; "It facilitates personal development"; "It enables comprehensive discussion on students' growth issues"
• Other benefits, 8 (23.5%): "The materials are good and help students to learn"; "Teaching materials are systematic, and let teachers and students come across the content that has been neglected"
Total: 34 (100%)
Positive Aspects of the Program and Areas that Require Improvement
There were 36 raw descriptions regarding the good aspects in the project (Table 4). Except for one response (2.7%) that was coded "undecided", there were 19 responses (52.8%) indicating that the curriculum content was good, four responses (11.1%) indicating that the philosophy underlying the project was comprehensive, six responses (16.7%) indicating that activities were well designed, and six responses (16.7%) indicating that training for workers, evaluation on the project, and support to teachers were sufficient. On the other hand, there were 36 raw descriptions on the areas requiring improvement in the project (Table 5). Except for two responses (5.6%) that indicated "none", there were 18 responses (50%) indicating that the activity design in the curriculum needed to be modified, seven responses (19.4%) suggesting that sufficient time be allowed for implementation, and nine responses (25%) suggesting other areas requiring improvement. The intra- and inter-rater agreement percentages calculated were 95% and 85%, respectively.
Summary of Table 4 (positive aspects of the program), example quotes:
• Curriculum content, 19 (52.8%): "The content is rich"; "It has a clear content"; "The curriculum has a comprehensive coverage"; "It has ready-made teaching manuals"; "The instructions are detailed"; "It has sufficient resources"
• Philosophy, 4 (11.1%): "The philosophy behind the program is good"; "It has comprehensive rationales"; "The spirit of positive growth is agreeable"
• Activity design, 6 (16.7%): "Some activities are good"; "The topics match the students' needs"; "Most of the time, there is no definite answer in the unit, and thus providing space for students to reflect"
• Other responses, including training and evaluation, 6 (16.7%): "It is great that both teachers and social workers can have training"; "The training is good"; "Evaluation on the project is comprehensive"; "Support to teachers is very sufficient"
Total: 36 (100%)
Summary of Table 5 (areas requiring improvement; the first category, activity design, accounted for 18 responses (50%)), example quotes:
• "The aims of some units are too difficult for Secondary 1 students"
• "There are too many aims in each unit"
• "The number of themes can be reduced"
• "The content is very broad, and not in depth"
• "There is a need to modify the content to match students' needs"
• "The design of some units is similar"
• "There is insufficient flexibility in the program design"
• "There is a need to have various teaching aids to cater for different students' needs"
• "There are too many worksheets"
• "The Growth Puzzle is too big"
• "There is a need to have an English version of the curriculum"
• "It is suggested to distribute the curriculum earlier, so that the school and teachers can make appropriate adjustment"
• "It is suggested to have more sharing among schools in order to understand the implementation and effectiveness of the program in different schools"
• "There is a need to solve the manpower issues"
• "There is a need to reduce teachers' heavy workload"
Total: 36 (100%)
Difficulties Encountered During Program Implementation
As shown in Table 6, there were 44 raw descriptions on the difficulties encountered by the workers during program implementation. Among them, 13 responses (29.5%) were related to difficulties in teaching and coordination, 17 responses (38.6%) were related to difficulties in time management, seven responses (15.9%) were related to difficulties in manipulating the activity design, and seven responses (15.9%) were concerned about difficulties in handling students' responses. No intra- and inter-rater agreement percentages were calculated for the related responses.
Summary of Table 6 (difficulties encountered during program implementation), example quotes:
• Teaching and coordination, 13 (29.5%): "Not all the teachers are interested in running the activities"; "Social workers may not get acquainted with the students, and it is difficult to handle class discipline"; "The number of students per class is too high"; "Teacher's role is not clear"; "Division of labor is not clear"; "Teachers feel uneasy to collaborate with social workers"; "Teachers have to spend much time and energy to prepare the activities"; "There is insufficient flexibility in arranging manpower and time for the project"
• Time management, 17 (38.6%): "The implementation time of the project is short"; "There is a need to coordinate different parties in order to spare sufficient time to finish all the content in the curriculum"; "Time is limited for running units"; "It is difficult to control time"
• Activity design, 7 (15.9%): "It is too academically based, and there is a difficulty in matching the activity design"; "The curriculum design is not very smooth"; "Tier 1 Program lacks flexibility"; "Some activities are repeating"
• Students' responses, 7 (15.9%): "Students have low motivation to participate"; "There is low student involvement"; "It is difficult to handle poor class discipline"
Total: 44 (100%)
Global Evaluation of the Program
As shown in Table 7, the responses of 22 informants (78.6%) could be regarded as positive evaluation and responses of three informants (10.7%) were regarded as neutral evaluation. Besides, two responses (7.1%) were coded as negative evaluation and one response (3.6%) was regarded as "undecided". Among the raw descriptions, there were 25 responses coded as "positive", one response coded as "neutral", and two responses coded as "negative". Both the intra- and inter-rater agreement percentages calculated were 95%.
Summary of Table 7 (global evaluation of the program):
Overall positive evaluation: 22 (78.6%); related positive responses (N = 25) included:
• Helpful to students (e.g., "it is beneficial to students' growth", "it can enhance students' skills and competence")
• Resources for students (e.g., "there is extra resources for student activity", "the program has sufficient resources")
• Curriculum design (e.g., "it has a systematic curriculum design", "the topics covered in the curriculum are broad", "it meets students' needs", "the teaching package is comprehensive", "it provides progressive personal growth curricula for Secondary 1 to 3 students", "it has sufficient flexibility, and is easy to adapt")
• Other responses (e.g., "it has sufficient support to teachers", "it raises the sense of belonging in the class", "it enables teachers to discover students' qualities in other domains besides academic performance", "it has training for teachers and enables them to acquire more knowledge")
Overall neutral evaluation: 3 (10.7%); related neutral response (N = 1):
• "It is suggested to employ other teachers to teach students, so as to reduce in-service teachers' workload"
Overall negative evaluation: 2 (7.1%); related negative responses (N = 2):
• "The project is ambitious in term of time and teaching"
• "Teachers have done a lot, and it is doubtful whether the benefit will be proportional to the sacrifice"
Total: 28 (100%)
DISCUSSION
The primary purpose of this paper was to report interim evaluation findings on the implementation of the Tier 1 Program of the Project P.A.T.H.S. While the informants generally appreciated the program, they also identified problems in the implementation process and proposed suggestions for refinement. Taken as a whole, these findings reinforced the findings arising from other evaluation studies that the program was helpful to the students, and both the students and workers had positive perceptions of the program [7,8,9]. Regarding the negative experiences expressed by the workers and the problems they encountered during the process, there are four factors contributing to these observations. First, as implementation of positive youth development programs utilizing the curricula approach is relatively new in Hong Kong, teachers need time to adjust to a mode of teaching (e.g., using games and interactive teaching methods) that is different from the traditional didactic form of teaching. Second, as it is not common for social workers and teachers to use teaching manuals to deliver a program, they might find the program inflexible. Third, as the implementation period was rather short (January 2006 to August 2006), the expressed concern about time management difficulty is expected. Finally, as the program was conducted by social workers in collaboration with the teachers, they might need time to make the collaboration more streamlined.
Shek et al. [11] suggested that it is important to consider alternative explanations in the interpretation of qualitative evaluation findings (Principle 10). There are several possible alternative explanations for the present findings. First, the findings can be explained in terms of demand characteristics. However, this explanation is not likely because the informants were encouraged to voice their views without restriction and negative voices were in fact heard. In addition, as telephone interviews were conducted (i.e., no face-to-face interaction), the tendency for the informants to express negative views would be higher. The second alternative explanation is that the findings were due to selection bias. However, this argument is not strong as the schools and NGOs that provided school social work services were randomly selected. The third alternative explanation is that the positive findings were due to ideological biases of the researchers. As several safeguards were used to reduce biases in the data collection and analysis processes, this possibility is not high.
With reference to Principle 12 of Shek et al. [11], there are several limitations of the study. First, it should be noted that in order to gain efficiency, the price one has to pay in telephone interviews is the inability to collect in-depth information. Nevertheless, because schools are always reluctant to participate in in-depth studies during term time, telephone interviews can be regarded as a pragmatic way of collecting data. Second, although other mechanisms of evaluation have been carried out [7,8,9], the inclusion of other qualitative evaluation strategies, such as in-depth individual interviews, would be helpful to further understand the subjective experiences of the program participants. Finally, although the principles proposed by Shek et al. [11] were upheld in this study, peer checking and member checking (Principle 8) were not carried out in this study because of time and manpower constraints. Furthermore, the researchers were not able to construct "thick descriptions" based on telephone interview data. Despite these limitations, this study provides pioneering interim evaluation findings supporting the positive nature of the Project P.A.T.H.S. and its effectiveness in promoting holistic youth development among Chinese adolescents in Hong Kong.
|
v3-fos-license
|
2020-07-02T10:26:22.667Z
|
2020-06-30T00:00:00.000
|
220389308
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/17/13/4725/pdf",
"pdf_hash": "f180f878c2a847a9f5407b5a482153b332252d7e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:583",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "adc7977fb6eaef0b7d5d1523dc99f480ce453fda",
"year": 2020
}
|
pes2o/s2orc
|
Digital versus Conventional Impression Taking Focusing on Interdental Areas: A Clinical Trial
Due to the high prevalence of periodontitis, dentists have to face a larger group of patients with periodontally compromised dentitions (PCDs) characterized by pathologic tooth migration and malocclusion. Impression taking in these patients is challenging due to several undercuts and extensive interdental areas (IAs). The aim of this clinical trial was to analyze the ability of analog and digital impression techniques to display the IAs in PCDs. The upper and the lower jaws of 30 patients (n = 60, age: 48–87 years) were investigated with one conventional impression (CVI) using polyvinyl siloxane and four digital impressions with intraoral scanners (IOSs), namely True Definition (TRU), Primescan (PRI), CS 3600 (CAR), and TRIOS 3 (TIO). The gypsum models of the CVIs were digitalized using a laboratory scanner. Subsequently, the percentage of the displayed IAs in relation to the absolute IAs was calculated for the five impression techniques in a three-dimensional measuring software. Significant differences were observed among the impression techniques (except between PRI and CAR, p-value < 0.05). TRU displayed the highest percentage of IAs, followed by PRI, CAR, TIO, and CVI. The results indicated that the IOSs are superior to CVI regarding the ability to display the IAs in PCDs.
Introduction
Due to the high prevalence of periodontitis [1], dentists have to face a larger number of patients with periodontally compromised dentitions (PCDs) characterized by several undercuts and extensive interdental areas (IAs). The prevalence of moderate chronic periodontitis ranges between 20 and 42% in patients aged 40-60 years and increases up to 68% in patients aged above 65 years [2,3]. Severe periodontitis affects 10-37% of patients aged 40-60 years and 29-43% of patients aged above 65 years. Severe periodontitis can result in the loss of severely affected teeth due to periodontal destruction [2]. Frequently observed side effects of periodontitis include pathologic tooth migration and development of malocclusion, especially in the anterior jaws which is characterized by flaring and elongation of the teeth, development of diastemas, bite deepening, and crowding of the incisors [4][5][6].
A clinical example of a typical PCD is shown in Figure 1. After anti-inflammatory periodontal therapy, many patients seek prosthetic as well as orthodontic treatment. The interdisciplinary orthodontic treatment of older adults has been the fastest growing area in orthodontics in the last decade [7]. The main challenges for the orthodontic treatment of PCDs are the need for the application of light forces and anchorage control, the ability for excellent oral hygiene, and the esthetic demands of patients regarding nearly invisible appliances [5][6][7]. Therefore, aligner treatment may be the best solution for these requirements [8][9][10][11].
Due to the undercuts present in the extensive interdental areas (IAs) of periodontally affected patients, obtaining an accurate conventional impression (CVI) is challenging for prosthetic as well as for orthodontic demands. In terms of the CVIs, the logistics of manufacturing the prosthetic restorations as well as the orthodontic aligners require long-term storable and precise impression materials such as polyvinyl siloxanes or polyethers. Due to attachment loss, the elastomeric material flows into the undercuts of the extensive IAs and sets. As the elasticity of the material is lower than the required removal forces, tearing and distortion of the material may be observed during the removal of the impression.
Especially for orthodontic aligner treatment, excellent impressions are required for treatment planning as well as for aligner fabrication [12][13][14][15][16]. At the beginning of the treatment planning workflow, the planning software divides all teeth into segments automatically according to the underlying algorithms. If the impression fails to display the teeth and the IAs sufficiently, a clear distinction between the adjacent teeth cannot be extrapolated by the software algorithm. Therefore, closed IAs would result in inaccurate segmentation, leading to misshapen digital teeth, and the subsequent steps of treatment planning and aligner fabrication would be negatively influenced. As a consequence, aligner manufacturers reject impressions and scans of insufficient quality [16].
In the last few decades, all fields of dentistry have become increasingly digital. Particularly, due to the continuous development of intraoral scanners (IOSs), impression taking has changed from indirect digitalization of gypsum models using laboratory scanners to direct digitalization of intraoral situations using IOSs [17]. However, the requirements of the accuracy of full-arch scans After anti-inflammatory periodontal therapy, many patients seek prosthetic as well as orthodontic treatment. The interdisciplinary orthodontic treatment of older adults has been the fastest growing area in orthodontics in the last decade [7]. The main challenges for the orthodontic treatment of PCDs are the need for the application of light forces and anchorage control, the ability for excellent oral hygiene, and the esthetic demands of patients regarding nearly invisible appliances [5][6][7]. Therefore, aligner treatment may be the best solution for these requirements [8][9][10][11].
Due to the undercuts present in the extensive interdental areas (IAs) of periodontally affected patients, obtaining an accurate conventional impression (CVI) is challenging for prosthetic as well as for orthodontic demands. In terms of the CVIs, the logistics of manufacturing the prosthetic restorations as well as the orthodontic aligners require long-term storable and precise impression materials such as polyvinyl siloxanes or polyethers. Due to attachment loss, the elastomeric material flows into the undercuts of the extensive IAs and sets. As the elasticity of the material is lower than the required removal forces, tearing and distortion of the material may be observed during the removal of the impression.
Especially for orthodontic aligner treatment, excellent impressions are required for treatment planning as well as for aligner fabrication [12][13][14][15][16]. At the beginning of the treatment planning workflow, the planning software divides all teeth into segments automatically according to the underlying algorithms. If the impression fails to display the teeth and the IAs sufficiently, a clear distinction between the adjacent teeth cannot be extrapolated by the software algorithm. Therefore, closed IAs would result in inaccurate segmentation, leading to misshapen digital teeth and the proceeding steps of treatment planning and aligner fabrication would be negatively influenced. As a consequence, aligner manufacturers reject impressions and scans of insufficient quality [16].
In the last few decades, all fields of dentistry have become increasingly digital. Particularly, due to the continuous development of intraoral scanners (IOSs), impression taking has changed from indirect digitalization of gypsum models using laboratory scanners to direct digitalization of intraoral situations using IOSs [17]. However, the requirements of the accuracy of full-arch scans depend on the indication of the impression. In prosthodontics, discussions in the literature regarding the accuracy and precision of IOSs are controversial. Some authors described CVIs to be more accurate than digital ones [18][19][20], while others have shown equal or even superior accuracies for the IOS compared to the CVI technique [21][22][23][24][25][26][27][28][29]. For orthodontic purposes, most studies described that full-arch intraoral scans meet the requirements of digital orthodontic workflows [30][31][32][33][34]. However, only a few studies have been conducted in vivo [25,33,34].
Thus, we systematically analyzed the ability to display the IAs of a periodontally compromised test model in a former study by comparing conventional polyvinyl siloxane impressions with two intraoral scanning systems. Within the limitations of an in vitro study, it was concluded that IOS, especially the one using the active wavefront sampling technique, displayed the IAs significantly better than CVIs [29].
To date, most of the studies analyzing full-arch digital impressions have been in vitro studies [19][20][21][22][23][24][25][26][27][28][29]31,32]. To overcome the limitations of our in vitro study, the present clinical trial was conducted. This study aimed to compare the ability of one conventional and four digital impression techniques to reproduce the IAs of PCDs. The following null hypotheses were investigated: (a) there is no significant difference among the five impression techniques and (b) there are no significant differences in the dimensions of the IAs with respect to their reproduction in PCDs.
Materials and Methods
At the beginning of the clinical examination, the dimensions of each IA were classified according to the criteria described by Nordland and Tarnow [35] (Table 1 and Figure 3) [29]. Subsequently, four digital impressions were obtained for each jaw (Table 2). OptraGate (Ivoclar Vivadent, Ellwangen, Germany) was used to retract the cheeks and lips, and dry tips (Microbrush International, Grafton, USA) were placed in the oral cavity to absorb the saliva from the parotid gland. The teeth were gently air-dried. If a calibration device was provided by the manufacturer, it was used to calibrate the tip of the IOS before usage [36]. To ensure a standardized test protocol, the same scanning path was followed for all IOSs, beginning with the occlusal surfaces and finishing with the buccal surfaces [37]. Since the intraoral scanner True Definition (TRU) required a thin layer of titanium dioxide powder (High-Resolution Scanning Spray, 3M, batch no. NA 28789) for impression taking, digital impressions with Primescan (PRI), CS 3600 (CAR) and TRIOS 3 (TIO) were performed before the TRU digital impression. Data were exported as standard tessellation language (STL) datasets.
Table 1. Classification system by Nordland and Tarnow [35].
Class I: The tip of the interdental papilla lies between the interdental contact point and the most coronal extent of the cemento-enamel junction (CEJ) (space present but interproximal CEJ is not visible).
Class II: The tip of the interdental papilla lies at or apical to the interdental CEJ, but coronal to the apical extent of the facial CEJ (interproximal CEJ visible).
Class III: The tip of the interdental papilla lies level with or apical to the facial CEJ.
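To make the classification in Table 1 concrete, the small sketch below expresses it as a decision rule; the coordinate convention and the numeric example are assumptions for illustration only.

```python
# Hedged sketch: the Nordland and Tarnow classification of Table 1 written as
# a decision rule. Inputs are apico-coronal positions in millimetres (larger
# values = more coronal); the coordinate convention and numbers are
# illustrative assumptions, not study data.
def classify_interdental_area(papilla_tip, interdental_cej, facial_cej):
    if papilla_tip > interdental_cej:
        return "Class I"    # tip coronal to the interproximal CEJ
    if papilla_tip > facial_cej:
        return "Class II"   # tip at/apical to interdental CEJ, coronal to facial CEJ
    return "Class III"      # tip level with or apical to the facial CEJ


# Hypothetical example: a papilla tip 1 mm coronal to the interdental CEJ.
print(classify_interdental_area(papilla_tip=8.0, interdental_cej=7.0, facial_cej=5.0))  # Class I
```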
For CVIs, dry tips and cheek retractors were removed. A customized metal tray (Ehricke stainless steel, Orbis Dental, Germany) was selected for each jaw and a thin layer of tray adhesive (Universal adhesive, batch no. K010052, Kulzer, Hanau, Germany) was applied. Polyvinyl siloxane impression material (EXA'lence Putty: batch no. 1808131 and Light Body Regular: batch no. 1901301, GC, Tokyo, Japan) was used according to the single-step putty-wash technique. Before preparing the model with type IV dental stone (Fujirock EP, batch no. 1810031, GC Europe, Leuven, Belgium), impressions were disinfected for 5 min and stored for at least 2 h. Before model casting, CVIs were checked for torn impression material under gentle air drying. If exact repositioning was possible (e.g., an interproximal area torn only in the center), the material was repositioned; otherwise the torn material was removed. For a standardized procedure, CVIs were poured only once. Before the evaluation, all gypsum models were digitalized with a calibrated high-precision laboratory scanner (ATOS Core, GOM, Braunschweig, Germany) [38].
For the analysis of the impressions, STL datasets of the four digital impressions and the CVIs were imported to a computer-assisted design software (GOM Inspect 3D, version V8 SR1 2018, GOM, Braunschweig, Germany) and superimposed using a best-fit alignment to ensure the same measurement points for each IA. For standardized measurements, three planes (a, b, c) were constructed for each IA (Figure 4). The interdental contact point was determined 3 mm below the occlusal plane, and the absolute IA was defined as the area between the cemento-enamel junction and the interdental contact point. Thus, the absolute IA was identical for the analysis of the different impression techniques. The percentage of the displayed IA in relation to the absolute IA was calculated. This procedure was conducted for each IA.
Statistical analysis was performed using SPSS Statistics (version 25, IBM Corp., Armonk, NY, USA). The median test was applied, since the data revealed several 0 and 100 values with partial statistical outliers. Finally, p-values were corrected using the Bonferroni method due to the risk of alpha-error accumulation. The level of significance was set at p-value < 0.05.
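Although the analysis was run in SPSS, the same median test with Bonferroni correction can be reproduced with open-source tools; the sketch below illustrates the procedure on hypothetical displayed-IA percentages for the five impression techniques.

```python
# Hedged sketch: Mood's median test with Bonferroni correction, analogous to
# the SPSS analysis described above. The displayed-IA percentages below are
# random placeholders, not the measured study data.
from itertools import combinations
import numpy as np
from scipy.stats import median_test

rng = np.random.default_rng(0)
groups = {  # displayed IA as a percentage of the absolute IA, one value per IA
    "TRU": rng.uniform(40, 100, 60),
    "PRI": rng.uniform(30, 100, 60),
    "CAR": rng.uniform(30, 100, 60),
    "TIO": rng.uniform(20, 100, 60),
    "CVI": rng.uniform(0, 90, 60),
}

pairs = list(combinations(groups, 2))
n_comparisons = len(pairs)            # Bonferroni: multiply p by the number of tests
for a, b in pairs:
    stat, p, grand_median, table = median_test(groups[a], groups[b])
    p_bonferroni = min(1.0, p * n_comparisons)
    print(f"{a} vs {b}: p = {p:.4f}, Bonferroni-corrected p = {p_bonferroni:.4f}")
```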
Results
Altogether, 545 IAs were analyzed. According to the classification by Nordland and Tarnow [35], the following distribution was displayed: 325 class I IAs, 177 class II IAs, and 43 class III IAs.
The results showed significant differences among different impression techniques (except between PRI and CAR) regardless of the classification (p-value < 0.05, Figure 5). Therefore, the first part (a) of the null hypothesis was partially rejected. TRU displayed the highest percentage of IAs, followed by the other digital impression techniques (PRI, CAR, and TIO). CVI showed the lowest percentage of displayed IAs.
Furthermore, different results were observed for anterior and posterior IAs as well as for the different classifications (Figure 6, Tables 3 and 4). Particularly, digital impression techniques showed a higher percentage of displayed IAs in the anterior region. Regardless of the impression technique, a tendency toward a higher percentage of displayed IAs was observed for class III IAs when compared with class II and class I IAs. Thus, the second part (b) of the null hypothesis was also partially rejected.
Figure 6. The 95% confidence interval of the displayed interdental area [%] for the different impression techniques and classification by Nordland and Tarnow [35], shown separately for anterior and posterior IAs.
Discussion
The age distribution of patients included in the present study was representative of patients affected by periodontal disease. Additionally, all patients were undergoing SPT with no clinical signs of inflammation and with regular intervals of examination and professional tooth cleaning. Thus, they exhibited PCDs and were seeking further restorative, prosthetic, or orthodontic treatment for comprehensive dental rehabilitation. The classes (I-III) according to Nordland and Tarnow [35] were unequally distributed. Class III IAs were especially underrepresented (43 IAs) when compared with classes I and II. This finding might be explained by routine evaluations and an early onset of periodontal diseases in patients. Class III IAs reflect the most severely affected teeth. Additionally, patients must be able to clean their IAs sufficiently. Therefore, some teeth might have been extracted before the study.
For comparable and standardized measurements, CVIs were indirectly digitalized with a high-precision laboratory scanner [27,39]. The additional step of creating gypsum models and the subsequent indirect digitalization might have led to additional underestimation of the IAs in the CVI group. This can be considered a limitation of the present study. To overcome this, direct digitalization of the CVI would have been beneficial. Nevertheless, the manufacturing of gypsum models from the CVI represents the analog workflow used in dentistry to date. Therefore, the methodology selected in this study points out the differences between conventional and digital impression workflows.
To date, there have been no clinical studies on the accuracy of the scanning path. However, in vitro studies have shown a significant impact of different paths on the accuracy of full-arch scans [37,40]. For a better comparison of different IOS systems, a single recommended scanning path was applied to all IOS systems [37]. Studies have shown that the software version and the calibration of the scanning handpiece can influence the quality of the impressions [36,41]. In the present study, calibration was carried out according to the manufacturer's specifications. Moreover, the calibration was renewed for each patient. According to the manufacturers, TRU and CAR do not need to be calibrated. To avoid the influence of different software versions, no software updates were performed during the study.
The results of this clinical trial showed a lower percentage of displayed IAs for TRU and TIO when compared with the results of a previous laboratory study wherein the same TRU and TIO hardware were used [29]. This difference can be explained by the presence of clinically influencing factors such as saliva, patient movement, and lack of space due to anatomical limitations [38,42-44].
In contrast to a test model, each PCD shows different undercuts, angulations, and distributions of teeth, which might explain the difference in the results between in vitro and in vivo studies. Despite the statistical superiority of the IOS, especially the TRU scanning system, in the present study, the mean values of displayed IAs ( Figure 5) were clinically not satisfactory, which must be kept in mind during routine clinical use. The differences when compared with the in vitro results involving PCD [29] underline the necessity of in vivo clinical trials.
Regardless of the different classes, a higher percentage of displayed IAs was observed in the anterior area than in the posterior area. The anterior area is much more accessible and soft tissues such as tongue, lips, and cheeks are easier to retract. In addition, the salivary flow can be controlled more easily in the anterior areas. Furthermore, a greater angulation of even large scanner handpieces is possible in the anterior areas. This allows easier scanning of the IAs. The anatomy of the anterior teeth, which are narrow and long, often results in a more delicate contact point with a smaller proximal surface. Furthermore, the anterior IAs are not as deep as the posterior IAs due to the oro-vestibular extension of the teeth and the alveolar process. In contrast, the space available in the posterior areas is much more limited. A restricted mouth opening makes accessibility even more difficult. Angulation of the scanner handpiece is possible only to a limited extent, especially in the buccal area. The angle of the light emitted by the optical systems is limited by the angulation of the handpiece. Therefore, undercuts are more difficult or sometimes impossible to record in the focal plane.
A tendency toward greater percentages of displayed IAs in higher classes of Nordland and Tarnow [35] was observed for IOSs. Although the undercuts in class III IAs are deeper than those in class I or II IAs, the neighboring tooth surfaces that have to be captured are at a greater distance from each other. Therefore, the effect of interpolation by the scanning software might have been reduced in larger IAs.
Even though all IOSs displayed a higher percentage of IAs compared to CVI, the results among the IOS groups differed. TIO displayed the lowest percentage of IAs compared to all other IOSs. This finding might be explained by the measuring principle of confocal microscopy. The beam path deflects the reflected light toward the sensor, but the pinhole diaphragm passes only the reflected beams from the object within the focal plane. Moreover, the angulation of the relatively large handpiece is limited, especially in the posterior areas. Due to the measuring principle of the IOS, a larger angulation is necessary to display undercuts, since the beam path is not as straight as the beam path in the other systems.
TRU displayed the highest percentage of IAs using the measuring principle of active wavefront sampling. With this method, areas outside the focal plane can also be captured. However, coating teeth with titanium dioxide powder is required. It creates a higher quality reference pattern [45] and ensures uniform light scattering [46,47]. Ender et al. [45] assumed that coating of the tooth surfaces leads to an improvement in the image, especially at large angulations. Thus, it cannot be ruled out that the superiority of TRU in the present study is attributable to the use of powder rather than to differences in the measuring principle. Nevertheless, in vivo application of a thin uniform powder layer presents difficulties. Moreover, renewal of the coating can lead to inaccuracies and misrepresentations [48,49].
CAR uses the principle of active triangulation with strip-light projection. The elevation profile is captured by a distortion of lines, which makes it difficult to capture a narrow IA. PRI uses a new measuring principle, namely the optical high-frequency contrast analysis, which combines confocal microscopy with strip-light projection [50].
In addition to the different measuring principles, the computing algorithm of stitching single images of the scanning process into a three-dimensional image might influence the scanning performance. As long as the IOS manufacturers do not provide any information about the algorithm, its relevance can only be hypothesized.
Conclusions
In PCD, IOSs, and especially TRU, which is based on active wavefront sampling technology, can display a higher percentage of IAs than CVI. The IAs in the anterior area of the jaws are better displayed by the IOSs than the IAs in the posterior area. Additionally, a higher percentage of displayed IAs by IOS was observed for class III IAs according to Nordland and Tarnow [35]. Thus, the use of IOSs in PCDs can be recommended over CVI if displaying the IAs is required. Nevertheless, the correct display of the IAs remains a challenge even in digital impressions.
|
v3-fos-license
|
2018-04-03T01:51:32.962Z
|
2016-11-02T00:00:00.000
|
31238699
|
{
"extfieldsofstudy": [
"Geology",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/8/11/910/pdf",
"pdf_hash": "7695258379fb5a89ca33fdaca3fc5ec51d744909",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:584",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "2d8ce201840d4ddf8cbac9b14e9253be1a034867",
"year": 2016
}
|
pes2o/s2orc
|
Vegetation Responses to Climate Variability in the Northern Arid to Sub-Humid Zones of Sub-Saharan Africa
In water limited environments precipitation is often considered the key factor influencing vegetation growth and rates of development. However, other climate variables, including temperature, humidity, and the frequency and intensity of precipitation events, are also known to affect productivity, either directly by changing photosynthesis and transpiration rates or indirectly by influencing water availability and plant physiology. The aim here is to quantify the spatiotemporal patterns of vegetation responses to precipitation and to additional, relevant, meteorological variables. First, an empirical, statistical analysis of the relationship between precipitation and the additional meteorological variables and a proxy of vegetation productivity (the Normalized Difference Vegetation Index, NDVI) is reported and, second, a process-oriented modeling approach to explore the hydrologic and biophysical mechanisms to which the significant empirical relationships might be attributed. The analysis was conducted in Sub-Saharan Africa, between 5 and 18°N, for a 25-year period (1982-2006), and used a new quasi-daily Advanced Very High Resolution Radiometer (AVHRR) dataset. The results suggest that vegetation, particularly in the wetter areas, does not always respond directly and proportionately to precipitation variation, either because of the non-linearity of soil moisture recharge in response to increases in precipitation, or because variations in temperature and humidity attenuate the vegetation responses to changes in water availability. We also find that productivity, independent of changes in total precipitation, is responsive to intra-annual precipitation variation. A significant consequence is that the degree of correlation of all the meteorological variables with productivity varies geographically, so no one formulation is adequate for the entire region. Put together, these results demonstrate that vegetation responses to meteorological variation are more complex than an equilibrium relationship between precipitation and productivity. In addition to their intrinsic interest, the findings have important implications for detection of anthropogenic dryland degradation (desertification), for which the effects of natural fluctuations in meteorological variables must be controlled in order to reveal non-meteorological, including anthropogenic, degradation.
Introduction
The effect of climate variation on vegetation productivity has been studied in many drylands [1-3] and elsewhere [4-6]. More recently, interest has intensified as global circulation models project an increase in inter-annual precipitation variation, higher temperatures, and an intensified precipitation regime.
AVHRR Long Term Data Record (LTDR) data for the years 1982 to 2006 were used in this study (http://ltdr.nascom.nasa.gov). The LTDR data processing stream creates a daily reflectance product using a geographic projection at a spatial resolution of 0.05°. The sequence of data included observations from AVHRR sensors onboard NOAA satellites 7, 9, 11 and 14. LTDR data processing includes a vicarious sensor calibration of the red (0.58-0.68 µm) and near infrared (NIR, 0.725-1.10 µm) channels using cloud/ocean techniques to minimize variations caused by changes in sensors and sensor drift [67,68]. LTDR processing also includes an improved atmospheric correction scheme to reduce the effects of Rayleigh scattering, ozone, and water vapor but does not include corrections for the effects of aerosols [66]. It should be noted that the known random errors in the LTDR data do not affect the conclusions of the present study [69-71].
For the present study, prior to the calculation of NDVI values, the LTDR reflectances in the red and NIR were normalized to a standard sun-target-sensor geometry and cloud-contaminated observations were replaced with reconstructed values interpolated from preceding and succeeding clear-sky observations. Daily NDVI values were subsequently calculated (NDVI = (NIR − red)/(NIR + red)).
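A minimal sketch of this per-pixel NDVI calculation and clear-sky gap-filling is given below; the reflectance arrays, the cloud mask, and the use of simple linear interpolation between clear-sky days are illustrative assumptions, not the exact LTDR processing chain.

# Sketch: daily NDVI for one pixel, with cloudy days replaced by values
# interpolated from neighbouring clear-sky observations.
import numpy as np

def daily_ndvi(red, nir, cloud_mask):
    """red, nir: 1-D arrays of daily surface reflectance for one pixel.
    cloud_mask: boolean array, True where the observation is cloud-contaminated."""
    ndvi = (nir - red) / (nir + red)
    days = np.arange(ndvi.size)
    clear = ~cloud_mask
    ndvi[cloud_mask] = np.interp(days[cloud_mask], days[clear], ndvi[clear])
    return ndvi

# Example with synthetic reflectances:
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, 365)
nir = rng.uniform(0.2, 0.4, 365)
mask = rng.random(365) < 0.2          # roughly 20% of days flagged as cloudy
print(daily_ndvi(red, nir, mask)[:5])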
BRDF and atmospheric corrections reduce noise in surface NDVI data [70] that would otherwise result from the strong bidirectional properties of vegetation [72][73][74] and the considerable absorption in the AVHRR NIR channel by atmospheric water vapor [75]. The resulting daily data were intended to enable more precise identification of vegetation dynamics [76] than compositing (generally over 10 days or monthly), particularly in the drier areas with short growing season. Full details of the data preparation are given in [71].
Meteorological and Land Cover Data
The Princeton Hydrology Group 1.0° dataset, constructed from NCEP (National Center for Environmental Prediction)-NCAR (National Center for Atmospheric Research) reanalysis data corrected for biases using station observations [65], was used in this study. Precipitation, surface air temperature, specific humidity (the ratio of the mass of water vapor to the mass of dry air in which it is mixed; dimensionless; used here to remove the temperature effect on humidity), atmospheric pressure and incident solar radiation were used. Daily data for the period 1982-2006 were downscaled from 1° to the 0.05° resolution of the AVHRR dataset using bilinear interpolation.
Land cover was obtained from [77].
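The bilinear downscaling of the meteorological fields to the AVHRR grid could look roughly like the sketch below; the grid extents and the synthetic precipitation field are hypothetical, and the original processing may differ in detail.

# Sketch: regrid a 1-degree field to 0.05 degrees with bilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lat_coarse = np.arange(5.0, 19.0, 1.0)            # hypothetical 1-degree grid
lon_coarse = np.arange(-20.0, 41.0, 1.0)
precip_coarse = np.random.default_rng(1).gamma(2.0, 3.0,
                                               (lat_coarse.size, lon_coarse.size))

interp = RegularGridInterpolator((lat_coarse, lon_coarse), precip_coarse,
                                 method="linear", bounds_error=False, fill_value=None)

lat_fine = np.arange(5.0, 18.0, 0.05)             # target 0.05-degree grid
lon_fine = np.arange(-20.0, 40.0, 0.05)
lat2d, lon2d = np.meshgrid(lat_fine, lon_fine, indexing="ij")
precip_fine = interp(np.column_stack([lat2d.ravel(), lon2d.ravel()])).reshape(lat2d.shape)
print(precip_fine.shape)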
Estimating Phenological Transition Dates and the Length of the Growing Season
The rates of change of daily NDVI data were used to define key phenological transition dates of the growing season [78]. These were the "onset of greenness increase", the "onset of maturity", the "onset of greenness decrease", and the "onset of dormancy", hereafter referred to as green-up, maturity, senescence and dormancy, respectively. Green-up is the date when NDVI begins to increase rapidly indicating the onset of leaf development. Maturity is the date when the rate of increase in NDVI slows and NDVI approaches its maximum, indicating peak green leaf area. Senescence is the date when NDVI begins to decrease rapidly indicating leaf death. Dormancy is the date when NDVI approaches its minimum, annual value owing to death of annuals and suspension of growth and dormancy in perennials.
To estimate the phenological transition dates, piecewise sigmoid functions (Equation (1)) were fitted to periods of sustained NDVI increase (i.e., growth) and decrease (i.e., senescence). The rates of change in the curvature of the fitted sigmoid functions (i.e., the second derivative) were then calculated. During the period of sustained NDVI increase, the local maxima of the second derivative were used for the dates of green-up and maturity, and the local minima of the second derivative during the period of sustained NDVI decrease were used for senescence and dormancy [78]. The phenological transition dates were compared with the MODIS Land Cover Dynamics Science Dataset Collection 4 [79] during the overlapping period (2002-2006).

y_t = c / (1 + e^(a + bt)) + d    (1)

where t is time in days, y_t is the NDVI value at time t, a and b are fitting parameters, c is the maximum increment in NDVI over d, the initial minimum NDVI value. The onset of leaf development and leaf senescence were then used to define the timing and duration of the growing season. Annual and growing season sums of daily NDVI, precipitation, temperature, and humidity were calculated for each year (1982-2006).
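A hedged sketch of the curve fitting behind Equation (1) is shown below; the synthetic NDVI series and initial parameter guesses are illustrative, and taking the maximum and minimum of the second derivative of the fitted curve as green-up and maturity is a simplification of the curvature-based procedure of [78], not its exact implementation.

# Sketch: fit Equation (1) to a green-up segment and locate transition dates
# from the second derivative of the fitted curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, c, d):
    # Equation (1): y_t = c / (1 + exp(a + b*t)) + d
    return c / (1.0 + np.exp(a + b * t)) + d

t = np.arange(0.0, 120.0)                         # synthetic 120-day green-up segment
ndvi = logistic(t, 6.0, -0.1, 0.4, 0.15) + np.random.default_rng(2).normal(0, 0.01, t.size)

p0 = [5.0, -0.05, ndvi.max() - ndvi.min(), ndvi.min()]
params, _ = curve_fit(logistic, t, ndvi, p0=p0, maxfev=10000)

fitted = logistic(t, *params)
d2 = np.gradient(np.gradient(fitted, t), t)       # second derivative of fitted NDVI
greenup, maturity = t[np.argmax(d2)], t[np.argmin(d2)]
print(f"green-up ~ day {greenup:.0f}, maturity ~ day {maturity:.0f}")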
Relationship of Annual ΣNDVI with Annual Total Precipitation
The relationships of annual and growing season sums of precipitation and ΣNDVI were characterized using linear regression for the averages of every three by three pixels. The coefficients of determination (r 2 ) were mapped to show the geographical patterns of the ΣNDVI-total precipitation relationships for the entire year and for the growing season alone.
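The per-block regression could be sketched as follows; the grid dimensions, the synthetic annual sums, and the use of scipy's linregress are assumptions made purely for illustration.

# Sketch: average every 3x3 block of pixels, then regress annual sum-NDVI on
# annual precipitation per block and map the coefficient of determination.
import numpy as np
from scipy.stats import linregress

years, ny, nx = 25, 60, 90                        # hypothetical grid
rng = np.random.default_rng(3)
annual_precip = rng.gamma(4.0, 150.0, (years, ny, nx))
annual_ndvi = 0.002 * annual_precip + rng.normal(0, 0.3, (years, ny, nx))

def block_mean(arr, k=3):
    t, h, w = arr.shape
    return arr[:, :h - h % k, :w - w % k].reshape(t, h // k, k, w // k, k).mean(axis=(2, 4))

p_blk, n_blk = block_mean(annual_precip), block_mean(annual_ndvi)
r2 = np.empty(p_blk.shape[1:])
for i in range(r2.shape[0]):
    for j in range(r2.shape[1]):
        res = linregress(p_blk[:, i, j], n_blk[:, i, j])
        r2[i, j] = res.rvalue ** 2                # map of coefficients of determination
print("median r2:", np.median(r2))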
Relationship of Growing Season ΣNDVI with Intraseasonal Precipitation Distribution
A series of small precipitation events may have a different effect on vegetation production than an equivalent amount of rainfall occurring in a few intense events [30,31]. To describe the temporal characteristics of precipitation, two higher order moments of intraseasonal precipitation distribution were calculated from daily precipitation data. These were the growing season precipitation variance and its skewness. Summary statistics were used since it is impractical to specify explicitly the enormous number of seasonal patterns of rainfall frequency and amount that can occur for more than a few pixels. High precipitation distribution variance indicates higher than normal deviation from mean seasonal precipitation and can result from extended periods of drought or from intense precipitation events or a combination of both, while the skewness is a measure of the dominant frequency of either high intensity precipitation events (negative skewness) or low intensity precipitation events (positive skewness).
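A short sketch of these intraseasonal statistics for a single pixel-year is given below; the daily precipitation series and the green-up and senescence dates are hypothetical.

# Sketch: growing-season total, variance and skewness of daily precipitation.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
daily_precip = rng.gamma(0.3, 12.0, 365)          # one pixel, one year (mm/day)
greenup_day, senescence_day = 150, 270            # from the phenology step

season = daily_precip[greenup_day:senescence_day + 1]
season_total = season.sum()
season_variance = season.var(ddof=1)
season_skewness = skew(season)                    # positive: many low-intensity events dominate
print(season_total, season_variance, season_skewness)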
The relation of growing season ΣNDVI to seasonal precipitation totals, precipitation variance and skewness was characterized using multivariate linear regression analysis. To reduce the effects of multicollinearity between input variables and consequent overfitting [80], a subset of independent variables that 'best' explained ΣNDVI variation was selected for each 3 × 3 pixels following [81], searching for the variable subsets with the highest r² value adjusted for degrees of freedom (adjusted r²). The variables of the regression model with the highest adjusted r² were tested for multicollinearity, and the model regression coefficients were tested to determine whether they were significantly different from zero. To test for multicollinearity, the variance inflation factors (VIFs) of the model independent variables were evaluated relative to the r² value of the model [80]. Multicollinearity was considered strong enough to affect the model coefficient estimates whenever any of the VIFs was larger than 1/(1 − r²) [82]. A t-test was used to test the null hypothesis that the model regression coefficients B_k (k = 1, ..., n) were equal to zero. If there was insufficient evidence to reject the null hypothesis (H0: B_k = 0, k = 1, ..., n; p > 0.05), or if multicollinearity was strong enough to affect the model estimates, then the regression model with the second highest adjusted r² was subjected to the same tests. The procedure was repeated until the test conditions were met.
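The selection procedure might be sketched as follows, assuming an ordinary least squares implementation (statsmodels) and synthetic predictors; the acceptance rules mirror the adjusted r², VIF, and t-test criteria described above but are not the authors' exact code.

# Sketch: pick the predictor subset with the highest adjusted r2, rejecting it
# if any VIF exceeds 1/(1 - r2) or any coefficient is not significant.
from itertools import combinations
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(5)
n = 25                                            # e.g., 25 growing seasons
X_all = {"total": rng.gamma(4, 150, n)}
X_all["variance"] = 20 * X_all["total"] + rng.normal(0, 500, n)
X_all["skewness"] = rng.normal(1.0, 0.5, n)
y = 0.002 * X_all["total"] + 0.1 * X_all["skewness"] + rng.normal(0, 0.2, n)

def acceptable(model, X):
    r2 = model.rsquared
    if X.shape[1] > 2:                            # VIFs only meaningful with >= 2 predictors
        vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
        if any(v > 1.0 / (1.0 - r2) for v in vifs):
            return False                          # multicollinearity too strong
    return bool((model.pvalues[1:] < 0.05).all()) # all slopes differ from zero

candidates = []
for k in range(1, len(X_all) + 1):
    for subset in combinations(X_all, k):
        X = sm.add_constant(np.column_stack([X_all[v] for v in subset]))
        model = sm.OLS(y, X).fit()
        candidates.append((model.rsquared_adj, subset, model, X))

for adj_r2, subset, model, X in sorted(candidates, key=lambda c: c[0], reverse=True):
    if acceptable(model, X):
        print("selected subset:", subset, "adjusted r2:", round(adj_r2, 3))
        break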
Relationship of Growing Season ΣNDVI with Temperature and Humidity
The relationships of growing season ΣNDVI and seasonal precipitation totals, specific humidity and air temperature were characterized by regression analysis using the same computational approach described in the previous section. Furthermore, the three meteorological variables and the ΣNDVI data were standardized to zero mean and a standard deviation of one. The standardized regression coefficients were then estimated to measure the relative contribution of each meteorological variable to the observed ΣNDVI variation. The standardized regression coefficients were summarized by the landcover types in the study area [83] in order to characterize the relative contribution of each of the meteorological variables to the observed NDVI variation in grasslands, shrublands, and savannas.
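A minimal sketch of the standardized-coefficient calculation, again with synthetic data, is shown below; the variable values are placeholders.

# Sketch: z-score all variables, refit the regression, and read the relative
# contributions from the standardized coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 25
precip = rng.gamma(4, 150, n)
humidity = rng.normal(12.0, 2.0, n)               # g/kg, hypothetical
temperature = rng.normal(28.0, 1.5, n)            # degrees C, hypothetical
ndvi_sum = 0.002 * precip + 0.05 * humidity - 0.01 * temperature + rng.normal(0, 0.2, n)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

X = sm.add_constant(np.column_stack([zscore(v) for v in (precip, humidity, temperature)]))
fit = sm.OLS(zscore(ndvi_sum), X).fit()
for name, beta in zip(["precipitation", "humidity", "temperature"], fit.params[1:]):
    print(f"standardized coefficient ({name}): {beta:.3f}")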
Soil-Vegetation-Atmosphere Transfer Modeling
The Simplified Simple Biosphere (SSiB2 ver. 2) land surface model [59,84] was used in its "offline" mode (no neighbor-effects) to represent ecosystem physiology as driven by prescribed meteorology and vegetation phenology. Parameterization and validation studies and land surface model inter-comparison experiments (e.g., [85]) have demonstrated that SSiB2 can reasonably reproduce measured energy and water fluxes at diurnal, seasonal, and multi-annual scales across diverse climates and vegetation functional types.
SSiB2 was used to explore the underlying hydrological and physiological processes to which the empirical relationships, revealed in the statistical analysis of co-variation between meteorology and vegetation productivity, can be attributed. The model was run for the period 1999-2007 with a 3-hourly time step for a number of sites representative of different vegetation types and climatologies throughout the Sahel (Table 1 and Figure 1). Model inputs for the base run were Princeton Hydrology Group meteorology, LAI and fraction vegetation cover [86].
To investigate the sensitivity of vegetation to precipitation variation during the early stages of phenological development (i.e., greenup to maturity), SSiB2 was run eight times with the precipitation data modified for the corresponding period (±0.5, ±1, ±1.75 and ±2.5 standard deviations from the values used in the base run; changed values that exceeded the range of long-term natural meteorological variation were reset to the minimum and maximum of observed meteorological variation, as appropriate) while keeping the remaining meteorological variables unchanged. The sensitivity experiments were repeated for the maturity stage (i.e., from maturity to senescence). The same approach was used to investigate the sensitivity of vegetation to changes in humidity and temperature. The resulting changes in soil moisture (to 1 m depth) and stomatal conductance and their relation to canopy-scale net photosynthesis were summarized at a daily time step and averaged over each of the two stages of phenological development.
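The construction of the perturbed precipitation forcing could look like the sketch below; it only builds the modified input series (the SSiB2 runs themselves are not reproduced), and the base series and clamping range are hypothetical.

# Sketch: build the eight perturbed precipitation series for the sensitivity runs,
# clamped to a hypothetical range of observed natural variation.
import numpy as np

rng = np.random.default_rng(7)
base_precip = rng.gamma(0.3, 12.0, 92)            # one growing season of daily forcing
obs_min, obs_max = 0.0, base_precip.max() * 1.5   # stand-in for the observed range
sd = base_precip.std(ddof=1)

perturbed_runs = {}
for shift in (-2.5, -1.75, -1.0, -0.5, 0.5, 1.0, 1.75, 2.5):
    series = np.clip(base_precip + shift * sd, obs_min, obs_max)
    perturbed_runs[shift] = series                # one modified forcing series per run
print({k: round(v.mean(), 2) for k, v in perturbed_runs.items()})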
Phenological Transition Dates
For the transition dates of greenup, maturity and senescence, the comparison between the AVHRR and MODIS [79] (Figure 2) measurements revealed a good agreement with root mean square errors only slightly higher than the reported accuracies of the MODIS products [78,79]. However, the measurements of the dormancy transition dates did not agree and the root mean square error (RMSE = 29 days) of the dormancy comparison was one order of magnitude higher than the RMSE values for greenup, maturity and senescence. This is perhaps due to the less pronounced transitions in the rates of change in NDVI curvature towards the end of the growing season which renders derivatives of the dormancy dates more sensitive to errors in NDVI measurements.
The greenup transition dates (Figure 3) were characterized by a pronounced north-south gradient with greenup detected as early as February at lower latitudes (7.5°N) and as late as August at higher latitudes (17.5°N). The senescence transition dates also had a pronounced north-south gradient but with the earlier dates at higher latitudes (late August) than at lower latitudes (late October). Both transitions were found to vary between years with grasslands in the arid region showing the highest temporal variability in greenup transition dates. On average, the length of the growing season (the difference between the two dates) varied from approximately 20 days at the southern edge of the Sahara Desert to approximately 250 days in the wetter parts of the study area.
Figure 3. (a) greenup "onset of greenness increase", (b) senescence "onset of greenness decrease", and (c) length of growing season (days). The map in (d) is the between-year variation (±2 standard deviations) in the onset date of greenness increase. The red lines (black in (d)) from north to south are the 300 mm, 700 mm and 1100 mm rainfall isohyets.
Relationship of NDVI with Rainfall
The relationships of annual and growing season sums of rainfall and NDVI differed in strength and to some extent in their spatial patterns. The growing season rainfall-ΣNDVI relationships were generally stronger (Figures 4 and 5). The growing season rainfall-ΣNDVI relationships were significant in approximately 58% of the study area whereas the annual rainfall-ΣNDVI relationships were significant in 37% of the study area (critical t-values calculated for each pixel indicated that, in general, regressions with r² values greater than 0.3 were significant (p < 0.05)).
A belt of significant annual ΣNDVI-rainfall relationships was evident around the 700 mm rainfall isohyet; however, areas receiving less than 400 mm rainfall/year and areas receiving more than 1000 mm rainfall/year were generally characterized by insignificant relationships (Figure 4). On average, stronger growing season ΣNDVI-rainfall relationships were found in the arid and semi-arid areas with shrubland and grassland landcover (r² = 0.43 ± 0.17) than in sub-humid areas with woody savanna land cover (r² = 0.3 ± 0.16) (Figure 5).
Relationship of Growing Season ΣNDVI with Seasonal Rainfall Distribution
The multivariate regressions between ΣNDVI, total growing season rainfall and the two moments of rainfall distribution (variance and skewness) provided robust yet simple statistical models of NDVI variation (Figure 6a). Compared to the growing season ΣNDVI-rainfall relationships, adding the two moments increased the ability of the models to account for NDVI variation (Figure 6b). The changes in percentage variance explained varied spatially but these were not significantly related to either the aridity gradient or to the spatial distribution of land cover types.
Figure 6. Spatial distributions of (a) the coefficients of determination (adjusted r²) for the multiple regression of ΣNDVI on total growing season rainfall, its variance and its skewness. (b) The change in the percentage of variance explained by including the additional variables over the percentage variance of ΣNDVI and rainfall alone. The red lines from north to south are the 300 mm, 700 mm and 1100 mm rainfall isohyets.
The coefficients of the multivariate linear regressions quantified the direction and magnitude of the relationship between precipitation distribution and growing season NDVI. In general, growing season NDVI was positively related to precipitation totals and to the skewness of precipitation distribution, but negatively related to its variance, which suggests that, for a given precipitation total, the seasonally summed NDVI values were higher when precipitation arrived in more frequent and less intense precipitation events (Figure 7). Similar results have been reported at the interannual temporal scale [2].
Figure 7. Coefficients of (a) seasonal rainfall variance, and (b) seasonal rainfall skewness obtained from the multivariate regressions of growing season ΣNDVI on total seasonal precipitation, precipitation variance and skewness. Missing values (white pixels) are areas with high multicollinearity between explanatory variables, or where the coefficients were insignificantly different from zero (p > 0.05). The red lines from north to south are the 300 mm, 700 mm and 1100 mm rainfall isohyets.
Relationship of Growing Season ΣNDVI with Air Humidity and Temperature
The adjusted r 2 of the multivariate regressions of growing season ΣNDVI on total growing season precipitation, specific humidity and temperature are shown in Figure 8a. Compared to the growing season ΣNDVI-rainfall relationships, adding specific humidity and temperature increased the ability of the models to account for NDVI variation (Figure 8b). On average, the largest gains in the percentage NDVI variance explained were to the south of the 700 mm rainfall isohyet (Figure 8b). However, the relationships remained insignificant in the humid coastal Guinean zone. This might be due to the saturation of NDVI at high values of LAI [87,88], to the persistence of cloud cover which adversely affects the quality of ΣNDVI values [69], or to the influence of other climatic and non-climatic factors on net primary productivity (NPP), such as low plant nutrient availability or low incident photosynthetic radiation [61,89].
The regression coefficients calculated for every grid cell provided a statistical estimate of the mean rate of change in ΣNDVI in relation to variations in rainfall, humidity, and temperature (i.e., the precipitation, humidity, and temperature coefficients). The highest precipitation coefficient values (0.08-0.1 ΣNDVI·mm−1) were evident in the arid margins whereas the lowest (0.01-0.02) were in the wetter parts of the study area (Figure 9a). Conversely, the humidity coefficient values were generally the lowest in the northern arid zone (Figure 9b). The temperature coefficient values, on the other hand, differed in sign with spatially coherent positive ΣNDVI relationships with temperature evident in the Bongos mountain range (in western Southern Sudan and northern Central African Republic) and in northern Ethiopian highlands (Figure 9c), while negative ΣNDVI relations to temperature were more common in the arid zone (300-700 mm).
A negative exponential pattern emerged when the precipitation coefficients were plotted against rainfall climatology (Figure 10a). However, there were some wet sites with comparatively high precipitation coefficients (green circle; Figure 10a). These were generally associated with the agricultural landscapes in eastern Ghana, southern Benin and Togo (Figure 9a). In these landscapes, the percentage of land used for farming was estimated to range between 45% and 90% of the total area [90]. Here the high ΣNDVI was probably caused by irrigation rather than local rainfall as a result of several small, periurban irrigation systems [91] and large irrigation projects in the Ouémé Catchment in Benin [92] and the Volta river basin in Benin, Togo and Ghana [93]. In contrast, a positive linear pattern emerged when the humidity coefficients were plotted against rainfall climatology (Figure 10b), and there was no distinctive relationship between the temperature coefficients and rainfall climatology (not shown).
Figure 9. Regression coefficients of (a) precipitation, (b) specific humidity, and (c) air temperature obtained from the multivariate regressions of growing season ΣNDVI on precipitation, specific humidity and temperature. Missing values (white pixels) are areas with high multicollinearity between explanatory variables, or where the coefficients were insignificantly different from zero (p > 0.05). The red lines from north to south are the 300 mm, 700 mm and 1100 mm rainfall isohyets.
The standardized coefficients of the multivariate regression models were calculated to estimate the relative contributions of growing season precipitation, specific humidity and temperature to the observed ΣNDVI variations. When summarized for the land cover types in the study area, precipitation emerged, on average, as the primary factor influencing NDVI, followed by specific humidity and then temperature (Figure 11). Except in woody savanna and forests, the precipitation standardized coefficients were significantly higher (p < 0.01) than the standardized specific humidity coefficients and approximately three to four orders of magnitude higher than the standardized temperature coefficients (Figure 11). The standardized specific humidity coefficients, on the other hand, were significantly higher (p < 0.01) than the standardized temperature coefficient in woody savannas and forests but not for the other land cover types (Figure 11).
Figure 11. Mean absolute values of the standardized coefficients of the multivariate regression between NDVI and explanatory variables (precipitation, specific humidity and temperature) summarized for the land cover types. Error bars are ±1 standard deviation around the mean.
Soil-Vegetation-Atmosphere Transfer Modeling
The SSiB2 model was used to explore the hydrological and physiological mechanisms that can explain the empirical relations found by correlation between meteorological variables and vegetation ΣNDVI. Five sites (Table 1) illustrate the overall results. Koumbi Saleh (southern Mauritania) is the driest and the warmest, with a cumulative growing season precipitation of 300 mm, a mean growing season daily temperature of 30.45 °C, and a growing season length of 3 months. Fadjė, located to the southeast of Lake Chad, is considerably wetter and 2.5 °C cooler than Koumbi Saleh. Growing season precipitation for the remaining three sites is greater than 650 mm (Kem Kem, Abyie, and Quadra Djallė) but the sites differ greatly in mean growing season temperature and mean growing season specific humidity (Table 1).
The daily modeled responses of soil moisture, stomatal resistance, and NPP to changes in precipitation, air temperature, and specific humidity were summarized for the two periods of the growing season (green-up to maturity, and maturity to senescence) and are shown in Figures 12-14. Higher specific humidity reduced evapotranspiration demand (not shown), resulting in higher volumetric soil moisture content in the root zone (Figure 12). Particularly at drier sites or during dry periods, higher volumetric soil moisture content and higher atmospheric vapor pressure combined to increase modeled stomatal conductance (Figure 13) and therefore canopy-scale NPP (Figure 14). In the wetter sites, such as Kem Kem, Abyie and Quadra Djallė, higher specific humidity also increased leaf temperature at a rate of approximately 0.25 °C per unit increase in specific humidity. Higher leaf temperatures (but below the temperature inhibition range) can also increase NPP by increasing the photosynthetic reaction rates [56].
Dry sites such as Koumbi Saleh and Fadjė showed a strong increase in NPP in response to precipitation during the greenup period, and somewhat less in the maturity period (Figure 14). In the wetter sites Quadra Djallė and Kem Kem, there were no noticeable changes in modeled NPP in response to precipitation during either the greenup or maturity periods (Figure 14). At these sites, changes in soil moisture content in response to precipitation (Figure 12) did not induce noticeable changes in stomatal resistance (Figure 13) and hence NPP. The productivity at these sites, however, was sensitive to changes in temperature, where increases in temperature increased modeled NPP (Figure 14). The woody savanna site (Abyie), which is wetter than Kem Kem but drier than Quadra Djallė, showed a strong increase in stomatal conductance and NPP in response to precipitation during the greenup period but no responses during the maturity period (Figure 14). At Abyie and Fadjė, changes in temperature produced contrasting responses in modeled NPP (Figure 14). During the maturity period, when productivity was not limited by available soil moisture, productivity responded positively to higher temperatures. However, during the green-up period, when soil moisture levels were comparatively lower (Figure 12), higher temperature lowered productivity.
Figure 12. The response of daily soil moisture at root depth to changes in precipitation, temperature, and specific humidity averaged for the period from green-up to maturity (green-up period, grey diamonds; left hand axis) and from maturity to senescence (maturity period, black circles; right hand axis). Note the different ranges on the y axis between sites.
Figure 13. The response of stomatal resistance (s·m−1) to changes in precipitation, temperature, and specific humidity averaged for the period from green-up to maturity (green-up period, grey diamonds; left hand axis) and from maturity to senescence (maturity period, black circles; right hand axis). Note the different ranges on the y axis between sites.
Figure 14. The response of net primary productivity (NPP) (µmol·m−2·s−1) to changes in precipitation, temperature, and specific humidity averaged for the period from green-up to maturity (green-up period, grey diamonds; left hand axis) and from maturity to senescence (maturity period, black circles; right hand axis). Note the different ranges on the y axis between sites.
Discussion
Rainfall is usually assumed to be the only significant environmental factor that determines primary production in drylands [55,94,95]. However, the statistical and process modeling reported here indicated a wider range of environmental variables related to primary production and regional differences in the importance of these factors.
Relationship of ΣNDVI and Climate Variability
In arid and semi-arid regions, NPP and ΣNDVI have been shown to have a strong relationship with precipitation [23,24,96]. Indeed, average NPP has been shown to increase linearly or near-linearly with mean annual precipitation, to an upper limit [23,60,97,98]. However, the interannual variability in NPP does not always exhibit such a strong relationship, as evidenced by the weak correlations between annually summed NPP and rainfall [3,26,29,99] and between ΣNDVI and rainfall [4,64,100]. Similarly, in this study, the correlations between annually summed NDVI and rainfall, in general, did not reveal strong relationships, yet there were some systematic, though weak, correlations in areas receiving intermediate precipitation (Figures 4 and 5). Others (e.g., [64,100]) have also noted that the degree of ΣNDVI variance explained by rainfall can be high in some areas and low in others with weak to insignificant relationships more common in the dry and wet margins of the Sahel.
The apparent lack of ΣNDVI response to additional precipitation in the dry sub-humid Sahel may be caused by the low sensitivity of fPAR, and thus NDVI, to additional rain during wet years [25]. Another plausible explanation is that precipitation in the wetter areas is not the primary factor controlling vegetation growth [29]. The simulations of net primary productivity shown here for dry sub-humid areas such as Quadra Djallė and Kem Kem (Figure 14) revealed little or no sensitivity of NPP to variations in precipitation. At these sites, the modeled volumetric soil moisture in the root zone remained above approximately 14% by volume (Figure 12), which is unlikely to induce acute water stress, stomatal closure and a drop in NPP (Figure 13). Similarly, modeling results [27] suggested that the woody plant associations in the wetter parts of the Sudanian and the Guinean ecoclimatic zones had sufficient soil moisture to meet evapotranspiration demands even during years with below-average precipitation.
The variance of ΣNDVI explained by rainfall in the northern boundary of the Sahel was generally low. This was unexpected since several studies have reported a strong coupling between NDVI and rainfall there (e.g., [22,25,88,89]), however, those analyses used time-series of moving average monthly precipitation and ΣNDVI data although successive values in their series were usually highly autocorrelated [22]. Regression of autocorrelated variables can cause overestimation of the strength and significance of the relationship [101]. Whether the differences between the strength of the relationship found here and those reported by others were the result of using different integration periods cannot be deduced from the current analysis.
In the arid and semi-arid Sahel the correlations between growing season rather than annual integrated ΣNDVI and precipitation totals were generally higher (Figures 4 and 5), confirming that occasional rainfall outside the main growing season has little effect on vegetation production [24,38,102-104]. Long periods of drought following early rain, the probability of which increases as the climate gets drier northwards [105-107], have been found to kill the seedlings of fast-germinating species, favoring those with long-lived seed banks which have reserves of seeds that germinate when the rainy season resumes [40,41]. On the other hand, rains falling later cannot be used for production by most annuals irrespective of the amount of precipitation, since vegetative growth ends with fructification, a date set by sensitivity to photoperiod [3,108].
Interestingly, the geographical distribution of the precipitation coefficients (Figure 9a) was correlated with precipitation totals; higher precipitation coefficients in dry areas and lower in wet areas (Figure 10a) [108,109]. Analysis of eddy-covariance measurements across a range of vegetation types and climate zones in Africa [60] found that NPP at the wetter sites varied over a narrow range in relation to precipitation variability, whereas NPP at the drier sites responded more strongly. The maximum photosynthetic response to precipitation variation was greater for grasses in dry areas than for trees in wetter areas, which has been attributed to the differences in the photosynthetic pathways of trees (C3) and grasses (C4) [60]. It could also be that the differences are a result of the non-linearity of soil moisture response to precipitation in the wetter areas (Figure 12) where high precipitation rates can saturate infiltration and lead to surface runoff, so additional precipitation does not increase soil moisture and photosynthesis.
In contrast to the ΣNDVI-rainfall relations, specific humidity coefficients (Figure 8b) were higher in the wetter areas (Figure 10b). Unfortunately, this could not be compared to eddy-covariance studies since they usually report the relationship of net photosynthesis to vapor pressure deficit rather than to specific humidity. Mechanistically, however, high specific humidity may restrict evapotranspiration-driven reductions in soil water thus alleviating plant soil water stress. On the other hand, low specific humidity may increase evapotranspiration demand resulting in a net decrease in soil moisture availability [27]. The combination of soil moisture stress and low specific humidity was found to increase stomatal resistance which in turn decreased productivity (Figures 12-14).
Surprisingly, the ΣNDVI-temperature relations differed between the two directions of change (Figure 9c). The effects of temperature on plant growth are largely mediated by its effects on chemical reactions (e.g., photosynthesis and respiration) and its effects on soil moisture. On the one hand, photosynthesis reaction rates increase with temperature up to an upper limit beyond which photosynthesis decreases due to the denaturation of proteins. On the other hand, the desiccating effects of higher temperatures can reduce net photosynthesis. The empirical results show that for some areas in the Ethiopian highlands, the Guinean ecoclimatic zone and from western South Sudan to southern Chad growing season temperature was positively related to ΣNDVI. These and the modeling results at the Kem Kem (Ethiopian highlands) and the Quadra Djallė sites (Bongos Mountains) (Figure 14) suggest that increases in temperature-dependent photosynthetic reaction rates may counter the desiccating effects of higher temperature. However, global studies of climatic limits on plant growth do not identify temperatures as an important factor influencing vegetation growth in either the Ethiopian highlands or in the Bongos Mountain range, rather they suggest that vegetation growth in these areas is primarily limited by incident photosynthetic active radiation (PAR) [6,110]. The influence of PAR on vegetation production was not investigated here owing to the low spatial resolution (2.5°) of the data available at the time.
The suggestion [38] that an intensified hydrological regime would increase NPP in xeric environments while reducing NPP in mesic environments was not verified in the present study. The suggestion was based on the assumption that, in xeric environments, the proportional losses of precipitation to canopy interception and to evaporation would be reduced if the precipitation event size increased and that this reduction would offset or even exceed the volume of water lost to runoff, thereby increasing soil water availability [38]. In the present study, throughout most of the Sahel, an intensified precipitation regime (higher variance and lower skewness) was inversely related to ΣNDVI values. This difference in frequency and total rainfall in the drier and wetter parts of the Sahel has been reported for the interannual scale using annual sums of NDVI (e.g., [2]) and the present work adds an intra-annual component. The proportional effects of reductions in evaporation due to an intensified precipitation regime might be less than theorized as the percentage of total precipitation that falls in very small events (<7 mm/day) in the Sahel is minimal [10,111,112]. Thus it is plausible that larger precipitation events with longer intervening dry periods would lead to greater drying of the soil and reduce NPP.
Phenological Transition Dates
The "onset of greenness increase" was characterized by a pronounced north-south gradient with onset dates detected as early as February at lower latitudes (7.5 • N) and as late as August at higher latitudes (17.5 • N). The "onset of greenness decrease" also had a pronounced north-south gradient but with the onset dates earlier at higher latitudes (late August) than at lower latitudes (late October). Both dates were also found to vary between years with grasslands in arid region showing the highest interannual variability. On average, the length of the growing season (the difference between the two dates) varied from approximately 20 days at the southern edge of the Sahara Desert to approximately 250 days in the wetter parts of the study area ( Figure 3). The spatiotemporal variability in the timing and duration of the growing season throughout the Sahel , clearly indicated that daily data are needed to monitor the shorter growing seasons It also indicates that interannual changes in ΣNDVI gs cannot be adequately captured using a standard integration period such as the June, July, August (JJA) period usually used to define the start and end of the growing season (e.g., [100,109]). Rather than using a standard integration period, growing season sum NDVI and meteorological data were calculated here by integrating daily values bounded by the interval between the two transitions dates; i.e., the onset of greenness increase and the onset of greenness decrease.
The interannual variation in the timing of greenup was highest in the arid regions dominated by grasslands, for which there are several possible causes. In the Sahelian eco-climatic zone, the onset of the summer monsoon in successive years can vary by more than 30 days [18]. After the start of the wet season, above ground biomass production starts when seedlings establish their root system [3]. This is followed by rapid growth that produces a detectable increase in NDVI. However, the length of time between the start of the wet season and rapid growth has also been found to vary between years [3]. In this study, in general, the interannual variation in the timing of green-up decreased from north to south probably because of the lower interannual variability in the onset of rainy season at lower latitudes [10].
Conclusions
Vegetation growth and rates of development in arid and semi-arid Sahel were, as expected, generally related to precipitation but it was also found that air humidity and temperature have a significant role, in agreement with several recent modeling studies [27,61]. The magnitude of the effects of these three variables varied geographically, between vegetation functional types and elevation. Inaccuracies in the reconstructions of daily AVHRR NDVI and of the independent variables, particularly meteorological data, may influence these conclusions. Despite these possible shortcomings, it was evident that vegetation dynamics in the Sahel and their environmental correlates are more complex than equilibrium relationships between growing season precipitation and NPP variation.
One surprising result was that the vegetation, particularly at the wetter sites, did not always respond directly and proportionately to variations in soil moisture. Model simulations showed that, while variations in meteorology were indeed found to significantly alter soil moisture, this did not always increase production. The changes in vegetation productivity at the wetter sites were either dampened or enhanced by the direct effects of temperature and humidity on leaf temperature and stomatal conductance. These results were based on modeling and should be generalized with caution; for example, it is known that, in some regions, antecedent rainfall affects productivity in the following year [4]-so called lags-but interannual processes are not simulated in SSiB.
Seasonal precipitation distribution also influenced productivity. For the same total precipitation amount, productivity was higher when precipitation arrived in more frequent and less intense precipitation events. The suggestion [38] that vegetation productivity in xeric environments responds favorably to more intense and less frequent precipitation events was not supported.
The effects of precipitation, temperature and humidity on productivity were geographically coherent [109], suggesting fundamental causes. Unfortunately, the lack of a dense network of observational data meant that the emergent spatial patterns found here could not be analyzed further. Still, it is worth noting that the general patterns were compatible with previous modeling studies [27] and observational data from the few flux tower measurements from the study area [60].
One application of these results concerns the detection of anthropogenic dryland degradation ("desertification" [113][114][115]). Fundamentally, the term "degradation" implies a comparison with an explicit, standard, base-line, or reference condition, so no measure of degradation is useful unless the condition in the absence of degradation is first known. However, as shown in this study, NPP is strongly affected by meteorological variables and so any degradation caused by human activities or other, non-meteorological factors can only be inferred if these meteorological effects are first eliminated or at least controlled by normalization. An early method [46], which, explicitly or implicitly, is still widely used (e.g., [94,100,116]) is rain use efficiency (RUE) in which the NPP for each site is simply divided by its precipitation, and the maximum values in a region are taken to be a non-degraded reference. However, the results reported here and by others show that precipitation is not the only meteorological variable that affects NPP. Thus a more accurate normalization should use more complete relationships. Clearly, the selection of the appropriate model, acquisition of the necessary variables for each geographical location, and the need for additional meteorological data require more effort, but the potential errors caused by omission of the dependencies uncovered in the present study strongly support the need for improvement of the normalization.
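The contrast between simple RUE and a fuller normalization can be sketched as follows. This is purely illustrative synthetic data; the linear regression form, the variables included, and the residual threshold are assumptions for the sake of the example, not those of the study.

```python
# Sketch: rain use efficiency (RUE = NPP / precipitation) versus a residual-based
# normalization that also accounts for humidity and temperature.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 200
precip = rng.uniform(150, 900, n_sites)        # mm/yr (synthetic)
humidity = rng.uniform(4, 16, n_sites)         # g/kg (synthetic)
temp = rng.uniform(22, 34, n_sites)            # deg C (synthetic)
npp = 0.6 * precip + 15 * humidity - 5 * temp + rng.normal(0, 40, n_sites)

# Classic RUE: a single meteorological driver only
rue = npp / precip

# "More complete" normalization: regress NPP on several drivers, use residuals
X = np.column_stack([np.ones(n_sites), precip, humidity, temp])
beta, *_ = np.linalg.lstsq(X, npp, rcond=None)
residual = npp - X @ beta      # strongly negative residuals flag below-expected production

print("median RUE:", np.median(rue))
print("sites with NPP well below climate expectation:", int(np.sum(residual < -60)))
```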
The results clearly indicate that vegetation dynamics in the Sahel and their environmental correlates are more complex than statistical relationships between growing season precipitation and NPP variation. The spatially explicit representation of these relationships presented here provides a new dimension to rainfall-productivity relationships in the Sahelian-Guinean ecoclimatic zones.
|
v3-fos-license
|
2017-12-05T18:05:58.819Z
|
2017-12-01T00:00:00.000
|
8680484
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-017-3034-6",
"pdf_hash": "00b846221d5836f6dd27884aaddf5e2590a0c1d4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:585",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "001654ece843281581f85931d7b0ff70b046a7c9",
"year": 2017
}
|
pes2o/s2orc
|
A predominance of hypertensive heart disease among patients with cardiac disease in Buea, a semi-urban setting, South West Region of Cameroon
Objective The pattern of heart disease is diverse within and among world regions. The little data on the spectrum of heart disease in Cameroon has been so far limited to major cities. We sought to describe the pattern of heart disease in Buea, the South West Region of Cameroon, a semi-urban setting. This was a descriptive cross-sectional study. Between June 2016 and April 2017 the echocardiography register of the Buea Regional Hospital was surveyed. We extracted data on the age, sex and echocardiographic diagnosis. Results Out of 529 patients who underwent echocardiography, 239 (45.2%) had a definite heart disease. There were 137 (57.3%) females. The mean age was 58 years (range 3–94 years). The most common echocardiographic diagnoses were hypertensive heart disease (43.2%), dilated cardiomyopathies (17.6%), ischemic heart diseases (9.6%), and cor pulmonale (8.8%). Rheumatic heart disease affected 6.7% of the patients. The most common rheumatic heart disease was mitral stenosis followed by mitral regurgitation. Congenital heart disease represented 2.1% and 5 patients (2.1%) had pulmonary hypertension. Hypertensive heart disease is the most common cardiac disease in this semi-urban region in Cameroon. Rheumatic heart disease still affects a sizable proportion of patients. Prevention of cardiac disease in our setting should focus on mass screening, the treatment and control of hypertension.
Introduction
Sub-Saharan Africa, like other developing regions of the world, is undergoing an epidemiological transition with an increase in the prevalence of non-communicable diseases, including cardiovascular diseases [1,2]. Cardiovascular diseases are the leading cause of death worldwide, with more than 80% of these deaths occurring in low- and middle-income countries, including sub-Saharan Africa [3].
The pattern of heart disease is diverse within and among world regions and can provide an indicator of the health transition from communicable to non-communicable diseases. There are few studies describing the pattern of heart disease in Cameroon, where cardiovascular specialists and equipment for the diagnosis of heart disease are very limited and located mainly in the big urban cities. Whether the data generated through these studies reflect rural and semi-urban areas is unknown. The aim of this study was to report the pattern of heart disease in the South West Region of Cameroon, a semi-urban setting.
Study design and setting
This was a descriptive cross-sectional study. The study was performed at the cardiac exploration unit of Buea
Regional Hospital. It is the referral hospital in the South West Region of Cameroon. The town has a population of about 130,000 inhabitants. This region is characterized by a very limited access to effective interventions for prevention, diagnosis and treatment of cardiovascular diseases. In 2016, a cardiologist was posted to this hospital. The hospital receives patients referred from other parts of the region for the investigation and/or management of suspected heart disease. Before that, patients with suspected heart disease had to travel to other regional headquarters to see a cardiologist.
The echocardiography register was surveyed between June 2016 and April 2017. Only the first echocardiographic examination report for each patient was included. Using a pre-defined questionnaire, we extracted data on age at the time of diagnosis, sex, and the echocardiographic diagnosis.
Echocardiographic examination was performed in the parasternal long-axis, short-axis, and apical four-chamber views, and occasionally in the subcostal and suprasternal views. Indices analyzed included the left ventricular end-systolic diameter (LVIDS), left ventricular end-diastolic diameter (LVIDD) and the ejection fraction (EF). All the echocardiographic diagnoses were based on existing guidelines.
The diagnosis of rheumatic heart disease (RHD) was based on the World Heart Federation (WHF) criteria for echocardiographic diagnosis of RHD. Briefly, RHD was defined by the presence of any evidence of mitral or aortic regurgitation seen in two planes associated with at least two of the following morphologic abnormalities of the regurgitating valve: restricted leaflet motility, focal or generalized valvular thickening, and abnormal sub-valvular thickening [4].
Hypertensive heart disease was diagnosed in the presence of any one or a combination of the following abnormalities in a patient with hypertension: left ventricular diastolic dysfunction (e.g., altered E:A ratio), left ventricular hypertrophy (indexed LV mass > 51 g/m2.7), left ventricular systolic dysfunction, and a dilated left atrium, a surrogate of impaired LV filling (left atrial diameter > 3.8 cm in women and > 4.2 cm in men). Left ventricular geometric patterns were defined according to Ganau et al. [5].
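A hedged sketch of how these diagnostic criteria translate into a classification rule is given below. The function and its arguments are hypothetical; the thresholds are those quoted above, and the rule is simplified to the "any or combination" logic described.

```python
# Simplified sketch of the hypertensive heart disease criteria quoted in the text.
def indexed_lv_mass(lv_mass_g: float, height_m: float) -> float:
    """LV mass indexed to height^2.7 (g/m^2.7)."""
    return lv_mass_g / height_m ** 2.7

def hypertensive_heart_disease(has_hypertension: bool,
                               lv_mass_g: float,
                               height_m: float,
                               la_diameter_cm: float,
                               sex: str,
                               diastolic_dysfunction: bool = False,
                               systolic_dysfunction: bool = False) -> bool:
    if not has_hypertension:
        return False
    lvh = indexed_lv_mass(lv_mass_g, height_m) > 51.0          # LV hypertrophy
    la_cutoff = 3.8 if sex.lower() == "female" else 4.2        # dilated left atrium
    dilated_la = la_diameter_cm > la_cutoff
    return any([diastolic_dysfunction, lvh, systolic_dysfunction, dilated_la])

# Example: hypertensive woman, LV mass 210 g, height 1.60 m, LA diameter 4.0 cm
print(hypertensive_heart_disease(True, 210, 1.60, 4.0, "female"))  # True (LVH and dilated LA)
```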
Ischemic heart disease was documented by the detection of regional wall motion abnormalities in different regions of the heart (such as loss of systolic thickening, hypokinesia, akinesia, or dyskinesia) associated with LV systolic dysfunction.
Dilated cardiomyopathy was diagnosed when there were dilated heart chambers with normal or decreased wall thickness as well as impaired LV systolic function [6].
Pericardial effusion was diagnosed when there was an echo-free space between the visceral and parietal pericardium.
Cor pulmonale was considered present when there was a dilated and hypertrophied right ventricle (RV), evidence of increased RV systolic pressure, and a D-shaped LV in diastole (diastolic flattening of the LV septum).
Data analysis
The data collected was analyzed using SPSS software version 20. Continuous variables were expressed as mean ± SD (standard deviation) and categorical variables expressed as percentages. Differences in categorical variables were assessed by Chi square analysis where appropriate. A p value of < 0.05 was considered statistically significant.
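As an illustrative sketch only (the counts below are invented and the exact comparisons performed in SPSS are not reproduced here), a chi-square comparison of categorical variables of this kind could be run as follows.

```python
# Hypothetical 2x2 comparison; the table contents are made up for illustration.
from scipy.stats import chi2_contingency

# Rows: female, male; columns: hypertensive heart disease yes/no (invented counts)
table = [[62, 75],
         [41, 61]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # p < 0.05 would be considered significant
```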
Results
During the 11-month period, 529 echocardiograms were performed (Fig. 1). There were 239 echocardiograms with a definite heart disease. There were 137 (57.3%) women. The mean age of all the patients was 58.0 ± 15.8 years and ranged from 3 to 94 years. Women were significantly older than men (58.8 vs 56.9 years; p = 0.011). The most common conditions were hypertensive heart disease (43.2%), dilated cardiomyopathy (17.6%), and ischemic heart disease (9.6%) (Table 1). Rheumatic heart disease accounted for 6.7% of heart diseases. Figure 2 shows a subject with rheumatic mitral stenosis. There were 5 (2.1%) cases of congenital heart disease. Congenital heart diseases included tetralogy of Fallot (20%), atrial septal defect (20%) and ventricular septal defect (60%). Among the patients with dilated cardiomyopathy, 6 were HIV positive. Table 2 shows the different types observed; Figure 3 shows the distribution of the different heart diseases, and Fig. 4 shows a subject with pericardial effusion.
Discussion
We have reported on the spectrum of cardiac disease for the first time in the South West Region of Cameroon, a semi-urban setting. This echocardiographic hospital based study has shown that hypertensive heart disease is by far the most common type of heart disease followed by dilated cardiomyopathy and ischemic heart disease.
Hypertensive heart disease was the most common heart disease in this study, which could be expected given the high prevalence of hypertension in the general population coupled with poor awareness and low treatment and control rates [7,8]. In a study conducted by Jingi et al. looking into the pattern of heart disease in the West Region of Cameroon, hypertensive heart disease was the most prevalent condition, accounting for 41.5% of cardiac diseases diagnosed by echocardiography [9]. This report is comparable to the finding in our study. In similar studies in Nigeria, it was reported that hypertensive heart disease was the most common form of heart disease diagnosed on echocardiography [10][11][12]. We could not tell if there was an association between the duration and severity of hypertension and the development of hypertensive heart disease.
It is well known that hypertension forms the bulk and is the foundation of cardiovascular diseases in Africa. The Abuja Heart Study (2006-2010) in Nigeria [13] and the Heart of Soweto Study (2006-2008) in South Africa [14] showed that hypertension is now a dominant cause of heart failure in adults in these countries. In the THESUS-HF registry, hypertension accounted for 43.9% of heart failure in sub-Saharan Africa [15]. In a major urban city of Cameroon (Yaounde), Kingue et al. reported that hypertension accounted for 54.49% of causes of heart failure [16]. In Cameroon, hypertension has become a major public health problem. It is estimated to affect about 30% of the general population [7]. This is a result of the epidemiological transition Cameroon is traversing, like other developing countries. In a survey of blood pressure control among patients with hypertension in Yaounde, the capital city of Cameroon, only 30% of patients had their blood pressure controlled [8].
Globally, hypertension is the leading cause of cardiovascular diseases and deaths, and accounts for about 7.5 million deaths per year [17]. Like most cardiovascular diseases, the natural course of hypertension can be modified with the use of effective and inexpensive medications. Many randomized controlled trials have demonstrated unequivocally that treatment of hypertension reduces the risk of stroke, coronary heart disease, congestive heart failure and mortality [18,19]. It is therefore imperative that lowering of blood pressure to targets be achieved in patients with hypertension to prevent heart disease and other cardiovascular diseases. Dilated cardiomyopathy was the second most common heart disease in our study, representing 17.9%. As also reported by Jingi et al. cardiomyopathies were the second most frequent cause of heart disease in the West Region of Cameroon [9]. Our findings are also similar to that reported by Kingue et al. in Yaounde [16]. Similar studies in sub-Saharan Africa have demonstrated cardiomyopathies to be a significant cause of heart disease [10,15,20,21].
In our study, 9.8% of the patients had ischemic heart disease, representing the third most common heart disease. The proportion of ischemic heart disease in our study was higher than that reported by Jingi et al. [9,10,12]. Although ischemic heart disease was considered to be rare in sub-Saharan Africa, recent evidence suggests it is by no means rare in Africans [22,23]. This increasing incidence of ischemic heart disease in Africans is due to the epidemiological transition with the adoption of western lifestyles.
Rheumatic heart disease was a significant cause of heart disease in our study (6.8%). On the contrary, rheumatic heart disease accounted for only 3.4% of all cases of heart disease in the West Region of Cameroon [9]. Ukoh et al. in a similar study in Benin City in Nigeria had a prevalence of 18.1% for rheumatic heart disease [12]. Rheumatic disease has almost disappeared in developed countries but it remains a major public health issue in children and young adults in low and middle income countries including sub-Saharan Africa [24][25][26][27]. Rheumatic heart disease still remains a significant cause of heart failure in sub-Saharan Africa. In the THESUS-HF registry, it was the third most common cause of heart failure after hypertension and cardiomyopathies [15].
The spectrum of pericardial disease in our study was different from that reported by Jingi et al. in the West Region of Cameroon where they reported a higher proportion of pericardial disease [9].
Conclusion
Hypertensive heart disease is the most common cardiac condition in this semi-urban setting. Effective preventive strategies for heart disease in this setting should focus on the detection, treatment, and control of hypertension.
Limitations
Our study is a hospital-based review of prospectively recruited patients and is subject to bias. Patients referred for echocardiography may be those with more severe patterns of heart disease, as patients with more severe lesions are more likely to seek medical attention. Despite this shortcoming, our study provides insight into the pattern of heart disease in this region of our country as seen on echocardiography.
|
v3-fos-license
|
2021-02-17T05:10:00.609Z
|
2021-02-15T00:00:00.000
|
231934309
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-021-83168-2.pdf",
"pdf_hash": "b4ee59597b31ee55c61755953221e4d5025cf335",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:586",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b4ee59597b31ee55c61755953221e4d5025cf335",
"year": 2021
}
|
pes2o/s2orc
|
Rate of achievement of therapeutic outcomes and factors associated with control of non-communicable diseases in rural east Malaysia: implications for policy and practice
Non-communicable diseases (NCDs) are an increasing problem worldwide, including in Malaysia. National surveys have been performed by the government but had poor coverage in east Malaysia, particularly in rural regions. This study aimed to describe the achievement of target therapeutic outcomes in the control of diabetes mellitus (DM), hypertension (HPT), and dyslipidemia (DLP) among diabetic patients in rural east Malaysia. A cross-sectional study was conducted among DM patients who visited the NCDs clinic in Lundu Hospital, Sarawak, Malaysia, from January to March 2016. In total, 214 patients (male, 37.9%; female, 62.1%) were recruited using a systematic sampling method. Multiple logistic regression models were applied to estimate the adjusted odds ratio (AOR) and confidence interval (CI) for the target therapeutic achievement in the control of DM, HPT, and DLP. Compared to the national average, therapeutic target achievement in Lundu was higher for DM (43.0% vs. 23.8%), equal for DLP (35.8% vs. 37.8%) but lower for HPT (30.9% vs. 47.9%). DM patients who had at least yearly HbA1c monitoring (AOR 2.30, 95% CI 1.04–5.06, P = 0.039), and those 58.7 years or older (AOR 2.50, 95% CI 1.32–4.74, P = 0.005) were more likely to achieve the therapeutic target for DM. Health promotion and public education regarding HPT need to be emphasized in rural Malaysia. HbA1c monitoring at least once a year was one of the important factors associated with achieving DM control in rural east Malaysia. Accessibility to HbA1c tests and monitoring should be ensured for diabetic patients.
Non-communicable diseases (NCDs) are an increasing problem worldwide, including in Malaysia. The World Health Organization (WHO) reported that 71% of all deaths in 2018 were due to NCDs. Of these, cardiovascular diseases alone accounted for 44% of all NCDs deaths 1 . With a rising population and prevalence of NCDs, health expenditures have been increasing globally 2 . Cost savings and the sustainability of the health system are major issues in nations that aim to achieve universal health coverage; thus, prevention and good control of NCDs have been shown to be the most economical way to manage the problem 3 . The Malaysian government provides highly subsidised healthcare for all Malaysians and achieved universal health coverage (UHC) in the 1990s 4 . As a tax-funded national healthcare system, however 4 , providing good quality and accessibility while managing cost is always a challenge. Malaysia has seen a shift in the causes of mortality through the years. In 2007, for the first time, diseases of the circulatory system overtook infectious diseases as the number one cause of mortality in hospital settings and have continued to increase as a proportion of the total mortality 5 . The latest Ministry of Health Annual Report indicated a consistent, disproportionate increase in nephrology and cardiology cases seen in specialist clinics from 2002 to 2011, increasing from 0.85% to 2.41% of all cases for nephrology and from 1.01% to 4.58% for cardiology, both of which are highly dependent on the management of NCDs in the country 6,7 . The 5th National Health and Morbidity Survey (NHMS), performed in Malaysia in 2015, also reported that 17.5%, 30.3%, and 47.7% of adults in Malaysia have diabetes mellitus (DM), hypertension (HPT), and dyslipidemia (DLP), respectively, a persistent increase since the first survey was conducted in 1986 8,9 . Preventive health leads to an improvement of general public health, prevents overcrowding of tertiary centers, and reduces the long-term cost of healthcare 10 . Studies of NCDs have been conducted in Malaysia, although often concentrated only in west Malaysia and urban areas 11 . DiabCare 2013 was a multi-center study carried out to gauge the control of NCDs in DM patients in both east and west Malaysia; however, it was only performed in urban, tertiary centers 12 .
The rural population in Malaysia has been declining over the years, but still made up 25% of the general population in 2017 13 . Rural regions are frequently plagued with infrastructure and resource restrictions and thus may exhibit a different pattern of NCDs control compared to urban regions. However, information regarding rural primary healthcare centers' achievements in NCDs is scarce.
To the best of our knowledge, factors affecting HbA1c achievement have been reported in studies in Malaysia but never in rural east Malaysia, where indigenous populations are predominant 14 . In addition, the relationship of demographic background and of biochemical monitoring such as HbA1c and fasting lipid profile (FLP) testing to DM and DLP control has never been examined. The National Diabetes Registry Report (NDR) 2012 reported that up to 22.0% of diabetic patients managed in government clinics did not undergo any HbA1c monitoring in a year 15 . The rate of HbA1c monitoring varies significantly between states, and is likely worse in east Malaysia, where many health facilities are not equipped with HbA1c-capable laboratories. Thus, this study aimed to examine the associations of target therapeutic achievement in the three diseases with age, years of follow-up, sex, ethnicity, body mass index (BMI), smoking status, and relevant biochemical monitoring, and to describe the control of DM, HPT, and DLP in a rural setting in east Malaysia, bridging the gap in the information available in Malaysia to assist policymakers in creating more effective long-term nationwide plans catering to both east and west Malaysia as well as urban and rural populations.
Materials and methods
Study design and participants. A cross-sectional study was conducted in Lundu District Hospital, in the northwest Kuching Division of Sarawak, east Malaysia. The population of Lundu constitutes 1.4% (33,413) of the total population of Sarawak. As a comparison, the similarly sized neighbouring urban Kuching District has close to 19 times the population 16 . The Lundu District population, like that of the rest of less-developed Sarawak, is generally engaged in agriculture, aquaculture, fishing, and tourism as its main economic activities 16 . The latest official document, in 2010, combines both indigenous populations (mainly Bidayuh and Iban) and Malays in the statistics, making up 82.4% of the total population, while the Chinese population constitutes another 10.6% 17 . Besides an adequately maintained single-carriageway, dual-lane highway connecting Lundu to other districts, roads connecting to smaller villages in Lundu are mostly single-lane, unpaved roads, with wooden bridges crossing the multiple rivers running across the district, making even intra-district traveling challenging and time-consuming.
Lundu Hospital offers both inpatient and outpatient services and is the sole healthcare center in the Lundu District. The hospital is a 40-bed hospital with an average of close to 130 outpatient visits per day. A DM patient is initially diagnosed in the outpatient clinic by a doctor and sent to register with the NCDs division under the outpatient department situated inside Lundu Hospital for further regular follow-ups. The patients' history and treatment information are recorded in two places: 1) a record book given to patients to be brought around, if needed; 2) an identical diabetes registry record kept in the NCDs clinic as a precaution against lost information. Both records should receive identical input from the doctor upon follow-up, but due to errors and negligence, at times only one record receives the input. Owing to the weaknesses mentioned above, this study was conducted by identifying patients and obtaining their record card during their follow-ups instead of merely extracting information from the diabetes registry record, as it allowed investigators to extract and synchronize information from both the record book and the diabetes registry record. Furthermore, the total diabetes registry record in the clinic was numbered at 1048; however, it dates back to 1992 and was never updated. The 1048 records included many patients who had moved, died, or defaulted completely.
As the patients' visit intervals to the clinic may vary, to recruit based on patient attendance we needed to remove duplicates while not excluding patients with longer follow-up intervals. The follow-up interval given to any diabetic patient in this clinic was between 1 week and 3 months; thus, 3 months was chosen as the duration of our study. Based on information from September 2015 to November 2015, we found that there was an average of 440 unique visits in a total of 1074 visits over 3 months, and based on our estimation, recruiting roughly every 5th-6th patient would have been sufficient for our sample size requirement of 205 (based on 440 unique visits). In the end, we decided to recruit the first patient and then every 6th patient thereafter who visited the NCDs clinic in Lundu Hospital on every weekday from January 1 to March 30, 2016, synchronizing information from both the patients' record books and the diabetes registry records.
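The sampling rule described above amounts to taking the first visitor and then every 6th visitor thereafter; a minimal sketch is shown below (the visit log is hypothetical, and deduplication of repeat attendees is only indicated in a comment).

```python
# Minimal sketch of systematic sampling of clinic visits (illustrative only).
def systematic_sample(visits, step=6):
    """Return the first visit and then every `step`-th visit thereafter."""
    return visits[::step]

visits = [f"patient_{i:03d}" for i in range(1, 61)]   # hypothetical chronological visit log
sample = systematic_sample(visits, step=6)
# In practice, duplicate patients in the sample would then be removed,
# as described in the text.
print(len(sample), sample[:3])   # 10 visits selected from 60
```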
Inclusion criteria were as follows: patients older than 18 years, diagnosed as having Type 2 DM, not pregnant, and followed up for at least 1 year in the NCDs clinic. A total of 260 patients were recruited. Among them, 33 patients were found to be duplicated, 10 of them had only been followed up for less than 1 year in the NCDs clinic, and three had been transferred from other NCDs clinics, with poor documentation. Therefore, a final total of 214 were considered for further data analysis. Three qualified medical doctors who were also the investigators of this study were involved in the digitalization of the diabetes registry records. A digital data form was created for each entry, with the necessary information required to be filled in before proceeding, in order to prevent incomplete data collection.
Data collection. Data collection was performed by retrieving data from the diabetes registry record and the patient's record book. The first page of the records, containing information regarding initial registration in the NCDs clinic, demographics, medical history, risk factors, the year of diagnosis, and BMI, was obtained. The latest visit with documentation of BMI, biochemical results, and the last date of biochemical tests was also obtained.

Therapeutic target achievements. Therapeutic target achievements for the three NCDs (DM, HPT, and DLP) were defined similarly to the latest national data, namely the National Diabetes Registry Report 2012 (NDR), defining a single target therapeutic range for DM, HPT, and DLP. The target therapeutic range for DM was defined as an HbA1c of 6.5% or below, that for HPT as blood pressure ≤ 130/90 mmHg, and that for DLP as low-density lipoprotein cholesterol (LDL-C) ≤ 2.6 mmol/L 15 . The target regular interval for biochemical examination was ≤ 12 months for HbA1c 18 , while that for FLP was defined as ≤ 6 months 19 , following the latest Malaysian clinical practice guidelines at the time the research was conducted.

Statistical analysis. The collected data were analyzed using the Statistical Package for Social Science (SPSS for Windows program version 25; IBM, Armonk, New York, USA). Descriptive statistics were calculated for demographic information, therapeutic target achievement, and biochemical examination interval. The data obtained were further classified into dichotomous factors. Three separate logistic regression models were used to measure the odds ratio (OR) of the achievement of therapeutic control targets for DM, HPT, and DLP. A P-value of less than 0.05 was considered significant.
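A simplified sketch of how these target definitions and the logistic regression models could be implemented is shown below. The data are simulated and the column names are hypothetical, so the output numbers are meaningless; only the mechanics (binary target coding and adjusted odds ratios) are illustrated.

```python
# Sketch: coding therapeutic targets and estimating adjusted odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 214
df = pd.DataFrame({
    "hba1c": rng.normal(7.5, 1.5, n),
    "sbp": rng.normal(140, 18, n),
    "dbp": rng.normal(82, 10, n),
    "ldl": rng.normal(3.0, 0.8, n),
    "age": rng.normal(59, 12, n),
    "yearly_hba1c": rng.integers(0, 2, n),   # 1 = HbA1c monitored at least yearly
})

# Target therapeutic achievement as defined in the text
df["dm_controlled"] = (df["hba1c"] <= 6.5).astype(int)
df["hpt_controlled"] = ((df["sbp"] <= 130) & (df["dbp"] <= 90)).astype(int)
df["dlp_controlled"] = (df["ldl"] <= 2.6).astype(int)

# Adjusted odds ratios for DM control (one of three separate models)
model = smf.logit("dm_controlled ~ age + yearly_hba1c", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("AOR"), ci], axis=1))
```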
Ethical considerations. This study protocol was approved by the Medical Research and Ethics Committee (MREC) of the National Medical Research Register (NMRR) (approval number NMRR-14-1867-23844, issued on October 19, 2015). All methods were performed in accordance with the relevant guidelines and regulations. Permission to conduct the research in Hospital Lundu was given by MREC and the director of Hospital Lundu. The requirement for informed written consent from each participant was waived by MREC, as this study only included information from existing medical records and did not involve interaction with patients or the collection of identifiable private information. Each entry of sample data was assigned an identification number in place of names to ensure confidentiality.

Results

Hypertension. None of the factors was found to be significantly associated with HPT therapeutic target achievement, either before or after adjustment for age, sex, ethnicity, smoking status, BMI, and duration of HPT follow-up (Table 2).

Dyslipidemia. The proportion of patients with DLP was 70.6% (N = 151), with a mean age of 58.8 years and a follow-up duration of 5.8 years. The means of the lipid profile for total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and LDL-C were 4.99 mmol/L, 2.15 mmol/L, 1.27 mmol/L, and 2.80 mmol/L, respectively. Therapeutic target achievement for DLP in diabetic patients was 35.8% (Table 2).
All three diseases. The proportion of patients with all three diseases was 62.6% (N = 134), with a mean age of 60.2 years and follow-up duration of 6.2 years.
For patients with all three diseases, only 6.7% achieved the therapeutic targets for all three diseases. No current smokers were found in this group. No significant factors affecting therapeutic targets for all three diseases were found. However, after adjusting for age, sex, ethnicity, BMI, duration of follow-up, HbA1c yearly monitoring, and FLP monitoring, male sex (AOR 4.26, 95% CI 1.06-17.05, P < 0.05) was associated with therapeutic target achievement four times higher than in females (Table 3).
Discussion
Table 3. Adjusted odds ratio (AOR) and 95% confidence interval (CI) of factors for therapeutic target achievement. *P < 0.05; **P < 0.01.

This study is one of the first to highlight target therapeutic control achievement for DM as well as HPT and DLP control in diabetic patients among the indigenous population ethnicities of rural east Malaysia. The control of DM and DLP in diabetic patients in this rural area of Malaysia was found to be at least equal to or better than the national level; however, control of HPT was worse. Those receiving HbA1c yearly monitoring were twice as likely to meet the target therapeutic achievement for DM. Good diabetic control has favorable long-term macrovascular and microvascular outcomes 20 . In this study, HbA1c yearly monitoring was performed in 81.3% of patients, who were found to be twice as likely to achieve therapeutic targets of DM. We can only speculate that the association may be due to good vigilance and attention given to the disease by both the patient and the healthcare provider. First, the general compliance of the patient: they did not miss the scheduled follow-ups, so HbA1c tests could be performed. Second, the HbA1c result assisted the doctor in recognizing the patient's average control over the past 3 months, rather than relying on fasting blood sugar, which varies highly based on the activity of the patient in the past few hours. Both factors lead to better treatment compliance and an improved response to changes in the disease. Given how straightforward it is to make such tests available, the government should consider tightly enforcing yearly HbA1c monitoring to ensure better diabetic control outcomes.
Moreover, older age and a shorter duration of DM were each associated with an approximately twofold higher likelihood of achieving diabetic control. This is in contrast with the common idea that older age is associated with non-adherence to medication, due to factors such as isolation and cost-related non-adherence 21,22 . However, with Malaysia being a family-centric culture and medical care being provided practically free of cost, older age may not lead to the same problems found in other countries. The finding of shorter durations of disease leading to better control of DM, on the other hand, was expected and corresponded well with the fact that diabetic control worsens over time 23 .
DM is known to be an independent risk factor for cardiovascular diseases 24 , and DM itself was found to have a magnifying effect on cardiovascular risk for other risk factors, such as DLP and HPT 25 . The mortality of diabetic patients also tends to be higher for coronary diseases 26 , making control of HPT and DLP in diabetic patients important. DLP control was found to be three times greater with BMI < 30 and two times greater among indigenous population ethnicities. The relationship between BMI and DLP has been studied extensively 27 , while the association between ethnicity and DLP control is unclear. A previous study from Malaysia reported that indigenous population Sarawakians had similar metabolic syndrome prevalence as other ethnicities, although the components of the metabolic syndrome were numerous, and a specific identification could not be made 28 . Although another study found that east Malaysians engage in lesser physical activity than Malays and Chinese, the ethnicities were not grouped further in the east Malaysian sample 29 . In the same study, however, higher physical activity was found to be associated with rural communities, which may partially explain the better DLP control in the indigenous population group in this study 29 .
A dedicated description of DM, HPT, and DLP control in diabetic patients in rural east Malaysia was not previously available. Using the same targets as the NDR, this study found the DM and DLP control of diabetic patients in Lundu to be at least equal to national levels but was poor for HPT control. The target achievement of HbA1c at 6.5% or lower was 43.0%, which was significantly higher than the national level of 23.8% and any of the West Malaysian states (14.9-31.1%) 15 , as well as that of the Sarawak State as a whole, 39.1% 15 . The control of DLP in our study was 35.8%, roughly equal to 37.8% in the NDR 15 , while the target blood pressure achieved in our study was 30.9%, which is lower than the national mean of 47.6% in 2012 15 . A major difference between the figures for Lundu and the national figures may be due to the predominant indigenous population demographic in Lundu. Indigenous populations were traditionally rice farmers/fishermen, and those who remain in rural areas mostly still engage in physically demanding agricultural work 30 , and hence indirectly lead a healthier lifestyle. A study from Saudi Arabia supports the findings of this study, noting that rural regions are associated with better control of DM than urban regions 31 . The finding of worse HPT control in rural areas also reinforces the findings of a study in west Malaysia, which found that membership in the rural population is a risk factor for poor control of HPT 32 . A higher emphasis should be placed on HPT during health promotion activities and education programs in rural Malaysia.
As for the two-thirds of patients having all three diseases in this study, it was found that male sex was associated with achieving good control of all three diseases. Sex-specific pathophysiological differences in the metabolic syndrome, of which DM, HPT, and DLP are components, have been documented 33 , but a direct link with control in terms of target therapeutic achievement has not been established. Also, we speculate that, influenced by the more traditional male-dominant family structure and society in rural areas where resources are limited, the more frequent healthcare appointments needed to optimise control of multiple comorbidities may be out of reach for some females. A bigger study focusing on sociodemographic factors, sex, and compliance with follow-up is needed to fully explore the association.
The results of this study provide suggestions for future research directions. This study found that indigenous population ethnicity was significant in controlling DLP, perhaps due to a genetic predisposition or cultural lifestyle, suggesting that future research that includes east Malaysians should consider ethnicity as a factor in the analysis.
There are limitations to this cross-sectional study. Other important sociodemographic factors such as education level, income, and distance from healthcare centers were not included in this study. The target population was limited to patients with DM. The sampling method may have indirectly excluded patients with poor diabetic control, as patients who defaulted treatment for more than 3 months would not be included in the sampling period. This study was conducted only in one of the twenty-eight rural districts in Sarawak, east Malaysia.
Conclusions
In conclusion, this study revealed that rural east Malaysia may experience a different pattern of NCDs control when compared to the rest of Malaysia. This provides useful data for local healthcare officials to plan health promotion strategies. Ensuring HbA1c monitoring may be a useful tool in achieving target therapy control
|
v3-fos-license
|
2023-10-05T15:17:57.815Z
|
2023-10-01T00:00:00.000
|
263630849
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6694/15/19/4838/pdf?version=1696327482",
"pdf_hash": "5454c36d220f26e72e229c61f0358ef39889b6da",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:587",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4011caee686d2e4f6932dec563d38f9dc6294877",
"year": 2023
}
|
pes2o/s2orc
|
Modern Techniques in Re-Irradiation for Locally Recurrent Rectal Cancer: A Systematic Review
Simple Summary Re-irradiation of locally recurrent rectal cancer presents challenges in terms of treatment options and outcomes. By conducting a systematic review focused on new technologies such as carbon ion radiotherapy, intensity-modulated photon radiotherapy, and stereotactic radiotherapy, we aimed to determine whether the new techniques have led to improvements in both outcomes and toxicities to enable clinicians and researchers to make informed decisions about incorporating new technologies into clinical practice and to identify avenues for further research. Abstract Background: Radiotherapy (RT) plays an important role in the treatment of patients with previously irradiated locally recurrent rectal cancer (LRRC). Over the years, numerous technologies and different types of RT have emerged. The aim of our systematic literature review was to determine whether the new techniques have led to improvements in both outcomes and toxicities. Methods: A computerized search was performed by MEDLINE and the Cochrane database. The studies reported data from patients treated with carbon ion radiotherapy (CIRT), intensity-modulated photon radiotherapy (IMRT), and stereotactic radiotherapy (SBRT). Results: Seven publications of the 126 titles/abstracts that emerged from our search met the inclusion criteria and presented outcomes of 230 patients. OS was reported with rates of 90.0% and 73.0% at 1 and 2 years, respectively; LC was 89.0% and 71.6% at 1 and 2 years after re-RT, respectively. Toxicity data vary widely, with emphasis on acute and chronic gastrointestinal and urogenital toxicity, even with modern techniques. Conclusion: data on toxicity and outcomes of re-RT for LRRC with new technologies are promising compared with 3D techniques. Comparative studies are needed to define the best technique, also in relation to the site of recurrence.
Introduction
Rectal cancer (RC) ranks eighth worldwide in incidence of neoplasia, with an age-standardised rate of 1.73 per 100,000 persons/year. GLOBOCAN 2020 estimates that there are 0.7 million new cases of rectal cancer, and this number is expected to increase to 1.16 million in 2040 [1,2]. Neoadjuvant chemoradiation (CRT), total mesorectal excision, and adjuvant therapy have helped to reduce local failure of rectal cancer, but despite this, the incidence of locally recurrent rectal cancer (LRRC) is still 4-8% [3,4].
Although the rate of local recurrence after multimodality treatment, including neoadjuvant CRT, surgery, and adjuvant CRT, is low, 81% of all recurrences occur in the irradiation field or at its margins, and a total of 78% of field recurrences occur in the lower pelvic and presacral regions [5].
To date, the treatment of choice for LRRC is surgery with radical margins (R0). In cases where this is not possible, radiotherapy (RT) in combination with or without chemotherapy (CHT) is a viable alternative that may lead the patient to radical surgery. Resection of LRRC is more difficult due to the altered and diverse anatomy of the organs and critical structures in the pelvis. In addition, the presence of fibrosis after treatment makes surgery more difficult and decreases the chance of an R0 margin [6][7][8][9].
For this reason, re-irradiation (re-RT) may play a role in increasing the rate of radical resection or in the definitive treatment of inoperable patients [10][11][12].
The difficulty of re-RT in this group of patients relates both to the dose already received by the organs at risk (OARs) and to the time elapsed between the two irradiations. There are not enough studies on dose constraints for OARs, so radiation oncologists do not have clear guidelines on the doses that can be administered to avoid acute and late side effects [13]. Administering a suboptimal dose for fear of side effects can result in failure to control the disease or leave patients permanently inoperable [14][15][16].
There is no doubt that great progress has been made in the radiation treatment of patients with LRRC.Modern techniques and daily imaging monitoring allow highly conformal treatments to be delivered to the target site while avoiding OARs.Nevertheless, there are few studies on the use of these techniques in rectal cancer recurrence re-RT.
Previous literature reviews aimed to evaluate the efficacy of re-RT and determine the optimal treatment for LRRC [17,18]. They concluded that re-RT had favorable survival outcomes when combined with surgery and showed good oncologic and palliative efficacy with or without surgery. Unfortunately, most of these studies used 3D techniques. Nowadays, most RT centers have and use advanced technologies, so the literature data based on 3D techniques are no longer reliable for determining the doses that can be used to avoid the side effects of re-RT in this patient population [13].
In addition, radiation therapy is increasingly moving toward the use of new technologies: Carbon ion RT (CIRT), proton therapy (PBR), and MR-Linac-guided adaptive RT.
The aim of our systematic literature review is to examine studies in which these techniques have been used to understand whether they have an impact on oncological outcomes and toxicities in patients treated with re-RT for LRRC.
Materials and Methods
This systematic review was based on Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines [19]. We systematically searched the MEDLINE and Cochrane databases through December 2022, with an updated search in April 2023, for the following terms: ((re irradiation) OR (re-RT) OR (re-irradiation)) AND ((rectal cancer) OR (rectal neoplasm) OR (local recurrent rectal cancer) OR (recurrent rectal cancer) OR (locally recurrent rectal cancer)). No publication date restrictions were applied. This study has not been registered.
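For illustration, the stated boolean query could be run against MEDLINE programmatically via the NCBI E-utilities esearch endpoint, as in the sketch below; this is an assumption about tooling for reproducibility purposes, not the authors' actual workflow.

```python
# Illustrative sketch: submit the stated query to PubMed via NCBI E-utilities.
import requests

query = ('((re irradiation) OR (re-RT) OR (re-irradiation)) AND '
         '((rectal cancer) OR (rectal neoplasm) OR (local recurrent rectal cancer) '
         'OR (recurrent rectal cancer) OR (locally recurrent rectal cancer))')

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 200},
    timeout=30,
)
result = resp.json()["esearchresult"]
print(result["count"], "records found;", len(result["idlist"]), "PMIDs retrieved")
```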
We included studies published in the English language with at least 10 patients with LRRC treated with re-RT with or without concomitant CHT. Prospective, retrospective, and randomized controlled trials were included. Case reports and reviews were excluded. Studies must have used more than 50% IMRT/Volumetric Modulated Arc Therapy (VMAT) or stereotactic body RT (SBRT) techniques. Studies that used CIRT, PBR, or SBRT or studies conducted with MR-linac were eligible for our review. Studies were included if they reported at least one of the following objectives: primary objectives, including overall survival (OS) and local control (LC), and secondary objectives, including grade 3 complications.
Initial screening was performed using titles to filter out studies that were duplicated in databases or that were not clinical trials, such as reviews, letters, editorials, case reports, and consensus guidelines. Studies that did not involve LRRC patients and brachytherapy studies were also excluded.
We performed a second screening based on abstracts and excluded studies that were not clinical trials, studies with fewer than 10 patients, studies with irrelevant subjects, non-English language studies, and abstracts only.
We then performed a full-text review to identify studies that met the above inclusion criteria. In the case of multiple studies from a single institution, we included the studies with the largest number of patients with LRRC who received re-RT. The above screening processes were performed independently by two authors (EG, SB), and the final inclusion was confirmed by mutual agreement.
Data acquisition was performed by two independent authors. We collected all general information regarding authors, publication year, country, number of patients involved and age, study design, study period, patient population, inclusion criteria, patients excluded, treatment information, re-RT technique, previous RT dose (Gy), interval between RT and re-RT, re-RT total dose and fractionations, cumulative dose, concomitant CHT, and percentage of patients that underwent surgery. We reported acute complications, defined as complications occurring within 3 months after re-RT or those described as acute complications in the relevant study. Late complications were defined as complications occurring after 3 months from re-RT or those described as late complications in the relevant study. Toxicities were classified into five categories: gastrointestinal (GI), genitourinary (GU), neuropathy, pain, and infection. Oncological outcomes included OS, progression-free survival (PFS), described as no progression of disease in the treatment area or outside the treatment field, and LC.
For missing numerical data, PFS, OS, or LC rates were estimated from descriptive graphs, if available. The information regarding tumor control was collected and analyzed as rates at specific time points (e.g., 1-, 2-, 3-, and 5-year LC rates), taking into account the possible recurrence or regrowth after re-irradiation.
If discrepancies in the information were found by the two independent researchers, the differences were resolved through discussion and repeated literature review.
Results
We identified 126 studies in MEDLINE and the Cochrane Library. After records were removed for title review, 59 studies underwent abstract review. A total of 67 did not meet the inclusion criteria: 17 were unsuitable types of publications (reviews, letters, editorials, case reports, trials, and laboratory studies), 12 were on irrelevant subjects, 2 had less than 10 patients, 1 was excluded for RT technique, 1 was not in the English language, and 1 was an abstract only. A full-text review was performed for 25 studies. A total of 12 studies were excluded for the RT technique; 1 study was excluded because multiple studies were published by a single institution, 2 studies were excluded because no outcomes of endpoints were provided for included patients, and finally, 3 studies were excluded because it was not possible to distinguish patients treated for LRRC in them. Overall, seven studies with eight cohorts of 230 patients were included in this review [20][21][22][23][24][25][26]. The study inclusion process is summarized in Figure 1.
The analyzed patients were treated from 2003 to 2019 for previously irradiated LRRC.
Information on the use of sequential CHT is reported in four studies [20,21,24,25]: two did not administer any CHT [21,25]; one administered capecitabine- or 5-fluorouracil-based CHT to patients who underwent re-RT with photons, while no patient who underwent CIRT received CHT [20]; in the last one [24], only 5-fluorouracil-based CHT was administered. Surgery was reported in four studies [20][21][22][24]; three of these reported that no patients (0%) underwent surgical treatment [21,22,24], and only one [20] reported the percentage of patients undergoing surgery, again among patients who underwent re-RT with photons (while patients who underwent CIRT had no surgery before or after re-RT).
Only one study had a palliative intent [24].
Four papers specified exclusion criteria, namely metastatic disease or frail clinical condition. Regarding the size of the recurrence, three studies reported the average size of the disease in mm [20,21,23], and three studies reported the average volume in cm³ [22,25,26].
Discussion
This literature review confirms the role of RT in the re-RT of LRRC and that new technologies have an impact on outcomes and toxicity.
Over the years, the interest in re-RT of LRRC has grown exponentially.The difficulty of planning treatment considering the dose previously received by patients, the establishment of dose limits for OARs, the combination with concomitant CHT, and the definition of a dose have led to a growing interest in this topic, resulting in several studies and literature reviews.At the same time, highly conformal techniques have been developed to circumvent the problems of dose distribution of 3D techniques.
Previously, several literature reviews have shown that re-RT is possible with good results in terms of local disease control and symptom relief [17,18,27].
A previous systematic review by Guren et al. supported the use of re-RT for LRRC, followed by radical resection when possible, and of hyperfractionation to reduce late toxicities [17]. A recent systematic review and meta-analysis by Lee et al. confirmed the oncologic efficacy of re-RT in LRRC and a higher survival rate with concurrent surgery, but with a higher risk of late toxicities. The median OS in this study was more than 2 years [18]. Both studies showed that re-RT was also effective in the palliative setting. These reviews were mainly based on studies in which 3D techniques were used [17,18]. Regarding CIRT, the systematic review by Venkatesulu et al. reports interesting results and describes CIRT as a promising new technology when re-irradiation is required [27]. A recent Italian study published by the Italian Association of Radiation and Clinical Oncology for Gastrointestinal Tumors (AIRO-GI) showed that most Italian centers have advanced technologies such as VMAT/IMRT/SBRT and daily image monitoring, which have led to breakthroughs in dose and fractionation [13].
To date, this is the first systematic literature review of re-RT for LRRC to include only highly conformed techniques (VMAT/IMRT) and the first to include not only SBRT treatments but also treatments with CIRT.
Moreover, the RT intent in the Cai et al. study was palliative, and this may explain why that cohort had the lowest OS and PFS, especially at two years [24].
The toxicity reported by the studies is extremely heterogeneous and exhibits wide variability. Two studies reported the absence of acute and chronic G3 toxicity in patients treated with CIRT [22,23], whereas reported acute toxicity in the study by Yamada et al. was 10.4%; chronic toxicity was 37.7%, with GI (11.7%) and infectious (16.9%) side effects predominating [21]. Chung et al. also reported data on late GI toxicity (5.7%), indicating that half of the patients treated with CIRT with G3 or higher toxicity received CHT after re-RT [20]. In contrast, among patients treated with photons, only the study by DeFoe et al. reported no acute toxicity after re-RT with Cyberknife [26], whereas two studies reported acute G3 toxicity ranging from 16.5% to 22.7% [24,25]; chronic GI toxicity was reported in two studies, ranging from 13.6% to 19.3% [20,24].
In the systematic review and meta-analysis by Lee et al., the pooled rate of grade 3 acute complications was 11.7%, and the pooled rate of grade 3 late complications was 25.2%. Regarding GI, GU, and skin and soft tissue complications, rates of 13%, 2%, and 9% were found for acute toxicity and 13%, 9%, and 16% for late toxicity, respectively [18].
There is no way yet to determine whether the latest RT techniques result in a reduction in toxicity, although we affirm that with the latest technology, it has been possible to increase the dose to the target with acceptable side effects.To limit the dose to OARs, selected patients had received spacer implantation prior to CIRT or had an omental flap or polytetrafluoroethylene (PTFE) prosthesis inserted through open surgery to limit the dose to the bladder or bowel [20][21][22].Moreover, one of the methods to reduce the side effects is to increase the number of fractions.In the study by Dagoglu et al., the selection of SBRT dose fractionation was based on the relationship of the recurrence to the dose-limiting structures, primarily the bowel.When the bowel was close to the target, five fractions were used, and this scheme was also used when there was no clear plane between the bowel or sacral plexus and the recurrence [25].Regarding dose constraints, the Yamada et al. study set dose constraints for D2cc of the intestine and bladder at 60 Gy (RBE) and 70 Gy (RBE), respectively, when combined with the dose distribution of the previous RT [21].
A viable alternative to the presence of mobile OARs close to the target to be irradiated can be MR-guided radiotherapy.This technique allows daily identification of the target and OARs and dose modulation according to physiological changes in the anatomy [28,29].
Currently, there are no studies reporting the use of MR-linac in LRRC, but studies of adaptive radiotherapy in locally advanced rectal cancer are ongoing, so a role for this advanced technology in LRRC may also be envisioned [30].
It would have been interesting to have a comparison between the re-RT group and the re-RT plus surgery group.In our review, only the study conducted by Chung et al. considered patients who underwent surgery.In this study, 11 patients (30%) underwent resection before or after re-RT.However, they performed a multivariate analysis of factors associated with severe late toxicity, and surgery before or after re-RT was not statistically significant (p = 0.491) [20].Surely, studies are needed to better investigate this aspect, leveraging a larger sample of patients.
The complexity of the treatment of LRRC led to the creation of additional therapeutic strategies developed over the years to meet the requirements of personalized treatment.In this scenario, the combined use of surgery and intraoperative RT also offers a promising approach to improve local control of the disease [31].
In addition, new biological discoveries may influence new drugs in the future that, in combination with RT, could improve curative options.Studies are also being developed to complement non-pharmacologic therapies [32].
The limitations of this literature review should be considered.First, the included studies are few and heterogeneous, in which patients were treated with different techniques, making any comparison difficult.Second, most of the included studies are retrospective.In addition, it is practically difficult to evaluate complications; patients who underwent reirradiation for LRRC usually have a short life expectancy and poor compliance; moreover, most of the included studies described complications independently.Further prospective studies are needed to understand the role of modern technology in the re-RT of LRRC and whether it is possible to find a correlation between the site of recurrence and the best technique to use.A prospective observational multicenter study was recently designed to evaluate whether the combination of total neoadjuvant therapy (TNT) with re-RT in LRRC patients could lead to a better LC rate.The results of this study will hopefully provide a full understanding of the benefit of re-RT in different clinical settings of relapse [33].
Conclusions
Treatment of LRRC is certainly still an open challenge.The multidisciplinary team meeting certainly reflects the first step to be taken to identify operable recurrences in the first instance and to identify patients who could benefit from pre-operative or radical re-RT.
Modern photon RT techniques (VMAT/IMRT/SBRT) have brought a breakthrough in improving dose conformity.They allow higher doses of radiation to be delivered to the tumor while minimizing the dose to surrounding normal tissue.This can lead to better tumor control rates and fewer side effects.
Using advanced technologies such as CIRT and MR-guided RT offers potential advantages that can improve treatment accuracy and outcomes, but both are technologies that are not available in all RT centers.Networking between RT centers equipped with different technologies could be a key step in personalizing treatment.
With the aim of investigating whether the combination of total neoadjuvant therapy (TNT) with re-RT in LRRC patients could lead to a better LC rate and to evaluate the role of CIRT followed by CHT in patients considered inoperable, a new multicenter observational study has been started [33].
New prospective data on integrating different anti-cancer treatments, the RT technique, the observed toxicity, and the outcomes are needed to better tailor treatments.
Figure 1. Search flow chart according to the PRISMA guidelines.
Table 1. Patient and study characteristics.
Efficient removal of lead(II) ions in water using functionalized poly(styrene) oligomers
Introduction
Adsorbent materials based on waste polymers such as polypropylene (PP), polyethylene terephthalate (PET), poly(vinyl chloride) (PVC), polycarbonate (PC), polyethylene (PE) and poly(methyl methacrylate) (PMMA) have received great attention due to their low cost, high adsorption capacity, relatively easy regeneration, and the possibility of shaping them into the most suitable morphology, such as spheres, fibers, films or membranes. According to the recent literature on the removal of heavy metals, various adsorbents based on organic and mineral structures, biological materials and polymeric materials have been used so far. In fact, we have recently demonstrated that the introduction of acrylamide units into PS oligomer chains provides an interesting adsorbent material with a strong affinity for binding lead. In spite of their demonstrated efficacy, most studies have focused mainly on the adsorption properties and the elucidation of the adsorption mechanism, while the optimization of the variables that affect this process has been scarcely explored. In this regard, optimal adsorption conditions are a prerequisite, particularly in the case of scaled-up applications, to minimize cost and maximize the adsorption efficiency towards heavy metals. Classical optimization, in which only one factor is changed at a time in order to measure its effect, takes a lot of time and requires a large number of experiments. Conversely, response surface methodology (RSM), in which key parameters are simultaneously optimized, overcomes the deficiencies of single-factor optimization. It is important to remark that using RSM substantially reduces the number of experiments necessary to predict the optimal adsorption conditions. Moreover, modeling the process refines the interpretation of complex phenomena and provides a basis for process scaling. In this research, the adsorption of Pb(II) from aqueous solution using an adsorbent material based on PS oligomers was optimized using a factorial design with the percentage removal as the response variable. With the aim of selecting the best conditions for the removal of Pb(II), the 2^(4-1) factorial design with replicates at the central point was complemented with a central composite design with axial points to build a quadratic polynomial model and to investigate the response surface space.
Materials and methods
Synthesis and characterization of chemically modified PS oligomers and their preliminary adsorption studies

The experimental design is given in Table 1, where the independent variables under analysis were pH (A), initial adsorbate concentration (B), adsorbent dosage (C) and temperature (D) (see Table 2). Optimal conditions for the adsorption experiments in batch mode were determined by using the predictive quadratic model (Eq. (1)):

Y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj    (1)

where Y is the predicted response, b0 is the constant coefficient, bi are the linear coefficients, bii are the quadratic coefficients, bij are the interaction coefficients, and xi and xj are the coded factors of the independent variables considered for the adsorption of Pb(II) ions using chemically modified PS oligomers.
On the other hand, the analysis of variance was used to measure the magnitude of the influence of the independent variables on the response factor. The adjusted determination coefficient (R²adj) was used as a measure of the proportion of the total observed variability described by Equation (1). Quadratic models were used to build the response surface for the adsorption of Pb(II) ions and to find the maximum value of the response by using the statistical software Design Expert 9.0.6.2 (Stat-Ease Inc., USA).
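As an illustration of the model structure described above, the following minimal Python sketch fits the same second-order polynomial by ordinary least squares. The authors performed the actual fit in Design Expert 9.0.6.2; the coded design matrix and responses below are synthetic placeholders, not the experimental data.

import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear terms x_i, interactions x_i*x_j, squared terms x_i^2."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(18, 4))      # placeholder coded levels (18 runs, 4 factors)
y = 60 + 12 * X[:, 0] - 6 * X[:, 0] ** 2 + rng.normal(0, 2, size=18)  # synthetic removal (%)

D = quadratic_design_matrix(X)
coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)       # b0, b_i, b_ij, b_ii
y_hat = D @ coeffs
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("fitted coefficients:", np.round(coeffs, 2), "R^2:", round(r2, 3))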
Adsorption experiments
The adsorption experiments were carried out in batch mode, which consists of shaking 20.0 mL of Pb(II) solution in stoppered glass tubes according to the adsorbent dosage, pH, temperature, and initial adsorbate concentration indicated in Table 1. The coded factors and their actual levels are given in Table 2. The percentage of removal (%) was calculated as follows:

Removal (%) = [(C0 − Ce)/C0] × 100    (2)

where C0 and Ce are the concentrations at the initial and equilibrium states (mg L⁻¹), respectively.
The equilibrium amount qe (mg g⁻¹) adsorbed per unit mass of adsorbent material was evaluated using the following equation:

qe = (C0 − Ce)·V / W

where qe is the equilibrium amount of Pb(II) adsorbed per unit mass, V (L) is the volume of solution and W (g) is the mass of chemically modified PS oligomers. The percentage adsorption increases with increasing pH from 2.00 to 5.80. This behavior could be attributed to the fact that at lower pH the acrylamide units linked to the PS oligomers are in the protonated form, which increases the number of protonated species and generates electrostatic repulsion forces among the adjacent protonated terminal amide groups (see Scheme 1). A similar tendency has been reported by Sarkar et al. for the removal of malachite green dye using a biodegradable graft copolymer derived from amylopectin and poly(acrylic acid). The authors found that at pH > pHpzc of the adsorbent, the electrostatic attraction between the negatively charged surface of the adsorbent and the positively charged adsorbate is enhanced, resulting in high malachite green dye removal. SEM micrographs of the materials are given in Fig. 3. The micrograph analysis revealed that the PS oligomers (Fig. 3a) exhibit many pores throughout the surface, while the functionalized PS oligomers (Fig. 3b) show the formation of spherical aggregates with a wide size distribution and an apparently low porosity due to the introduction of acrylamide units into the oligomer chains.
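A short sketch of the two bookkeeping formulas above, with hypothetical example numbers (they are not the measured values from this work):

def removal_percentage(c0, ce):
    """Removal (%) = (C0 - Ce) / C0 * 100, concentrations in mg/L."""
    return (c0 - ce) / c0 * 100.0

def equilibrium_uptake(c0, ce, volume_l, mass_g):
    """qe (mg/g) = (C0 - Ce) * V / W."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical run: 20.0 mL of a 36.4 mg/L Pb(II) solution and 0.011 g of adsorbent
print(removal_percentage(36.4, 3.2))                   # about 91 % removal
print(equilibrium_uptake(36.4, 3.2, 0.020, 0.011))     # uptake in mg/g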
Statistical analysis

With the aim of determining the ideal conditions for the removal of Pb(II), the effect of four adsorption variables was studied using a central composite design (CCD) and response surface methodology (RSM). The analysis of variance (ANOVA) used to determine the significance of curvature for the adsorption of Pb(II) ions at a confidence level of 95% is given in Table 3. The analysis of these data revealed that the curvature is significant, which means that there is an inflection point in the variable set under study. Hence, the linear model is not able to represent the design space and, for this reason, the FCCD design was selected to fit the quadratic model considering the 18 experimental runs.
According to the ANOVA data for the removal of Pb(II), pH was the most important quadratic term as well as the most significant individual factor, while adsorbent dosage (p = 0.0768) and initial adsorbate concentration (p = 0.0863) both exhibited less significance for the adsorption of Pb(II) ions in comparison with pH (Eq. (3)). This is consistent with the results reported by Meenakshi et al. for the adsorption of Cd(II) and Pb(II) from aqueous solutions using a poly(aniline)-grafted chitosan copolymer as an adsorbent. For the other independent variables, double interactions and quadratic terms, the ANOVA analysis showed non-significance, with p-values in the 0.11-0.89 range. On the basis of these findings, we can assume that the binding of Pb(II) to the chemically functionalized PS oligomers depends on pH, adsorbent dosage, and initial adsorbate concentration, as suggested by the analysis of variance.
Removal (%) = 23.13 + 30.34x1 + 6.32x2 + 4.14x1x3 + 0.92x1x4 + 6.24x2² + 11.65x3² + 3.87x4² + 3.34x1²    (3)

The adjusted determination coefficient (R²adj) was also evaluated for the removal of Pb(II). On the other hand, the mathematical model was used to build response surface plots, to investigate the interactions among the independent variables and to determine the optimal condition of each factor for the maximum adsorption of Pb(II). Fig. 5a shows that the percentage removal of Pb(II) increases at pH values above 5.26 when the adsorbent dosage and initial adsorbate concentration are fixed. Moreover, Fig. 5b and c show that at adsorbent dosages below 10 g and temperatures above 35 °C, the percentage adsorption of Pb(II) ions exhibits a remarkable decrease.

Optimization of the batch adsorption mode

To determine the optimal conditions, Equation (3) was used to maximize the percentage removal of Pb(II). Under these conditions, the maximum percentage removal of Pb(II) (93.12%) was predicted to occur at 38 °C, pH 5.80, an initial adsorbate concentration of 36.40 mg L⁻¹, and an adsorbent dosage of 10.77 mg, with a desirability function of 0.926. The analysis of the response variable revealed that the percentage adsorption of metal ions is enhanced above pH 5.63 due to the increase of the electrostatic interactions between the surface of the functionalized PS oligomers and the positively charged ions. If the temperature is lower than 35 °C, the decrease in the percentage adsorption of Pb(II) could be attributed to the low mobility of the heavy metal ions; on the contrary, at temperatures greater than 35 °C the percentage of adsorption is reduced because the increase in kinetic energy decreases the efficiency of the removal process. Likewise, the effect of the initial adsorbate concentration on the percentage adsorption at concentrations below 10.74 mg L⁻¹ shows that the active sites on the adsorbent material are not completely occupied; thus, the interaction of the heavy metal ions with the surface is not enhanced, decreasing the percentage removal. To validate the predicted response, experimental assays were carried out at the optimal conditions, giving a percentage removal of Pb(II) equal to 91%, which is similar to or greater than that of other adsorbent materials derived from waste polymers (Table 4).
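The predicted-optimum search can be illustrated with the following sketch, which maximizes a generic second-order model of the coded factors inside the design space. The authors used Design Expert's desirability function; the coefficients below are placeholders, not the fitted values reported above.

import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quadratic_model(x, b0, lin, inter, quad):
    """Second-order model in the four coded factors x = (x1, x2, x3, x4)."""
    pairs = list(combinations(range(len(x)), 2))
    y = b0 + float(np.dot(lin, x)) + float(np.dot(quad, x ** 2))
    y += sum(inter[k] * x[i] * x[j] for k, (i, j) in enumerate(pairs))
    return y

# Placeholder coefficients (illustrative only)
b0 = 60.0
lin = np.array([15.0, 3.0, -2.0, 1.0])
inter = np.zeros(6)
quad = np.array([-8.0, -1.0, -1.5, -2.0])

res = minimize(lambda x: -quadratic_model(x, b0, lin, inter, quad),
               x0=np.zeros(4), bounds=[(-1.0, 1.0)] * 4)   # stay inside the coded design space
print("coded optimum:", np.round(res.x, 2), "predicted removal:", round(-res.fun, 2))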
Conclusions
In summary, this study demonstrated the usefulness of a central composite design to determine the optimal conditions to enhance the percentage removal of Pb(II) from aqueous solutions.
After the simultaneous optimization using the quadratic model, the optimal conditions were obtained at 37.71 °C, an initial adsorbate concentration of 36.37 mg L⁻¹, a pH of 5.80 and an adsorbent dosage of 10.75 g. At these conditions, the predicted response for the removal of Pb(II) was 93.12%, which is similar to the experimental value (91.23%). The percentage of removal determined under the optimal conditions by the simultaneous optimization was high, and similar to or higher than that of other adsorbent materials based on waste polymers. Lastly, the surface charge distribution and the structural and morphological features corroborate the successful obtention of the material and confirm its use as an inexpensive, efficient, and available adsorbent for the removal of heavy metals from aqueous systems.
Fig. 1 pH drift method to obtain the pHpzc value for functionalized PS oligomers.
Fig. 4 Predicted against experimental data plots for the adsorption of Pb(II).
Fig. 5 Response surface plots for the adsorption of Pb(II) from aqueous solutions.
Table 2 Independent variables and coded levels used in the experimental design.
A Study on Characteristics of the Graceful Labelling
The origin of graph theory can be traced back to the work of the Swiss mathematician Leonard Euler (1707-1783) who in 1735 solved a problem that came to be known as the 'Seven Bridges of Konigsberg' [21]. In 1736, Leonard Euler published the first paper on graph theory, where he reported the solution to the above problem. He mathematically proved that it is impossible to find a route traversing all the seven bridges. Graph theory belongs to the well known area in mathematics called 'discrete mathematics'. There exist a lot of fundamental differences between the problems and mode of solutions in the area of discrete mathematics and continuous mathematics.
Discrete mathematics constitutes the number of objects whereas in continuous mathematics their size is measured. Discrete mathematics had evolved as early as man learned counting but continuous mathematics has long dominated the history of mathematics. It began to change in twentieth century. It is seen that the first development occurred in mathematics when it began to reach people. Its core point has varied from the concept of numbers to the concept of set. Set theory is more appropriate to the mode of discrete mathematics than to that of continuous mathematics. The hike in the use of computers was the dramatic point. As graph theory belongs to the area of discrete mathematics, it has got a number of attractive applications in computer science and many other allied streams including engineering and commerce.
Various interesting fields of research in graph theory include the labeling of discrete structures, decomposition of graphs, topological graph theory, algebraic graph theory, fuzzy graph theory and domination in graphs. Graph labeling is a prospective research area due to its vital applications that could challenge our mind for eventual solutions.
Graph Labeling
The concept of labeling of graph was first introduced by Rosa [65] in mid sixties. In labeling, distinct nonnegative integers are associated to the vertices of a graph G as vertex labels, so that each edge receives a distinct nonnegative integer as an edge label. If the domain is the vertex set then it is the vertex labeling. If the domain is the edge set then it is called the edge labeling. In total labeling, the domain is both vertex set and edge set.
The field of graph labeling has been creating a lot of interest and motivation among many researchers and this branch of mathematics has found several applications. Labeled graphs have its wide variety of applications in designing communication network addressing systems, determining ambiguities in X-Ray crystallographic analysis, determining the radio astronomy and optimal circuit layouts.
Additionally, graph labeling is also relevant in additive number theory and in coding theory, particularly for missile guidance codes, design of good radar type codes and convolution codes with optimal autocorrelation properties [12,37,45,50,83].
More than two thousand research papers have been published so far in various graph labeling. Some of them have been studied such as graceful labeling, cordial labeling, (n k)-equitable labeling, E -cordial labeling, totally magic cordial labeling, multiplicative labeling, multiplicative divisor labeling and strongly multiplicative labeling.
The ongoing research in graph labeling has been organized, and papers from the last decade listed, by Gallian [23] through his 'dynamic surveys'.
Graceful Labeling
In the year 1967, Rosa introduced a new type of graph labeling, which he named β-valuation; a related labeling was given by Graham and Sloane [31] in 1980. Let G be any graph and q be the number of edges in G. Rosa introduced a function f from the set of vertices of G to the set of integers {0, 1, 2, . . . , q}, so that each edge uv is assigned the label |f(u) − f(v)|, with all labels distinct. It was believed that these labelings would help solve Ringel's conjecture [64], which involves decomposing a complete graph into isomorphic subgraphs. Golomb [30] independently studied the same type of labeling and named it graceful labeling [2,33,38,39,41,43,47,51,84].
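A small Python sketch of the definition just quoted; the graph, the labels and the function name are illustrative, not taken from the thesis:

def is_graceful(edges, f):
    """f maps vertices to distinct integers in {0,...,q}; edge labels |f(u)-f(v)| must be {1,...,q}."""
    q = len(edges)
    if len(set(f.values())) != len(f):                    # vertex labels must be distinct
        return False
    if any(not (0 <= f[v] <= q) for v in f):
        return False
    edge_labels = {abs(f[u] - f[v]) for (u, v) in edges}
    return edge_labels == set(range(1, q + 1))

# Path on four vertices (q = 3 edges) with the classic graceful labeling 0-3-1-2
edges = [("a", "b"), ("b", "c"), ("c", "d")]
labels = {"a": 0, "b": 3, "c": 1, "d": 2}
print(is_graceful(edges, labels))   # True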
Gnanajothi [29] defined and studied odd graceful graphs. The challenge in odd graceful labeling is to find out whether a given graph is odd graceful, and if it is then how to label the vertices. The common approach in proving the odd gracefulness of special classes of graphs is to either provide formulas for odd graceful labeling in the given graph, or construct desired labeling from combining the famous classes of odd graceful graphs.
Multiprotocol Label Switching
The Multiprotocol Label Switching (MPLS) working group was first formed in 1997. The First MPLS Layer 3 Virtual Private Networks (L3VPN) and Traffic Engineering (TE) deployment was done in 1999. Recently, MPLS transport profile was introduced in 2011. Before MPLS, a number of different technologies were deployed such as Frame Relay and Asynchronous Transfer Mode (ATM). Frame Relay and ATM uses frames or cells throughout a network.
MPLS technologies have evolved with the strengths and weaknesses of ATM in mind. Many network engineers have agreed that ATM and Frame Relay will be replaced with a protocol that requires less overhead, providing connection-oriented services for variable-length frames. MPLS has been replacing some of these technologies in the marketplace. It is highly possible that MPLS will completely replace these technologies in the future, thus aligning these technologies with current and future technology needs [55, 61 and 66].
MPLS uses labels in the packets to transport the data. It allows the core network devices to switch the data packets instead of looking at the destination IP address in the routing table. MPLS label path is preestablished from source to destination. MPLS data packets can be run on other layer 2 technologies such as Frame relay, ATM, Ethernet.
MPLS improves the performance of data packet forwarding in the network. It provides highly scalable network which is topology driven instead of flow driven. It supports Quality of service (QOS) by prioritizing the critical applications and processing them faster. MPLS network has the capability to restore the failed connections at very high speed than the traditional network.
Objectives and Scope of the Study
The objectives of this research work are the following: i) To study the different types of graceful labeling and find new classes of graceful graphs. ii) To investigate the possibilities of introducing new types of graceful labeling and find the corresponding classes of graceful graphs. iii) To investigate the relations among different types of graceful labeling. iv) To investigate the relation between different types of graceful labeling and other labeling. v) To study the applications of the different types of graceful labeling. This research work will introduce definitions for new types of graceful graphs in graph theory and also new classes of graceful graphs will be established. Further, the study of relation between different types of graphs, graceful or otherwise will open a new branch of study in graph theory. This research has enormous scope for further studies.
Organization of the Thesis
This thesis contains six chapters. The organization of the contents in each of these chapters is given below.
The first chapter of the thesis contains a brief introduction to graph labeling, graceful labeling and their applications. Literature review is carried out briefly. This chapter also contains the objectives and the scope of the thesis.
The second chapter provides the basic definitions and theorems on graphs, graph labeling, graceful labeling and Multiprotocol Label Switching communication networks, which are needed for discussion in the subsequent chapters.
The third chapter is divided into three sections as follows. The first section illustrates some of the results on odd graceful labeling. The odd gracefulness of the graph ⟨K1,n : K1,m : w⟩ for all n, m, the corona graph C4(K1,n), and the Dutch windmill graph D4(m) when m ≥ 2 is discussed.

The second section investigates the even gracefulness of the spider graph S(n, 2) for all n, the coconut tree CT(n, m) for all n, m, the graph ⟨K1,n : K1,m : w⟩ for all n, m, all caterpillar graphs, the corona graph C4(K1,n) where n ≥ 1, the shadow graph D2(Pn) where n ≥ 2, the shadow graph D2(K1,n) for all n, and the graph P*n+k where n ≥ 2 and k ≥ 1.
The inferences from the above studies on graceful labeling motivated the development of a new labeling technique termed as even-even graceful labeling. This study is carried out in the third section.
Definition. A (p, q)-graph G is said to be even-even graceful if there is a bijection f from the edge set E(G) to the set {2, 4, . . . , 2q} such that the induced mapping f* from the vertex set V(G) to the set {0, 1, 2, . . . , 2k − 1}, defined by f*(v) = (Σ f(uv)) (mod 2k), the sum taken over all edges incident to v, where k = max{p, q}, makes all induced vertex labels distinct. It has been proved that the following well-known families of graphs are all even-even graceful: (i) the path graph Pn for all n; (ii) the star graph K1,n when n is even; (iii) the graph Pn(4); (iv) the graph ⟨K1,n : K1,n : w⟩ for all n; (v) the perfect m-ary tree when m is even; (vi) the cycle graph Cn when n is odd; (vii) the wheel graph Wn when n ≡ 0 (mod 4); (viii) the dumbbell graph D(n, n); (ix) the graph Cn(n) where n ≥ 3 and n is an odd integer; and (x) the Cartesian product graph P2 × Cn.
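A sketch of how the definition above can be checked mechanically. Since the printed definition is slightly garbled, one reading is assumed here: the distinctness requirement applies to the induced vertex labels f*(v). The example graph and its edge labels are illustrative.

def is_even_even_graceful(vertices, f_edge):
    """f_edge maps each edge (u, v) to an even label; checks the assumed definition."""
    p, q = len(vertices), len(f_edge)
    k = max(p, q)
    if sorted(f_edge.values()) != list(range(2, 2 * q + 1, 2)):
        return False                                   # not a bijection onto {2, 4, ..., 2q}
    induced = [sum(lab for e, lab in f_edge.items() if v in e) % (2 * k) for v in vertices]
    return len(set(induced)) == p                      # induced vertex labels pairwise distinct

# Path on three vertices a-b-c with edge labels 2 and 4 (induced labels 2, 0, 4 modulo 6)
print(is_even_even_graceful(["a", "b", "c"], {("a", "b"): 2, ("b", "c"): 4}))   # True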
The fourth chapter begins with a detailed study of the relation between different types of graceful labeling. Further, the relation between even-even graceful labeling and other labelings such as E-cordial labeling, totally magic cordial labeling, multiplicative labeling, strongly multiplicative labeling and multiplicative divisor labeling is investigated. The concept of complementary edge-odd graceful labeling has been introduced with an example.
The fifth chapter introduces a new concept of k -even-even edge graceful labeling of a graph and derives some family of graphs that are k -even-even edge graceful. Many variations and generalizations of labeling of graphs have been studied by many authors in many ways. Graceful labeling of graph has a lot of applications. In this chapter an application of k -even-even edge graceful labeling is also established.
Definition. A (p, q)-graph G is said to be k-even-even edge graceful (k > 0) if there is a bijection f from the edge set E(G) to the set {2k, 2k + 2, . . . , 2k + 2q − 2} such that the induced mapping f* from the vertex set V(G) to the set {0, 2, . . . , 2z − 2}, defined by f*(v) = (Σ f(uv)) (mod 2z), the sum taken over all edges incident to v, where z = max{p, q}, makes all induced vertex labels distinct.

It is shown that the following well-known graphs are k-even-even edge graceful: the star graph K1,n when n is even and k ≡ 1 (mod n + 1), the friendship graph Fn when n is odd, the prism graph Yn for n ≥ 3, the Cartesian product graph Cm × Cn where n = 3 and m ≡ 1 (mod 4), and the corona graph Cm(K1,n) when m is odd, n is even and m divides n. Further, an application of k-even-even edge graceful labeling in MPLS (Multiprotocol Label Switching) communication networks is carried out. A graph G = (V, E) represents an MPLS communication network: the vertices are routers and the edges are the links between routers.
In an MPLS communication network, it is useful to assign an even label to each destination IP network. Labels are assigned and distributed before the arrival of data traffic. This means that if a route exists in the IP forwarding table, a label has already been allocated for the route, and so traffic arriving at a router can be label-swapped immediately.
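A toy illustration of this idea (not a real MPLS implementation and not the labeling scheme derived in the thesis): every destination network in a forwarding table is bound to a distinct even label before traffic arrives, so packets can be label-swapped without an IP lookup. The prefixes and label values are hypothetical.

# Bind even labels 2, 4, 6, ... to the routes known in advance
forwarding_table = ["10.1.0.0/16", "10.2.0.0/16", "192.168.5.0/24"]
label_bindings = {prefix: 2 * (i + 1) for i, prefix in enumerate(forwarding_table)}

def forward(incoming_label, swap_table):
    """Swap the incoming label for the outgoing one; no routing-table lookup needed."""
    return swap_table.get(incoming_label)

swap_table = {2: 6, 4: 2, 6: 4}        # hypothetical per-hop label swaps
print(label_bindings)
print(forward(2, swap_table))          # a packet labelled 2 leaves with label 6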
The sixth chapter contains the summary of this research work. The promising areas of future research are also discussed in this chapter.
Conclusion
This chapter comprises a general introduction to graph theory, graph labeling, graceful labeling and multiprotocol label switching. The objectives of this research work and the scope of the study are given explicitly. Further, a brief discussion on the organization of the thesis is carried out chapter wise.
The basic concepts in graph theory frequently used in this thesis and also a brief review of literature are given in the next chapter.
Summary
The comprehensive goal of this study is to scrutinize the amplification of graph labeling, graceful labeling and various types of graceful labeling. This thesis employs a mathematical approach, comprising an aggregated and solid level of analysis using recognized data, to explore the relationship between various labelings of graphs. Further, it throws a special focus on how k-even-even edge graceful labeling is applied to the Multiprotocol Label Switching communication network.
One of the objectives of this research work is to study the different types of graceful labeling and find new classes of graceful graphs. Keeping in mind the basic concepts and ideas of graph theory, constructions are given and the following theorems are established. The odd gracefulness of the graph ⟨K1,n : K1,m : w⟩ for all n, m, the corona graph C4(K1,n) where n ≥ 1, the coconut tree CT(n, m) for all n, m, the graph P*n+k where n ≥ 2 and k ≥ 1, and the Dutch windmill graph D4(m) when m ≥ 2 is successfully derived. Even gracefulness of the spider graph S(n, 2) for all n, the coconut tree CT(n, m) for all n, m, the graph ⟨K1,n : K1,m : w⟩ for all n, m, all caterpillar graphs, the corona graph C4(K1,n) where n ≥ 1, the shadow graph D2(Pn) where n ≥ 2, the shadow graph D2(K1,n) for all n, and the graph P*n+k where n ≥ 2 and k ≥ 1 has been established. Also, a new type of graceful labeling called even-even graceful labeling is introduced and the corresponding classes of graceful graphs are given. The path graph P2n+1 (n ≥ 1), the star graph K1,n when n is even, the graph Pn(4) for every n, the graph ⟨K1,n : K1,n : w⟩ for every n, the perfect m-ary tree when m is even, the cycle graph Cn when n is odd, the wheel graph Wn when n ≡ 0 (mod 4), the dumbbell graph D(n, n) for every n, the graph Cn(n) where n ≥ 3 and n is odd, and the Cartesian product graph P2 × Cn for all n are even-even graceful. There may be many more classes of graphs that are even-even graceful.
Further the relations among different types of graceful labeling and also the relation between different types of graceful labeling and other labeling are investigated. A graph has an odd-even graceful labeling if and only if it has a graceful labeling. In addition, the study established that graceful graphs are even graceful. On the other hand, even graceful graphs are graceful if they have an even graceful labeling whose vertex labels are all even.
The relations between different types of graceful labeling and other well-known types of labeling are established, namely: graceful labeling and signed product cordial labeling of a tree; even graceful labeling and (n + k)-equitable labeling of the graph P*n+k; even-even graceful labeling and E-cordial labeling of a perfect m-ary tree; even-even graceful labeling and totally magic cordial labeling of a tree; even-even graceful and multiplicative labeling of an m-ary tree; even-even graceful and strongly multiplicative labeling of the m-ary tree; and even-even graceful and modular multiplicative divisor labeling of the m-ary tree. The concept of complementary edge-odd graceful labeling is introduced.

This thesis further defines a new labeling technique, namely k-even-even edge graceful labeling. The star graph K1,n when n is even and k ≡ 1 (mod n + 1), the friendship graph Fn when k > 0 and n is odd, the prism graph Yn where k > 0 and n ≥ 3, the Cartesian product graph Cm × Cn where k > 0, n = 3 and m ≡ 1 (mod 4), and the corona graph Cm(K1,n) when k > 0, m is odd, n is even and m divides n are k-even-even edge graceful.
There are several papers on graph labeling that have observed and identified its use in communication networks. MPLS is a technology in high-performance telecommunication networks that sends data packets from one network node to the next depending on short path labels instead of long network IP addresses, thus avoiding complex lookups in an IP routing table. This thesis demonstrates how k-even-even edge graceful labeling is applied to Multiprotocol Label Switching communication networking. This concept is used to create a unique label for each destination network, which enables communication at very high speed with modern technology.
Analogous study of other graph families, their different labeling techniques and its purpose is an open area of research.
Public Policy and Youth Employment: An Empirical Study of Cameroon's Experience
The main purpose of this paper is to assess the contribution of public policies to youth employment in Cameroon. To do this, we used a multinomial logit model for our employment equation. The maximum likelihood method is the estimation technique used, applied to data extracted from the EISS database (2011). Three main results emerge from this study: (1) young people who wish to become self-employed do not have adequate training, and the technical and financial support offered to them by the government is insufficient; (2) the incentives proposed by the State to private operators to encourage them to recruit young people do not always contribute to this objective; and (3) the massive recruitments carried out by the State fail to absorb all unemployed young people. In this situation, the Cameroonian State should further strengthen the professionalization of training and, above all, guide training offers towards the areas that present opportunities in our country. It also needs to strengthen the facilities afforded to private companies to encourage them to recruit more young people. We also suggest that the Cameroonian government provide more technical and material support to young people who are seeking it and, on the other hand, raise more funds for the bankable projects presented by the latter.
Introduction
At the end of January 2014, at the initiative of the International Development Research Centre (IDRC) and think tanks, an international conference was held on the issue of youth employment in Sub-Saharan Africa. It brought together development partners and national and regional decision-makers. Indeed, statistics produced by specialized institutions show that the issue of youth employment spares no country, regardless of its level of development. For example, in France, the youth unemployment rate reached 24% in 2017. In Sub-Saharan Africa, it is higher and varies, according to the AfDB in 2016, between 30% and 40%.
The situation in Cameroon follows this general trend, as young people find it difficult to access decent employment. Unemployment and underemployment of young people are reaching very high levels. According to the results of the Employment and Informal Sector Survey (EISS 2011), about 23.8% of young people aged 17 to 35 are unemployed, especially in urban areas (46%). According to the ILO, overall underemployment is about 89%. Rural youth are the most affected, with about 94% under-employed compared to 85% of urban youth, including an increased number of highly skilled youth. However, this last group is the major target.
In response, the Government of Cameroon undertook several actions, including the creation in 1990 of the National Employment Fund (NEF), whose role is to promote employment by facilitating the meeting between offers and applications. In December 2004, a ministerial department was created and entirely dedicated to employment and vocational training. In order to ensure transparency in the labour market and transform the informal sector, the National Observatory for Employment and Vocational Training (NOEVT) was set up and the Integrated Support Project for Actors in the Informal Sector (ISPAIS) was implemented. In all regions of the country, experimental projects for disadvantaged young people are operational, such as the Youth and Associative Life for Social Insertion (YALSI) programme. In order to integrate young people, MINJEC has implemented the Rural and Urban Youth Support Programme (RUYSP) and the Youth Socio-Economic Integration Project through the creation of micro-enterprises for the Manufacture of Sports Materials (PIFMAS).
Aware of the place of young people in the total population (36.28%), the dynamism of this age group (17-35 years) and its ability to match current technological developments, the Government of Cameroon has undertaken these various actions with the aim of significantly reducing the unemployment of these young people and ensuring the socio-economic development of Cameroon. But despite all these different actions, the expected results are below expectations. Thus, the government's concern about the evaluation of these policies is legitimate. As a result, several questions emerge: what are the main challenges faced by young job seekers? Do the various actions defined by the Government take these difficulties into account? Are these actions accessible to all young people? Which of these actions is most favourable to the employment of cameroonian youth?
In view of the above, and in the light of the literature pointing out that public interventions for youth employment concern coaching, financial support, labour taxation, the minimum wage and the performance of the training system (World Bank, 2012; Nicoletti & Scarpetta, 2005; Nunziata, 2002; Nickell, 1997), this study aims to ascertain whether all the measures initiated by the Cameroonian Government promote youth employment. Specifically, we assess the effects of these measures on access to employment on the one hand, and on the type of employment on the other. The originality of this study lies in the methodology adopted: it is based on the use of a multinomial logit model, which has the advantage of taking both levels of evaluation into account.
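As a purely illustrative sketch of the estimation strategy (a maximum-likelihood multinomial logit), the following Python code fits such a model with statsmodels on synthetic data; the outcome categories and covariates are hypothetical stand-ins, not the EISS (2011) variables actually used by the authors.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(17, 36, n),
    "educ_years": rng.integers(0, 18, n),
    "program": rng.integers(0, 2, n),      # 1 = benefited from a public employment scheme
})
# Synthetic outcome: 0 = unemployed, 1 = self-employed, 2 = wage employment
df["status"] = rng.integers(0, 3, n)

X = sm.add_constant(df[["age", "educ_years", "program"]])
result = sm.MNLogit(df["status"], X).fit(disp=False)   # maximum-likelihood estimation
print(result.summary())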
After this introduction, which is the subject of the first section, the rest of the article is organized in the following way. The second section is devoted to a brief review of literature. The third presents the characteristics of the labour market and the various public policies in favour of youth employment in Cameroon. Methodology, data and estimation techniques are presented in the fourth section. The fifth section presents and analyzes the results. The sixth concludes and suggests some economic policy recommendations.
Public Policy and the Labour Market: A Selective Review of Literature
Faced with the persistence of problems of access to employment for young people, economists of all theoretical currents have tried to revise their interpretations. The neoclassicals have proposed new explanations of unemployment, either in terms of dysfunction as in the standard theory, or in terms of the rationality of the unemployed. Keynesians continued to explain unemployment by insufficient demand. Since the 1990s, in the search for key measures to facilitate access to employment in general and access to employment for young people in particular, the role of public policy has increasingly been considered in analyses. Our readings show that these public actions in the labour market have three main orientations. The first group of programmes focuses on the demand side of the labour market and the second on the supply side, while the third group aims to improve the functioning of the labour market.
On the demand side, governments have attempted to encourage recruitment by reducing the cost to employers of providing employment (Crépon et al., 2012). The analysis of a reduction in the cost of labour centred on a particular category (for example, young workers) leads to the distinction of three main mechanisms. The impact on employment may first result from a micro-economic substitution effect, the extent of which depends on the elasticity of substitution between the different factors of production (youth labour, the labour of experienced people, and capital). Also at the micro-economic level, the lowering of the cost of a factor has a mechanical impact on the production cost of the company. The latter can pass this decrease on to its selling price, and thus see the demand addressed to it increase. At the same time, price adjustments can lead to substitution effects at the macro level, passing through the relative prices of different products according to their relative intensity in different factors of production: for example, a fall in the cost of youth labour should lead to an increase in the relative demand for goods and services whose production is intensive in youth labour (and thus, symmetrically, a decline in the relative demand for goods and services intensive in capital and in the labour of experienced people). However, in order to fully appreciate the overall impact of such a measure on employment (not only low-skilled employment), it would also be necessary, beyond the effects we have just mentioned, to take into account the effects of macroeconomic "closure", and in particular those related to the financing arrangements of the measure.
Empirical studies show that the question of the relationship between labour costs and youth employment was far from reaching unanimity in the economist community. The debate focused in particular on the effects of the minimum wage. In the United States, the decline in the federal minimum wage (in relative and real terms) during the 1980s, and its sharp increase between 1989 and 1991, did not have a significant impact on employment (Card & Krueger, 1997).
In addition to the minimum wage, other studies have considered the reduction of social security contributions. Whether based on forward-looking assessments (Laroque & Salanié, 1999), computable general equilibrium models, estimated microsimulation models based on wage and participation equations (Laroque & Salanié, 1999), or retrospective evaluations (Crépon et al., 2012), the results point to a significant positive effect of the reduction of social contributions on total employment. For labour offerers, public policy involves several actions (Ryan, 2001). First, education policy measures are based on the idea that youth unemployment is due to a problem of inadequate or insufficient training. The main objective of these schemes is to bring the education system closer to the needs of the productive apparatus in order to improve the adaptation of skills. This group includes alternating training contracts: the apprenticeship contract, the adaptation contract and the qualification contract (Issehnane, 2009; Nickell, 1997). Research has suggested that higher levels of education and cognitive skills are associated with economic growth (Hanushek & Woessmann, 2012) and with the employment of a larger share of young people in the modern non-agricultural wage sector (Lee & Newhouse, 2012). If increased educational attainment is not associated with a greater accumulation of skills, schooling will have a limited effect on overall growth and the composition of employment.
Second, governments can intervene directly in the labour market by creating non-market jobs and facilitating the conditions of creation, particularly in the field of new technologies. Indeed, the development of new technologies has increased the dependence of the countries of the South, as they have been created and patented in developed countries. Moreover, it is essentially the multinational companies that hold the rights. In recent decades, the demand for new technologies has gained new dynamism and contributed to the creation of new jobs. However, many of these jobs are in developed countries, although they now tend to be more evenly distributed as a result of the increasing internationalization of production processes controlled by large companies. New technologies generally lead to labour savings. This should not lead us to refuse them, but rather to seek to put in place an active policy of ownership and development of their applications, exploiting the progress that often manifests itself at the national level.
Finally, subsidized market-sector contracts are part of deregulation policies. The political economy model of unemployment argues that radical labour market reform is needed to combat structural unemployment. Some studies conclude that the level of unemployment compensation and its duration have a significant impact on unemployment (Scarpetta, 1996; Nickell, 1998). Similarly, a number of empirical studies find that heavy taxation of labour tends to increase the unemployment rate (Bassanini & Duval, 2009; Nickell, 1997), although other studies are less conclusive in this regard (Scarpetta, 1996; Elmeskov et al., 1998; Nunziata, 2002). In addition, some macro-level studies identify a favourable effect of spending on active labour market policies and an adverse effect of homeownership, but they do not agree on the extent of these effects (Scarpetta, 1996; Green & Hendershott, 2001; Nickell et al., 2005). Regarding measures to improve the functioning of the labour market, Nickell (1997) confirms, in a study examining the effects of institutions on fluctuations in the employment rate across several European countries, that the main influencing factors are the duration of unemployment benefits, unionization rates, coordination between employers and employees, taxation on labour, the minimum wage and the performance of the training system. Similarly, the World Bank states that labour market regulation provides important social protections for workers in terms of employment contracts, severance pay, unemployment benefits, dismissal grounds, trade union rights, and the scope of collective bargaining.
Some Stylized Facts about the Labour Market in Cameroon
Two main concerns are ours in this section. The first relates to the socio-professional situation of young people in Cameroon. The second refers to the variety of actions initiated by the Cameroonian Government on behalf of these young people.
A Labour Market Dominated by High Youth Unemployment
The labour force participation rate is one of the indicators that allows us to assess the dynamics in the labour market. Its fluctuations give an indication of the market's ability to open up or close to job creation. In addition to this indicator, we also have the unemployment that characterizes the labour market in Cameroon.
High Unemployment among Cameroonian Youth
According to the results of the Employment and Informal Sector Survey (EISS, 2011), about 23.8% of young people aged 17 to 35 are unemployed. This rate hides disparities by gender, age, region of residence and degree.
By gender, women are more affected (26.3%) than men (19.6%). Comparing these rates with those of overall unemployment (18.8% for women and 11.6% for men), there is greater discouragement among women in finding a job. At the regional level, in addition to the major metropolises, notably Douala and Yaounde with about 40.4%, young people from the Adamaoua and South-West regions are the most affected by unemployment, with rates above the national average. It should also be noted that the phenomenon is more pronounced in urban areas, where the average rate is 27.4%, compared to 14.3% in rural areas. The rise in educational attainment does not appear to be a safeguard against unemployment. The unemployment rate is generally higher among those with a higher level of education. Indeed, it is 13.7% among holders of the BAC/GCEAL/BEP, 22.8% for those with a BTS/DUT/DEUG/NHD, 15.7% for holders of a Licence/Bachelor's degree and 10.4% for Master's/DEA/MBA holders. The majority of these unemployed are first-time job seekers: 59.5% are first-time applicants, compared to 40.5% who have already held a job. Moreover, unemployment is long-term, with 56.4% of them looking for work for more than a year and an average duration of unemployment of 34.3 months. This duration is higher in rural areas (46.2 months) than in urban areas (31.2 months). The majority of unemployed people, for job security reasons, have a preference for employment in the public or formal private sectors. Among the unemployed, 65.9% prefer to be employed in the public or formal private sector, 21.6% are in favour of self-employment and 12.4% are indifferent. Compared by level of education, almost 8 out of 10 unemployed at the higher level prefer a salaried job, whereas 26.2% of those at the secondary level are in favour of self-employment. It is therefore not out of the question that some, already active in the informal sector, are candidates for public policy. For young Cameroonians in the 17-35 age group with at least one degree, the average wage claims are 119,412 CFA francs per month: 156,323 CFA francs per month for men and 99,484 CFA francs for women. These wage claims grow with the diploma. Indeed, they range from about 70,728 CFA francs for those with the CEP or an equivalent diploma to 400,000 CFA francs among those with a Doctorate, passing through 201,416 CFA francs for those with a BTS or an equivalent degree. On the other hand, regardless of the degree, men have consistently higher wage claims than women.
Public Policies for Youth Employment
Economic theory points out that public policies in favour of employment can be grouped into two main categories: active and passive policies. Passive policies aim to make unemployment bearable for the unemployed by guaranteeing them an income and to reduce the unoccupied labour force; they include unemployment insurance and unemployment compensation, which are designed to provide some income security for people who are not employed. Unable to implement such measures, the Government of Cameroon has opted for active policies aimed at improving young people's access to paid employment and encouraging their autonomy. Within these active policies, a first group of programmes focuses on the demand side of the labour market and a second on the supply side, while a third group aims to improve the functioning of the labour market.
Cameroonian State Actions on Behalf of Labour Force Actors
Since labour demand in the Cameroonian labour market comes from both public and private actors, we present here the measures taken by the government to boost it.
Measures for private demand
The Cameroonian government has taken a number of actions to remove barriers to private sector development and business creation, and to enable the private sector to act as a driving force for growth and economic development. These actions focus on liberalisation and on the simplification of administrative procedures and taxation. The Cameroonian government has also undertaken, with the help of external partners, to provide the SME landscape with new support instruments more in line with the new liberal economic orientations. Specific actions are also being taken towards SMEs and are coordinated within the Priority SME Promotion Programme (PSPP). This programme aims, among other things, to promote the private sector through the creation and development of SMEs, to create jobs and to strengthen the capacity of women, young people and people with disabilities to start businesses.
The new investment code provides incentives for businesses to create new jobs. Thus, the SME scheme is granted to companies that create at least one permanent job for Cameroonians per increment of five (5) million CFA francs or less of planned investment, and of which at least 35% of the capital is held by Cameroonians or Cameroonian legal entities. These measures, which support the development of private enterprises, should ultimately encourage the recruitment of young people into private companies. Examples include companies in various sectors of activity that recruit young people, notably the Breweries of Cameroon, Orange Cameroon and MTN, to name but a few.
Measures for public demand
These measures include the recruitment in 2011 of 25,000 public officials between the ages of 17 and 40 into the Public Service. This decision, with far-reaching political and socio-economic implications, has been welcomed by the population, which sees it as a first step towards solving the thorny problem of unemployment, especially that of young graduates, some of whom have been desperately seeking employment for more than a decade. Other measures include direct recruitment competitions in the various public administrations open to young people aged 17 to 35; the development of partnerships with socio-professional circles by the Ministry of Higher Education; and advocacy for youth employment and internships in public and private services by the Ministry of Youth and Civic Education, among other actions.
The Actions of the Cameroonian State in Favour of Labour Supply Actors
Knowing that competence is the key to employment, the Cameroonian government has invested in both university and vocational training for young people. While vocational training is a long-standing concern of the Cameroonian government, its regulation and diversification were strengthened by Law 76/12 of 08 July 1976, which officially created initial and continuing vocational training. This legislation, having been deemed outdated, has since been adjusted to the new socio-economic context. A new policy on orientation and vocational training is being developed to address a number of issues, including measures to support adjustment, ways and means to reform and optimise the infrastructure of the training system, the choice of training methods and means, the financing of vocational training, new vocational training courses to be developed, the renovation of apprenticeship, etc.
Beyond competence, the Cameroonian government has in recent years created several programmes to facilitate the self-employment of young people. The actions of these structures concern, among other things, financial, technical and material support to young people who wish to start their own business. These structures include the Rural and Urban Youth Support Programme (RUYSP); Youth Insertion Fund (YIF); National Youth Insertion Fund (NYIF); Youth Insertion Project by the creation of micro-companies for the Manufacture of Sports Equipment (PIFMAS); Integrated Informal Sector Actor Support Project (IISASP); Multi-media Development Relay Project (MDRP); Young Farmers' Settlement Support Program (YFSSP); Development of entrepreneurship and youth self-employment; Piloting and coordinating the activities of youth coaching structures; Commonwealth Youth Credit initiative; Annual Employment and Career Orientation Show; Strengthening professional integration; Socio-professional integration of girls and women (SIGW); Improved Rural Family Income (IRFI); Quantitative increase in the supply of vocational training, etc.
Cameroonian State Actions to Improve the Functioning of the Labour Market
Similarly, the definition of the policy for promoting private investment is the responsibility of MINDIC, through its Directorate of Industrial Development. This directorate is also responsible for the evaluation and adaptation of the investment code, monitoring the activities of the National Investment Corporation (NIC), the National Office of Industrial Free Zones, and industrial promotion or investment organizations, etc. The establishment of an incentive framework to attract private investment has been made effective by the Investment Code, the Free Zones Act, the Labour Code, the creation of an Investment Code Management Unit and the National Office of Free Zones, and tax-customs reform.
It is clear from the above that the Cameroonian government has defined and implemented several measures in favour of youth employment. All these actions demonstrate the government's commitment to providing employment for young Cameroonians, and the number of candidates applying for these different offers, both private and public, shows that young Cameroonians are quite interested. These measures nevertheless remain insufficient, because not all young people who apply receive a positive response. As proof, direct recruitment competitions into large professional schools, such as ENAM with 180 places open, attract nearly 6,000 applications. This leads us to say that there is necessarily an imbalance between the different offers and the number of applicants. The State's limited offer is explained by the lack of means to pay new workers. The low level of recruitment by private companies can be explained by the mismatch between young people's training and the qualifications required, by the absence or inadequacy of the facilities granted to firms by the State to encourage them to recruit more young people, or by a gloomy economic situation that does not stimulate the employment of a large quantity of labour. Despite all these measures, it is clear that the expected results are not always achieved, as a large proportion of young Cameroonians are still unemployed. We therefore turn to assessing the effects of these various government measures on the youth employment situation.
Study Methodology
In this section we present, on the one hand, the interlocking multinomial logit model and the justification for its choice and, on the other hand, the data and variables of this model.
Introducing the Interlocking Multinomial Logit Model and Justifying Its Choice
Historically, the study of models describing the modalities taken by one or more qualitative variables dates from the 1940s-1950s. The most striking works of this period are undoubtedly those of Berkson (1944, 1951). It was from the 1970s that these models were used to describe economic data, notably through the work of McFadden (1974) and Heckman (1976). Most of these studies used simple dichotomous models (logit and probit models). But when the variable to be explained has more than two modalities, dichotomous models become inappropriate, justifying the use of multinomial logit models (Guadagni & Little, 1983).
However, this type of model assumes proportional substitution patterns (the property of independence of irrelevant alternatives, IIA), i.e. the ratio of the choice probabilities of two alternatives (Pj/Pk) does not depend on the presence or absence of other alternatives in the model. If the IIA hypothesis is not supported, an alternative model to the multinomial logit (LM) model should be used. The natural alternative to the latter is a multivariate probit model, whose estimation is however complex in the current state of knowledge. Another, more operational model has been developed to partially relax the strong IIA hypothesis: the interlocking (nested) multinomial logit model (Guadagni & Little, 1998).
This interlocking multinomial logit (LME) model is a combination of standard logit models; it differs from the latter in that the error components of the alternatives do not need to follow the same distribution. In addition, the LME model admits more general substitution patterns between alternatives. The idea of this model lies in grouping similar alternatives within subsets or subgroups, with the aim of creating a hierarchical structure of alternatives (Ben-Akiva & Lerman, 1985; Train, 2003), which does not necessarily require that the individual choice process be sequential. The error terms of alternatives within the same subset are correlated with each other, while those of alternatives in different subsets are not. Thus, the IIA hypothesis is maintained within each subset, but the variance may differ between subsets. The interlocking multinomial logit model thus accommodates a violation, or partial relaxation, of the IIA property. This model can be viewed as a two-level (or more) choice problem.
In this study, we seek to assess the impact of public policies on youth employment in Cameroon. We know that, in the search for employment, young people face difficulties that can be observed at two levels. The partition of these difficulties into subsets or subgroups is thus easily achievable, since one can naturally distinguish the situation of having no job from all other situations, which are all employment situations. Thus, if a young person cannot find a job, he is unemployed. On the other hand, if he gets a job, he works either as an employee of the public sector, as an employee of the private sector, or as a self-employed person (self-employment). This hierarchical structure of our model can be represented as a decision tree in which the first level distinguishes unemployment from employment and the second level distinguishes the three employment sectors. Let us now look at how the model is mathematically specified. As in the multinomial logit model, the probability that a young person works in a given sector, conditional on having obtained a job, is assessed using equation (1) below. The difference between the LM and LME models occurs in the assessment of the likelihood of being unemployed rather than having access to employment, which is given by equation (2). In this formulation, the vector zi0 corresponds to a set of variables specific to the explanation of access (or not) to employment; these may differ from the explanatory variables of obtaining employment in a given sector (zi). The term Ii represents the inclusive value of the employment nest (public, private or self-employment). If the coefficient on this inclusive value is equal to 1, the LME model reduces to a standard LM model; it is therefore by allowing this coefficient to differ from unity that the LME model relaxes the IIA hypothesis across the different "branches" of the decision tree. The IIA hypothesis is maintained between choices belonging to the same subgroup, but is relaxed between subgroups. The unconditional probability of getting a job in a particular sector is then given by equation (3). The parameters of the LME model thus defined can be estimated by the usual maximum likelihood techniques.
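For reference, a standard two-level nested logit specification consistent with the description above can be written as follows; the notation (in particular tau for the coefficient on the inclusive value and the coefficient vectors beta and gamma) is illustrative rather than the authors' own:

P(j \mid \text{employment}) = \frac{\exp(z_i'\beta_j)}{\sum_{k \in \{\text{public, private, self}\}} \exp(z_i'\beta_k)} \quad (1)

I_i = \ln \sum_{k} \exp(z_i'\beta_k)

P(\text{unemployment}) = \frac{\exp(z_{i0}'\gamma)}{\exp(z_{i0}'\gamma) + \exp(\tau I_i)} \quad (2)

P(j) = \bigl[1 - P(\text{unemployment})\bigr] \, P(j \mid \text{employment}) \quad (3)

With \tau = 1, expression (2) collapses to the standard multinomial logit form, which is the basis of the test discussed below.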
It is assumed that the residuals ε in the stochastic utility function have independent distributions of the generalized extreme-value (GEV) type. It should also be noted that an LME model can be estimated sequentially using a step-by-step method (Maddala, 1983): the coefficients of equation (1) are estimated first, the inclusive value Ii is then calculated, and the coefficients of equation (2) are finally estimated. This method entails some loss of efficiency, but it is very useful for evaluating large models for which the maximum likelihood method becomes difficult to use. The coefficient on the inclusive value can also be used to test the IIA hypothesis: a test of the null hypothesis that this coefficient equals one is an effective test of the relevance of the LM model.
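As an illustration of how these probabilities and the likelihood are built up, the following minimal sketch (not the authors' code; array names, shapes and the symbol tau are assumptions) computes the two-level choice probabilities described above with numpy:

# Illustrative sketch: two-level nested-logit probabilities with one
# "employment" nest (public, private, self-employment) and a degenerate
# "unemployment" branch, following equations (1)-(3) above.
import numpy as np

def nested_logit_probs(Z_sector, beta, z0, gamma, tau):
    """Z_sector: (n, 3, k) sector-specific covariates; beta: (3, k);
    z0: (n, m) covariates for the employment/unemployment level; gamma: (m,);
    tau: scalar coefficient on the inclusive value."""
    v = np.einsum("njk,jk->nj", Z_sector, beta)            # sector utilities
    v_max = v.max(axis=1, keepdims=True)                   # numerical stability
    expv = np.exp(v - v_max)
    p_sector_given_job = expv / expv.sum(axis=1, keepdims=True)   # eq. (1)
    inclusive = v_max[:, 0] + np.log(expv.sum(axis=1))            # inclusive value I_i
    u0 = z0 @ gamma                                               # "unemployment" utility
    p_unemp = 1.0 / (1.0 + np.exp(tau * inclusive - u0))          # eq. (2)
    p_job_sector = (1.0 - p_unemp)[:, None] * p_sector_given_job  # eq. (3)
    return p_unemp, p_job_sector

def log_likelihood(choice, p_unemp, p_job_sector):
    """choice: (n,) integers with 0 = unemployed, 1..3 = employment sector."""
    n = choice.shape[0]
    probs = np.where(choice == 0, p_unemp,
                     p_job_sector[np.arange(n), np.maximum(choice - 1, 0)])
    return np.log(np.clip(probs, 1e-12, None)).sum()

Maximising this log-likelihood over (beta, gamma, tau) with a numerical optimiser corresponds to the full-information estimation mentioned above, while fixing tau = 1 reproduces the standard LM model.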
LME Data and Variables
The data used in this work are extracted from the EISS database, i.e. the Employment and Informal Sector Survey conducted in 2011 by Cameroon's National Statistical Institute (NSI). The survey base used consists of the consolidation of Cameroonian households. Given the requirements of the study, individuals whose age is greater than 35 years or less than 17 years were not selected from the survey database. This restriction resulted in a total population of 13,110 individuals scattered throughout the national territory (see Table 5 in appendix). In order to take regional specificities into account, the Cameroonian territory has been divided into 12 regions: Far North, North, Adamaoua, East, South, West, South-West, Centre excluding Yaounde and Littoral excluding Douala, with the cities of Douala and Yaounde treated as separate regions assimilated respectively to the departments of Wouri and Mfoundi. We know that one of the objectives of statistics is to observe phenomena in a homogeneous group of individuals; treating Douala and Yaounde separately from their respective provinces, Littoral and Centre, thus reduces the bias in the calculated indicators, because these two cities are the most populated.
The multi-sector model of access to employment that we implement is a discrete choice model. As a result, it first requires a change in the shape of the data usually used for estimating "classic" models. Indeed, the characteristics of individuals are no longer presented on a single line, but in the form of panel-type data in which each individual has one line of characteristics for each choice available to him. This form, although more complex, has two significant advantages. On the one hand, it allows, if necessary, different explanatory variables to be chosen depending on the alternative that one wishes to explain. On the other hand, it allows attributes related to the different choices that the individual can make to be taken into account; that is, it allows different variables to be considered, for the same individual, depending on the choice he or she makes.
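A minimal sketch of this reshaping, using hypothetical column names rather than the actual EISS variable names, is the following:

# Illustrative sketch: expanding individual-level survey data into the
# "one row per individual per alternative" form required for discrete-choice
# estimation, as described above. Column names are assumptions.
import pandas as pd

individuals = pd.DataFrame({
    "id": [1, 2],
    "age": [24, 31],
    "gender": ["F", "M"],
    "highest_degree": ["BTS", "Licence"],
    "observed_choice": ["unemployed", "private"],
})

alternatives = ["unemployed", "public", "private", "self_employed"]

long = (
    individuals.assign(key=1)
    .merge(pd.DataFrame({"alternative": alternatives, "key": 1}), on="key")
    .drop(columns="key")
)
long["chosen"] = (long["alternative"] == long["observed_choice"]).astype(int)

# Alternative-specific attributes (for example, whether a given public policy
# targets this alternative) can now be merged on the "alternative" column.
print(long.head(8))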
This specific form of data can be particularly interesting for the study of access to employment. Indeed, it can be considered that the different possibilities offered to each individual are not necessarily explained by the same variables. Thus, it can be argued that non-access to employment is most often linked to variables specific to the socio-economic structure and to the characteristics of the individual, while access to an employment sector depends, on the one hand, on the actions defined and implemented by the Government and, on the other hand, on the potential of the young person.
Our model therefore requires the use of several variables, which can be grouped as follows: a dependent variable for the first-level (top-level) equation, which identifies the alternatives at this level, namely the two options facing young people who are looking for work (having access to a job or not); a dependent variable for the second-level (bottom-level) equation, an "employment sector" variable that identifies the different opportunities available to young people who have found a job; explanatory variables of access to employment or not (first level or top level), which relate mainly to the specific characteristics of the young person (age, education level, gender, social capital) and to the institutional environment (favourable youth employment policies) and economic situation of the country; and explanatory variables of the choice of an employment sector (second level or bottom level), which relate mainly to the characteristics associated with the different employment sectors. Among the latter we can mention the various public policies defined and implemented for young people (massive recruitment of young people into the civil service, the facilities granted to private companies that recruit young people, the technical and financial support provided to young people who wish to set up on their own account) and the skills-related characteristics of young people (type of training, field of qualification, entrepreneurial ability, professional experience, concern for a stable situation).
Results and Interpretations
Before presenting the results of the estimates of our interlocking multinomial logit model obtained by the maximum likelihood method, we highlight the variables selected for this model through descriptive statistics of our data.
Descriptive Statistics of Our Variables
Our top-level dependent variable consists of two modalities: either the young person has access to employment or not (unemployment). Table 6, in appendix, shows that the majority of our sample is unemployed (62.01%), compared to only 37.91% who are employed. Table 7 in appendix describes our second-level dependent variable. We find that the majority of young people who work are public sector employees (16%), followed by private sector employees (15%), while young people who set up on their own account represent only 6.91%. This situation can be explained by several factors, including insufficient financial resources, the inability to put together a sound project, and the complexity of the procedures for setting up a business.
These dependent variables are explained by a set of independent variables whose average values and standard deviation (in parenthesis) are presented in the table below according to the level of equation on the one hand and the different employment sectors on the other.
Results of our Estimates
Tables 2 and 3 below give the results from the LME and LM models respectively. It should be noted at the outset that, in these two tables, the coefficients are interpreted differently from those of a "usual" multinomial logit model. In the latter, the coefficients are interpreted as the effect of a unit increase in the variable on the odds of making a particular choice rather than the "excluded" or reference choice, for which the coefficients are arbitrarily set to zero for technical reasons. Here, the coefficients on access to employment are interpreted as the marginal effect of a unit variation of the variable considered on non-employment (unemployment) rather than on obtaining a job. The coefficients on access to an employment sector give the marginal effects of a unit variation of the variable considered on access to that sector.
Knowing that the primary advantage of the LME model is to relax, at least partially, the strong IIA hypothesis specific to the LM model, a Hausman-McFadden specification test of the IIA hypothesis was conducted on the LM model. The result of this test does not reject the IIA hypothesis, so the possible "superiority" of the LME model has to be assessed by other means, since the IIA hypothesis is not formally called into question at this stage. The coefficient on the inclusive value in the LME model can also be used to test the IIA hypothesis (see equation (2) above); a test of the null hypothesis that this coefficient equals one is indeed an effective test of the relevance of the LM model (and therefore of the possible interest of using an LME model). This second result is much more conclusive: the estimated coefficient (2.124) is significantly different from unity and, above all, the null hypothesis that it equals one is rejected at the 1% level. It can be concluded that, in this case, the LM model is not an appropriate tool for assessing the explanatory factors of access to employment. The results also suggest that the alternatives within the employment-sector subgroup are closer substitutes for one another.
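For reference, the Hausman-McFadden statistic used in this specification test compares the coefficients estimated on the full choice set with those estimated after one alternative is removed; a commonly quoted form (notation illustrative) is

H = (\hat{\beta}_R - \hat{\beta}_F)' \bigl[\hat{V}_R - \hat{V}_F\bigr]^{-1} (\hat{\beta}_R - \hat{\beta}_F),

where the subscripts R and F denote the restricted and full models and \hat{V} the estimated covariance matrices; under the null hypothesis that IIA holds, H is asymptotically chi-squared distributed.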
For the sake of rigour, other tests to verify the superiority of the LME over the LM can be performed.
Tables 2 and 3 below show that the results obtained from the LME model are interesting in that they provide a large number of significant variables, both in the explanation of access to employment and in the analysis of access to the different employment sectors. The chosen specification, which assumes that access to employment depends on the individual characteristics of the young person and on institutional and cyclical factors, while the orientation towards an employment sector depends on the intrinsic skills of the young person and on the integration mechanisms proposed by the government, seems to allow a better explanation of the different situations. Indeed, variables such as age, gender, highest degree, social capital, household headship and institutional and cyclical factors can be considered to influence obtaining employment, but not necessarily access to a particular sector of employment. On the other hand, the variables related to the young person's skills and to the various offers of the State play instead on his orientation towards a sector if he ever gets the chance to obtain a job.
Notes for Tables 2 and 3: the values in parentheses represent Student's t-statistics; *** = significant at 1%, ** = significant at 5%, * = significant at 10%; (1) likelihood ratio test calculated as RV = 2(L1 - L0), where L1 corresponds to the -2 log-likelihood of the unconstrained model and L0 corresponds to the -2 log-likelihood of the constrained model; it follows a chi-squared law with N - (2k - 1) degrees of freedom, k being the number of variables; (2) this is an adjusted pseudo-R² equal to 1 - (L1/L0).
The age of individuals increases the chances of participation in the labour market in a very significant way. The negative sign means that increasing age decreases the chances of getting a job; this is all the more true in the Cameroonian case, especially for recruitment into the public service, since the age limit for participating in a competition is 32 years. The quadratic nature of the relationship between age and access to employment is shown by the negative sign and the strong significance of the coefficient on age squared divided by one hundred. The coefficient on "sex" shows that being a man, all other things being equal, increases the chances of getting a job. The variable "highest degree" has a positive sign, significant at 1%, which means that a highly educated young person is more likely not to get a job; this reflects the fact that most employers in Cameroon, seeking to reduce costs, prefer to recruit young people with fewer degrees, since their remuneration will also be lower. The coefficient of the variable "social capital" is negative and significant at 5%; this result reflects the Cameroonian reality, since obtaining a job is very often conditioned by the existence of a strong relationship network that supports the young person's candidacy. The variable "employment policy" has a negative sign, significant at 5%, which means that a young person who benefits from the government's facilitating measures is less likely to find himself unemployed. Likewise, judging by the corresponding coefficient, a favourable economic situation makes obtaining employment more likely.
With regard to access to a given employment sector, we observe that young people who have received academic training are more likely to work in the public sector, while the corresponding effects for the other sectors are negative; the significance of these coefficients ranges from 1% to 5%. Likewise, having a management qualification significantly increases a young person's chances of being in the private sector while simultaneously reducing his chances of being employed in the public sector. Having entrepreneurial skills greatly increases the chances of self-employment, at the expense of the chances of accessing the other two segments. Young people who are concerned about stable employment significantly increase their chances of ending up in the public sector at the expense of the other two sectors, and young people with work experience are more likely to work in the private sector than in the other two. The values of these coefficients and their levels of significance thus inform us about the effects of individual skills on access to the various employment sectors in Cameroon.
We now analyse the effects of public policies on youth employment. Young people who benefit from the support of investment-promotion structures (such as the NEF, BMO, etc.) increase their chances of accessing the private sector more than self-employment and decrease their chances of accessing the public sector. Young people who benefit from direct recruitment measures significantly increase their chances of accessing the public sector, at the expense of the private sector and self-employment. Also, young people who receive technical, material and financial support from structures such as the IISASP, RUYSP, etc. significantly increase their chances of self-employment and decrease their chances of being employed in the public and private sectors.
Finally, tax exemptions and government subsidies increase the young person's chances of accessing the private sector rather than self-employment, and reduce his chances of being employed in the public sector.
These results show that the various public policies defined and implemented by the Cameroonian government are bearing fruit. The low values of these coefficients, however, indicate that these measures are not yet fully effective.
A comparison of the marginal effects from the two estimated models allows for a more direct comparison of the results of the LME and LM models. It appears that there are significant discrepancies between the two estimates. The marginal effects on access to employment do not show notable differences between the two models; the differences are more pronounced when one considers the marginal effects of public policies on the choice of an employment sector. The role of direct recruitment action, for example, is more pronounced in the LME than in the LM. This succinct comparison of the marginal effects estimated by our two models, which reveals a number of significant discrepancies, argues for more careful consideration of models that relax the IIA hypothesis. Note: italic values correspond to non-significant variables in the models concerned; they are reproduced only as an indication and cannot give rise to any interpretation.
Conclusion
The aim of this article was to assess the effects of public policies on youth employment in Cameroon. To do this, we first conducted a literature review, which showed that state interventions comprise a group of passive and active policies. The latter focus, first, on measures relating to education policy, the idea being that youth unemployment is due to insufficient or inadequate training; second, governments can intervene directly in the labour market through mass recruitment; finally, incentives (tax exemptions, various subsidies) can be offered to private partners to encourage the recruitment of young people. To evaluate these public policies in Cameroon, we used the interlocking multinomial logit model, which not only corrects the limitations of the multinomial logit model, including the IIA property, but also accommodates multi-level alternatives. The chosen estimation technique is maximum likelihood.
After conducting the Hausman-McFadden specification test of the IIA hypothesis on the LM model, the result of this test did not allow us to reject the IIA hypothesis. However, the coefficient on the inclusive value was estimated at 2.124, significantly different from unity, and the null hypothesis that it equals one was rejected at the 1% level, which justifies the use of the LME. Our estimates show that the various public policies defined and implemented by the Cameroonian Government allow young people to be integrated into the different sectors, but not substantially. Many young people are still looking for a job because they have not had the chance to join the civil service despite the massive recruitments carried out by the State, and are unable to benefit from recruitment by private companies or to set up on their own account. Faced with this situation, the Cameroonian State should further strengthen the professionalisation of training and, above all, direct training offers towards areas that present opportunities in the country, including agriculture, computer engineering and agri-food. Similarly, the government needs to strengthen the facilities granted to private enterprises to encourage them to recruit more young people, and it must improve the business climate so as to foster the growth of these enterprises, which will eventually enable the recruitment of young people. It should also increase investment in ICT infrastructure to enhance the opportunities offered by their use. Finally, we suggest that the Government of Cameroon provide more technical and material support to the young people who seek it and, on the other hand, mobilise more funds for the financing of bankable projects presented by these young people.
Notes
1. In Cameroon's policy of vocational integration of young people, the State considers anyone between the ages of 17 and 35. However, from the point of view of the World Bank and other entities such as the European Union, it is the 15-25 age group that is chosen.
2. Under-employment.
3. Products derived from innovation and technological change, for example.
4. The first applications were then mainly carried out in the fields of biology, sociology and psychology.
5. The Anglo-Saxon term is "independence of irrelevant alternatives" (IIA). It is sometimes presented more explicitly as the "red-bus/blue-bus problem": in a three-choice model of transport, it implies that the relative probabilities between the choice of car or red bus are always specified in the same way, whether the third possible choice is a blue bus or the train. This property also means that the percentages (predicted by the model) of individuals choosing each of the alternatives will decrease in proportion to their initial importance if an additional choice is introduced into the model (regardless of that choice).
6. Three tests of the IIA hypothesis, based respectively on the use of a Lagrange multiplier, a likelihood test or a Wald statistic, were proposed by McFadden (1987).
7. As McFadden (1984), Amemiya (1985) and Greene (1997) point out. Although the model is now computable, it still poses quite a lot of difficulties in interpreting the coefficients.
8. The first to present this model was Ben-Akiva (1973).
9. The situation we are defining here is obvious. However, when this is not the case, it is possible to sort the alternatives into subgroups. Thus, when the IIA hypothesis holds (or is respected) between two alternatives, they can be placed in the same subset or subgroup.
10. In a way, the coefficient is a measure of the "independence" of the choices of the subgroup consisting of the three employment-sector opportunities compared to the other possibility, that of non-access to employment.
11. The latter two were also proposed by McFadden (1984): one test based on the Lagrange multiplier statistic and another implementing a likelihood-ratio test.
Source: author calculations from EISS database, 2011.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
|
v3-fos-license
|
2017-12-30T18:31:28.122Z
|
2017-12-01T00:00:00.000
|
27026746
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/ma10121397",
"pdf_hash": "b515ff1b653cb1aba80be96ac0b15d8511874613",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:595",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"sha1": "b515ff1b653cb1aba80be96ac0b15d8511874613",
"year": 2017
}
|
pes2o/s2orc
|
Chemical Stability between NiCr2O4 Material and Molten Calcium-Magnesium-Alumino-Silicate (CMAS) at High Temperature
NiCr2O4 as a potential protection for thermal barrier coatings (TBCs) against the attack of molten calcium-magnesium-alumino-silicate (CMAS) was studied by a CMAS-contacting experiment. Atmospheric plasma sprayed coatings and sintered bulk materials were fabricated, covered with CMAS deposits, and exposed to 1200 °C for 24 h. Nano-sized CMAS-NiCr2O4 mixed powder was manufactured by ball milling and then heat-treated under the same conditions. The results show that no reaction product was found at the border between molten CMAS and NiCr2O4 and no element transport occurred. It can be inferred that NiCr2O4 has outstanding chemical stability with molten CMAS.
Introduction
The use of ceramic thermal barrier coatings (TBCs) on hot-section metallic components in gas-turbine engines used to propel aircraft has enabled them to operate at higher temperatures [1][2][3], which in turn has generated several new issues. One of them is the degradation and spallation of the TBCs caused by molten calcium-magnesium-alumino-silicate (CMAS) deposits, which originate from sand, dust, fly ash, and volcanic ash on the hot TBC surface [4,5]. Typical 7 wt % Y2O3-stabilized ZrO2 (7YSZ), which is widely used as a TBC material, is highly susceptible to CMAS attack at high temperatures. The molten CMAS can penetrate the 7YSZ coating through its pores/cracks, and the 7YSZ grains can be dissolved in molten CMAS. The relatively low solubility of Zr4+ in molten CMAS compared with Y3+ leads to the reprecipitation of Y-depleted ZrO2 grains [6]. During the cooling stage of the engine, the destabilized ZrO2 grains transform from the tetragonal phase to the monoclinic phase, which is accompanied by significant volume expansion and can lead to the delamination and spalling of the TBC [7]. This issue has attracted a lot of attention in recent years, because the attack caused by molten CMAS decreases the service life of the engine by about a half [8].
For the protection of TBCs from molten CMAS attack, many methods have been applied. According to the patents presented by Hasz et al., protective coatings against CMAS attack can be sorted into three types, i.e., impermeable, sacrificial, and nonwetting types [9][10][11]. Nowadays, the research emphasis in this field has been put on sacrificial and impermeable protective coatings. Rai et al. [12] investigated dense Pt film as a protective layer that renders TBCs impermeable to CMAS attack. L. Wang et al. compared Pt film and Gd2Zr2O7 (GZO) as typical sacrificial coatings in a molten CMAS penetration test [13], and the result showed that the Pt film exhibited much better anti-CMAS ability than the GZO coating. However, the mismatch between Pt and ceramic TBCs restricts its efficacy in common use.
It is obvious that finding ceramic impermeable protective coatings is vital for the protection of TBCs from CMAS attack. In an analysis of the products formed when the bond coat material (NiCrAlY) and the TBC substrate material (nickel-based superalloy Inconel 738) react with CMAS, C.S. Ramachandran et al. [14] found that NiCr2O4 formed in the reaction layer as a reaction product. On the other hand, optical basicity (OB) [15,16], first reported by Duffy et al. [17] and based on Lewis acid-base theory, can be used to assess the chemical stability between two materials. The difference between the OBs (∆∧) of CMAS and a coating material can be regarded as a measure of the reactivity of the molten CMAS infiltration reaction, where a higher ∆∧ implies more reactivity. It can be inferred that NiCr2O4 (∧ = 0.75) and CMAS (∧ = 0.65) [18] have favorable chemical stability because their OBs are relatively close. In addition, the use of NiCr2O4 on TBCs will not increase the thermal conductivity too much, because the thermal conductivity of NiCr2O4 is 3.3 W·m−1·K−1.
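As a simple illustration of this screening idea (a sketch only, using the optical basicity values quoted in this paper, not code from the study), candidate materials can be ranked by how close their optical basicity is to that of CMAS:

# Rank candidate coating materials by |delta OB| relative to CMAS;
# the values below are the ones quoted in the text.
CMAS_OB = 0.65
candidates = {"NiCr2O4": 0.75, "Cr2O3": 0.70}

ranked = sorted(candidates.items(), key=lambda kv: abs(kv[1] - CMAS_OB))
for name, ob in ranked:
    print(f"{name}: optical basicity {ob:.2f}, |delta| = {abs(ob - CMAS_OB):.2f}")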
In this work, in order to obtain a potential impermeable top-layer material for the protection of TBCs, we conducted high-temperature interactions of molten CMAS in contact with three different types of NiCr2O4 to investigate their chemical stability: coatings, bulk samples, and original powders. These three types have different research targets: coating samples are used to test the interaction with a morphology close to reality; bulk samples are used to obtain the pure NiCr2O4 phase, free of the impurities easily contained in coating samples; and original powders are made to obtain a sample in which NiCr2O4 and CMAS are in extensive contact at the nano-scale.
Experiment
A simulated CMAS glass frit of composition 39.2 CaO-5.2 MgO-4.1 Al2O3-51.5 SiO2 (mol %) was prepared for the molten CMAS penetration experiment. This CMAS composition is similar to typical sand deposits that are found in engines. The CMAS glass was synthesized by mixing single oxides according to a method recorded elsewhere [6].
The original NiCr2O4 powder was synthesized by the solid-phase method at 1200 °C. NiCr2O4 powders with a spherical shape and proper flowability were fabricated by spray-drying technology. A NiCr2O4 coating with a porosity of 4.6% was sprayed onto a series of grit-blasted stainless steel substrates (22 mm long, 8 mm wide, 2.8 mm thick) by atmospheric plasma spraying (APS) using a SG100 plasma gun (Praxair, Danbury, CT, USA). The parameters used in the plasma spray process are listed in Table 1. For the high-temperature heat treatment with CMAS, the substrates coated with NiCr2O4 were bent with pliers to obtain free-standing coatings with a thickness of 0.4 mm. A series of NiCr2O4 bulk samples with 84.6% density were formed by solid sintering (1600 °C, 10 h). The simulated CMAS glass frit was mixed in alcohol (AR500, Weizhicheng Chemical Ltd., Nanjing, China) and then distributed on the upper surface of the free-standing NiCr2O4 coating and the NiCr2O4 bulk sample with a concentration of 35 mg/cm2. After the alcohol evaporated, both the bulk sample and the free-standing NiCr2O4 coating with the well-distributed CMAS glass frit cover were heat-treated in an electric furnace at 1200 °C for 24 h. A sample of 50%CMAS-50%NiCr2O4 original powder was made by ball milling and then heat-treated at 1200 °C for 24 h. Both NiCr2O4 and CMAS powders were ball milled at 400 rpm for 6 h to achieve nano-sized particles and then mixed well to form a CMAS-NiCr2O4 mixture with a relatively large interface between the two materials.
The phase analysis of the NiCr2O4 bulk sample and coating was conducted by X-ray diffraction (XRD, RIGAKU D/Max-rB, Rigaku International Corp., Tokyo, Japan) with CuKα radiation at a scan rate of 4°/min. The cross-section samples were polished and analyzed using scanning electron microscopy (SEM, S-4800, Hitachi Ltd., Ibaraki, Japan) equipped with an energy dispersive spectrometer (EDS, D-8 advance, Bruker, Karlsruhe, Germany) for elemental analysis, both operated at 10 kV accelerating voltage. Raman spectroscopy (HR800, JobinYvon Horiba, Oberursel, Germany) with a 532-nm exciting wavelength was also used in the analysis of the distribution of materials in the bulk-sample CMAS-contacting experiment; the beam power was 14 mW and the spectral resolution was 3.5 cm−1. The microstructure of the 50%CMAS-50%NiCr2O4 original powder was analyzed with a transmission electron microscope (TEM, Tecnai F20, Waltham, MA, USA) equipped with an energy dispersive spectrometer (EDS) and selected area electron diffraction (SAED, Tecnai F20, Waltham, MA, USA), all operated at 200 kV accelerating voltage.
Figure 1 shows the XRD patterns of the NiCr2O4 original powder, coating, and bulk sample. The XRD pattern of the NiCr2O4 original powder (after sintering at 1200 °C) coincides with the standard spectrum of NiCr2O4 with a spinel structure (23-1271) in the space group Fd-3m. The XRD pattern of the NiCr2O4 coating exhibits peaks corresponding mainly to spinel-structure NiCr2O4 as well as peaks corresponding to a small amount of Cr2O3 (about 18.4%, according to the intensity of the peaks), which suggests that decomposition of the NiCr2O4 occurred during the APS procedure. The XRD patterns show that the NiCr2O4 bulk samples consist of pure NiCr2O4 phase in the space group I41/amd, from which it can be inferred that sintering at 1600 °C changes the structure of NiCr2O4.
Figure 2B shows that the microstructure, including the pores and lamellar structure of the NiCr2O4 coating, remained after the CMAS interaction, and no penetration of molten CMAS was found at the border. Molten CMAS appears to be non-wetting with the NiCr2O4 coating, because the wetting angles of the residual CMAS are obtuse and many spherical CMAS residues could be found on the surface of the NiCr2O4 coating. Figure 2B depicts an SEM image of the cross-section of the NiCr2O4 coating interacted with 35 mg/cm2 CMAS, and Figure 2C-F are the EDS elemental maps of Si, Ca, Cr, and Ni in Figure 2B. It can be seen in the elemental maps that the interface between CMAS and NiCr2O4 is clear, and no elemental transport gradient was observed from the border to either side. EDS cation compositions (at %) of the upper square (residual CMAS) and lower square (edge of the NiCr2O4 coating) in Figure 2B are listed in Table 2. The atomic ratio of Ni/Cr in the NiCr2O4 coating was 0.36, against a theoretical atomic ratio of 0.5, and the atomic ratio of Ca/Si in the residual CMAS on the surface was also similar to the composition of the simulated CMAS glass frit. The element distributions of both NiCr2O4 and CMAS after the heat treatment are thus similar to their original states.
Figure 3A is a high-magnification SEM image of the border of interaction between the residual CMAS and the NiCr2O4 coating; the upper area is CMAS and the lower area shows the structure of the NiCr2O4 coating. At the contacted part of the border shown in Figure 3A,B, a split line with a width of less than 1 µm can be seen. The micro-pores in the NiCr2O4 coating, 3-5 µm in size, are not filled, and there is no reaction product at the border. Some NiCr2O4 particles could be found at the edge of the residual CMAS, but no dissolving of these particles was detected.
Figure 4A,B are cross-sectional SEM micrographs (low and high magnification) of the NiCr2O4 bulk sample heat-treated with 35 mg/cm2 of CMAS. The NiCr2O4 bulk sample has a high porosity, and the pores were filled with epoxy during SEM sample preparation. The CMAS did not infiltrate the bulk sample, but simply melted and spread on the surface. The distance between the letters A and B indicated in the figure, the areas for which Raman spectroscopy was conducted, is less than 2 µm. Figure 4C,D show the Raman spectra of the CMAS and the NiCr2O4 bulk sample at the areas marked A and B in Figure 4B. The standard materials for comparison, CMAS and NiCr2O4, are original powders that were heat-treated under the same conditions of 1200 °C for 24 h. The results show that the Raman spectrum of the CMAS in the contact sample (area A) has peaks similar to those of the standard CMAS, and the Raman spectrum of the NiCr2O4 bulk sample (area B) is similar to that of the standard NiCr2O4, whereas the Raman peaks of CMAS and NiCr2O4 are completely different from each other.
Results and Discussion
It was found that both the bulk sample and the free-standing coating of NiCr2O4 were non-wetting with CMAS at 1200 °C. Because the morphologies of the coating samples and bulk samples are quite different while the results of the CMAS-contacting experiments for both are similar, it can be inferred that morphology has little influence on the CMAS barrier ability of the NiCr2O4 material when CMAS does not wet the surface.
The XRD result for the NiCr2O4 coating shows that Cr2O3 was present in the coatings, mainly because of melting and recrystallization during the APS procedure. This residual Cr2O3 evidently does not affect the arresting ability of the coating, probably because the optical basicity of Cr2O3 (∧ = 0.70) is similar to that of NiCr2O4, so it can be assumed to have a reactivity with CMAS similar to that of NiCr2O4.
To remove the influence of Cr2O3 impurities, NiCr2O4 bulk samples were fabricated by solid sintering and reacted with the CMAS coating under the same conditions. The Raman spectra of the CMAS and NiCr2O4 bulk samples were compared with standard materials, as shown in Figure 4, which gives a new way to identify the phase structure and materials at the micro-scale, such as at the border between the CMAS and the NiCr2O4 bulk sample. The NiCr2O4 bulk sample exhibited a tetragonal structure belonging to the space group I41/amd. The factor group analysis predicted the following modes in NiCr2O4: 2A1g(R) + B2g(R) + 3B1g(R) + 4Eg(R) + A2g + 2B1u + 2A1u + 4B2u + 6E2u(IR) + 4A2u(IR) + B1u [19]. There are 10 Raman-active modes (2A1g + B2g + 3B1g + 4Eg) for the tetragonal structure of NiCr2O4, and some of them were detected both in area B in Figure 4B and in the standard NiCr2O4 material. In contrast, the Raman spectra of area A in Figure 4B and of the pure CMAS subjected to the same heat treatment without any reaction showed high consistency, as the crystallization of CMAS occurred without any Raman-active modes of NiCr2O4 appearing in it. These results indicate that NiCr2O4 does not react with the CMAS on the bulk sample at 1200 °C, and the material transport at the border between the two materials is negligible (less than 2 µm). Figure 5A shows a bright-field TEM image of the 50%CMAS-50%NiCr2O4 powder interacted at 1200 °C for 24 h. The grain marked A was identified as NiCr2O4 using the SAEDP, with a best match to PDF (Powder Diffraction File) No. 23-0423 for the ideal composition in the space group I41/amd. The grain marked B was identified as a glass structure of CMAS, as its SAEDP showed the typical ring pattern of amorphous glass containing several crystals separated out from the CMAS. The elemental compositions of both areas A and B were detected by EDS, and the results are listed in Table 3.
From these results, it can be seen that the interface between the two nano-sized granules marked A and B is clear, and the SAEDPs show that granule A is a complete NiCr2O4 crystal while granule B has an amorphous CMAS structure. The elemental compositions determined by EDS, shown in Table 3, also support this result, because granule A is composed of 99% Ni, Cr, and O, and granule B is composed of 99% Ca, Mg, Al, Si, and O. This indicates that the chemical stability between NiCr2O4 and CMAS is relatively high, even for such intimate (nano-scale) contact and a long interaction period (24 h).
The previous results show that NiCr2O4 ceramic has good chemical stability with molten CMAS, as the two do not react at 1200 °C over a relatively long time (24 h). Chemical stability and non-wetting ability are the dominant factors that enable NiCr2O4 to work as an anti-CMAS material. The advantage of this kind of material is that it does not permit the formation of a reaction zone, thus arresting CMAS penetration and reducing the stress in the coating system. Furthermore, the thermal barrier ability of the coating can be maintained after CMAS attack, as it retains its porous structure.
Conclusions
The results obtained with three different types of NiCr2O4 material reveal many details on this issue. The reaction test on the nano-sized CMAS-NiCr2O4 mixed powder shows that NiCr2O4 has outstanding chemical stability with molten CMAS at 1200 °C, as predicted by the optical basicity analysis; no reaction was detected over a relatively long time (24 h). The morphology difference between the coating and the bulk sample does not affect the chemical stability between CMAS and NiCr2O4. Also, a Cr2O3 impurity (about 18.4%, according to the intensity of the peaks) was detected in the coating sample, indicating that a small amount of Cr2O3 in NiCr2O4 does not influence the chemical stability of NiCr2O4 with CMAS. Finally, both of the CMAS-contacting experiments, for the coating and the bulk sample, show that NiCr2O4 and molten CMAS are non-wetting at 1200 °C, and there is no element transport across the border between them. These results indicate that NiCr2O4 can be used as a potential impermeable anti-CMAS material for the protection of TBCs. The optical basicity method and the analysis of CMAS-TBC reaction products could also be used in the search for other impermeable materials.
|
v3-fos-license
|
2018-12-17T02:58:12.845Z
|
2012-04-24T00:00:00.000
|
56320888
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/ijrm/2012/957421.pdf",
"pdf_hash": "bbe13931672413b8d5ed36af7f04f3dc1bdb946a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:596",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "bbe13931672413b8d5ed36af7f04f3dc1bdb946a",
"year": 2012
}
|
pes2o/s2orc
|
Aerodynamic Losses in Turbines with and without Film Cooling, as Influenced by Mainstream Turbulence, Surface Roughness, Airfoil Shape, and Mach Number
The influences of a variety of different physical phenomena are described as they affect the aerodynamic performance of turbine airfoils in compressible, high-speed flows with either subsonic or transonic Mach number distributions. The presented experimental and numerically predicted results are from a series of investigations which have taken place over the past 32 years. Considered are (i) symmetric airfoils with no film cooling, (ii) symmetric airfoils with film cooling, (iii) cambered vanes with no film cooling, and (iv) cambered vanes with film cooling. When no film cooling is employed on the symmetric airfoils and cambered vanes, experimentally measured and numerically predicted variations of freestream turbulence intensity, surface roughness, exit Mach number, and airfoil camber are considered as they influence local and integrated total pressure losses, deficits of local kinetic energy, Mach number deficits, area-averaged loss coefficients, mass-averaged total pressure loss coefficients, omega loss coefficients, second law loss parameters, and distributions of integrated aerodynamic loss. Similar quantities are measured, and similar parameters are considered when film-cooling is employed on airfoil suction surfaces, along with film cooling density ratio, blowing ratio, Mach number ratio, hole orientation, hole shape, and number of rows of holes.
Introduction
Numerous investigations consider parameters and phenomena which affect turbine blade and vane aerodynamic losses, such as turbulence intensity, surface roughness, blade row interactions, and blade and vane geometry.Also important are Mach number variations, airfoil camber, and film cooling.
A number of these recent studies focus on aerodynamic losses downstream of subsonic turbine airfoils with no film cooling.Of these investigations, Hoheisel et al. [1], Gregory-Smith and Cleak [2], and Ames and Plesniak [3] examine the influences of inlet turbulence on losses across turbine cascades.Hoheisel et al. [1] also consider the effects of blade boundary layers, and Ames and Plesniak [3] demonstrate important connections between wake growth and level of freestream turbulence.Moore et al. [4] indicate that more than one third of total losses develop downstream of airfoil trailing edges.The authors attribute total pressure losses to deformation work and dissipation of secondary kinetic energy.
Zhang et al. [5][6][7], Zhang and Ligrani [8], Xu and Denton [9], Mee et al. [10], Izsak and Chiang [11], Michelassi et al. [12], Joe et al. [13], and Bohn et al. [14] present aerodynamic loss results for transonic turbine airfoils with no film cooling.Xu and Denton [9] investigate mixing losses from turbine blades with different trailing edge thicknesses, including the influences of blade boundary layers on downstream mixing.According to Mee et al. [10], boundary layers, shock waves, and wakes mixing all contribute to overall losses in relative amounts which depend upon the Mach number.In addition, most of the mixing losses are generated immediately downstream of the trailing edges of blades where gradients in properties across the wake are largest.Izsak and Chiang [11] present experimental data and numerical predictions which account for turbulence, transition, as well as transonic expansion fans.Michelassi et al. [12] test turbulence and transition models, which include the effects of separation bubbles, for a cascade flow with shock-boundary layer interactions.Bohn et al. [14] and Joe et al. [13] present aerodynamic loss data measured downstream of airfoils at different Reynolds numbers and exit Mach numbers.
The influence of surface roughness on adjacent flow behavior has been of interest for researchers for almost 100 years.The use of equivalent sandgrain roughness size, k s , to characterize and quantify rough surfaces was first proposed and utilized by Nikuradse [15] and Schlichting [16].This quantity represents the size of sand grains which give the same skin friction coefficients in internal passages as the roughness being evaluated.This measure of roughness size continues to be used widely in empirical correlation equations (which are based on experimental data) to represent rough surface behavior, and for closure models employed in a variety of numerical prediction codes.Sigal and Danberg [17,18] made important advances in accounting for roughness geometry considerations for uniformly shaped roughness elements spread in a uniform pattern over a test surface.For this type of "two-dimensional roughness", the authors provide equations for the dependence of the ratio of equivalent sand grain roughness to mean roughness height, k s /k, upon a roughness parameter, Λ s , which is determined from roughness geometry.Van Rij et al. [19] give a modified version of the Sigal and Danberg correlation for the dependence of k s /k on Λ s for randomly placed, non uniform, and three-dimensional roughness with irregular geometry and arrangement.Also described are analytic procedures for determination of roughness height k and Λ s from roughness geometry.With this approach, magnitudes of equivalent sandgrain roughness size k s are determined entirely from three-dimensional roughness geometry.
In a paper published in 1975, Bammert and Sandstede [20] describe the influences of different manufacturing tolerances and turbine airfoil surface roughness characteristics on the overall performance of turbines and indicate that losses increase sharply above a certain roughness size.Measurements of the boundary layer development along blades with varying roughness are carried out by the same authors [21] who show that rough surface momentum thickness is up to three times greater than values present in boundary layers on smooth surfaces.Kind et al. [22] investigate the effects of partial roughness coverage of the blade surfaces and conclude that roughness on the suction surface can cause large increases in profile losses.Flat plate surfaces with coneshaped elements are used by Bogard et al. [23] to simulate the roughness present on vane surfaces.According to these investigators, the effects of surface roughness and high freestream turbulence are additive.Abuaf et al. [24] find that tumbling and polishing reduce the average roughness size and improve overall performance when they quantify heat transfer and aerodynamic performance characteristics of turbine airfoils with different surface finish treatments.From experiments conducted using a compressor cascade, Leipold et al. [25] indicate that surface roughness has no effect upon the presence or location of laminar separation, but that roughness causes the turbulent boundary layer to separate at locations further upstream at higher Reynolds numbers.Guo et al. [26] report on the influences of localized pinshaped surface roughness on heat transfer and aerodynamic performance of a fully film-cooled engine aerofoil and indicate that substantial loss increases are present when the pins are located on the pressure side of the airfoil.
Of the investigations which also examine the effects of augmented freestream turbulence levels, Gregory-Smith and Cleak [2] show that the mean flow field is not affected significantly by inlet turbulence intensity levels as high as 5%.Zhang and Ligrani [27] show that Integrated Aerodynamic Losses (IALs) change significantly as the level of surface roughness changes, where alterations due to different freestream turbulence levels are relatively small.Results presented by these investigators also indicate that thicker rough surface boundary layers are more sensitive to changes of freestream turbulence level than thinner boundary layers that develop over smooth surfaces.Hoffs et al. [28] investigate different surface roughness characteristics at different Reynolds numbers on a turbine airfoil, with turbulence intensity levels as high as 10 percent and show that heat transfer gradually increases and that laminar-to-turbulent transition moves upstream as the Reynolds number and turbulent intensity increase.The effects of strong secondary flows, laminar-to-turbulent transition, and variations near the stagnation line are investigated by Giel et al. [29] using an active blowing grid of square bars as a turbulence generator at the entrance of a transonic cascade.Boyle et al. [30] provide turbine vane aerodynamic data at low Reynolds numbers made at midspan locations downstream of a linear cascade with inlet turbulence intensity levels as high as 10 percent.Nix et al. [31] describe the development of a grid which produces freestream turbulence characteristics which are similar to ones produced by the flow exiting combustors of advanced gas turbine engines.
In a study of aerodynamic losses downstream of turbine airfoil with no film cooling, Jouini et al. [32] present detailed measurements of midspan aerodynamic performance characteristics of a transonic turbine cascade at offdesign conditions.Measurements of blade loading, exit flow angles, and trailing edge base pressures at different Mach numbers show that profile losses at transonic conditions are closely related to base pressure behavior.Radomsky and Thole [33] present measurements of time-averaged velocity components and Reynolds stresses along a turbine stator vane at elevated freestream turbulence levels and present data which show that transition occurs further upstream on the suction side, as the freestream turbulence level increases.Arts [34] describes experimental aerodynamic performance data for a three-dimensional annular transonic nozzle guide vane.Coton et al. [35] investigate the effects of Reynolds number and Mach number on the profile losses of a conventional low-pressure turbine rotor cascade and report that the exit Mach number affects the losses through a modification of the pressure gradient imposed on the boundary layer.Boyle et al. [36] provide aerodynamic data for a linear turbine vane cascade, including surface pressure distributions and aerodynamic losses for different Reynolds numbers, Mach numbers, and levels of inlet turbulence.
Investigations by Zhang et al. [5] and Zhang and Ligrani [8] employ symmetric airfoils with no camber, and without significant flow turning.Of these, Zhang et al. [5] investigate the effects of surface roughness and turbulence intensity on the aerodynamic losses produced by the suction surface.Their results show that the effects of different inlet turbulence intensity levels are generally relatively small and that diffusion from the wake to surrounding freestream flow results in broader wakes with more uniform aerodynamic wake loss distributions.The data of Zhang and Ligrani [8] show that magnitudes of Integrated Aerodynamic Losses change by much larger amounts as either the freestream Mach number or turbulence intensity are altered, when the airfoil is roughened (compared to smooth airfoil results).Other recent investigations are described by Zhang et al. [37,38] and by Zhang and Ligrani [39,40].
In recent years, designers have devoted increased attention to the aerodynamic penalties associated with film cooling of turbine airfoils in gas turbine engines.This is because of increased awareness of the drops in efficiency associated with such penalties, and because the improvements in thermal protection provided by newest film hole configurations may be offset by the total pressure aerodynamic losses which accumulate downstream of the airfoils.Thus, it is paramount that such aerodynamic losses be quantified, especially for transonic turbine airfoils, where total pressure losses often develop from shock waves, boundary layers, and wake mixing.Film cooling is employed to maintain gas turbine hot-section components with acceptable temperatures and temperature gradients in order to increase engine performance by allowing operation at higher gas inlet temperature with increased component life.Acceptable temperature levels are a vital characteristic because they lead to less susceptibility to high-temperature oxidation, creep, corrosion, and thermomechanical fatigue.
Of recent investigations in this area, Denton [41] indicates that total pressure losses are connected to entropy creation.Consequently, mixing across gradients in the flow can result in increased losses even without the action of frictional forces.Additional entropy increases and losses of stagnation pressure also often result due to separation bubbles which act to thicken boundary layers.Because boundary layer losses depend upon the cube of the ratio of the blade surface velocity to the upstream reference velocity integrated over the surface of an airfoil, losses originating in the suction surface boundary layer are dominant [41].
Other investigators who consider the influences of film cooling on aerodynamic losses from turbine blades and vanes include Jackson et al. [54] and Ito et al. [42].Of these investigations, Ito et al. [42] indicate that total pressure losses in incompressible flow can increase or decrease due to film injection from a single row of holes placed either on the suction surface or the pressure surface.Haller and Camus [43] present losses due to film cooling for five separate cooling hole locations on a transonic airfoil, using carbon dioxide to simulate an operating engine density ratio.Their results show that ejection downstream of the passage throat does not necessarily give greater losses compared to situations with injection upstream of the throat.Kollen and Koschel [44] also utilize carbon dioxide for the film on a transonic airfoil.From this study, losses increase with blowing ratio for film cooling from the leading edge and decrease with blowing ratio when the film cooling is located on the suction surface, except when the blowing ratio is very small.
In an investigation performed by Day et al. [45] which considers the effects of film cooling, a cascade is employed which operates at conditions similar to those which exist in gas turbine engines.The authors show that utilizing cylindrical holes increases aerodynamic losses by 6.7 percent, and that utilizing fan-shaped holes increases losses to 15 percent compared to a no-injection condition.They also show that film cooling at the leading edge, and early pressure surface region can actually increase aerodynamic efficiency, most likely because of shock/boundary layer interactions.Hong et al. [46] examine the effects of film cooling from a single row of holes located either on the suction surface, pressure surface, or leading edge.Results indicate that the suction surface film cooling has the biggest influence on aerodynamic losses, and that pressure surface film cooling has the smallest influence on losses.
Mee et al. [10], Michelassi et al. [12], Bohn et al. [14], Kapteijn et al. [47], Vlasic et al. [48], Sieverding et al. [49], and Tanuma et al. [50] investigate losses downstream of transonic airfoils with trailing edge ejection.Mee et al. [10], Day et al. [45], and Osnaghi et al. [51] all indicate that density ratio influences are well correlated using the momentum flux ratio.According to Day et al. [45], film cooling causes thickening of airfoil wakes, as well as changes to the flow field near the hub in an investigation of aerodynamic losses from a transonic airfoil with multiple rows of film cooling holes on the leading edge and suction surface.Kubo et al. [52] compare numerical predictions with total pressure losses measured downstream of a low-speed cascade containing a vane with film-cooling holes located on the leading edge, suction surface, pressure surface, and trailing edge.As the mass flow rate ratio varies, the largest loss increases relative to the flow with no film cooling are due to injection from holes located near the passage throat on the suction surface and from holes on the leading edge.In another experimental and numerical investigation which employs transonic airfoils, Urban et al. [53] also show that losses are greatly increased by film cooling from the suction surface, whereas pressure side and trailing edge ejection produces only small changes to aerodynamic loss magnitudes.Density ratio variations are shown to have insignificant influences on loss magnitudes by these investigators.
Within the present investigations, losses from friction and expansions/compressions are considered as they result from flow separations, viscous effects within boundary layers, shear augmentation in wakes, mixing processes within wakes and boundary layers, shock waves, and the generation, growth, and mixing of vortices. According to Denton [41], for a complete turbomachinery stage, these phenomena can also be categorized as "profile loss," "endwall loss," and "leakage loss," where "profile loss" is due to boundary layer growth and separation from the trailing edge. Of particular interest here are the effects of Mach number, mainstream turbulence intensity, surface roughness, film cooling, and airfoil shape on such phenomena, as quantified using Integrated Aerodynamic Loss [5, 8, 27, 37-39, 54, 55].
Aerodynamic Loss Determination
2.1. Primary Loss Coefficient and Thermodynamic Loss Coefficient. Nondimensional loss coefficients are key performance metrics in the analysis of turbomachines. According to Raffel and Kost [56], Kost and Holmes [57], and others [10, 58], a primary loss coefficient, which is also referred to as an enthalpy loss coefficient [41], compares the actual exit kinetic energy with the value obtained from an ideal, isentropic expansion to the same exit static pressure (1), which is also equivalent to ζ p = 1 − W 2 /W is 2 (2), where W is the local relative velocity. A local thermodynamic loss coefficient is then employed to account for the different energy input of the coolant flow relative to the mainstream flow (3) [56-58]. Equation (3) can also be expressed in terms of the mainstream, coolant, and exit stagnation enthalpies h oi , h oc , and h oe (4). The challenge in utilizing (4) to represent a film-cooled environment (especially when the flows are compressible) lies in the difficulty of estimating appropriate values of either temperature or enthalpy which are correctly representative of energy content. For example, ambiguity always exists regarding the choices of idealized values of h oi and h oc to appropriately represent isentropic kinetic energy values for the mainstream and coolant, respectively. In addition, magnitudes of h oe within (4) must be representative of overall, mixed values.
Entropy Rise Coefficient.
According to Denton [41], an entropy rise coefficient can also be utilized to characterize turbomachinery stage losses. Such a coefficient is defined using ζ s = T e Δs/(h oi − h e ) (5). For small losses in incompressible flow, this is then equivalent to Y P = (P oi − P oe )/(P oi − P se ) (6), which is, thus, also a second-law loss coefficient. Such coefficients are useful because they allow appropriate comparisons to be made between different airfoils, with different film-cooling arrangements, at different flow velocities.
Omega Aerodynamic Loss Coefficients. Within investigations which generally involve low-speed turbine cascades [3, 59-63], a total pressure loss coefficient Ω = (P oi − P oe )/(P oi − P se ) is employed, which is equivalent to the Y P quantity given by (6). Within this definition, each term is generally determined locally at a particular spatial location.
Here, the stagnation pressure loss is normalized by idealized dynamic pressure, which is equivalent to the sum of the stagnation pressure loss and the local exit dynamic pressure.
In the investigations of Ames et al. [59] and Johnson et al. [61], cross-passage mass-averaged and full exit mass-averaged magnitudes of aerodynamic loss (determined by integrating distributions of the loss coefficient Ω) are also determined. With mass averaging, the (P oi − P oe ) quantity within the total loss coefficient Ω is multiplied by the local mass flow rate. The result is normalized by the overall mass flow rate multiplied by (P oi − P se ). This gives the mass-weighted stagnation pressure loss, relative to the overall ideal dynamic pressure for the equivalent of one blade passage. Ames et al. [59], Johnson et al. [61], and Fiala et al. [63] also consider local and cross-passage mass-averaged magnitudes of turning angle, which is defined as the outlet flow angle measured relative to the inlet axial direction.
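To make the mass-averaging operation described above concrete, the following Python sketch evaluates the local coefficient Ω and a mass-averaged value from a discrete wake traverse. The traverse values are synthetic, and treating P se as uniform across the pitch is an assumption; the sketch is illustrative only and is not code from the cited studies.

```python
import numpy as np

# Illustrative wake-traverse data on a uniform y grid (synthetic values,
# not measurements from the cited studies).
y = np.linspace(-0.0254, 0.0254, 101)            # transverse coordinate [m]
P_oi = 195e3                                     # inlet stagnation pressure [Pa]
P_oe = P_oi - 8e3 * np.exp(-(y / 0.005) ** 2)    # exit stagnation pressure [Pa]
P_se = 115e3                                     # exit static pressure, taken as uniform [Pa]
rho = np.full_like(y, 1.4)                       # exit density [kg/m^3]
u = 280.0 - 60.0 * np.exp(-(y / 0.005) ** 2)     # exit velocity [m/s]

# Local total pressure loss coefficient Omega = (P_oi - P_oe)/(P_oi - P_se).
omega_local = (P_oi - P_oe) / (P_oi - P_se)

# Mass-averaged loss: (P_oi - P_oe) weighted by the local mass flux rho*u and
# normalized by (P_oi - P_se); with uniform y spacing the weighted integral
# ratio reduces to a weighted mean.
omega_mass_avg = np.average(P_oi - P_oe, weights=rho * u) / (P_oi - P_se)

print(f"peak local Omega    = {omega_local.max():.3f}")
print(f"mass-averaged Omega = {omega_mass_avg:.3f}")
```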
An example of smooth airfoil (P oi − P oe )/(P oi − P se ) total pressure loss coefficient profiles for different turbulence intensity levels is presented in Figure 4. Here, data from Ames and Plesniak [3] are included, along with data from Zhang et al. [5] for smooth, symmetric airfoils. Additional discussion of the trends and physical significance of the results presented in Figure 4 is given later in the present paper.

Area-Averaged Loss Coefficients. Boyle and Senyitko [64] and Boyle et al. [30] employ an area-averaged loss coefficient, Y A , in their analysis, which is defined using an equation of the same form as (6), but with area-averaged exit pressures, Y A = (P oi − P oe,A )/(P oi − P se,A ) (7). The form of (7) is thus similar to the form of (6), except that, here, P oe,A and P se,A are the area-averaged exit total pressure and static pressure, respectively. These are determined using (8) and (9), in which P oe,m and q e,m , the mass-averaged exit total pressure and mass-averaged dynamic pressure, respectively, appear. These two parameters are defined as mass-flux-weighted averages of the local exit total pressure and local exit dynamic pressure, respectively, as given by (10) and (11). Y p data from Kind et al. [22] are presented and compared with some data from Zhang et al. [38] in Figure 22.

The Kind et al. data are measured 0.4 of an axial chord length downstream of their airfoil. Here, Y p loss coefficient data are given as they vary with the normalized mean roughness height k/cx, since the sandgrain roughness height, k s , is not available from Kind et al. [22]. Figure 22 is also discussed in greater detail later in the present paper. The approach utilized by Kind et al. is similar to methods employed by Friedrichs et al. [65], where, instead of mass-averaging, the P oe and P se quantities are determined as fully mixed ("mixed-out") quantities within total loss coefficients such as (6). These investigators also employ three different approaches for determination of the reference inlet stagnation pressure, each with a different means for inclusion of contributions from the film coolant supply.
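A brief numerical sketch can make the distinction between area averaging and mass averaging explicit. Because equations (7)-(11) are not reproduced in the text, the uniform weighting used for the area averages, the ρu weighting used for the mass averages, the use of P oe − P se as the local dynamic pressure, and the data values themselves are all assumptions adopted for illustration.

```python
import numpy as np

# Synthetic exit-plane traverse on a uniform y grid (illustrative values only).
y = np.linspace(-0.03175, 0.03175, 121)
P_oi = 195e3                                      # inlet stagnation pressure [Pa]
P_oe = P_oi - 7e3 * np.exp(-(y / 0.006) ** 2)     # exit stagnation pressure [Pa]
P_se = 115e3 - 1e3 * np.exp(-(y / 0.006) ** 2)    # exit static pressure [Pa]
rho = np.full_like(y, 1.4)
u = 270.0 - 55.0 * np.exp(-(y / 0.006) ** 2)

# Area averages: uniform weighting over the traverse (uniform spacing assumed).
P_oe_A = P_oe.mean()
P_se_A = P_se.mean()

# Mass averages: weighting by the local mass flux rho*u.
w = rho * u
P_oe_m = np.average(P_oe, weights=w)
q_e_m = np.average(P_oe - P_se, weights=w)        # local dynamic pressure taken as P_oe - P_se

# Area-averaged loss coefficient in the form assumed above for equation (7).
Y_A = (P_oi - P_oe_A) / (P_oi - P_se_A)

print(f"P_oe_A = {P_oe_A / 1e3:.1f} kPa, P_oe_m = {P_oe_m / 1e3:.1f} kPa, "
      f"q_e_m = {q_e_m / 1e3:.1f} kPa, Y_A = {Y_A:.4f}")
```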
Local Total Pressure Loss Coefficient and Integrated Aerodynamic Loss. C p is the inlet total pressure minus the exit total pressure, normalized by the inlet total pressure [5, 8, 27, 37-39, 54, 55], which is expressed using an equation of the form C p = (P oi − P oe )/P oi (12). With this approach, the local stagnation pressure loss is normalized by a quantity which does not vary with cascade exit location. Zhang et al. [5, 37, 38], Jackson et al. [54], Chappell et al. [55], and Zhang and Ligrani [8, 27, 39, 40] employ the integrated aerodynamic loss IAL to quantify aerodynamic losses in turbine components. Dimensional magnitudes of Integrated Aerodynamic Loss, IAL, are determined by integrating profiles of (P oi − P oe ) with respect to y in the transverse flow direction across the wake for one single vane spacing, from −p/2 to p/2 [54]. In equation form, IAL is thus given by IAL = ∫_{−p/2}^{+p/2} (P oi − P oe ) dy (13). Here, IAL magnitudes are determined from measured distributions of the local total pressure loss coefficient, C p , which are measured downstream of the airfoil. Consequently, IAL magnitudes represent mixing losses which have accumulated through the wake and airfoil boundary layers [54]. These forms for IAL and C p are employed because they are directly related to local entropy change and to local entropy creation. When normalized using either P oi p or (P oi − P se )p, IAL magnitudes can then be compared to data sets obtained at different velocities and with different airfoil configurations. In addition, for the compressible transonic and subsonic results which are presented later in the paper, IAL-based correlations which account for film cooling are simpler, more consistent, and more physically meaningful than when other types of loss coefficients are employed. According to Osnaghi et al. [51] and Drost and Bölcs [58], direct connections exist between the wake coolant distribution and total pressure losses (provided analogous behavior between coolant mass diffusion and momentum diffusion is assumed).
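Assuming the reconstructed forms of (12) and (13) above, the sketch below evaluates a C p profile and the corresponding IAL by trapezoidal integration over one pitch, together with the P oi p normalization mentioned in the text; the wake profile is synthetic and serves only to show the calculation sequence.

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal integration, written out for clarity."""
    f, x = np.asarray(f, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Synthetic wake profile over one vane spacing, from -p/2 to +p/2 (values are
# illustrative; p and P_oi follow the symmetric-airfoil description).
p = 0.0508                                        # effective pitch [m]
y = np.linspace(-p / 2, p / 2, 201)               # transverse coordinate [m]
P_oi = 195e3                                      # inlet stagnation pressure [Pa]
P_oe = P_oi - 9e3 * np.exp(-(y / 0.004) ** 2)     # exit stagnation pressure [Pa]

# Local total pressure loss coefficient, C_p = (P_oi - P_oe)/P_oi, as reconstructed above.
C_p = (P_oi - P_oe) / P_oi

# Integrated Aerodynamic Loss, IAL = integral of (P_oi - P_oe) dy over one pitch,
# and its normalization by P_oi * p so different operating points can be compared.
IAL = trapz(P_oi - P_oe, y)
IAL_normalized = IAL / (P_oi * p)

print(f"peak C_p = {C_p.max():.4f}")
print(f"IAL = {IAL:.1f} Pa*m, IAL/(P_oi*p) = {IAL_normalized:.5f}")
```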
Examples of profiles of local total pressure loss coefficient, C p , are given in Figure 5(a).These particular results illustrate the effects of surface roughness for an inlet turbulence intensity level of 0.9 percent [8].Data are given for k s /cx values of 0, 0.00069, and 0.00164, which correspond to the smooth, small-sized roughness, and large-sized roughness, respectively.The inlet total pressure is kept constant at 195 kPa when different airfoils are employed.The results in this set of figures are given for an exit freestream Mach number 0.9, and for a low value of inlet turbulence intensity, because variations due to roughness are often generally more apparent than for higher inlet turbulence intensity levels.Figure 5(a) shows that total pressure losses increase at each y/cx location as k s /cx increases.This is apparent at peak value locations and is accompanied by increases in the width of the profiles as roughness size becomes larger.These trends, including their significance are discussed in greater detail later in the present paper.
Examples of IAL data are given in Figure 6, which are normalized using the airfoil passage effective pitch p and test section inlet stagnation pressure P oi .These particular data are given as they depend upon k s /cx for different Tu values, for the airfoils with the smooth surfaces (k s /cx = 0), small-sized roughness (k s /cx = 0.00069), and large-sized roughness (k s /cx = 0.00164).The results which are presented in Figure 6 are also discussed in greater detail later in the present paper.
Second Law Losses.
Local and global aerodynamic losses can also be further quantified using additional second law analyses. For adiabatic and isothermal flow, the entropy change from inlet to outlet of the present cascade arrangements, shown in Figures 1 and 12, is given by Δs = −R ln(P oe /P oi ) = −R ln(1 − C p ) (14), where C p is the local total pressure loss coefficient, given by (12). According to Denton [41], the pressures, temperatures, and densities which are used to determine entropy changes can be either all static values or all stagnation values because, by definition, the change from static to stagnation condition is isentropic. Equation (14) then omits any changes due to stagnation temperature variation since this quantity is constant through adiabatic blade row arrangements. From a general perspective, irreversibilities and local entropy creation from second law losses occur from three overall sources, namely, heat transfer, friction, and expansions/compressions [66]. Only the latter two effects are considered within the present investigation as they result from flow separations, viscous effects within boundary layers, shear augmentation in wakes, mixing processes within wakes and boundary layers, shock waves, and film cooling. As mentioned earlier, these phenomena can also be categorized as "profile loss," "endwall loss," and "leakage loss," where "profile loss" is due to boundary layer growth and separation from the trailing edge [37, 41]. These effects generate entropy, and anything that generates entropy always destroys exergy. To estimate the amount of energy that can be extracted as useful work, or the useful work potential of a given amount of energy at some specified state, exergy (which is also called the availability or available energy) is employed [66]. From (12) and (14), the local entropy creation is then represented by s gen = −R ln(1 − C p ) (15), and the associated exergy destruction per unit mass is T 0 s gen , where T 0 is the laboratory ambient temperature (T 0 = 300 K). The exergy destroyed is then proportional to the entropy created. The mass-averaged overall exergy destruction is subsequently expressed using x dest,o = T 0 ∫ ρ u s gen dy / ∫ ρ u dy (16). This then represents the overall lost work potential. Multiplying x dest,o by the appropriate mass flow rate then gives the overall rate of exergy destruction.
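Under the ideal-gas, adiabatic-flow reconstruction used above for (14)-(16), the local entropy creation and a mass-averaged exergy destruction can be evaluated as in the following sketch; the profile values are synthetic, and the mass-flux-weighted normalization is assumed from the description of (16).

```python
import numpy as np

# Synthetic wake data (illustrative only); R is the gas constant for air.
R = 287.0                                         # [J/(kg K)]
T0 = 300.0                                        # laboratory ambient temperature [K]
y = np.linspace(-0.0254, 0.0254, 201)
P_oi = 195e3
P_oe = P_oi - 9e3 * np.exp(-(y / 0.004) ** 2)
rho = np.full_like(y, 1.4)
u = 280.0 - 70.0 * np.exp(-(y / 0.004) ** 2)

# Local loss coefficient and entropy creation under the reconstruction above:
# adiabatic flow with constant stagnation temperature, so s_gen = -R ln(1 - C_p).
C_p = (P_oi - P_oe) / P_oi
s_gen = -R * np.log(1.0 - C_p)                    # [J/(kg K)]

# Exergy destroyed per unit mass is T0 * s_gen; the mass-averaged overall value
# weights by the local mass flux rho*u (uniform y spacing assumed).
x_dest_local = T0 * s_gen
x_dest_overall = np.average(x_dest_local, weights=rho * u)

print(f"peak s_gen = {s_gen.max():.2f} J/(kg K)")
print(f"mass-averaged exergy destruction = {x_dest_overall:.1f} J/kg")
```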
Examples of mass-averaged magnitudes of overall exergy destruction are given in Figure 26 for a cambered vane and a symmetric airfoil, as they vary with surface roughness.These data are given for different measurement locations downstream of different airfoils with different exit Mach numbers, for freestream turbulence intensity values from 0.9 to 1.5 percent.Dramatic increases of x dest,o are generally apparent as either k s /c or as M ex increases.Additional discussions of these results are also given later within the present paper.
Present Investigations
When symmetric airfoils and cambered vanes are employed, both with and without film cooling, effects of mainstream turbulence intensity, surface roughness, exit Mach number, and airfoil camber are considered as they influence local and integrated parameters which quantify aerodynamic losses.Of particular interest is the effect of such parameters on C p total pressure loss coefficients, Ω total pressure loss coefficients, and IAL or Integrated Aerodynamic Loss.The associated results are presented in five sections: (i) symmetric airfoils with no film cooling, (ii) symmetric airfoils with film cooling, (iii) cambered vanes with no film cooling, (iv) cambered vanes with film cooling, and (v) second law analyses of turbine aerodynamic losses with and without film cooling.
Symmetric Airfoils with No Film Cooling
Investigations of symmetric turbine airfoils (with no film cooling) consider the effects of surface roughness, freestream Mach number, and mainstream turbulence intensity on aerodynamic losses downstream of the airfoils in compressible, high-speed flow.Combined and coupled effects of these different phenomena are considered [5,6,8].The presence and development of shock waves for the transonic case are also discussed, along with shock wave changes that occur as the level of surface roughness changes.Three symmetric airfoils are employed, with different rough surfaces which are characterized using equivalent sandgrain roughness size.Magnitudes of equivalent sandgrain roughness size for each surface are determined using three-dimensional optical profilometry data, and procedures described by Van Rij et al. [19].Exit freestream Mach numbers measured one chord length downstream of the airfoil are 0.6, 0.8, and 0.9.The magnitudes of longitudinal turbulence intensity used at the inlet of the test section are 0.9 percent, 5.5 percent, and 16.2 percent, where the latter values are produced using a mesh grid and cross-bars, respectively.Additional details on experimental apparatus and procedure details, including the Transonic Wind Tunnel, are provided by Zhang et al. [5][6][7], Furukawa and Ligrani [67], and Zhang and Ligrani [8], including discussion of techniques to determine rough-surface skin friction coefficients from wake profile measurements [7].
Test Section and Test Vane.
A schematic diagram of the nonturning airfoil cascade test section is shown in Figure 1.The inlet of the test section is 12.70 cm by 12.70 cm.The two side walls are flat, whereas the top and bottom walls are contoured to form a converging-diverging shape which produces the desired Mach number distribution along the symmetric test airfoil.Because significant flow turning is not included, the camber curvature, present in many cascades with multiple airfoils, is not present.The airfoil chord length is 7.62 cm, the leading edge diameter is 0.3 cm, the effective pitch is 5.08 cm, the span is 12.7 cm, and the trailing edge of the symmetric airfoil is a 1.14-mm-radius round semicircle.
The present test section is useful and advantageous over cascade arrangements with multiple airfoils and significant flow turning because (i) the test section produces Mach numbers, pressure variations, Reynolds numbers, passage mass flow rates, and physical dimensions which match values along airfoils in operating engines, (ii) the airfoil provides the same suction surface boundary layer development (in the same pressure gradient without flow turning) as exists in operating engines, (iii) aerodynamic loss data are obtained on airfoil surfaces without the complicating influences of vortices present along airfoil pressure surfaces, and (iv) only one airfoil is needed to obtain representative flow characteristics. Thus, the present experiment is designed to isolate the effects of Mach numbers, surface roughness, and turbulence intensity on wake aerodynamic losses, while matching Reynolds numbers, Mach numbers, pressure gradients, passage flow rates, boundary layer development, and physical dimensions of airfoils in operating engines.
Test Section Flow Characteristics and Mach Number
Distributions.Three different arrangements are used at the inlet of the test section to produce three different levels of mainstream turbulence intensity: (i) no grid or bars, (ii) fine mesh grid, and (iii) crossbars.With no turbulence grid employed at the test section inlet, the magnitude of the longitudinal turbulence intensity is 0.9 percent.With the fine mesh turbulence generating grid, the intensity and length scale are 5.5 percent, and 15.24 mm, respectively.With the cross bar turbulence generating grid, the intensity and length scale are 16.2 percent, and 19.70 mm, respectively.These values are measured at a location which is 87 percent of one chord length upstream of the airfoil leading edge.For each of the three different values of inlet total pressure, magnitudes of turbulence intensity and turbulence length scale are about the same because they are mostly a result of the specific turbulence generator employed [5,8].Mach number distributions, measured along the airfoil, are shown in Figure 2 and are similar to values present on turbine airfoil suction surfaces.These values are based upon measurements of total pressure at the test section inlet and static pressures measured along the midspan line of a smooth airfoil, which is employed especially for this task.The eight measured values shown in Figure 2 are based upon measurements made along the top of surface of the smooth airfoil at freestream Tu = 0.9%.These values are in excellent agreement with Mach numbers measured at three different locations on the bottom surface of the airfoil.
During each test, the total pressure at the inlet of the test section, P oi , is kept constant at one of the different values, at either 114 kPa, 140 kPa, or 195 kPa.Corresponding exit freestream Mach numbers, measured one chord length downstream of the airfoil trailing edge, are 0.6, 0.8, and 0.9, respectively, and chord Reynolds numbers (based on exit flow conditions) are 1.02 × 10 6 , 1.38 × 10 6 , and 1.96 × 10 6 , respectively.
With the highest inlet total pressure and the smooth airfoil, the flow in the passage is transonic and the trailing edge Mach number is 1.1. With this arrangement, a finite region of supersonic flow exists near the downstream portion of the airfoil, and a pair of oblique shock waves is present at the airfoil trailing edge. Numerical predictions of the Mach number distribution through the test section with this flow arrangement are presented in Figure 3 from Jackson et al. [54]. Here, the angle produced by the strong oblique shock waves at the airfoil trailing edge is about 73° (measured from the airfoil symmetry plane). This value and the measured total pressures downstream of the oblique shock waves are in good agreement with theoretical values for a 5° flow deflection angle from Anderson [68]. The positions and shapes of the oblique shock waves from Schlieren images, also from Jackson et al. [54], show good qualitative agreement with the Mach number distribution which is given in Figure 3.
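The quoted combination of a roughly 73° trailing-edge shock angle and a 5° flow deflection can be checked against the standard θ-β-M relation for oblique shocks (as in Anderson [68]). The sketch below inverts that relation numerically; the bisection bracket and tolerance are arbitrary choices, and the implied upstream Mach number of about 1.24 is simply the value consistent with those two angles.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def deflection_angle(mach: float, beta_rad: float) -> float:
    """Flow deflection angle theta [rad] from the theta-beta-M relation."""
    m2_sin2 = (mach * math.sin(beta_rad)) ** 2
    tan_theta = (2.0 / math.tan(beta_rad)) * (m2_sin2 - 1.0) / (
        mach ** 2 * (GAMMA + math.cos(2.0 * beta_rad)) + 2.0
    )
    return math.atan(tan_theta)

def mach_for_shock(beta_deg: float, theta_deg: float) -> float:
    """Upstream Mach number giving deflection theta at wave angle beta (bisection)."""
    beta = math.radians(beta_deg)
    target = math.radians(theta_deg)
    lo, hi = 1.0 / math.sin(beta) + 1e-6, 5.0   # assumed bracket for an attached shock
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if deflection_angle(mid, beta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Shock angle ~73 deg and flow deflection ~5 deg, as quoted for the smooth
# airfoil trailing edge, imply a local upstream Mach number of roughly 1.24.
print(f"implied upstream Mach number: {mach_for_shock(73.0, 5.0):.2f}")
```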
Rough Surface Characterization.
The magnitudes of equivalent sandgrain roughness are determined for all three surfaces tested (smooth, small-sized roughness, and large-sized roughness) using procedures which are described by Van Rij et al. [19], Zhang et al. [5], and Zhang and Ligrani [8].
The first step in this approach is a detailed determination of surface contour coordinates using a Wyko high-resolution optical Surface Profilometer.These optical profilometry data show that the rough surface from the pressure side of a turbine blade with particulate deposition from a utility power engine has irregularity, nonuniformity, and three dimensionality, including irregular arrangement.Equivalent sand grain roughness size of this real turbine blade surface is about 62.3 μm, which is close to the size of test surface of small-sized roughness elements (52.59 μm), as indicated in Table 1.This table also shows that magnitudes of other surface roughness statistics from the utility power engine turbine blade are similar to test surfaces employed with small-sized and large-sized roughness elements.
The next step in the procedure to determine equivalent sandgrain roughness magnitudes is numerical determination of a modified version of the Sigal and Danberg roughness parameter Λ s [17][18][19].The procedures to accomplish this are described by Van Rij et al. [19] and involve determination of the rough surface flat reference area, the total roughness frontal area, and the total roughness windward wetted surface area.With Λ s known, the ratio of equivalent sandgrain roughness size to mean roughness height, k s /k, is determined using a correlation for three-dimensional, irregular roughness with irregular geometry and arrangement, which is given by Van Rij et al. [19].The mean roughness height k is then also estimated by taking the distance between the maximum point of the ensemble average of all of the roughness peaks in any roughness sample, and a base height.Determination of this base location is based on analytic procedures which are also given by Van Rij et al. [19].
With this approach, magnitudes of equivalent sand grain roughness size for the three-dimensional, irregular roughness of the present study are determined.Values are given in Table 1 (which are based on an average of 8 profilometry scans), along with magnitudes of Λ s and k s /k.
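The structure of this procedure (surface areas → Λ s → k s /k → k s ) can be outlined in code, as below. The correlations of Sigal and Danberg [17, 18] and Van Rij et al. [19] are not reproduced in the text, so both the functional form used for Λ s and the k s /k relation in the sketch are hypothetical placeholders, included only to show how the quantities feed into one another.

```python
# Sketch of the equivalent sandgrain roughness pipeline described above.
# The k_s/k = f(Lambda_s) correlation of Van Rij et al. [19] is NOT reproduced
# in the text, so `ks_over_k` below is a hypothetical placeholder; only the
# overall structure (areas -> Lambda_s -> k_s/k -> k_s) follows the description.

def roughness_parameter(flat_ref_area: float,
                        frontal_area: float,
                        windward_wetted_area: float) -> float:
    """Sigal-Danberg-type roughness parameter Lambda_s built from the three
    surface areas named in the text (functional form assumed, not quoted)."""
    return (flat_ref_area / frontal_area) * (frontal_area / windward_wetted_area) ** -1.6

def ks_over_k(lambda_s: float) -> float:
    """Placeholder for the Van Rij et al. correlation for irregular 3-D roughness."""
    return 0.01 * lambda_s  # hypothetical; substitute the published correlation

def equivalent_sandgrain_size(k_mean: float, lambda_s: float) -> float:
    """k_s from the mean roughness height k and the k_s/k correlation."""
    return k_mean * ks_over_k(lambda_s)

# Example with made-up profilometry-derived inputs (units: m and m^2).
lam = roughness_parameter(flat_ref_area=1.0e-4,
                          frontal_area=6.0e-6,
                          windward_wetted_area=1.2e-5)
print(f"Lambda_s = {lam:.1f}, k_s = {equivalent_sandgrain_size(35e-6, lam) * 1e6:.1f} um")
```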
Comparisons between Investigations. Figure 4 compares
smooth airfoil Ω or (P oi − P oe )/(P oi − P se ) total pressure loss coefficient profiles for different turbulence intensity levels to ones from Ames and Plesniak [3]. For both studies, peak coefficients decrease, with broader distributions over larger ranges of y/cx values, as Tu increases. Similar qualitative trends are thus evident, even though quantitative values are different. Quantitative differences are due to a number of factors, including different airfoil configurations (curved, straight) and different flow conditions (low-speed, high-subsonic compressible). In contrast to the present study, Ames and Plesniak [3] also report substantial losses in the freestream, as mentioned. In their low-speed cascade experiments, Ames and Plesniak [3] also observe wake broadening with increasing mainstream turbulence intensity and associate this with smaller peak velocity deficits. As mentioned, increased diffusion from the wake to the surrounding freestream flow plays an important role in producing such trends. Note that aerodynamic loss contributions in the freestream are also substantial in the results obtained by Ames and Plesniak. This is a partial result of the streamline curvature which is present in their experiments. As a result, elevated freestream turbulence intensity levels substantially increase overall loss magnitudes in their experiments. In the present study, the total pressure at the inlet of the test section is almost the same as the total pressure in the freestream at the outlet. This means that losses in the freestream are negligible in the present investigation. Boyle et al. [36] also find negligible freestream losses in their high-speed cascade experiments, but they present significantly different trends from Ames and Plesniak [3], as well as the present study, since peak total pressure loss magnitudes increase as the level of mainstream turbulence increases.

Figure 4: Comparisons of total pressure loss coefficients for symmetric airfoils [5] with ones from Ames and Plesniak [3] for smooth airfoils (k s /cx = 0).

Local Aerodynamic Performance. Figure 5 presents profiles of the local total pressure loss coefficient C p , normalized local Mach number, normalized local kinetic energy, and the Ω total pressure loss coefficient for an inlet turbulence intensity level of 0.9 percent [8]. Data are given for k s /cx values of 0, 0.00069, and 0.00164, which correspond to the smooth, small-sized roughness, and large-sized roughness, respectively. The inlet total pressure is kept constant at 195 kPa when different airfoils are employed. The results in this set of figures are given for an exit freestream Mach number of 0.9 and for a low value of inlet turbulence intensity, because variations due to roughness are often generally more apparent than for higher inlet turbulence intensity levels. Figure 5 shows that total pressure losses, Mach number deficits, deficits of kinetic energy, and Ω total pressure losses all increase at each y/cx location as k s /cx increases. This is apparent at peak value locations and is accompanied by increases in the width of the profiles as roughness size becomes larger. This is largely due to increased thickening of the boundary layers along the airfoil surfaces as k s /cx increases, which is accompanied by higher magnitudes of Reynolds stress tensor components, higher magnitudes of local turbulent transport, and higher surface skin friction coefficients. The broader wakes with increased roughness size in Figure 5 are then the result of (i) augmentations of mixing and turbulent transport in the boundary layers which develop along the roughened airfoils, (ii) thicker boundary layers at the airfoil trailing edges of the roughened airfoils, and (iii) increased turbulent diffusion in the transverse direction within the wake as it advects downstream [8]. Note that the Ω profiles in Figure 5(d) are qualitatively similar to the C p profiles in Figure 5(a). This is because the static pressure through the wake P se varies by only a small amount, with a maximum decrease relative to the freestream value P se∞ of only about 8.5 percent.
For the present airfoil shape and configuration, numerical predictions by Zhang and Ligrani [39] show that flow separation regions (as well as associated form drag contributions) are about the same for all three airfoils, regardless of their k s /cx value (of either 0, 0.00069, or 0.00164). Numerical results also show that boundary layers are almost entirely turbulent along the entire length of all three tested airfoils. From these numerical predictions, values of the equivalent sandgrain roughness size in wall units, k s + (normalized using the friction velocity and the kinematic viscosity), just before the airfoil trailing edge for M e,∞ = 0.6 are approximately 24.8 and 60.9 for the small-sized roughness and large-sized roughness, respectively [39].
Other changes due to surface roughness are apparent in the C p,∞ values in Figure 5(a) for M e,∞ = 0.9, which are measured in the flow outside of the wake. With the smooth airfoil, the C p,∞ values in the freestream are approximately 0.003-0.007, which correspond to (P oi − P oe,∞ ) values of 0.4-1.4 kPa. The resulting difference in stagnation pressure between the inlet and exit of the test section is due to a pair of oblique shock waves present at the trailing edge of the airfoil [8]. Under similar experimental conditions, Jackson et al. [54] report similar C p,∞ values which result from strong oblique shock waves at the airfoil trailing edge with angles relative to the axial direction of about 73 degrees. With roughness on the airfoil surface, the boundary layers which develop along the airfoil are much thicker. As a result, blockage of the flow in the airfoil passage is increased, with less flow expansion as it advects through the test section passage. Consequently, the Mach number along the roughened airfoil is different, the flow in the passage is entirely subsonic, and the maximum Mach number is about 0.9. No shock waves are then present at the trailing edge of the roughened airfoil, freestream C p,∞ values are zero, and P oi is approximately equal to P oe,∞ [8].
Integrated Aerodynamic Losses.
In the present investigations, IAL data are normalized using the airfoil passage effective pitch p and test section inlet stagnation pressure P oi in Figure 6.These data are given as they depend upon k s /cx for different Tu values, for the airfoils with the smooth surfaces (k s /cx = 0), small-sized roughness (k s /cx = 0.00069), and large-sized roughness (k s /cx = 0.00164).The total pressure at the inlet of the test section P oi is 195 kPa, and the corresponding exit freestream Mach number, measured one chord length downstream of the airfoil trailing edge, is 0.9.Note that when the turbulent intensity levels increase from 0.9 percent to 5.5 percent, dimensional IAL magnitudes decrease for all three cases with airfoils with different surface roughness.Different trends for various k s /cx values are found as the turbulent intensity levels increase from 5.5 percent to a much higher value, 16.2 percent.For the smooth airfoil, dimensional IAL values continuously decrease as the magnitude of the inlet turbulence intensity increases.For the airfoil with the small-sized roughness (k s /cx = 0.00069), the dimensional magnitudes of IAL become less sensitive to Tu and are nearly kept constant while Tu increases from 5.5 percent to 16.2 percent.For the airfoil with large-sized roughness (k s /cx = 0.00164), dimensional IAL magnitudes increase slightly as the inlet turbulence intensity level gets larger [8].The overall trends of the normalized data in Figure 6 illustrate the dominating influences of airfoil surface roughness on aerodynamic losses and weak dependence of these losses on inlet freestream turbulence intensity level.The data in Figure 6 also show larger normalized IAL variations with Tu at the largest k s /cx value, which provides additional evidence that thicker rough-surface boundary layers are more sensitive to changes of freestream turbulence level than thinner boundary layers which develop over smooth surfaces [8].
Figure 7 shows how normalized IAL data vary with exit freestream Mach number for different values of k s /cx for Tu = 0.9 percent. Here, IAL values increase as the exit freestream Mach number increases for each value of k s /cx. This is consistent with results from Arts [34] and Xu and Denton [9], whose experimental and analytical results show that total pressure losses increase approximately with the square of the Mach number. Figure 7 also shows that the largest IAL magnitude increases are present with the large-sized roughness (k s /cx = 0.00164). Overall, such data further illustrate the different dependence of aerodynamic losses on exit freestream Mach number which occurs as the level of airfoil surface roughness changes.
Symmetric Airfoils with Film Cooling
This investigation employs a test section especially designed to investigate the effects of suction surface film cooling on aerodynamic losses, because of their dominating importance in relation to overall downstream loss magnitudes [41, 46, 52, 53]. A symmetric airfoil is employed with the same transonic Mach number distribution on both sides. Mach numbers along the airfoil surface range from 0.4 to 1.24 and match values on the suction surfaces of airfoils from operating aeroengines. Thus, the distribution of Mach numbers is similar to the distribution for P oi = 195 kPa, which is presented in Figure 2. The magnitude of longitudinal turbulence intensity at the test section inlet is 0.9 percent. Integrated Aerodynamic Loss IAL magnitudes are determined from measurements of total pressure loss coefficients, which are made one chord length downstream of the airfoil [54].

Figure 7: For Tu = 0.9%, comparison of normalized Integrated Aerodynamic Loss as dependent upon the exit Reynolds number for the smooth airfoil (k s /cx = 0), the airfoil with small roughness (k s /cx = 0.00069), and the airfoil with large roughness (k s /cx = 0.00164), for the symmetric airfoil investigations [8].
Film cooling holes are located on one side of the airfoil near the passage throat where the freestream Mach number is nominally 1.07. Two different film cooling configurations are investigated (CDH, conical diffused holes, and RCH, round cylindrical holes), with density ratios from 0.82 to 1.23 over a range of blowing ratios. Results are given for both "ambient" and "cold" film cooling, which correspond to coolant-to-mainstream density ratios of 0.82-0.95 and 1.01-1.23, respectively [54].
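The coolant conditions quoted above are linked by the usual film cooling parameter definitions, which the short sketch below encodes: for a given blowing ratio m and density ratio DR, the velocity ratio is m/DR and the momentum flux ratio is m²/DR, so "cold", higher-density coolant has a lower momentum flux ratio at the same m. The numerical values in the example are illustrative, not measured operating points.

```python
from dataclasses import dataclass

@dataclass
class FilmCoolingPoint:
    density_ratio: float   # DR = rho_c / rho_inf
    blowing_ratio: float   # m  = (rho_c * u_c) / (rho_inf * u_inf)

    @property
    def velocity_ratio(self) -> float:
        # u_c / u_inf = m / DR
        return self.blowing_ratio / self.density_ratio

    @property
    def momentum_flux_ratio(self) -> float:
        # I = (rho_c * u_c^2) / (rho_inf * u_inf^2) = m^2 / DR
        return self.blowing_ratio ** 2 / self.density_ratio

# Illustrative "ambient" and "cold" coolant conditions (example values only).
for label, point in [("ambient", FilmCoolingPoint(0.90, 0.6)),
                     ("cold", FilmCoolingPoint(1.15, 0.6))]:
    print(f"{label}: VR = {point.velocity_ratio:.2f}, I = {point.momentum_flux_ratio:.2f}")
```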
The results thus provide insight into the mechanisms for total pressure losses due to suction surface film cooling, in order to isolate these phenomena without the complicating influences of flow turning or the collection of vortices ordinarily present in turbine passages due to this turning [54].
Test Section and Film Cooling Hole Geometries.
A schematic diagram of the nonturning airfoil cascade test section is shown in Figure 8(a). This is the same facility arrangement that is utilized by Zhang and Ligrani [8] and Zhang et al. [5-7]. A schematic diagram of the airfoil cross-section, including the film cooling hole configurations, is also provided. The airfoil chord length is 7.62 cm. The effective pitch is 5.08 cm. The trailing edge of the symmetric airfoil is a 1.14 mm radius round semicircle, designed to produce the wake flows of turbine airfoils employed in operating engines. The symmetric airfoil shape is employed to provide sufficient interior space for a plenum for film cooling injection, while maintaining appropriately scaled injection hole diameter and trailing edge thickness. The two hole configurations studied are CDH (conical diffused holes) and RCH (round cylindrical holes). In each case, one row of 21 holes is employed with a spanwise spacing of 4 hole metering diameters, at a location 3.73 cm or 0.49 cx from the airfoil leading edge. Each hole in the CDH geometry is diffused axisymmetrically about its axis. The entrance diameter and the length-to-entrance-diameter ratio of both types of holes are the same, 0.068 cm and 2.26, respectively.
The present test section is useful and advantageous over cascade arrangements with multiple airfoils and significant flow turning for the same reasons mentioned earlier, and because the results obtained with the arrangement are not configuration dependent. As such, the present experiment is designed to isolate the effects of suction surface film cooling on wake aerodynamic losses, while matching Reynolds numbers, Mach numbers, pressure gradients, passage flow rates, boundary layer development, and physical dimensions of airfoils in operating engines.

Pressure transducers and calibrated copper-constantan thermocouples are used to sense pressures and temperatures at different locations throughout the facility, including throughout the injection air supply system. Signals from the transducers are processed by Celesco Model CD10D carrier demodulators. All pressure transducer measurement circuits are calibrated using a Wallace and Tiernan FA145 bourdon tube pressure gage as a standard. A United Sensor PLC-8-KL pitot-static probe with an attached copper-constantan thermocouple and a four-hole conical-tipped pressure probe with an attached copper-constantan thermocouple are used to sense total pressure, static pressure, and recovery temperature at the inlet and exit of the test section, respectively, during each blow down. The conical probe is aligned using two yaw ports placed on either side of the probe. As a blow down is underway, the probe is located one chord length downstream of the airfoil. It is traversed using a two-axis traversing sled with two Superior Electric M092-FF-206 synchronous stepper motors, connected to a Superior Electric Model SS2000I programmable motion controller and a Superior Electric Model SS2000D6 driver. These are interfaced and controlled by a Hewlett-Packard 362 series computer. Additional details are provided by Jackson et al. [54].
Aerodynamic Losses due to Trailing-Edge Shock Waves.
Here, aerodynamic losses due to trailing-edge shock waves are considered for the uncooled airfoil, as well as for the film cooled airfoil.Measurements of C p distributions with y/c show that wake total pressure deficits extend to y/c = ±0.07 from the airfoil symmetry line at y/c = 0. Outside of this region, in the freestream, magnitudes of P oi∞ − P oe∞ are nonzero, ranging from 0.3 to 1.6 kPa (which correspond to C p from 0.002 to 0.007) because of the pair of trailing edge oblique shock waves shown in Figure 3. Thus, the influences of the trailing-edge shock waves on stagnation pressure losses are determined based on variations of data measured in the freestream flow.
Magnitudes of the freestream pressure coefficient C p∞ , deduced from these freestream pressures, are given as dependent upon blowing ratio in Figure 9. These data thus represent aerodynamic losses due to the trailing edge oblique shock waves only. The most important trend in this figure is the decrease of C p∞ with m for "ambient" and "cold" CDH injection, as well as for "cold" RCH injection. Values with film cooling are also lower than the m = 0 no-film cooling value, which is in agreement with oblique shock wave theory given by Anderson [68]. As C p∞ decreases with increasing m, shock wave angles (measured relative to the airfoil symmetry plane) become increasingly larger than 73° (the no-film cooling value evidenced by the numerical results in Figure 3). This happens because the film increases the effective thickness of the airfoil trailing edge, which results in less expansion through the airfoil passage. As a result, the flow deflection angle at the trailing edge decreases, along with Mach numbers downstream of film hole locations. The trailing edge shock waves are thus weakened by CDH film cooling and "cold" RCH film cooling as blowing ratio increases, since normalized total pressure drops across them are smaller. Eventually, the oblique shock waves disappear as shock wave angles approach 90°. Figure 9 shows that the largest C p∞ decreases with blowing ratio occur with CDH films rather than with RCH films. This is because larger concentrations of CDH film remain closer to the surface as the film advects to the airfoil trailing edge. Lower C p∞ values and weaker oblique shock waves also result as the density ratio increases, since lower momentum flux ratios enhance this effect.

Figure 9: Normalized total pressure losses in the freestream flow due to trailing edge oblique shock waves, as dependent upon blowing ratio, for round cylindrical holes (RCHs) and conical diffused holes (CDHs), for the symmetric airfoil with film cooling [54].
Integrated Aerodynamic Losses.
In determining magnitudes of the integrated aerodynamic losses, IAL, contributions from the oblique shock waves in the freestream are removed from total pressure profiles.This is accomplished by subtracting off the constant freestream value of (P oi∞ − P oe∞ )/P oi∞ throughout each C p profile.This gives corrected profiles of (P oe∞ − P oe )/P oi∞ , which, of course, equal zero in the freestream.Profiles of (P oe∞ − P oe ) are then integrated with respect to y to determine IAL values at each film injection flow condition.IAL magnitudes thus represent mixing losses only without the contributions of the shock waves [54].
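The correction procedure just described can be summarized in a short sketch: the constant freestream loss is subtracted from the measured profile, and the remaining deficit is integrated across the wake. The profile below is synthetic, and the magnitudes of the shock and wake contributions are arbitrary illustration values.

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal integration (kept explicit for clarity)."""
    f, x = np.asarray(f, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Synthetic wake traverse (illustrative numbers only).
y = np.linspace(-0.0254, 0.0254, 201)           # transverse coordinate [m]
P_oi = 195e3                                    # inlet stagnation pressure [Pa]
shock_loss = 1.0e3                              # uniform freestream loss from the
                                                # trailing-edge oblique shocks [Pa]
wake_deficit = 9e3 * np.exp(-(y / 0.004) ** 2)  # mixing loss concentrated in the wake
P_oe = P_oi - shock_loss - wake_deficit         # measured exit stagnation pressure

# Uncorrected IAL includes the shock contribution spread across the whole traverse.
ial_total = trapz(P_oi - P_oe, y)

# Corrected profile: subtract the constant freestream value so the integrand is
# (P_oe_inf - P_oe), which vanishes in the freestream; this isolates mixing losses.
P_oe_inf = P_oi - shock_loss
ial_mixing = trapz(P_oe_inf - P_oe, y)

print(f"IAL (total)       = {ial_total:.1f} Pa*m")
print(f"IAL (mixing only) = {ial_mixing:.1f} Pa*m")
```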
Such IAL magnitudes are presented in Figure 10 as dependent upon the Mach number ratio, M c /M ∞ . Included is the integrated aerodynamic loss magnitude for no-film cooling at M c /M ∞ = 0. Here, M ∞ is the local freestream Mach number at the exit locations of the film cooling holes. In all cases, values measured with film cooling are higher than the no-film cooling value. Figure 10 shows important variations with film cooling hole geometry, since integrated losses measured downstream of RCH are substantially higher than CDH magnitudes (when compared at the same Mach number ratio M c /M ∞ ). The two hole configurations additionally show different dependencies on the density ratio. IAL values measured downstream of CDH increase continuously with M c /M ∞ , with about the same magnitudes for "ambient" and "cold" film injection. In contrast, RCH integrated aerodynamic losses increase with density ratio at each M c /M ∞ [54].
The same data are shown in normalized form, after subtracting off the IAL value for no-film cooling, in Figure 11. Values are compared with the one-dimensional mixing loss correlation given by (17) and (18), where C 1 = 0.76 for CDH and C 1 = 2.48 for RCH. This equation is similar to one given by Denton [41]. In both cases, M ∞ and u ∞ are freestream values at the streamwise locations of the film cooling holes. Equation (17) shows good agreement with the CDH data in Figure 11, including its dependence on density ratio, for blowing ratios m greater than 0.3. The correlating equation also shows agreement with most of the RCH data, including its dependence on density ratio, since "ambient" values are higher than "cold" values at each m. The decrease of the RCH "ambient" data as m increases (for m > 0.6 in Figure 11) is consistent with data presented by Kubo et al. [52], as well as with measured C p profiles [54].
Cambered Vanes with No-Film Cooling
The present work on turbine vanes with no film cooling is unique because new data are provided to clarify the separate and combined influences of Mach number and freestream turbulence level on the aerodynamic performance of a cambered turbine vane. Considered are the effects of freestream turbulence on aerodynamic losses downstream of the smooth vane for three different Mach number distributions, one of which results in transonic flow (and matches flow conditions in the application). A fine mesh grid and crossbars are used to augment the magnitudes of longitudinal turbulence intensity at the inlet of the test section, which is shown in Figure 12. The flow rate of each of the test section bleed ducts is regulated using an adjustable ball valve. Following these, the test section walls have the same pressure side and suction side contours as the test vane. The exit area and exit flow direction from the cascade test section can be altered by changing the angles of the two exit tailboards, which are also shown in Figure 12. The true chord of the vane is 7.27 cm, the axial chord is 4.85 cm, the pitch is 6.35 cm, the span is 12.7 cm, and the flow turning angle is 62.75 degrees. The respective turbulence intensity values for an exit Mach number of 0.71 are then 1.1, 5.4, and 7.7 percent. Here, turbulence intensity is defined as the ratio of the root mean square of the longitudinal fluctuation velocity component divided by the local streamwise mean component of velocity [37].
Test Section Flow
Figure 13 shows the Mach number distributions along the turbine vane pressure side and along the vane suction side for each of the three operating conditions.Figure 13 shows that the Mach number distributions on pressure and suction sides for M ex = 0.71 are in excellent agreement with data from an operating gas turbine engine.This particular Mach number distribution is transonic on the vane suction side and subsonic on the pressure side.The Mach number distributions in Figure 13 for the other two operating conditions are completely subsonic [37].
Pressure and Temperature Measurements.
As tests are conducted, Validyne Model DP15-46 pressure transducers (with diaphragms rated at 13.8 kPa, 34.5 kPa, or 344.7 kPa, resp.) and calibrated copper-constantan thermocouples are used to sense pressures and temperatures at different locations throughout the facility. A United Sensor PLC-8-KL pitot-static probe with an attached, calibrated Watlow standard copper-constantan thermocouple and a four-hole conical-tipped pressure probe, also with a similar thermocouple, are used to sense total pressure, static pressure, and recovery temperature at the inlet and exit of the test section, respectively, during each blowdown. Mach numbers, sonic velocities, total temperatures, and static temperatures are determined from these data. The four-hole probe has a tip which is 1.27 mm in diameter, and a stem which is 3.18 mm in diameter. Each port has a diameter of 0.25 mm. The overall response time of the pressure measuring system is about 0.2 seconds. The conical probe is aligned using two yaw ports placed on either side of the probe. The probe is located downstream of the vane, and its position in the streamwise direction is adjustable. As a blowdown is underway, it is traversed across a full pitch using a two-axis traversing sled with two Superior Electric synchronous stepper motors, connected to a Superior Electric programmable motion controller and a Superior Electric driver. Commands for the operation of the motion controller are provided by LABVIEW 7.0 software and pass through a serial port after they originate in a Dell Precision 530 PC workstation. Each profile is measured through the wake from negative y/cx locations to positive y/cx locations, and then repeated as the probe is traversed in the opposite direction. The resulting data are subsequently averaged at each wake measurement location.
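A minimal sketch of the bidirectional-traverse averaging described above: the forward and reverse passes are interpolated onto a common pitchwise grid and averaged point by point. The probe coordinates and total pressure values are placeholders, and linear interpolation is an assumption about how the two passes are combined.

```python
import numpy as np

def average_traverses(y_fwd, p_fwd, y_rev, p_rev, n=201):
    """Average a forward and a reverse wake traverse on a common y grid."""
    y_lo = max(y_fwd.min(), y_rev.min())
    y_hi = min(y_fwd.max(), y_rev.max())
    y = np.linspace(y_lo, y_hi, n)
    p1 = np.interp(y, y_fwd, p_fwd)
    # np.interp requires increasing x, so sort the reverse pass first
    order = np.argsort(y_rev)
    p2 = np.interp(y, y_rev[order], p_rev[order])
    return y, 0.5 * (p1 + p2)

# Placeholder traverses (Pa) with a Gaussian wake deficit
y_f = np.linspace(-0.05, 0.05, 120)
y_r = y_f[::-1]
p_f = 101e3 - 3e3 * np.exp(-(y_f / 0.01) ** 2)
p_r = 101e3 - 3e3 * np.exp(-(y_r / 0.01) ** 2)
y, p_avg = average_traverses(y_f, p_f, y_r, p_r)
print(p_avg.min(), p_avg.max())
```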
Voltages from the carrier demodulators and thermocouples are read sequentially using Hewlett-Packard HP44222T and HP44222A relay multiplexer card assemblies, installed in a Hewlett-Packard HP3497A low-speed Data Acquisition/Control Unit. This system provides thermocouple compensation electronically such that voltages for type T thermocouples are given relative to 0 °C. The voltage outputs from this unit are acquired by the Dell Precision 530 PC workstation through its USB port, using LABVIEW 7.0 software and a GPIB-USB-B adaptor made by National Instruments.
Local Aerodynamic Performance.
Figure 14 presents local aerodynamic performance data (Ω total pressure loss coefficients) for three freestream turbulence intensity levels, measured in the wake at 0.25 axial chord lengths downstream of the vane.
Figure 15 shows the effects of surface roughness on normalized local total pressure losses C p , normalized local Mach numbers M e /M e,∞ , normalized local kinetic energy KE, and Ω total pressure loss coefficients for M ex = 0.71.Rough surface characteristics for the vane are given in Table 2. Here, surface roughness characteristics are determined in the same manner as for the symmetric airfoil.These data are measured in the wake at one axial chord length downstream of the vane.To provide an appropriate standard of comparison, each profile is measured over one complete exit pitch spacing (or one complete exit vane spacing).The different profiles provide information on local wake deficits of total pressure, Mach number, and kinetic energy.Data are given for a smooth vane, and vanes with uniform small-sized roughness (k s /cx = 0.00108), uniform large-sized roughness (k s /cx = 0.00258), and variable roughness.The inlet total pressure is kept constant at 106 kPa to maintain the same operating condition.
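A minimal sketch of how the plotted wake quantities can be formed from measured pressures, using the C p and KE definitions from the nomenclature and the standard isentropic relation for Mach number (γ = 1.4 assumed). The Ω coefficient, whose definition (equation (2)) is not reproduced in this excerpt, is deliberately omitted, and the input values in the usage line are invented.

```python
import numpy as np

GAMMA = 1.4

def mach_from_pressures(p_o, p_s):
    """Local Mach number from total and static pressure (isentropic relation)."""
    ratio = np.asarray(p_o, float) / np.asarray(p_s, float)
    return np.sqrt(2.0 / (GAMMA - 1.0) * (ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0))

def wake_quantities(p_oi, p_oe, p_se, p_oe_fs, p_se_fs):
    """Normalized wake quantities following the nomenclature of the paper."""
    c_p = (p_oi - p_oe) / p_oi                               # local total pressure loss coefficient
    ke = (p_oe - p_se) / (p_oe_fs - p_se_fs)                 # normalized local kinetic energy
    m_ratio = mach_from_pressures(p_oe, p_se) / mach_from_pressures(p_oe_fs, p_se_fs)
    return c_p, ke, m_ratio

# Invented single-point example: about 5 kPa of total pressure deficit in the wake
print(wake_quantities(p_oi=106e3, p_oe=99e3, p_se=74e3, p_oe_fs=104e3, p_se_fs=74e3))
```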
The wake profiles shown in Figures 14 and 15 are asymmetric [37]. Suction side wakes (at negative y/cx) are thicker than the pressure side wakes (at positive y/cx). The asymmetry in the wake is due to loading on the vane surface, and the past history of the flow. In addition, the growth and development of boundary layers on the suction and pressure sides are different. On the suction side, where local freestream velocities are higher, the boundary layers continue to become thicker up to the trailing edge. The thicker boundary layers then separate from the suction surface of the vane, which affects wake behavior immediately downstream of the trailing edge. In contrast, on the pressure side, boundary layers decrease in thickness in the back section of the vane contour as a result of locally higher flow acceleration [37]. Bammert and Sandstede [21] report data showing that the boundary layer on the suction side is considerably thicker than on the pressure side. According to them, wake profile losses are determined more by suction side events, by a factor of about 2.5 to 3.5, compared to events originating near the pressure side.
In their low-speed cascade experiments, Ames and Plesniak [3] also observe wake broadening with increasing mainstream turbulence intensity and associate this with smaller peak velocity deficits. The Ames and Plesniak [3] data are taken approximately 0.3 axial chord lengths downstream of a vane with M ex = 0.27, compared to 0.25 axial chord lengths downstream of a vane with M ex = 0.35 for the Zhang et al. [37,38] results. The vane used by Ames and Plesniak is two times the size of the vane from the present study, and has an exit angle of 72.4°, compared to an exit angle of 62.75° for the Zhang et al. [37,38] results. Figure 14 shows that the two sets of data have similar trends and similar qualitative variations with Tu. In particular, both sets of data show decreasing Ω magnitudes (see (2)) at y/c x = 0, and broader wakes with higher Ω magnitudes at negative y/cx, as Tu becomes larger. The small quantitative differences are due to slightly different vane configurations, flow conditions, and measurement locations relative to the vane trailing edges. Overall, the agreement between the two data sets not only provides verification of procedures and results from the present study, but also confirms the effects of inlet turbulence intensity level on vane aerodynamic losses in near-wake profiles for similar experimental conditions. Note that freestream losses are present outside of the suction side wakes (at negative y/cx) in both investigations when inlet turbulence intensity levels are augmented [3,37,38].
Figure 15 shows that total pressure losses, Mach number deficits, deficits of kinetic energy, and Ω total pressure loss coefficients all increase at each y/cx location within the wake as k s /cx increases, provided that the roughness on the surfaces is uniform. The boundary layers along the vane surfaces are thickened as k s /cx increases, which is accompanied by higher magnitudes of Reynolds stress tensor components, higher magnitudes of local turbulent transport, and higher surface skin friction coefficients. The broader wakes with increased roughness size in Figure 15 are then the result of (i) different boundary layer development with various roughness, such as earlier laminar-turbulent transition, (ii) augmentations of mixing and turbulent transport in the boundary layers which develop along the roughened vanes, (iii) thicker boundary layers at the trailing edges of the roughened vanes, and (iv) increased turbulent diffusion in the transverse direction within the wake as it advects downstream. Within Figure 15, the effects of surface roughness are much less apparent for positive y/cx values, or downstream of the pressure sides of the vanes. This is evidenced by profiles for all three k s /cx values (0, 0.00108, and 0.00258), as well as for the variable roughness vane, which are similar for y/cx > 0.05. This is partially due to the different growth of boundary layers on the pressure and suction sides for different amounts of surface roughness. Note that the Ω profiles in Figure 15(d) are qualitatively similar to the C p profiles in Figure 15(a). This is because the static pressure through the wake P se varies by only a small amount relative to the freestream value P se∞ .
Figure 15 also includes measurements made downstream of the vane with variable roughness [38]. The arrangement of the variable rough surface is based on observations of roughened turbine vanes from industrial applications. From these observations, the suction side is more or less uniform in roughness and remains at or very close to the "as-cast" condition, even after very long operating times. Pressure side roughness, on the other hand, is more variable. Local roughness magnitudes are often the same as the suction side roughness at the leading edge. Local roughness sizes then vary linearly to the full roughness size, which is typically reached at about 40% of the distance along the pressure surface. Thus, differences in surface roughness characteristics between the suction and the pressure sides can be very significant due to the different flow and operating conditions encountered. There is also considerable scatter, both qualitative and quantitative, in the roughness patterns present on vanes and blades from operating engines. The configuration of variable roughness investigated is one typical configuration and is described by Zhang et al. [38]. For the present vane, magnitudes of k s for different percentages of distance along the pressure side are as follows: first 10 percent: k s = 0 μm, second 10 percent: k s = 0-52.6 μm, third 10 percent: k s = 52.6 μm, fourth 10 percent: k s = 52.6-125.2 μm, and last 60 percent: k s = 125.2 μm.
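The variable pressure-side roughness distribution just described can be written as a simple piecewise function of the fractional distance along the pressure surface; the linear ramps follow the statement in the text that local roughness sizes vary linearly between the listed levels.

```python
import numpy as np

def pressure_side_ks(s_frac, ks_small=52.6, ks_full=125.2):
    """Equivalent sandgrain roughness [um] along the pressure side:
    smooth over the first 10 % of surface distance, a linear ramp to 52.6 um
    between 10 % and 20 %, constant 52.6 um to 30 %, a second ramp to
    125.2 um between 30 % and 40 %, and constant thereafter."""
    s = np.asarray(s_frac, dtype=float)
    ks = np.empty_like(s)
    ks[s < 0.10] = 0.0
    ramp1 = (s >= 0.10) & (s < 0.20)
    ks[ramp1] = ks_small * (s[ramp1] - 0.10) / 0.10
    ks[(s >= 0.20) & (s < 0.30)] = ks_small
    ramp2 = (s >= 0.30) & (s < 0.40)
    ks[ramp2] = ks_small + (ks_full - ks_small) * (s[ramp2] - 0.30) / 0.10
    ks[s >= 0.40] = ks_full
    return ks

print(pressure_side_ks([0.05, 0.15, 0.25, 0.35, 0.70]))
```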
In most cases, variable surface roughness profile points in Figure 15 lie between the profiles measured with k s /cx = 0 and k s /cx = 0.00108 for y/cx > −0.1. This is partially due to different rates of boundary layer development as different levels of roughness are encountered along the vane pressure surface. This gives different magnitudes of boundary layer mixing and losses, and a different wake initial condition near the vane trailing edge, compared to vanes with uniformly roughened surfaces. As a result, suction side wake profiles at y/cx < 0 downstream of the vane with a variable rough surface are widened. Note that suction side wake profiles in Figure 15 are also widened somewhat for the vane with variable roughness, even though this vane has a smooth suction side. Overall, the wakes are pushed toward smaller y/cx values as they are advected downstream (i.e., towards the vane suction side), regardless of the level, uniformity, or variability of the roughness along the surfaces of the vanes [38]. The IAL magnitudes discussed next include contributions from the vane boundary layers, the trailing edge separation and recirculation zone, and wake mixing losses that are initially present just downstream of the vanes.
IAL data are normalized using the test section passage pitch p and test section inlet stagnation pressure P oi in Figure 16, which shows how IAL data vary with inlet turbulence intensity level for three exit Mach numbers, as measured 0.25 of one axial chord length, and one axial chord length, downstream of the turbine vane. For each exit Mach number and each measurement location, the normalized IAL data in this figure increase slightly as the inlet turbulence intensity level increases. The IAL differences at each M ex for the 1cx and 0.25cx downstream measurement locations are relatively small compared to overall IAL loss magnitudes. This is a result of how momentum and turbulence kinetic energy are budgeted and conserved through the wake. Near the vane trailing edge, most turbulence in the wake is initially produced in the separated and recirculating flow zones, which give the initial condition for wake profile development, as well as initial values of turbulence at the beginning of the wake. As the wake continues to develop downstream, turbulence decays with streamwise distance because turbulence production is less than diffusion and advection. As a result, the shape of the momentum deficit changes mostly due to the transverse diffusion of momentum [37]. Overall magnitudes of total pressure deficits and momentum deficits then do not change greatly as the wake is advected in the streamwise direction, because not much mean streamwise momentum is converted into turbulence by local shear and turbulence production [37,38]. Such trends in the present data are consistent with results presented by Mee et al. [10], who suggest that most entropy increases take place close to the trailing edge of the airfoils. Additional mixing losses are then only a small fraction of overall loss magnitudes.
IAL dimensional magnitudes, determined from profiles that are measured 0.25 of one axial chord length, and one axial chord length downstream of the turbine vane, are presented in Figure 17 as dependent upon the normalized equivalent sandgrain roughness size for three exit Mach numbers.The overall trends of the data in this figure illustrate the dominating influences, first, of the Mach number distribution along the airfoil (as designated by exit Mach number), and second, of the surface roughness (as characterized by normalized equivalent sandgrain roughness size).For each value of k s /cx, dramatic and important IAL magnitude increases are present as higher Mach numbers are present along the airfoil.IAL magnitudes also increase almost linearly as k s /cx increases for each profile measurement location and for each value of exit Mach number [38].The IAL differences obtained at each M ex for 1cx and 0.25cx are relatively small compared to overall IAL loss magnitudes, which is consistent with the results in Figure 16.Dimensional IAL magnitudes presented in Figure 18 are determined from profiles that are also measured at two different locations downstream of the test vane trailing edge.The overall trends of the data in this figure further illustrate the dominating influences of the surface roughness on aerodynamic losses, and the weaker dependence of these losses on inlet freestream turbulence intensity level and wake measurement location.
IAL data are normalized using the test section passage pitch p and test section inlet stagnation pressure P oi in Figure 19, which shows how IAL data vary with exit Mach number for different values of k s /cx. IAL values increase as the exit Mach number increases for each value of k s /cx. This is consistent with results from Zhang and Ligrani [5,8], whose data for a symmetric airfoil are included in Figure 19 and show similar qualitative trends. When compared at the same exit Mach number, the present normalized IAL data for cambered test vanes are then much higher than data obtained downstream of straight symmetric airfoils without flow turning. This is due to different flow development over the symmetric and cambered airfoils from different pressure gradients and different amounts of streamline curvature which are imposed on airfoil boundary layers. Such imposed pressure gradients are a result of airfoil shape, the imposed Mach number distribution, and streamline curvature and flow turning in the flow outside of the boundary layers. Overall, these results show that greater losses are present with flow turning and cambered airfoils [38].
Figure 18: Comparison of dimensional integrated aerodynamic loss as dependent on normalized equivalent sand grain roughness size for different inlet turbulence intensity levels, at two different locations downstream of the test vane trailing edge, for a smooth, cambered vane with M ex = 0.71 [27].
One data point is included in Figure 19 for the vane with variable roughness.As mentioned, different wake behavior is tied to different rates of boundary layer development, different magnitudes of boundary layer mixing and losses, and a different wake initial condition near the vane trailing edge, compared to vanes with uniformly roughened surfaces.As a result, the corresponding normalized IAL value in Figure 19 is between values for the k s /cx = 0.00108 and k s /cx = 0.00258 uniformly roughened vanes for an exit Mach number of 0.71 [38].
Figure 19: Comparison of normalized integrated aerodynamic loss magnitudes as dependent upon exit Mach number, and measured one chord length downstream of the cambered and symmetric airfoils, for Tu = 1.1-1.6 percent for a cambered vane [38].
6.6. Area-Averaged, Mass-Averaged Loss Coefficients. Different loss coefficient definitions are sometimes employed by different research groups. Of these, Boyle and Senyitko [64] and Boyle et al. [30] employ an area-averaged loss coefficient, Y A , in their analysis, as mentioned earlier. In their investigation, Boyle and Senyitko [64] employ vanes with 5.18 cm axial chord length and 75° flow turning angle. Their data are based on measurements made 0.35 of an axial chord length downstream of their vane trailing edge. Boyle et al. [30] employ vanes with 4.445 cm axial chord length and approximately 80° flow turning angle in their numerical prediction. Figure 20 shows comparisons of their data with results from the present study over a range of exit Mach numbers, which are measured 0.25cx downstream of the vane in Figure 12. These data indicate that higher Y A losses are generally observed as higher inlet turbulence intensity levels are present. Excellent agreement is shown in Figure 20 between results from Boyle and Senyitko [64] and the present study at low exit Mach number. For the present study, M ex = 0.35 and Re = 0.5 × 10 6 , whereas M ex = 0.3 and approximately Re = 0.52 × 10 6 for Boyle and Senyitko [64]. Predicted values at high exit Mach numbers are lower than the present experimental results, but differences are relatively small, and the trends are generally consistent. Here, M ex = 0.7 and Re = 1 × 10 6 for Boyle et al. [30], compared to M ex = 0.71 and Re = 0.95 × 10 6 for the present study. Therefore, to some extent, the present experimental data validate the numerical code and procedures employed by Boyle et al. [30].
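The exact Y A definition is the one referenced earlier in the paper and is not repeated in this excerpt; the sketch below therefore assumes a commonly used form, the pitchwise average of the local total pressure deficit normalized by the exit dynamic head, purely to illustrate the averaging step. The pressure values in the usage lines are invented.

```python
import numpy as np

def area_averaged_loss(y, p_oi, p_oe, p_se_fs):
    """Assumed area-averaged loss coefficient over one vane pitch: the pitchwise
    average of (P_oi - P_oe)/(P_oi - P_se,freestream).  This particular form is
    an assumption for illustration; the paper's own definition may differ."""
    local = (p_oi - p_oe) / (p_oi - p_se_fs)
    dy = np.diff(y)
    return 0.5 * np.sum(dy * (local[1:] + local[:-1])) / (y[-1] - y[0])

# Invented profile: one pitch (6.35 cm) with a Gaussian wake total pressure deficit
y = np.linspace(0.0, 0.0635, 200)
p_oe = 106e3 - 5e3 * np.exp(-((y - 0.03) / 0.005) ** 2)
print(f"Y_A is approximately {area_averaged_loss(y, 106e3, p_oe, 74e3):.4f}")
```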
Figure 21 shows comparisons of the data of Boyle et al. [30] and Boyle and Senyitko [64] with results from Zhang et al. [38] over a range of exit Mach numbers and different k s /cx values. These data indicate that higher Y A losses are generally observed as either exit Mach number or surface roughness increases. Of particular interest is the dramatic increase in Y A magnitudes that occurs as the exit Mach number increases from 0.5 to 0.7 for the airfoils with k s /cx magnitudes of 0 and 0.00108. Note that Y A magnitudes for smooth vanes from Boyle and Senyitko [64] and Boyle et al. [30] are different from each other because of different vane configurations and different operating conditions in the two investigations. Such differences also partially account for some of the differences between these data and results from the present investigation. Data from Kind et al. [22] are presented and compared with some data from Zhang et al. [38] in Figure 22. The Kind et al. data are measured 0.4 of an axial chord length downstream of their airfoil. In Figure 22, Y P loss coefficient data are given as they vary with normalized mean roughness height k/c, since the sandgrain roughness height, k s , is not available from Kind et al. [22]. Y P loss coefficient results from Zhang et al. [38], as well as from Kind et al. [22], increase as the normalized mean roughness height becomes larger. Similar magnitudes of Y P for the two studies are evident for k/c = 0 (i.e., smooth vane surfaces). However, differences are evident between the two data sets when k/c > 0, which are most likely due to different surface roughness characteristics. For example, the Kind et al. [22] roughness with k/c of 0.002 may be comprised of roughness elements with different density compared to ones employed in the present study. Such roughness could thus give different Y P loss coefficient magnitudes even though mean roughness heights may be comparable.
Cambered Vanes with Film Cooling
This investigation focuses on the effects of film cooling hole shape, orientation, and blowing ratio, as well as the number of rows of holes, on the resulting aerodynamic losses. The film cooling holes are located on the suction side gill region of a simulated cambered turbine blade. Four different hole configurations are tested at two different blowing ratios, utilizing either a single row or two rows of film cooling holes. Carbon dioxide is used as the injectant to achieve a density ratio of 1.9-2.0, similar to values present in operating gas turbine engines. A mesh grid is used to augment the magnitudes of longitudinal turbulence intensity at the inlet of the test section. Appropriate cascade flow conditions are maintained by a pair of adjustable bleed ducts, and two exit tailboards, which are also shown in Figure 12. By adjusting these items, appropriate Mach number distributions along the test vane are obtained. Film cooling configurations and conditions and geometric parameters of the test vane are given by Chappell et al. [55].
Film Cooling Hole Configurations.
Four different hole configurations are investigated: round axial (ra), shaped axial (sa), round radial (rr), and round compound (rc) [55].For each configuration, either the first upstream row of holes only is employed, or two rows of holes are employed.The holes located in each row are staggered with respect to each other.As shown in Figure 23 the hole exits for all four arrangements are located at 15% and 25% of the axial chord.
In addition, the holes in each row have the same spanwise hole spacing of 6 hole diameters (6d). The row of holes marked a-a in Figure 23 contains 13 film cooling holes, and the row of holes marked b-b in Figure 23 contains 14 film cooling holes. Note that the rr and rc configurations are unique because of the large compound angles which are employed [55]. The inclination and orientation angles of the different hole configurations are given in Table 4. The shaped axial (sa) holes are both laterally diffused and forward diffused. Additional configuration and geometry details for all four hole arrangements are provided by Chappell et al. [55].
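For bookkeeping, the row geometry given above can be captured in a small data structure. The class and field names below are ad hoc, the angles are deliberately left unset because Table 4 is not reproduced here, and the assignment of the 13-hole and 14-hole counts to the 15 % and 25 % rows is an assumption made only for this example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FilmCoolingRow:
    x_over_cx: float                          # axial location of the hole exits
    n_holes: int                              # number of holes in the row
    pitch_over_d: float = 6.0                 # spanwise spacing in hole diameters
    inclination_deg: Optional[float] = None   # from Table 4 (not reproduced here)
    compound_deg: Optional[float] = None      # from Table 4 (not reproduced here)

def configuration(label: str) -> List[FilmCoolingRow]:
    """Two staggered rows shared by all four configurations (ra, sa, rr, rc);
    only the geometry common to all of them is filled in."""
    return [FilmCoolingRow(0.15, 13), FilmCoolingRow(0.25, 14)]

print(configuration("rc"))
```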
Secondary Film Cooling Injection System.
Carbon dioxide is used as the injectant to achieve density ratios similar to those experienced in operating gas turbine engines, ρ c /ρ ∞ = 1.9-2.0. With the present secondary injection system arrangement, regulated carbon dioxide leaves its tank and first goes through a sonic orifice, which is used in conjunction with pressure and temperature measurements to determine the injectant mass flow rate. After leaving the sonic orifice, the injectant enters a TV-050 heat exchanger, which uses liquid nitrogen as a coolant. As the injectant is cooled, it is passed through by-pass valves until the mainstream air is started, to prevent any cooling of the vane in advance. Just prior to the mainstream blowdown, valves are arranged so that the injectant flows through the vane and out of the film cooling holes. To approximate a plenum condition, the injectant enters the injectant passage from both spanwise sides of the vane [55].
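The sonic orifice metering mentioned above relies on the standard choked-flow relation; the sketch below evaluates it for CO2. The discharge coefficient, orifice diameter, and upstream conditions are assumed values chosen only for illustration, not quantities taken from the facility.

```python
import math

def choked_orifice_mdot(p0, t0, d_orifice, gamma=1.289, r_gas=188.9, cd=0.98):
    """Mass flow [kg/s] of CO2 through a choked (sonic) orifice from upstream
    stagnation pressure p0 [Pa] and temperature t0 [K].  gamma and r_gas are
    nominal CO2 values; cd and d_orifice are assumed for the example."""
    area = math.pi * (0.5 * d_orifice) ** 2
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0 * math.sqrt(gamma / (r_gas * t0)) * term

# Assumed upstream state: 8 bar, 295 K, 1.5 mm orifice
print(f"mdot = {choked_orifice_mdot(8e5, 295.0, 1.5e-3):.4f} kg/s")
```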
Test Section Flow Characteristics and Mach Number Distributions.
During each test, the total pressure at the inlet of the test section (one axial chord length upstream of the vane leading edge) is maintained constant at 94 kPa. This corresponds to an exit freestream Mach number of 0.35 and an exit Reynolds number based on axial chord of 0.5 × 10 6 , as measured 0.25 axial chord lengths downstream of the airfoil trailing edge. The magnitudes of the inlet mainstream longitudinal turbulence intensity and length scale for the present experimental conditions are 5.7 percent and 2.2 cm, respectively. Here, turbulence intensity is defined as the ratio of the root-mean square of the longitudinal fluctuation velocity component divided by the local streamwise mean component of velocity. Figure 13 shows the Mach number distributions along the turbine vane pressure side and along the vane suction side for the present operating conditions with M ex = 0.35. The data shown in this figure are based upon measurements of total pressure at the test section inlet, and vane mid-span static pressures. As shown in Figure 13, the Mach number distribution employed in this study is subsonic on the vane suction and pressure sides, with an adverse pressure gradient on the suction side of the vane when Bx/cx > 0.60. As such, the present Mach number distribution is in excellent agreement with data for gas turbine vanes from industry [55].
Integrated Aerodynamic Losses.
Figures 24 and 25 present normalized integrated aerodynamic loss data, after subtracting off the IAL value for no-film cooling. These data are given as they depend upon blowing ratio and show important variations with film cooling hole configuration and arrangement. In Figure 25 (which presents data for both rows of holes), differences between the RR, RC, and SA sets of IAL data are quite small, with IAL values which are significantly higher than ones produced by the RA configuration (at particular blowing ratio values). In all cases, IAL values increase continuously with blowing ratio. IAL values in Figures 24 and 25 are compared with the one-dimensional mixing loss equation given by (17) [54,55]. Here, C 1 values are given in Table 3 for the different film cooling arrangements, and M ∞ , ṁ∞ , and u ∞ are freestream values at the streamwise locations of the film cooling holes. Equation (17) (with C 1 values from Table 3) shows good agreement with much of the data in Figures 24 and 25, including its dependence upon blowing ratio m and number of rows of film cooling holes. In particular, the correlation equation matches most of the both-rows data in Figure 25. In Figure 24, for the first row data, the RA, RC, and RR data sets are well aligned with correlation (17) [55].
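For reference, the film cooling parameters used throughout this discussion relate the coolant and freestream states as in the short sketch below; the numerical values in the usage line are illustrative only and are not taken from the experiments.

```python
def film_cooling_ratios(rho_c, u_c, rho_inf, u_inf, m_c, m_inf):
    """Blowing ratio, density ratio, and Mach number ratio used to
    characterize the film cooling flows."""
    blowing_ratio = (rho_c * u_c) / (rho_inf * u_inf)
    density_ratio = rho_c / rho_inf
    mach_ratio = m_c / m_inf
    return blowing_ratio, density_ratio, mach_ratio

# Illustrative values only: dense CO2 injectant into the vane suction side flow
print(film_cooling_ratios(rho_c=2.3, u_c=60.0, rho_inf=1.2, u_inf=120.0,
                          m_c=0.2, m_inf=0.5))
```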
The thermal performance characteristics of the RR, RA, RC, and SA holes are described by Chappell et al. [69]. Bunker [70,71] provides information on the thermal performance characteristics of numerous other film cooling hole configurations.
Second Law Loss Analyses
Overall, mass-averaged magnitudes of exergy destruction are compared in Figure 26 for the cambered vane and the symmetric airfoil, as they vary with surface roughness.These data are given for different measurement locations downstream of different airfoils with different exit Mach numbers, for freestream turbulence intensity values from 0.9 to 1.5 percent.Dramatic increases of x dest,o are apparent as either k s /c or as M ex increases.Overall, the symmetric airfoil x dest,o results appear to be lower than the cambered airfoil results, when compared at the same k s /c, M ex , and x/cx.Similar conclusions are evidenced by the results which are presented in Figure 27, where overall exergy destruction values are given as they vary with freestream turbulence intensity.However, within this figure, increases of x dest,o with Tu are relatively small.Figure 28 shows how exergy destruction changes with different film cooling conditions and configurations.Here, magnitudes for the symmetric airfoil are significantly higher than values for the cambered airfoil, which is consistent with Integrated Aerodynamic Loss data.
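The paper's mass-averaging procedure for exergy destruction is not reproduced in this excerpt. As a hedged illustration, the sketch below evaluates the specific exergy destruction for a perfect gas from the inlet and exit stagnation states using the standard Gouy-Stodola relation; the air properties, the 3 percent total pressure loss, and the reference temperature are all assumed values.

```python
import math

R_AIR = 287.0    # J/(kg K), assumed air value
CP_AIR = 1005.0  # J/(kg K), assumed air value

def exergy_destruction(p_oi, p_oe, t_oi, t_oe, t_ref):
    """Specific exergy destruction [J/kg] from inlet/exit stagnation states,
    using x_dest = T_ref * s_gen for a perfect gas.  This is a standard
    second-law relation, not necessarily the exact averaging used in the paper."""
    s_gen = CP_AIR * math.log(t_oe / t_oi) - R_AIR * math.log(p_oe / p_oi)
    return t_ref * s_gen

# Adiabatic wake mixing: stagnation temperature unchanged, 3 percent total
# pressure loss, referenced to the inlet stagnation temperature.
print(f"x_dest = {exergy_destruction(106e3, 0.97 * 106e3, 300.0, 300.0, 300.0):.1f} J/kg")
```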
Summary and Conclusions
Experimental and numerical results are presented from a series of investigations which have taken place over the past 32 years. Considered are effects of mainstream turbulence intensity, surface roughness, exit Mach number, and airfoil
camber as they influence local and integrated parameters which quantify aerodynamic losses for symmetric airfoils and cambered vanes, both with and without film cooling.The turbine airfoils are investigated within compressible, high-speed flows with either subsonic or transonic Mach number distributions.
Local and global aerodynamic losses are also quantified using second law analyses. For each experimental arrangement, an overall, mass-averaged magnitude of exergy destruction is determined. Comparisons of these values for the different physical phenomena which are investigated provide quantitative assessments of the relative irreversibilities which result from the different phenomena which are considered. Relative to a smooth, symmetric airfoil with no film cooling at low Mach number and low freestream turbulence intensity, the largest overall increases in exergy destruction occur with increasing Mach number and increasing surface roughness. Important variations are also observed as airfoil camber changes. Progressively smaller mass-averaged exergy destruction increases are then observed with changes of freestream turbulence intensity, and different film cooling conditions.
The effects of surface roughness on the aerodynamic performance of symmetric turbine airfoils (each with a Mach number variation from 0.4 to 0.7) are investigated with inlet turbulence intensity levels of 0.9 percent, 5.5 percent, and 16.2 percent, and with ratios of equivalent sandgrain roughness size to airfoil chord length k s /c of 0, 0.00069, and 0.00164. Results show that changing the airfoil surface roughness condition has a substantial effect on the normalized and dimensional magnitudes of Integrated Aerodynamic Losses produced by the airfoils. Relative to smooth airfoils, this is due to (i) augmentations of mixing and turbulent transport in the boundary layers which develop along the roughened airfoils, (ii) thicker boundary layers at the trailing edges of roughened airfoils, (iii) separation of flow streamlines at the airfoil trailing edges, and (iv) increased turbulent diffusion in the transverse direction within the wakes of roughened airfoils as they advect downstream. In contrast, Integrated Aerodynamic Losses show a much weaker dependence upon inlet freestream turbulence intensity level. Also considered are the combined effects of surface roughness, exit freestream Mach number, and turbulence intensity on the aerodynamic performance of symmetric turbine airfoils. According to these results, the dependence of aerodynamic losses on mainstream turbulence intensity and freestream Mach number is vastly different as the level of airfoil surface roughness changes. For example, magnitudes of Integrated Aerodynamic Losses change by a much larger amount as either the freestream Mach number or the turbulence intensity is altered when the airfoil is roughened. This is partially a result of the thicker boundary layers which develop over the roughened surfaces, giving greater blockage and less expansion of the flow through the airfoil passage. With film cooling employed on the symmetric airfoils, two different film cooling hole configurations are located just downstream of the passage throat, where the freestream Mach number is nominally 1.07: (i) simple angle, round cylindrical holes (RCH), and (ii) simple angle, conical diffused holes (CDH). Local total pressure loss coefficient data show that the aerodynamic losses due to mixing are significantly greater than those due to oblique trailing edge shock waves. Contributions to total pressure coefficient profiles from the two types of losses are separated by assuming that shock losses are the same magnitude at all y/cx throughout the wake and adjacent freestream flows. In some cases, the cooling films reduce the strength of the shock waves (and the associated aerodynamic losses) by greater amounts as the blowing ratio increases, particularly with higher density ratio films from the CDH configuration. Such decreases are evidenced by reduced freestream pressure coefficients which are accompanied by increased shock wave angles and smaller flow deflection angles.
Integrated aerodynamic mixing losses, measured one chord length downstream of the airfoil (and with the effects of the trailing edge oblique shock waves removed), are higher with film cooling than without. For each hole configuration investigated, integrated losses generally increase with injectant Mach number ratio as the film to freestream density ratio is approximately constant. Integrated aerodynamic losses are significantly lower with conical diffused holes than with cylindrical holes (by a factor of 3 when compared at the same blowing ratio), which illustrates the strong dependence of aerodynamic mixing losses on film hole geometry. Such behavior is due to different amounts of mixing just downstream of the airfoils and different turbulent diffusion of streamwise momentum in a direction normal to the airfoil symmetry plane, both of which are evidenced by different aerodynamic loss profile magnitudes and levels of symmetry. Downstream of cambered vanes with no film cooling, wake profiles of total pressure loss are asymmetric. This is due to different loading, different boundary layer growth, and different susceptibility to flow separation on the different vane surfaces, which also causes the suction side wakes (at negative y/cx) to be thicker than the pressure side wakes (at positive y/cx). Overall, the wakes are pushed toward smaller y/cx values as they are advected downstream (i.e., towards the vane suction side). The wake downstream of the vane becomes wider at higher exit Mach numbers as a result of higher advection speeds, as well as increased diffusion within the wake. Wake profiles also become broader with increased turbulence intensity levels, especially on the suction side. This is partially due to the effects of suction side boundary layers, which are forced into transition farther upstream by the higher magnitudes of freestream turbulence. Increased diffusion from the wake to the surrounding freestream flow also plays a role in producing such trends. In general, wake profiles are more sensitive to changes and augmentations of turbulence intensity at lower subsonic flow conditions than when transonic flow is present.
IAL magnitudes increase as higher Mach numbers are present along the cambered vanes.When compared at the same exit Mach number, the present normalized IAL data for cambered vanes are much higher than data obtained downstream of straight symmetric airfoils without flow turning.Overall, this means that greater losses are present with flow turning and cambered airfoils than with symmetric airfoils.Magnitudes of IAL also slightly increase as the inlet turbulence intensity level increases.The IAL differences obtained for each vane Mach number distribution for the 1cx and 0.25cx downstream measurement locations are relatively small compared to overall IAL loss magnitudes.This is because not much mean streamwise momentum is converted into turbulence by local shear and turbulence production as the wake is advected in the streamwise direction.
Magnitudes of area-averaged loss coefficients Y A also generally increase as exit Mach number increases. Excellent agreement is present between results from Boyle and Senyitko [64] and the present study for a low subsonic Mach number distribution along the vane, which gives an exit Mach number M ex of 0.35. Other experimental data from the present investigation show similar qualitative trends and validate the numerical predictions by Boyle et al. [30] for a similar vane configuration when the Mach number distribution along the vane is transonic and M ex = 0.71.
When suction-side gill region film cooling is added to the cambered vanes, effects of film cooling hole orientation, shape, and number of rows are considered for four different hole configurations: round axial (RA), shaped axial (SA), round radial (RR), and round compound (RC).Results show that magnitudes of Integrated Aerodynamic Loss (IAL) increase anywhere from 4 to 45 percent compared to a smooth blade with no film injection [55].The performance of each hole type depends upon the airfoil configuration, film cooling configuration, mainstream flow Mach number, number of rows of holes, density ratio, and blowing ratio, but the general trend is an increase in IAL as either the blowing ratio or the number of rows of holes increase.When both rows of holes IAL data are considered, differences between the RR, RC, and SA sets are quite small, with IAL values which are significantly higher than ones produced by the RA configuration (at a particular blowing ratio).
These integrated aerodynamic loss data for the cambered vane with film cooling are also compared with data from Jackson et al. [54], which are measured downstream of a symmetric airfoil with a single row of conical diffused holes located on one side.In general, the Jackson data are significantly higher than the cambered vane results, when compared at a particular film cooling blowing ratio.This is mostly a consequence of the transonic flow conditions and the symmetric airfoil arrangement with suction surface contours and high Mach number distributions on both sides for the Jackson et al. [54] airfoil.
Figure 3: Numerically predicted Mach number distribution through the test section for the symmetric airfoil investigations [54].
Figure 6: Comparison of normalized Integrated Aerodynamic Loss as dependent upon the normalized equivalent sandgrain roughness size for different inlet turbulence intensity levels for M e,∞ = 0.9 for the symmetric airfoil investigations [8].
Figure 8: Schematic diagrams of the (a) test section, and (b) test airfoil, for the investigations utilizing symmetric airfoils with film cooling [54].
System. The air used for film cooling first enters a Norman Filters 5 micron ABS particulate filter, a Fairchild no. 10282 pressure regulator, a sonic orifice, a Wilkerson M16-02 F00B E95 coalescing filter, a Wilkerson X03-02-000A J96 desiccant dryer, a Dwyer Model RMC series rotameter, and a pressure relief valve. For every experimental test condition, the injectant mass flow rate measured with the sonic orifice is in excellent agreement with the flow rate measured with the rotameter. The dryers and filters are required to avoid frost build-up, since a Xchanger Inc. TV-050 heat exchanger uses liquid nitrogen to cool the injectant air to temperatures as low as −120 °C. 5.3. Pressure and Temperature Measurements. As tests are conducted, Validyne Model DP15-46 pressure transducers (with diaphragms rated at either 345 kPa or 1380 kPa),
Figure 14: Comparison of vane wake total pressure loss coefficient profiles with similar data from Ames and Plesniak [3]. The present data are measured 0.25 axial chord lengths downstream of a cambered vane with M ex = 0.35 [37].
Figure 15: Profiles measured at one axial chord length downstream of the cambered vane for M ex = 0.71 [38]. (a) Normalized local total pressure losses. (b) Normalized local Mach numbers. (c) Normalized local kinetic energy. (d) Ω total pressure loss coefficient.
Figure 17: Comparison of dimensional integrated aerodynamic loss as dependent upon the normalized equivalent sandgrain roughness size for Tu = 1.1-1.6 percent for a cambered vane [38].
Figure 24: Normalized Integrated Aerodynamic Loss values for different hole configurations for film injection from the first row of holes only, for cambered vanes with film cooling [55]. Lines are determined using (17) and C 1 values from Table 3.
Figure 25: Normalized Integrated Aerodynamic Loss values for different hole configurations for film injection from both rows of holes, for cambered vanes with film cooling [55]. Symbols are defined in Figure 24. Lines are determined using (17) and C 1 values from Table 3.
Figure 26: Overall exergy destruction as it varies with surface roughness condition for different measurement locations downstream of different airfoils with different exit Mach numbers, for freestream turbulence intensity values from 0.9 to 1.5 percent.
Figure 27: Overall exergy destruction as it varies with freestream turbulence intensity for different measurement locations downstream of different smooth airfoils with different exit Mach numbers.
Figure 28: Overall exergy destruction as it varies with film cooling blowing ratio as measured one axial chord length downstream of different smooth airfoils with different exit Mach numbers, and different film cooling configurations and conditions.
Nomenclature
Bx: Vane axial chord coordinate
cx: Axial chord length of airfoil
Cx: Vane axial chord length
C 1 : Correlation constant
C p : Local total pressure loss coefficient, (P oi − P oe )/P oi
C p,∞ : Total pressure loss coefficient in the freestream, (P oi − P oe,∞ )/P oi
k s : Equivalent sand grain roughness
KE: Normalized local kinetic energy, (P oe − P se )/(P oe − P se ) ∞
Enthalpy loss coefficient or primary loss coefficient
ξ th : Thermodynamic loss coefficient
Exit of test section, mainstream
Saccharopine Dehydrogenase SUBSTRATE INHIBITION STUDIES
(Received for publication, June 26, 1975)
MOTOJI FUJIOKA
From the College of Bio-Medical Technology, Osaka University, Toyonaka, Osaka 560, Japan

In the direction of reductive condensation of α-ketoglutarate and lysine, saccharopine dehydrogenase (N6-(glutar-2-yl)-L-lysine:NAD oxidoreductase (lysine-forming)) is inhibited by high concentrations of α-ketoglutarate and lysine, but not by NADH. NAD+ and saccharopine show no substrate inhibition in the reverse direction.
Substrate inhibition by α-ketoglutarate and lysine is linear uncompetitive versus NADH.
However, when the inhibition is examined with α-ketoglutarate or lysine as the variable substrate, the double reciprocal plots show a family of curved lines concave up. The curvature is more pronounced with increasing concentrations of the inhibitory substrate, suggesting an interaction of the variable substrate with the enzyme form carrying the inhibitory substrate. These inhibition patterns, the lack of interaction of structural analogs of lysine such as ornithine and norleucine with the E.NAD+ complex (Fujioka, M., and Nakatani, Y. (1972) Eur. J. Biochem. 25, 301-307), the identity of the values of the inhibition constants of α-ketoglutarate and lysine obtained with either one as the substrate inhibitor, and the substrate inhibition data in the presence of a reaction product, NAD+, are consistent with the mechanism that substrate inhibition results from the formation of a dead-end E.NAD+.α-ketoglutarate complex followed by the addition of lysine to this abortive complex.
A number of pyridine nucleotide-linked dehydrogenases have been shown to be inhibited by high concentrations of their substrates. The substrate inhibition has generally been considered as arising from the formation of a complex between a substrate and an enzyme form with which it is not supposed to react. Among pyridine nucleotide dehydrogenases, many examples are known in which substrate inhibition is caused by the formation of an abortive enzyme.oxidized coenzyme.oxidized substrate or enzyme.reduced coenzyme.reduced substrate complex (1-5). Recently, substrate inhibition resulting from the combination of a substrate with central complexes has also been reported (6,7).
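The full Ter Bi rate equation treated in this paper is not reproduced in this excerpt. As a hedged illustration of how excess substrate depresses the velocity and makes double-reciprocal plots curve, the sketch below uses the textbook single-substrate substrate-inhibition law with arbitrary constants; it is a stand-in for, not a reproduction of, the saccharopine dehydrogenase mechanism.

```python
import numpy as np

def v_substrate_inhibition(s, vmax=1.0, km=0.5, ki=10.0):
    """Textbook single-substrate rate law with substrate inhibition:
    v = Vmax*S / (Km + S + S^2/Ki).  Constants are arbitrary illustrative values."""
    s = np.asarray(s, dtype=float)
    return vmax * s / (km + s + s ** 2 / ki)

s = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # mM, illustrative
for si, vi in zip(s, v_substrate_inhibition(s)):
    print(f"[S] = {si:5.1f} mM   v = {vi:.3f}   1/v = {1.0 / vi:.2f}")
```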
It was previously shown that saccharopine dehydrogenase (N6-(glutar-2-yl)-L-lysine:NAD oxidoreductase (lysine-forming)) catalyzes a reversible reaction. If the inhibitions arose from combination of the inhibitory substrates with the E.NAD+ and/or central complexes, they should give uncompetitive patterns with respect to both NADH and the noninhibitory substrate. The curved double reciprocal plots in Figs. 2 and 4 show that the mechanism of inhibition is more complex. However, from the dependence of the inhibitory effect of a variable substrate on the concentration of a substrate inhibitor, and the linearity of the plots of 1/v versus concentrations of a substrate inhibitor at each concentration of a variable substrate, the observed inhibition patterns may most simply be explained as arising from the combination of a variable substrate with the abortive complex(es) carrying a substrate inhibitor.
Substrate Inhibition in Presence of NAD+-Whereas the experiments of Figs. 1 to 4 are consistent with the assumption above, they do not give an indication as to whether the inhibition is caused by the combination of a substrate inhibitor with the E.NAD+ or central complexes, or both. These alternatives may partly be distinguished by running the inhibition experiment in the presence of a constant level of an added product, NAD+. In its presence, the binding of a substrate inhibitor to the E.NAD+ complex will cause a slope effect with NADH as the variable substrate, while binding solely to the central complexes will not. Fig. 5 shows the α-ketoglutarate inhibition in the presence of NAD+. The presence of NAD+ did give a
Substrate Inhibition by α-Ketoglutarate and Lysine-A previous investigation has shown that saccharopine dehydrogenase is inhibited by high concentrations of a substrate, α-ketoglutarate (8). Substrate inhibition was also noted with lysine at very high concentrations, but no inhibition was observed with NADH up to 0.2 mM (about 8 times the Michaelis constant) at α-ketoglutarate and lysine concentrations of 0.5 mM and 2.0 mM, respectively.
High concentrations of NAD+ (up to 5.0 mM) and saccharopine (up to 40 mM) showed no inhibition in the forward direction (Reaction 1) at pH 6.8. The possibility that inhibition by either α-ketoglutarate or lysine was due to inhibitory contaminants was ruled out because precisely the same degree of inhibition was obtained by use of the compounds which had been recrystallized several times. Fig. 1 shows the plots of the reciprocal of initial velocities against the reciprocal of NADH concentrations at a constant concentration of lysine (1.8 times the Michaelis constant) and several fixed, high levels of α-ketoglutarate.
As the figure shows, at high α-ketoglutarate concentrations the inhibition was uncompetitive, and the replot of vertical intercepts versus α-ketoglutarate was a linear function. When α-ketoglutarate inhibition was examined with lysine as the variable substrate at a constant level of NADH, the double reciprocal plots gave a family of curved lines concave upward, although at low lysine concentrations the lines were nearly parallel (Fig. 2). The curvature was more pronounced with higher α-ketoglutarate concentration.
However, when plots were made of 1/v versus α-ketoglutarate concentrations at each concentration of lysine, they were linear at high concentrations of the inhibitor. Values of the apparent inhibition constants for the slope and the intercept were determined by fitting the initial velocity data obtained with α-ketoglutarate concentrations of more than 7.5 mM to Equation 2, and were found to be 15.5 ± 1.5 mM and 17.6 ± 2.0 mM, respectively.
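Equation 2 itself is not reproduced in this excerpt. The sketch below assumes the standard linear replot form, value = value0(1 + I/K), for the slopes and intercepts of the double-reciprocal plots, and shows how an apparent inhibition constant of roughly the reported magnitude would be extracted from such a replot; the replot numbers are fabricated.

```python
import numpy as np

def apparent_inhibition_constant(i_conc, values):
    """Fit values (slopes or intercepts of double-reciprocal plots) versus
    inhibitor concentration to value = value0 * (1 + I/K) and return K.
    This assumes the standard linear replot form, which may differ from
    equation (2) of the paper."""
    a, b = np.polyfit(i_conc, values, 1)   # values ~ a*I + b
    return b / a                           # K = value0 / replot slope

# Fabricated intercept replot data (arbitrary units), consistent with K near 16 mM
i = np.array([7.5, 10.0, 15.0, 20.0, 30.0])    # mM inhibitory substrate
intercepts = 0.8 * (1.0 + i / 16.0)
print(f"apparent K from intercepts = {apparent_inhibition_constant(i, intercepts):.1f} mM")
```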
Previous investigations have shown that the kinetic mechanism followed by saccharopine dehydrogenase is an ordered Ter Bi mechanism (in the reverse direction, Reaction 1) and that the sequence of addition of substrates is NADH, α-ketoglutarate, and lysine (8,9,12). In this mechanism substrate inhibition may arise in the following cases. The linearity of the plots is consistent with this idea and excludes the possibility of any partial inhibition.
The discussion above assumed that both α-ketoglutarate and lysine at high concentrations could form abortive complexes, and that the resulting complexes in turn could absorb the other substrate. But the same inhibition patterns will be seen when only one substrate can bind to the E.NAD+ and/or central complexes, and the other adds subsequently to the complex(es). The binding of lysine to the E.NAD+ or central complexes is unlikely from the lack of interaction of lysine analogs with these complexes.
It has been shown that the analogs of lysine such as ornithine, norleucine, and leucine are potent competitive inhibitors of lysine in the reverse direction and produced no inhibition in the forward direction even at concentrations more than 50 times their dissociation constants from the E.NADH.α-ketoglutarate.analog complex (8). If α-ketoglutarate binds to the E.NAD+ complex, the product inhibition by this compound in the forward direction should theoretically give a noncompetitive pattern when saccharopine is the variable substrate. The combination with the central complexes would give rise to noncompetitive inhibitions for variable NAD+ and saccharopine.
Under the experimental conditions, however, no appreciable slope effect was observed with respect to both NAD+ and saccharopine (12). This could be due to a large dissociation constant of α-ketoglutarate from these complexes (see below).
If an alternate reaction path in which the order of addition of α-ketoglutarate and lysine is reversed (E → E.NADH → E.NADH.lysine → central complex) operates at high concentrations of lysine, and if lysine forms dead-end complex(es) with either E.NAD+ or central complexes, or both, substrate inhibition for which the double reciprocal plots are linear with respect to NADH and curved with respect to α-ketoglutarate would be obtained. In this mechanism, however, the plots of 1/v versus lysine concentrations would not be linear, contrary to the experimental finding (Fig. 4). It is also difficult to visualize nonlinear reciprocal plots when α-ketoglutarate is the substrate inhibitor and lysine the variable substrate, since the concentrations of the latter are kept at a low level. Furthermore, the combination of lysine with the E.NAD+ or central complexes may be excluded for the reasons mentioned above.
Thus the observed substrate inhibitions may best be interpreted as the result of combination of α-ketoglutarate with the E.NAD+ and/or central complexes and subsequent binding of lysine to the abortive complex(es), inhibiting the dissociation of NAD+ or catalysis. The rate equation for this mechanism may be written accordingly. From these equations, by substituting the values of the kinetic and inhibition constants (the last of these being 1.38 mM), the approximate value of f was calculated as 0.63. The f value of less than unity indicates that α-ketoglutarate binds only to the E.NAD+ complex, not to the central complexes; otherwise the value should be 1.
Although the inhibition patterns by α-ketoglutarate and lysine are rather complicated, in the absence of further data it may thus be assumed that the inhibition results from dead-end combination of the former with the E.NAD+ complex and the secondary binding of the latter to this abortive ternary complex, and the prevention of dissociation of NAD+. This interpretation is consistent with a previous finding that α-ketoglutarate and lysine in the concentration range covered in the current investigation failed to show substrate inhibition when the coenzyme was NADPH instead of NADH. NADPH did support the reaction, although with much less efficiency than NADH, but no appreciable binding of NADP+ to saccharopine dehydrogenase could be demonstrated (9). Therefore, in the reaction with NADPH, the steady state concentration of the E.NADP+ complex would be too low to allow the formation of an abortive complex with α-ketoglutarate.
It is of interest to note that, while the dissociation constant of α-ketoglutarate from the E.NAD+.α-ketoglutarate complex, calculated from the data of Table I, is about 60 times as large as that from the E.NADH.α-ketoglutarate complex, lysine can bind readily to the E.NAD+.α-ketoglutarate complex with a dissociation constant comparable to its Michaelis constant.
Apparently the conformation of the enzyme protein induced by α-ketoglutarate binding is of primary importance for lysine binding.
The adjustment work canal on the Amudarya in the areas of the damless water intake
The paper presents the results of field research in the area of the damless water intake of the KMC on the Amudarya River. The article develops the optimal route and boundaries of the pioneer ditch location, depending on the position of the main stream of the river relative to the point of the damless intake. It also provides hydraulic calculations for improving the condition of the damless water intake and recommendations for the rational use and placement of a dredger fleet when laying out the route of the ditch and cleaning it.
Introduction
In engineering practice, special attention is paid to scientific research on the negative impact of channel process development on the reliability and functioning of damless water intakes, to the determination of the intensity and direction of channel processes in the river channel and in the canals of the head structure, and to the development of methods for their calculation. In this regard, ensuring a guaranteed volume of water withdrawal with a minimum volume of bed and suspended sediments at a damless water intake is considered one of the important tasks [1-4].
The Amudarya is characterized by a very unstable channel and a high intensity of bank erosion. The main reasons for the active reshaping of the Amudarya River channel are the presence of relatively large slopes, the easy erodibility of the soils that make up the bed, and large intra-annual fluctuations in flow rates and water levels.
The investigated section of the Amudarya River is located in the area of the damless water intake into the KMC (Karshi Main Canal), 22 km above the gauging station of the Kerki town. The total length of the investigated section is 10-12 km. This area has two characteristic sections: the upper one extends 6 km upstream from the head water intake, and the lower one is located between the head water intake and the village of Kzylayak [5-10].
During water withdrawal from the Amudarya River into the KMC, difficulties arise due to the rapid siltation and sedimentation of the head section of the canal. Depending on the water content of the year, a flow with a turbidity of up to 5 kg/m 3 annually enters the inlet of the canal. The annual volumes of sediments range from 8 to 12 million tons. The main stream of the Amudarya River in the area of the KMC water intake flows in a wide floodplain. As a result of field studies in 2006-2019 and the materials of previous years of studying the transformations of the main river channel near the damless water intake, the optimal operation mode of the head of the water intake at low river levels during the dry season and in dry years was revealed. Initially, the head of the KMC water intake was located on the eroded bank of the Amudarya River, 1.2 km below the stable bank of Cape Pulizindan. The rocky bank of Cape Pulizindan above the head of the water intake constrained the main stream of the river and constantly directed the flow toward the left bank. As a result, in certain periods of the year, unfavorable conditions were created at the head of the KMC water intake for water withdrawal into the inlet part of the canal, and the required water discharge was not provided [11,12].
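As a rough consistency check on the figures quoted above, the sketch below converts a mean turbidity and a mean canal discharge into an annual sediment mass. The 3 kg/m3 mean turbidity and the 100 m3/s discharge are assumed values chosen only for illustration; they are not reported data.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_sediment_mass_t(turbidity_kg_m3, mean_discharge_m3_s):
    """Rough annual sediment mass [t] entering the canal head for a given
    mean turbidity and mean canal discharge."""
    return turbidity_kg_m3 * mean_discharge_m3_s * SECONDS_PER_YEAR / 1000.0

# Assumed mean turbidity well below the 5 kg/m3 peak, and an assumed discharge
print(f"{annual_sediment_mass_t(3.0, 100.0) / 1e6:.1f} million t/year")
```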
Methods
The research methods of this work are the analysis of the results of field studies on the Amudarya River section in the area of the damless water intake into the Karshi Main Canal, the assessment of the state of the Amudarya River in the water-intake area, and the identification of measures for increasing the reliability of the damless water intake.
Results and discussion
In the conditions of the Amudarya, with a large volume of diverted water and intensive wandering of the river flow over a wide floodplain, maintaining a guaranteed damless water intake requires channel adjustment work to separate the pioneer ditch, as well as its systematic clearing of incoming sediments during operation. The annual volume of such work is 30-40 % of the total volume of work in the head water-intake area.
In the studied area, as a result of the displacement of the main stream of the river at the head water intake over a length of 1200 m, the flow concentrated in one channel, the width of which, depending on the water discharge, varied from 250 to 500 m. An island located between the right and middle channels over a length of 2000 m was displaced to the left, towards the middle channel, by 200-280 m, and a large sandbank formed in the head part of the left channel. The island located between the middle and left channels also widened to 1000 m and lengthened to 1200 m [13][14][15][16]. Due to the systematic cleaning of the middle channel, its discharge is 80-85 % of the total discharge of the river. The channel width along the water edge was 320-350 m. Below the water intake point, erosion of the island located between the left and right channels was observed from the side of the right branch over a width of 80-150 m, with the length of the washout strip reaching 1500 m. Bed deformations manifest themselves in different ways: both as erosion and as sediment deposition. The erosion of the river bed in some places reached 0.5-1.5 m, and sediment deposits 0.2-1.4 m (Fig. 1). Planned (lateral) deformations were more intense than deep ones. During the observation period, the channel width changed from 250 to 500 m, depending on the water discharge and the curvature of the river channel. The B/H value was 26-63 in the area above the water intake and 63-77 in the area below it. The surface flow velocity changed from 0.78 to 1.08 m/s above the water intake and from 0.5 to 0.75 m/s below it [17-26].

The optimal route and boundaries of the pioneer ditch are assigned depending on the location of the main stream of the river relative to the point of the damless water intake. When the main flow moves away from this point towards the opposite bank of the river, the volume of channel adjustment work increases; when it approaches, the volume decreases. The temporary organization of channel adjustment work to separate the pioneer ditch, together with its systematic cleaning, makes it possible to ensure a guaranteed water intake during periods of low water levels in the river with intensive wandering of the flow [27,28]. According to the recommended method, the ditch route should be planned with its head located on a straight section of the river with an outlet angle of no more than 30°. The excavation of the ditch should be carried out during the low-water period (September-April), that is, during the period of the least silting of the channel by bed-load sediments [10,16,29]. The length, width and depth of the pioneer ditch should be assigned depending on the planned position of the river channels relative to the point of the damless water intake, the sediment regime of the river and the technical parameters of the dredgers (Fig. 2). The length of the pioneer ditch should be determined based on the planned position of the channels relative to the water intake point. The excavation depth of the ditch is established from the conditions of silting (Fig. 3).
To determine the effective amount of sediment that, in accordance with its hydraulic size, settles to the bottom over the length of the ditch, a formula is recommended that relates the following quantities: the total suspended load (kg), the average water flow velocity (m/s), the ditch length (m), the hydraulic (settling) size of the particles (m/s) and the water depth before excavation (m). The total amount of sediment is determined from the ditch width (m), the average sediment concentration (kg/m³), the time (s) and the water depth after development (m). The amount of sediment deposited in the cut is determined using a coefficient that accounts for the part of the effective amount of sediment involved in siltation and the flow velocities in the ditch and in the channel (m/s). From the equation of continuity of the flows, substituting equation (1) into (3) and taking into account (5) and (2), equation (6) is obtained and reduced to a form in which the deposited share is described by a hyperbola (Fig. 4). The value of the coefficient in equation (6) was obtained by Indian researchers during work on deepening the bottom of channels in Indian river ports and is equal to 0.29. Formula (8) can be used when choosing the development depth (H), especially in places where there is a normal supply of sand fractions.
Establishing the development depth from the condition of a given throughput determines the width of the ditch and the average flow velocity as functions of the water discharge in the ditch, the channel roughness coefficient and the slope of the bottom or of the water surface in the trench.
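Since the original formulas are not fully legible in this copy, the short Python sketch below only illustrates the general form of such estimates under explicit assumptions: the share of suspended sediment that settles over a ditch of length L is approximated by the classical settling-basin relation (fall velocity × length)/(flow velocity × depth), and the mean velocity is estimated with Manning's formula from the quantities named in the text (depth, roughness, slope). The variable names and numerical values are illustrative assumptions, not the authors' exact formulas or data.

```python
# Illustrative sketch only: standard settling-basin and Manning relations,
# NOT the exact formulas of the paper (those are not legible in this copy).

def settling_fraction(fall_velocity, length, flow_velocity, depth):
    """Fraction of suspended load expected to settle over a reach of given
    length (classical settling-basin approximation, capped at 1)."""
    return min(1.0, (fall_velocity * length) / (flow_velocity * depth))

def manning_velocity(depth, slope, roughness):
    """Mean flow velocity from Manning's formula, using depth as an
    approximation of the hydraulic radius for a wide, shallow ditch."""
    return (depth ** (2.0 / 3.0)) * (slope ** 0.5) / roughness

# Example values (assumed, for illustration only)
u = 0.002      # hydraulic (settling) size of particles, m/s
L = 1200.0     # ditch length, m
h = 3.0        # water depth after excavation, m
n = 0.025      # channel roughness coefficient
i = 0.0002     # water-surface slope

v = manning_velocity(h, i, n)
frac = settling_fraction(u, L, v, h)
print(f"mean velocity ~ {v:.2f} m/s, settled fraction ~ {frac:.2f}")
```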
Conclusions
Based on the discussion of the results of monitoring the dynamics of the river bed morphometry and the hydraulic parameters of the water flow at the damless water intake of the KMC, the following conclusions can be drawn: 1. At present, on the Amudarya, in the areas of the damless water intake, the channel is regulated by the canal operation services: the pioneer ditch route is planned intuitively, and the dredgers assigned to clean it are placed without taking into account the sediment transport regime and the intensity of the channel process in the river.
2. In the recommended method, the rational use and placement of the dredger fleet during the laying of the ditch route and its cleaning are achieved by taking into account the intensity of the channel process in the river and the silting of the ditch by entrained sediments. Thus, the studies have established that, in the river bed, bank deformations occur more intensely than deep ones.
3. The water discharge in the area of the damless water intake during the year varies in a wide range and has a sharply variable character.
4. The relationship between the morphometric parameters of the canal and the hydraulic parameters of the flow in the area of the damless water intake at the KMC is unstable.
5. The depth of the flow changes intensively over a wide range. 6. To prevent negative phenomena, it is necessary to conduct experimental and numerical studies aimed at regulating the direction of the flow and the nature of planned deformations in the area of the damless water intake.
|
v3-fos-license
|
2022-10-12T15:06:20.475Z
|
2022-10-07T00:00:00.000
|
252839946
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/14/19/4992/pdf?version=1665151852",
"pdf_hash": "2547e11662d46281d184c9a55e63ceda6525018c",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:600",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "40bb3c88a84d6a8380b07b4bfee50586517c3f4f",
"year": 2022
}
|
pes2o/s2orc
|
Phenological Changes and Driving Forces of Lake Ice in Central Asia from 2002 to 2020
Lake ice phenology is an indicator of past and present climate and is sensitive to regional and global climate change. In the past few decades, the climate of Central Asia has changed significantly due to global warming and anthropogenic activities. However, there are few studies on lake ice phenology in Central Asia. In this study, the lake ice phenology of 53 lakes in Central Asia was extracted using MODIS daily LST products from 2002 to 2020. The results show that MODIS-extracted lake ice phenology is generally consistent with Landsat-extracted and AVHRR-extracted lake ice phenology. Generally, lakes in Central Asia start to freeze from October to December. The trends in lake ice phenology show strong regional differences. Lakes distributed along the Kunlun Mountains show overall delayed trends in all lake ice phenology variables, while lakes located in southwestern Central Asia show clear advancing trends in the freeze-up start dates (7.06 days) and breakup end dates (6.81 days). Correlations between the phenology of lake ice and local and climatic factors suggest that the ice breakup process and the duration of complete ice cover depend more on heat, while precipitation mainly affects the freezing time of the ice. Wind speed mainly affects the timing of complete freeze-over. In general, the breakup process is more susceptible to climatic factors, while local factors have strong influences on the freeze-up process.
Introduction
Global climate change is profoundly affecting human survival and development and is one of the major challenges faced by the international community today [1]. Historical records show that the warming trend began in the 1950s and changed abruptly in the mid to late 1980s [2]. In recent years, the global warming trend has become more pronounced. From 2011 to 2020, the average temperature has increased by 1.09 °C over the pre-industrial period (1850-1900), while the 2001-2010 period was 0.99 °C warmer than the pre-industrial period [3]. This abnormal warming trend has increased evaporation from the land surface and accelerated the melting of glaciers. The increased evaporation from the land surface has reduced the areas of lakes at lower elevations and plains, while the melting of glaciers has led to the formation of new lakes at higher elevations [4]. In the past decades, a widespread water loss occurred in the global endorheic system [5] and the total lake area in Central Asia has shown a decreasing trend [6][7][8]. Previous studies have shown that nine lakes in Central Asia have decreased in area by about 45,352 km², accounting for approximately 49.62% of their total surface areas [6] and the number of lakes decreased from 7309 in 2000 to 6585 in 2015, a decrease of 85 lakes per year [7]. Thus, global climate change has greatly influenced the evolution of lakes.
Global climate change also has a significant impact on lake processes, such as changes in lake size, physical and chemical properties, and lake ice phenology. Among the above processes, lake ice phenology is the most important factor, which directly reflects climate change, thus making it a sensitive indicator of climate change [1,[9][10][11][12][13][14]. Lake ice phenology describes the freeze-thaw cycles of lakes and the durations of their ice covers, regulating the chemical, biological, and physical processes in lakes [13,[15][16][17][18]. Lakes are particularly important in arid and semi-arid regions. Lakes and their ice covers can provide a number of ecosystem services for people living in Central Asia, such as water supply, agricultural irrigation, and some recreational activities [19]. In addition, lakes play an important role in social and ecological systems in these inland areas where surface flow is scarce [5].
Prior to the widespread use of remote sensing, studies on lake ice phenology relied on traditional methods, including field observations and hydrological studies. However, data collections using these traditional methods were laborious and the data were spatially sparse [20]. Remote sensing data with different spatial and temporal resolutions can fill the data gaps of traditional methods over long time periods. Moderate-resolution imaging spectroradiometer (MODIS) data have temporal resolution of one day and can be important for the long-term monitoring of lake ice. Several MODIS products have been used to detect lake ice phenology, including the MODIS snow product [9,19] and reflectance product [21,22]. In this study, we used the MODIS Land Surface Temperature (LST) product because it contains both daytime and nighttime images and presents more information than other products. Therefore, we extracted the lake ice phenology from 2002 to 2020 via Google Earth Engine (GEE) and geemap [23]. This study aimed to (1) quantify the changes in lake ice phenology over the past 18 years and (2) analyze the driving factors of changes in lake ice phenology.
Study Area and Lake Selection
The study area is located in the mid-latitudes of the Northern Hemisphere and spans 28°N-61°N and 44°E-97°E (Figure 1). Central Asia has a continental climate with cold winters and severe summer droughts. The overall topography of Central Asia is high in the southeast and low in the northwest, and the numerous inland lakes are unevenly distributed. Annual precipitation in Central Asia is higher in mountainous areas (about 1000 mm) and lower in desert areas (less than 100 mm) [6]. However, in this region, potential evaporation is much higher than annual precipitation [24], making the region one of the world's most important arid zones. Due to the unique climate, most of the lakes in Central Asia rely on glacial melt, mountain precipitation, and river runoff for their water supply.
MODIS Daily Land Surface Temperature (LST) Products
MODIS is a key instrument installed on the Terra and Aqua satellites; it has provided data since 2000 and 2002, respectively. The MODIS sensor has 36 spectral bands covering wavelengths from 0.405 µm to 14.385 µm. Our primary data are daily land surface temperature products from MODIS (MOD11A1, MYD11A1, version 6) for the period from 2002 to 2020, obtained from NASA's Earth Observing System Data and Information System website via the Google Earth Engine (GEE) platform (https://modis.gsfc.nasa.gov/data/dataprod/mod11.php (accessed on 15 January 2022)). The daily LST product retrieves land surface temperatures from band 31 and band 32 through the split-window algorithm [25] and includes daytime and nighttime LST of the land surface.
Among the various sensors and products, we chose the MODIS sensor and the MODIS LST daily product because the MODIS sensor has a temporal resolution of 1 day and its LST daily product contains a daytime LST and a nighttime LST with a spatial resolution of 1 km, which can be directly used to create continuous time series for further analysis. The MODIS LST product has the unique advantage of providing more surface information (four images per day) compared to other MODIS products, which increases its priority in this study.
Lake Information
The HydroLAKES database contains freshwater lakes, salt lakes, and artificial reservoirs, and was developed by Global HydroLAB [26]. The main processes used to produce HydroLAKES include manual identification and removal of polygons from rivers and wetlands, removal of duplicate and overlapping polygons, and the correction of damaged or incomplete polygon geometry. Surface area, depth, volume, shoreline length, elevation, and other valuable information for lakes described in the HydroLAKES database are available on the HydroSHEDS website (https://www.hydrosheds.org/page/hydrolakes (accessed on 3 January 2022)).
Methods
The algorithm for calculating the ice-water fraction consists of four main steps. Firstly, input the MODIS datasets and the HydroLAKES (HydroSHEDS) lake dataset, apply quality control to the MODIS data to remove cloudy pixels and pixels with high LST error values, and select the lakes. Secondly, merge the lake ice time series data retrieved from the LST_Day_1km and LST_Night_1km bands of the MOD11A1 and MYD11A1 products from 2002 to 2020. Thirdly, classify lake ice through an LST threshold value, calculate the ice fraction, and export the lake ice fraction time series to a CSV file. Lastly, use the merged ice fraction time series to extract the ice phenology variables, including freeze-up start (FUS), freeze-up end (FUE), breakup start (BUS), breakup end (BUE), freeze-up duration (FUD), ice completely covered duration (ICD), and breakup duration (BUD).
In this study, FUS represents the first day on which the ice fraction rises above 0.2, FUE represents the first day on which the ice fraction rises above 0.8, BUS represents the last day on which the ice fraction is above 0.8, and BUE represents the first day after BUS on which the ice fraction falls below 0.2. FUD is the number of days between FUS and FUE, ICD is the number of days between FUE and BUS, and BUD is the number of days between BUS and BUE.
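As a concrete illustration of these definitions, the following Python sketch extracts the phenology variables from a daily ice-fraction time series. The 0.2 and 0.8 thresholds follow the text; the array layout, the function name, and the interpretation of BUE as the first day after BUS on which the fraction drops back to 0.2 or below are assumptions made for illustration.

```python
import numpy as np

def ice_phenology(doy, ice_frac):
    """Extract lake ice phenology dates from a daily ice-fraction series.
    `doy` is assumed to be monotonically increasing (e.g. days since the
    start of the hydrological year); thresholds follow the text (0.2, 0.8)."""
    doy = np.asarray(doy)
    ice_frac = np.asarray(ice_frac)

    above_02 = np.where(ice_frac > 0.2)[0]
    above_08 = np.where(ice_frac > 0.8)[0]
    if above_02.size == 0:
        return None                                      # never reaches 20 % ice cover

    fus = doy[above_02[0]]                               # freeze-up start
    fue = doy[above_08[0]] if above_08.size else None    # freeze-up end
    bus = doy[above_08[-1]] if above_08.size else None   # breakup start
    bue = None
    if bus is not None:
        # Assumption: breakup end is the first day after BUS with fraction <= 0.2
        after = np.where((doy > bus) & (ice_frac <= 0.2))[0]
        bue = doy[after[0]] if after.size else None

    fud = fue - fus if fue is not None else None         # freeze-up duration
    icd = bus - fue if bus is not None else None         # complete-cover duration
    bud = bue - bus if bue is not None else None         # breakup duration
    return dict(FUS=fus, FUE=fue, BUS=bus, BUE=bue, FUD=fud, ICD=icd, BUD=bud)
```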
Quality Control and Lake Selection
Retrieving lake ice phenology from optical sensors can be strongly affected by clouds. We removed cloud-covered pixels and high-LST-error pixels through the quality control information in the QC_Day and QC_Night bands of the MOD11A1 and MYD11A1 datasets. In this study, we set pixels with the quality flags "LST produced", "good data quality", and "average LST error ≤ 1 K" as valid pixels, and classified all other pixels as invalid. To simplify the following calculations, we set the value of valid pixels to 1 and the value of invalid pixels to 0. Equations (1) and (2) define, for each day of year t and satellite S (Terra or Aqua), this binary validity mask and apply it to the raw daily LST product Image_r to obtain the quality-controlled image Image_c. By applying Equations (1) and (2) to the whole image collection, we removed most of the low-quality pixels. To avoid noise introduced by mixed pixels, we only chose lakes with surface areas larger than 30 km². After the selection process, 53 lakes remained, with surface areas ranging from 32.43 to 23,865.91 km².
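A minimal Google Earth Engine (Python API) sketch of this quality-control step is shown below. The collection ID and the QC bit positions (bits 0-1 for the mandatory QA flag, bits 2-3 for data quality, bits 6-7 for the LST error flag) follow the standard MOD11A1 documentation but should be treated as assumptions to verify; only the daytime band is shown, and the nighttime band would be handled analogously with QC_Night and LST_Night_1km.

```python
import ee

ee.Initialize()  # assumes Earth Engine authentication has been set up

def qc_mask_day(image):
    """Keep only daytime LST pixels flagged as 'LST produced, good quality',
    'good data quality' and 'average LST error <= 1 K' (assumed QC bit layout)."""
    qc = image.select('QC_Day')
    good_qa = qc.bitwiseAnd(3).eq(0)                  # bits 0-1: mandatory QA
    good_data = qc.rightShift(2).bitwiseAnd(3).eq(0)  # bits 2-3: data quality
    low_error = qc.rightShift(6).bitwiseAnd(3).eq(0)  # bits 6-7: LST error <= 1 K
    mask = good_qa.And(good_data).And(low_error)
    return image.select('LST_Day_1km').updateMask(mask)

terra_day = (ee.ImageCollection('MODIS/061/MOD11A1')   # collection ID assumed
             .filterDate('2002-07-01', '2020-06-30')
             .map(qc_mask_day))
```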
MODIS Daily LST Combination
Since clouds are constantly moving in the atmosphere, cloud coverage varies with time, making it possible to fill in values that are missing after the quality control operation by combining images from different times of the day. If a pixel is cloudy in one satellite observation and cloudless in another observation, the value of the cloudless pixel is used as the final result. Considering that the air temperature varies over time, which means that a valid pixel may have two LST values, we used the largest value as the pixel value. Choosing the largest LST value also reduces the error of the water-ice classification, because water freezes when the temperature is below 0 degrees Celsius. Therefore, we used the maximum-value combination method to combine the quality-controlled data of MOD11A1 and MYD11A1 from the same day, as in Equation (3):

LST_max(x, y) = max(LST_T(x, y), LST_A(x, y)) (3)

where x is the column index (horizontal axis), y is the row index (vertical axis), T stands for the Terra satellite, and A stands for the Aqua satellite. Equation (3) was applied to the MOD11A1 and MYD11A1 products of the same day, yielding maximum LST values for daytime and nighttime from the two satellite products.
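On gridded arrays, this per-pixel maximum can be written very compactly; the NumPy sketch below is an array-based illustration (not necessarily the authors' implementation) in which NaN marks pixels masked out during quality control.

```python
import numpy as np

def combine_max(lst_terra, lst_aqua):
    """Per-pixel maximum of two LST grids; np.fmax ignores NaN, so a pixel
    that is masked (NaN) in one image is filled from the other image."""
    return np.fmax(lst_terra, lst_aqua)

# A pixel cloudy in Terra but clear in Aqua takes the Aqua value:
print(combine_max(np.array([np.nan, 270.0]), np.array([268.0, 266.0])))  # [268. 270.]
```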
Lake Ice Penology Variable Extraction
The time series of water-ice transitions was extracted from the merged images of Section 2.3.2 for the period from 2002 to 2020. The water and ice phases in the MODIS daytime and nighttime merged images were classified by applying a temperature threshold. Most of the lakes in the Central Asian region are saline and therefore have a lower freezing point, so we selected 272 K as the temperature threshold. If the LST value of a pixel was lower than 272 K, we classified the pixel as ice (Pixel_ice); if the LST value was greater than 272 K, we classified the pixel as water (Pixel_water). The water-ice fraction was calculated with Equation (4):

Frac_ice(t) = N_ice(t) / N_valid(t) (4)

where t is the time index (daytime or nighttime), N_ice is the total number of Pixel_ice, and N_valid is the total number of valid pixels. Equation (4) was applied to the data after the operation of Section 2.3.2, giving daytime and nighttime lake ice fractions.
To further reduce the error of the water-ice fraction, we combined two lake ice fraction time series using Equation (3).
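The threshold classification and the fraction of Equation (4) can be sketched as follows in Python; the array representation and the lake-mask argument are assumptions for illustration, with NaN marking invalid pixels.

```python
import numpy as np

FREEZE_THRESHOLD_K = 272.0  # threshold used in the text for the saline lakes

def ice_fraction(lst, lake_mask):
    """Share of valid lake pixels classified as ice (LST below 272 K).
    `lst` is a 2-D LST grid in kelvin with NaN for invalid pixels;
    `lake_mask` is a boolean grid marking pixels inside the lake polygon."""
    lake_lst = np.where(lake_mask, lst, np.nan)
    valid = np.isfinite(lake_lst)
    if valid.sum() == 0:
        return np.nan                      # no usable observation on this day
    ice = valid & (lake_lst < FREEZE_THRESHOLD_K)
    return float(ice.sum()) / float(valid.sum())
```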
Validation of Ice Phenology Curve
The accuracy of the lake ice-water fraction derived from the MODIS daily combined LST image was estimated by comparing it with the ice-water fractions derived from the AVHRR daily SST product and from Landsat 8. The overall accuracy was quantified by the root mean square error (RMSE) and R².
The lake ice fraction time series of three large lakes derived from the MODIS daily combined LST image, the AVHRR daily SST image, and the Landsat 8 image were compared to assess the performance of the method. The lake ice fraction time series derived from the MODIS daily combined LST image was consistent with those from the AVHRR daily SST image and the Landsat 8 image (Figures 2 and 3). The R² of the comparison ranged from 0.68 to 0.81 and the root mean square error (RMSE) ranged from 0.17 to 0.25 (Table 1), which shows that the water-ice fraction derived from the MODIS daily combined LST image had good quality.
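The two agreement metrics can be computed as in the sketch below; R² is assumed here to be the squared Pearson correlation between the two fraction series, which is one common convention and may differ from the authors' exact definition.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two ice-fraction series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def r_squared(a, b):
    """Squared Pearson correlation between two ice-fraction series (assumed definition)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1] ** 2)
```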
Lake Ice Phenology Characteristics in Central Asia
Remarkable differences in lake ice phenology were found among the Central Asian lakes. Some lakes, such as Xiaoxihaiz Shuiku, Kara-Bogaz-Gol, Lake Issyk Kul, Lake Sarygamysh and Lake Aydar, were rarely fully ice-covered in winter, while other lakes froze over completely every year. For some lakes, such as Xiaoxihaiz Shuiku, the ice cover in the period from 2007 to 2008 was significantly greater than in other periods, which may be due to the La Niña event in 2007. The mean day of year (doy) of FUS was 106.85 over all lakes, 137.02 in the not completely ice-covered lakes, and 94.95 in the completely ice-covered lakes (Table 2); the mean doy of FUE was 125.26 in the completely ice-covered lakes (Table 2); the mean doy of BUS was 227.35 in the completely ice-covered lakes (Table 2); and the mean doy of BUE was 231.86 over all lakes, 180.32 in the not completely ice-covered lakes, and 252.20 in the completely ice-covered lakes (Table 2). Previous studies have revealed that lakes in the Northern Hemisphere experienced earlier breakups and later freeze-ups; not all lakes in Central Asia followed the same trend [15,27]. The trends of the ice phenology variables are presented in Figure 4.
From 2002 to 2020, 26 lakes (49.06%) had advanced FUS dates, and 21 lakes (39.62%) had advanced BUS dates. Because some lakes were rarely completely frozen in winter and their FUE and BUS dates were too noisy for analysis, the FUE and BUS trend values of these lakes were set to zero. Lakes located in the southwestern part of Central Asia had obvious advancing trends in FUS dates. For the BUS dates, lakes in the northern and western parts of Central Asia had obvious advancing trends.
Ice Phenology in Different Elevation Lakes
Lakes were clustered into three groups according to elevation using K-means (Group 1: elevation higher than 3000 m; Group 2: elevation from 1000 to 2000 m; Group 3: elevation lower than 1000 m). The mean temporal patterns and trends of the ice phenology of the three groups of lakes from 2002 to 2020 are presented in Figure 5 and Table 3. Generally, higher-altitude lakes had earlier FUS and FUE dates and later BUS and BUE dates. Compared to Lake Kaydak (elevation −29 m) in group 3, the FUS and FUE of Lake Aksayquin (elevation 4844 m) in group 1 were 27 and 41 days earlier, while the BUS and BUE were 80 and 74 days later. Furthermore, the trends of each lake's ice phenology variables in the three groups were overall consistent with the trends of earlier breakups and later freeze-ups in the Northern Hemisphere revealed by previous studies [12,15,27,28]. The trends of ice phenology in group 2 agreed with previous studies, while the trends in group 1 (an earlier breakup trend) and group 3 (a later freeze-up trend) showed some differences (see Table 3). The FUD, ICD, and BUD are listed in Table 4.
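The elevation grouping can be reproduced with a one-dimensional K-means clustering as in the sketch below; the elevation values shown are placeholders (the real values come from the HydroLAKES attributes), and the relabelling step that orders the clusters from highest to lowest is an assumption about how the three groups were numbered.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder lake elevations in metres (the real values come from HydroLAKES)
elevations = np.array([4844.0, 3900.0, 3650.0, 1530.0, 1200.0, 1048.0, 480.0, 340.0, -29.0])

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(elevations.reshape(-1, 1))

# Renumber clusters so that group 1 contains the highest lakes
order = np.argsort(-km.cluster_centers_.ravel())
rank_of_label = {int(lab): rank + 1 for rank, lab in enumerate(order)}
groups = np.array([rank_of_label[int(l)] for l in labels])
print(groups)  # group index (1-3) for each lake
```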
Group 1: Elevation Higher than 3000 m
Lake Karakul in group 1 had the longest average FUD of 33.06 days (see Appendix Table A1) and the only negative FUD trend; this might be because Lake Karakul had the second largest area and depth (Lake Sarezskoye was excluded because its duration series was too noisy to analyse). Lake Aqqikkol had the shortest average FUD of 13.33 days (see Appendix Table A1). Lake Arkatag had the longest average ICD of 178.83 days (see Appendix Table A1), which might be because Lake Arkatag had the shallowest depth in group 1. Lake Sarezskoye had the shortest average ICD of 79.11 days (see Appendix Table A1) and the most negative ICD trend in group 1. Lake Ayakkum had the shortest average BUD of 21.89 days, while Lake Arkatag had the longest average BUD of 38.67 days (see Appendix Table A1).
Group 2: Elevation from 1000 m to 2000 m
Bosten Lake had the shortest average FUD of 20.56 days (see Appendix Table A1) and Lake Sailimu had the longest average FUD of 27.83 days (see Appendix Table A1). For ICD, Bosten Lake had the shortest average ICD of 77.62 days (see Appendix Table A1), while Markakol had the longest average ICD of 144.55 days (see Appendix Table A1) and the only positive trend of ICD. For BUD, Bosten Lake had the shortest average BUD of 13.61 days (see Appendix Table A1), while Lake Barkol had the longest average BUD of 29.89 days (see Appendix Table A1).
Group 3: Elevation Lower than 1000 m
Lake Balkhash had the shortest average FUD of 16.28 days (see Appendix Table A1), while Lake Ul'ken-karoy had the longest average FUD of 58.78 days (see Appendix Table A1). For ICD, the Qapshaghay Bogeni Reservoir had the shortest average ICD of 28.84 days (see Appendix Table A1), while Lake Seletyteniz had the longest average ICD of 145.28 days (see Appendix Table A1). For BUD, Lake Balkhash had the shortest average BUD of 14.06 days (see Appendix Table A1), while Lake Shalkar had the longest average BUD of 28.11 days (see Appendix Table A1).
Correlation of Lake Ice Phenology Variables with Influencing Factors
Both climatic conditions and local factors affect lake ice phenology to varying degrees [15]. Climatic conditions control the heat exchange between the lake and the air [29], while local factors influence the heat capacity, which ultimately affects the phenology of lake ice. For example, climatic conditions such as wind speed can destroy thin ice when a lake with thin ice encounters strong winds [30]. In addition, lakes with larger volumes and areas require more heat exchange to freeze and break up during a hydrological year [19,31]. Major climatic factors include the mean 2 m air temperature, the mean temperature during deglaciation, wind speed, short-wave radiation, and precipitation [29,30,32,33]. Local factors include latitude, longitude, elevation, depth, and area [19,30,33]. Climatic factors, including air temperature, wind speed, and precipitation, are derived from ERA5 monthly average products, and shortwave radiation is derived from TerraClimate. Figure 6a shows that temperature-related climatic factors mainly affect the ice breakup period, which is consistent with previous studies [19,29,34], and the completely frozen period of the ice. Precipitation is negatively correlated with FUS and positively correlated with FUE and has a strong influence on the completely frozen period of the ice. This may be because precipitation not only decreases the temperature [35,36] but also destroys the thin ice layer on the lake surface, which makes freezing start earlier and end later, and finally prolongs the freezing process [30]. Short-wave radiation has a strong effect on FUS and FUD. The higher the wind speed, the earlier the freezing event occurs, the later the breakup event occurs, and the longer the ice completely covers the lake. This may be because wind accelerates convection over the lake surface, bringing cold air above the lake surface, accelerating the freeze-up process, delaying the breakup process [30], and prolonging the ICD. Strong winds may also destroy the thin ice layer on the lake surface and prolong the freezing process [34,35,37]. Figure 6b,c show the correlations between lake ice phenology variables and lake-specific factors. Figure 6b shows that longitude and altitude have a strong influence on BUE, BUS, and FUE. This may be because the higher the altitude, the lower the air temperature [38,39], and the altitude of Central Asia is higher in the east and lower in the west, which delays BUS and BUE and advances FUE. Compared to BUE, BUS, and FUE, FUS depends more on latitude, lake area, and lake depth. This is because the temperature at high latitudes is lower than that at low latitudes, making FUS occur earlier. In addition, a larger lake area and a deeper average lake depth mean that more heat needs to be released during the freezing period, thus affecting the date of FUS [9,30,34,40]. Figure 6c shows that the lake area and the length of the lake shore have strong influences on BUD. Lakes with large areas and long shores may have thinner ice than other lakes, which may explain the negative correlation between BUD and lake area and shore length. FUD is negatively correlated with longitude and altitude. This is because the altitude in Central Asia is lower in the west and higher in the east, and the temperature is lower at higher altitudes than at lower altitudes.
However, the anomalous relationship between latitude, lake area, and FUD is difficult to explain, probably because of the high salinity of lakes in Central Asia. ICD has a negative correlation with latitude and a positive correlation with longitude and altitude. This may be because the elevation in Central Asia is higher in the south and lower in the north, and elevation may be a more important factor than latitude.

Figure 6. Heat map of the Spearman rank correlation between lake ice phenology variables and influencing factors: (a) describes the correlation between lake ice variables and climatic factors; (b,c) describe the correlation between lake ice variables and local factors. The size of the circle represents the absolute value of the correlation coefficient. Red represents a positive correlation and blue represents a negative correlation; *** represents significance at a confidence level of 0.01, ** at a confidence level of 0.05, and * at a confidence level of 0.1. Ta is the mean annual 2 m air temperature. Pre is the annual precipitation. Ti is the mean temperature of the period from December to May from the MOD11A1 product. Ws is the annual mean wind speed. Sr is the mean downward surface shortwave radiation. The avg_depth is the average depth of the lake. Shore_len is the shore length of the lake.
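The correlation analysis summarised in Figure 6 can be reproduced along the lines of the sketch below, which assumes a table with one row per lake and the column names listed; the Spearman coefficients and p-values are computed with SciPy, while the heat-map plotting itself is omitted.

```python
import pandas as pd
from scipy.stats import spearmanr

pheno_vars = ['FUS', 'FUE', 'BUS', 'BUE', 'FUD', 'ICD', 'BUD']
factors = ['Ta', 'Pre', 'Ti', 'Ws', 'Sr',
           'latitude', 'longitude', 'elevation', 'area', 'avg_depth', 'shore_len']

def correlation_table(df):
    """Spearman rank correlation between each phenology variable and each
    influencing factor; `df` has one row per lake (column names assumed)."""
    rows = []
    for p in pheno_vars:
        for f in factors:
            rho, pval = spearmanr(df[p], df[f], nan_policy='omit')
            rows.append({'variable': p, 'factor': f, 'rho': rho, 'p_value': pval})
    return pd.DataFrame(rows)
```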
Limitation and Prospection
The limitations of the study are listed below. First, the daily LST combination process (see Section 2.3.2) was unable to fill the data gaps caused by severe cloud contamination. Second, due to the lack of lake salinity data, using a threshold of 272 K to determine the freezing point of all lakes, instead of using a threshold based on the salinity of each lake, may lead to some uncertainties in extracting lake ice phenology. In addition, the merging of daytime LST and nighttime LST ignores the temperature variation and may introduce some additional uncertainties [29]. Moreover, lake salinity is an important factor affecting lake ice phenology [33]; due to the lack of salinity data, the analysis of the relationships between lake ice phenology and influencing factors (e.g., lake-specific factors) may not be comprehensive. Furthermore, anthropogenic activities affecting lake ice phenology were not addressed in this study. However, anthropogenic activities are also important factors affecting climate change, as anthropogenic global warming is best estimated at 1.07 °C [3], which has a significant impact on terrestrial systems [41,42]. Finally, the anomalous relationship between latitude, lake area, and FUD is difficult to explain, probably because of the high salinity of lakes in Central Asia.
As a sensitive indicator of regional climate change, the phenology of lake ice is still inadequately studied in some regions, especially in Central Asia. In addition, satellite products with long-term temporal continuity have relatively low spatial resolutions and are weak in monitoring small lakes, while satellites with higher spatial resolutions are not continuous in time. Further studies can be conducted in the following aspects: (1) using multiple satellite data sources to retrieve lake ice phenology for all lakes, including small lakes; (2) quantifying and evaluating how anthropogenic activities affect lake ice phenology variables; and (3) projecting future lake ice variations under different representative concentration pathways (RCPs).
Conclusions
In this study, we used a combined method applied to MOD11A1 and MYD11A1 products to obtain daily LST and classified water and ice pixels using temperature thresholds. Based on this approach, we extracted lake ice phenology for 53 lakes in Central Asia for 2002-2020, including FUS and BUE for 53 lakes and FUE and BUS for 38 lakes. The results show that the time series of water-ice components derived from the MODIS daily-combined LST dataset and the AVHRR and Landsat8 datasets are generally highly consistent.
In this study, lakes in Central Asia start to freeze from October to December. The mean FUS and BUE of the Central Asia lakes were 106.85 and 231.86, respectively. Trends in ice phenology for each lake showed strong regional differences. Lakes distributed along the mountains show overall delayed trends among all lake ice phenology variables. In contrast, lakes in southwestern Central Asia showed clear trends of advancing FUS and BUE. From 2002 to 2020, the average advancement rate of FUS was −0.29 days/year for the 53 lakes in Central Asia, the average delay rate of FUE was 0.24 days/year for the 38 lakes in Central Asia, the average advancement rate of BUS was −0.22 days/year for the 38 lakes in Central Asia and the average advancement rate of BUE was −0.25 days/year for the 53 lakes in Central Asia.
Lake ice phenology in Central Asia is influenced by climatic factors (temperature, precipitation, ice age temperature, wind speed, and shortwave radiation) and local factors (latitude, longitude, lake area, mean lake depth, shore length, and altitude). In general, temperature and wind speed have the most direct effects on lake ice phenology, and precipitation mainly affects FUD. Lakes at high elevations tend to have longer ICD. Lake areas mainly affect FUD and BUD.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
v3-fos-license
|
2023-08-03T15:40:33.792Z
|
2023-07-29T00:00:00.000
|
260413255
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/24/15/12157/pdf?version=1690620068",
"pdf_hash": "e95b832b52eef142b8eb9f4582d051e11de96a12",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:602",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "986281f4035fcae22283c0ee2ef4caf5e7a97047",
"year": 2023
}
|
pes2o/s2orc
|
Transcriptomic and Functional Analyses of Two Cadmium Hyper-Enriched Duckweed Strains Reveal Putative Cadmium Tolerance Mechanisms
Cadmium (Cd) is one of the most toxic metals in the environment and exerts deleterious effects on plant growth and production. Duckweed has been reported as a promising candidate for Cd phytoremediation. In this study, the growth, Cd enrichment, and antioxidant enzyme activity of duckweed were investigated. We found that both high-Cd-tolerance duckweed (HCD) and low-Cd-tolerance duckweed (LCD) strains exposed to Cd were hyper-enriched with Cd. To further explore the underlying molecular mechanisms, a genome-wide transcriptome analysis was performed. The results showed that the growth rate, chlorophyll content, and antioxidant enzyme activities of duckweed were significantly affected by Cd stress and differed between the two strains. In the genome-wide transcriptome analysis, the RNA-seq library generated 544,347,670 clean reads, and 1608 and 2045 differentially expressed genes were identified between HCD and LCD, respectively. The antioxidant system was significantly expressed during ribosomal biosynthesis in HCD but not in LCD. Fatty acid metabolism and ethanol production were significantly increased in LCD. Alpha-linolenic acid metabolism likely plays an important role in Cd detoxification in duckweed. These findings contribute to the understanding of Cd tolerance mechanisms in hyperaccumulator plants and lay the foundation for future phytoremediation studies.
Introduction
Cadmium (Cd) is a highly toxic metal that is widely found in the environment, and soil contamination with Cd has been reported worldwide [1]. In addition, heavy metal accumulation can reduce organismal productivity at toxic levels [2][3][4][5]. For example, Cd inhibits plant growth and development by binding to amino acids containing sulfhydryl groups and interfering with plant environmental homeostasis [6]. Compared with many other heavy metal compounds, Cd compounds are more soluble. Thus, Cd is easily absorbed and adsorbed by plants. It frequently accumulates in various plant parts, through which it enters the food chain [7,8]. It is easily enriched as it moves up the food chain, thus jeopardising higher levels of the biota [9][10][11]. Therefore, Cd contamination not only affects plant yield and quality but also poses a significant threat to human health through the food chain [12]. Humans who consume excessive Cd are at risk of developing conditions such as immune suppression, obstructive lung disease, emphysema, and permanent kidney damage. Duckweed is recognised as a promising species for Cd phytoremediation [46]. Zheng et al. [46] found that, in duckweed (Lemna minor) treated with 10 mg/L of Cd²⁺ for 7 d, Cd enrichment and Cd removal from the water column reached 2834.30 mg/kg and 82.50%, respectively. Although many physiological and biochemical studies have been conducted, the molecular mechanisms underlying Cd tolerance in Cd-hyperaccumulating duckweed remain largely unknown. Therefore, there is an urgent need for genome-wide studies on duckweed.
In this study, two duckweed strains with a Cd-hyper-enrichment capacity were used as experimental materials. Experiments on its growth indexes, antioxidant enzyme activities, and cadmium-accumulation-related indexes revealed significant genotypic differences between high-Cd-tolerance duckweed (HCD) and low-Cd-tolerance duckweed (LCD) in response to cadmium stress in terms of growth and development, photosynthesis, and antioxidant enzyme activity. Therefore, we hypothesised that these two genotypes would differ significantly in their genome-wide responses to Cd stress. In this study, we used bioinformatic tools and methods to analyse the duckweed transcriptome. Samples were treated with 0.5 mg/L of CdCl 2 for 12 h and used for high-throughput RNA-seq together with control samples. A comparative transcriptome analysis revealed tens of thousands of single-gene expression patterns and helped investigate genome-wide regulation in response to Cd stress. These results offer a theoretical framework for understanding the mechanism of Cd tolerance in duckweed as well as new information for discovering novel genes and genetically enhancing plant resistance to Cd stress.
Effect of Cd Treatment on Duckweed Growth Indicators
To compare visible differences between HCD and LCD under Cd stress, we observed the growth of the two strains at 12 h, 24 h, 3 d, and 7 d after the initiation of Cd treatment. The biomass of both HCD and LCD decreased, most notably at 3 and 7 d, and significant decreases in the growth rate were observed (Figure 1a,b). The differences in growth rates between the control and Cd-treated groups after 12 and 24 h of Cd stress were not significant, indicating that the effects of Cd toxicity on duckweed require a certain response time before the phenotype becomes apparent. The growth rate of LCD under Cd stress was significantly higher than that of HCD at 3 d (Figure 1c). This result indicates that there is great intra-species variation in Cd tolerance among duckweed strains.
After treatment, the chlorophyll content of duckweed in the Cd-treated group decreased to some extent compared to that in the control group. Leaves gradually turned yellow as Cd treatment time increased (Figure 1g,h). In HCD, chlorophyll content decreased significantly by 52.87% (0.20 mg/g) after 7 d (Figure 1d). In LCD, chlorophyll content significantly decreased after 3 d (Figure 1e). The results showed that chlorophyll content gradually decreased with an increasing Cd treatment time. The chlorophyll content of HCD was higher overall than that of LCD at different times of Cd stress and was significantly higher than that of the LCD strain at 3 d, up to 0.24 mg/g (Figure 1f).
Effect of Cd Stress on Cd Concentration and Removal Efficiency of Duckweed
To further investigate the Cd tolerance characteristics of HCD and LCD, the Cd concentration, bioconcentration factors (BCFs), and Cd removal efficiency were determined (Figure 2). The Cd concentration of HCD first increased, reaching a peak of 238.06 mg/kg (DW) at 24 h, and then gradually decreased with increasing Cd treatment time. In contrast, the Cd concentration of LCD gradually increased with increasing Cd treatment time and reached a peak of 127.63 mg/kg (DW) at 7 d (Figure 2a). The BCF is an important indicator used to measure the ability of plants to enrich pollutants. The BCF of HCD peaked (640.45) at 24 h and then gradually decreased with increasing Cd treatment time. The BCF of LCD gradually increased with increasing Cd treatment time from 12 h and peaked at 7 d (329.78) (Figure 2b). These results show that the BCFs of HCD and LCD changed differently after Cd treatment. The removal efficiencies of the two duckweed strains also differed: the HCD strain exhibited a removal efficiency significantly higher than that of the LCD strain, and the Cd removal efficiency of both strains increased with treatment time (Figure 2c). The results indicate that the removal efficiency of strain HCD was greater than that of strain LCD and that, when the strains are used for environmental remediation, Cd removal can be increased by extending the treatment time.
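The two enrichment indicators used here can be written out explicitly; the sketch below assumes the common definitions (BCF as the ratio of the Cd concentration in dry plant tissue to the Cd concentration in the exposure solution, and removal efficiency as the relative decrease of Cd in the water column), which may differ in detail from the authors' exact calculation.

```python
def bioconcentration_factor(cd_plant_mg_per_kg, cd_water_mg_per_l):
    """BCF: Cd concentration in dry plant tissue divided by the Cd
    concentration in the exposure solution (assumed definition)."""
    return cd_plant_mg_per_kg / cd_water_mg_per_l

def removal_efficiency(cd_initial_mg_per_l, cd_final_mg_per_l):
    """Percentage of Cd removed from the water column during treatment."""
    return 100.0 * (cd_initial_mg_per_l - cd_final_mg_per_l) / cd_initial_mg_per_l

# Illustration with values from the text: 238.06 mg/kg (DW) in HCD at 24 h and a
# nominal 0.5 mg/L exposure give a BCF of ~476; the reported BCF of 640.45 would
# correspond to using the residual water concentration, which is an assumption here.
print(bioconcentration_factor(238.06, 0.5))
```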
Effect of Cd Stress on Antioxidant Enzymes and Glutathione Sulfhydryltransferase of Duckweed
Figure 3a-i illustrates the antioxidant enzyme activities of the two duckweed strains. The POD, CAT, and SOD activities of both strains gradually increased with increasing duration of Cd stress. After 7 d of treatment, the POD, CAT, and SOD activities were significantly higher than those of the control (Figure 3a,b,d,e,g,h). The POD activity of the LCD strain was higher than that of the HCD strain (Figure 3f), which may suggest that LCD needs to produce more POD in response to Cd stress. Glutathione S-transferase (GST) is the main system for exogenous detoxification and cellular resistance to damage in plants [47][48][49]. An overall increase in the GST activity of both duckweed strains was observed under the 0.5 mg/L Cd treatment compared to the control (Figure 3j,k). The GST activity of the HCD strain peaked at 0.2528 U/g after 24 h of Cd stress (Figure 3j). LCD reached its highest activity (0.0786 U/g) at 7 d under Cd stress, which was only one-fourth of the highest value of the HCD strain. The GST activity of the HCD strain was higher than that of the LCD strain at all time points (Figure 3k). In Figure 3, bars indicate mean ± SE (n = 3); U/g, one enzyme activity unit, refers to 1 g of tissue that can convert 1 µmol of substrate within 1 min at the optimal temperature; asterisks indicate significant differences between Cd-treated and control samples at each time point (*: p < 0.05, **: p < 0.01).
Overview of the Transcriptome of Different Treatment Groups under Cd Treatment
To further understand the molecular mechanisms of Cd tolerance in duckweed, samples from two duckweed strains exposed to 0.5 mg/L of CdCl 2 for 12 h were subjected to a transcriptome analysis. After raw data filtering, a total of 544,347,670 clean reads were generated from all sample libraries. There were approximately 41,233,077-54,521,819 clean reads from each sample, with an average GC content of 54.18-55.00%. Additionally, the Q20 and Q30 values for HCD and LCD exceeded 97.26% and 89.59%, respectively. The rate of clean read mapping to the reference genome ranged from 52.13% to 59.18%, with a unique mapping rate of 46.64% to 53.03% (Table 1). All the biological replicates showed strong correlations (Figure 4a). The R-value was 0.91, indicating that the differences between the groups were greater than those within the groups (Figure 4b). The majority of the unigenes (45.89-64.79%) had differential expression levels between 0 and 5 ( Figure 4c).
Identification of Differentially Expressed Genes in Different Treatment Groups under Cd Treatment
To further investigate the molecular mechanisms of Cd²⁺ tolerance in the different strains, differentially expressed genes (DEGs) were identified over the course of Cd²⁺ treatment. As shown in Figure 5a, 1608 DEGs were found in HCD after Cd treatment compared to the control (Cd_HCD vs. CK_HCD), of which 761 were significantly up-regulated and 847 were significantly down-regulated. For the LCD strain, a total of 2045 DEGs were identified after Cd treatment versus the control (Cd_LCD vs. CK_LCD), of which 1347 were significantly up-regulated and 698 were significantly down-regulated (Figure 5b). Most obviously, there was a large difference in the number of up- vs. down-regulated genes (~2-fold difference) for HCD and LCD. HCD and LCD shared 632 DEGs after the Cd treatment (Figure 4d). A Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis showed that 16 and 23 metabolic pathways in the HCD and LCD samples, respectively, responded significantly to Cd stress. Alpha-linolenic acid (ALA) metabolism, phenylpropanoid biosynthesis, plant hormone signal transduction, and drug metabolism-cytochrome P450 pathways were significantly annotated in both duckweed strains (Tables S1 and S2). For example, in plant hormone signal transduction, 36 DEGs (6 up-regulated and 30 down-regulated) were significantly annotated in Cd_HCD vs. CK_HCD, and 33 DEGs (15 up-regulated and 18 down-regulated) in Cd_LCD vs. CK_LCD (Figure 5c). Therefore, signal transduction may play an important role in the response to Cd stress in duckweed. These findings further indicate that the functional annotation of unigenes could be conducive to exploring the underlying mechanism of genotypic differences in Cd uptake by duckweed.
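DEG calling of this kind is typically done by thresholding fold change and adjusted p-values from a tool such as DESeq2 or edgeR; the cut-offs in the sketch below (|log2 fold change| ≥ 1, adjusted p < 0.05) and the column names are assumptions, since the excerpt does not state the exact criteria used in this study.

```python
import pandas as pd

def classify_degs(de_table, lfc_cut=1.0, padj_cut=0.05):
    """Split a differential-expression table into up- and down-regulated gene
    sets. Expects 'log2FoldChange' and 'padj' columns (names and thresholds
    are assumed, not taken from the paper)."""
    sig = de_table[(de_table['padj'] < padj_cut) &
                   (de_table['log2FoldChange'].abs() >= lfc_cut)]
    up = sig[sig['log2FoldChange'] > 0]
    down = sig[sig['log2FoldChange'] < 0]
    return up, down

# Hypothetical usage: up_hcd, down_hcd = classify_degs(pd.read_csv('Cd_HCD_vs_CK_HCD.csv'))
```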
Functional Annotation of Unigenes and DEGs
To elucidate the potential biological functions of the DEGs, a gene ontology (GO) enrichment analysis was performed with a significance threshold of p < 0.05. The GO enrichment analysis showed that 54 ontology terms were significantly enriched in HCD and 40 were significantly enriched in LCD (Figure S1). To understand the functions of these DEGs, GO annotations were used to classify the predicted DEGs of duckweed under Cd stress into molecular function (MF), cellular component (CC), and biological process (BP) categories (Figure 6). In Figure 6, the dot colour indicates the Q-value (the smaller the Q-value, the closer the colour is to red), the dot size indicates the number of differential genes under each term, the rich factor is the ratio of differential genes in a functional entry to all genes in that entry, and only the top 30 GO terms with the highest enrichment are plotted. In addition, GO terms such as response to stimulus, vacuole, and monooxygenase activity, which may play key roles in abiotic stress, were ranked differently between the two strains (Figure 6, Figure S1).
Significantly Enriched Functions of Different Treatment Groups under Cd Treatment
Functionally enriched genes were detected (Figure 7). The Cd-tolerant HCD strain was mainly enriched in signal transduction mechanisms, secondary metabolite biosynthesis, transport and metabolism, carbohydrate transport and metabolism, and amino acid transport and metabolism. The Cd-sensitive LCD strain was mainly enriched in secondary metabolite biosynthesis, transport, and catabolism (Figure 7). These results indicate that signal transduction mechanisms, secondary metabolite biosynthesis, transport and metabolism, carbohydrate transport and metabolism, and amino acid transport and metabolism play important roles in Cd tolerance in HCD.
Response of Starch Metabolism Pathway and Cell Wall Biosynthesis Pathway to Cd Stress in Duckweed
We analysed the profiles of genes involved in starch and sucrose metabolism, cell wall synthesis, and other glycan degradation pathways. Phenylalanine ammonia-lyase (PAL), 4-coumarate-CoA ligase (4CL), and cinnamoyl-CoA reductase (CCR) were down-regulated under Cd stress. In addition, the expression of enzymes of the cellulose degradation pathway was down-regulated under Cd treatment in both duckweed strains (Figure 8).
Sulphur and Glutathione Metabolism in response to Cd Stress
Cd affected the expression of sulphur- and glutathione-metabolism-related genes in the two studied duckweed strains. In both strains, 3′-phosphoadenosine 5′-phosphosulfate synthase and adenylyl-sulphate reductase (glutathione) (APR) were up-regulated under Cd stress. Additionally, cysteine synthase (cysK) and glutathione peroxidase (GPX) were up-regulated under Cd stress (Figure 8).
Reactive Oxygen Species Metabolism Is Involved in High Cd Tolerance
The expression of MPV17 was up-regulated in both duckweed strains. In HCD, the genes encoding CAT and peroxiredoxin (PRDX) showed enhanced expression, whereas the expression of genes encoding antioxidant enzymes did not change in the LCD strain. In addition, the genes encoding 2-hydroxyacyl-CoA lyase (HACL1), acyl-CoA oxidase (ACOX), 2,4-dienoyl-CoA reductase (PDCR), and long-chain acyl-CoA synthetase (ACSL) showed enhanced expression in fatty acid oxidation (Figure 8). These results suggest that gene expression under Cd stress may differ between duckweed strains. In Figure 8, red arrows represent up-regulation, green arrows represent down-regulation, and yellow arrows represent mixed regulation; dotted lines indicate that genes whose expression did not change in a pathway, or that were not addressed in this study, were omitted, and metabolic fluxes indicated by black arrows were not addressed in this study.
Other Related Key Genes May Contribute to Cd Tolerance in Duckweed
Cd-tolerance-related genes were differentially expressed in response to Cd treatment in each strain (Figures 7 and 8; Tables S3 and S4). Carbonic anhydrase (CA), the solute carrier SLC26, the ABC transporters ABCB1 and ABCC2, and WRKY transcription factors were significantly up-regulated in HCD under Cd stress. In LCD, SLC4, SLC26, ABCA3, ABCC1, and ABCG2 were up-regulated, while WRKY was down-regulated. In addition, many chaperones were annotated in both LCD and HCD (Tables S3 and S4), suggesting that these chaperones are important in the response to Cd stress.
DNA Repair in Response to Cd Stress
To explore the possible molecular mechanisms mediating Cd tolerance in duckweed, we investigated the expression of the genes involved in replication, transcription, and translation ( Figure 9). Proliferating cell nuclear antigen (PCNA) expression was up-regulated in HCD but was unchanged in LCD. Flap endonuclease-1 (Fen1) was down-regulated in LCD. Additionally, the expression of DNA ligase 1, which is involved in DNA replication, was up-regulated in LCD.
RNA and Protein-Related Biological Processes in Response to Cd Stress
As shown in Figure 9, only A2 expression in the Pol I subunit of HCD was up-regulated under Cd stress. The expression of other genes in LCD remained unchanged. In LCD, the expression of the H/ACA ribonucleoprotein complex subunit 2 (NPH2) transcript associated with ribosome processing was down-regulated. In HCD, the expression of the exosome complex components CSL4 (Csl4) and Rrp43 was up-regulated. In LCD, polyadenylate-binding protein (PABP1), which prevents mRNA degradation, was down-regulated. In HCD, the 48S initiation complex and ribosome biosynthesis were up-regulated, whereas they were down-regulated or almost unchanged in LCD (Figure 9).
Alpha-Linolenic Acid Metabolism in Response to Cd Stress
We observed the differential expression of genes related to the ALA metabolic pathway in the two duckweed strains (Figure 10).
Discussion
Heavy metal pollution is a major environmental stress factor that affects plant growth and development [50]. The heavy metal Cd inhibits various physiological processes in plants, including growth, photosynthesis, chlorophyll content, and antioxidation [22]. There is a great intra-species variation in Cd tolerance in duckweed. Similar results have been reported by Chen et al. [20]. The chlorophyll content gradually decreased with increasing Cd treatment time (Figure 1d,e,g,h). This is in agreement with the findings of Szopiński et al. [51]. The chlorophyll content in HCD was higher than that in LCD at different Cd stress times (Figure 1f). In Thlaspi fendleri, Cd-sensitive strains have been reported to decrease the chlorophyll content to a greater extent than Cd-tolerant strains [52]. This may be because chlorophyll synthesis was less inhibited in the HCD strain than in the LCD strain [51,52]. The threshold value for Cd-hyper-enriched plants is typically 100 mg/kg (DW) [53]. The concentration of Cd in HCD increased first and then decreased with time. The later decrease was probably due to new plant growth at the lower Cd levels that resulted after Cd in the medium was rapidly taken up by HCD plants. Both HCD and LCD were Cd-hyper-enriched plants. The results showed that the removal efficiency of HCD was better than that of LCD and demonstrated that Cd removal could be increased by extending the treatment time when duckweed is used for environmental remediation.
A negative effect of Cd accumulation in plants is the production of excess reactive oxygen species (ROS) [54]. CAT, POD, and SOD scavenge and reduce oxidative damage caused by excess ROS, and are key enzymes used by various plants to adapt to the environment. Hence, they are key indicators of the antioxidant capacity of plants [55,56]. In a previous study on Cd-stressed wheat, antioxidant enzyme activity increased with increasing Cd stress duration, which is consistent with the results of the present study [57]. Glutathione S-transferase (GST) is the main system for exogenous detoxification and cellular resistance to damage in plants [47][48][49]. The GST activity of the HCD strain was higher than that of the LCD strain at different time points (Figure 3k). This is probably because HCD absorbed more cadmium than LCD early in the treatment; the concentration of Cd in the medium was reduced and HCD was subjected to lower Cd stress. These results indicate that GST may be the key factor contributing to antioxidation in HCD and that the HCD strain was able to produce more GST to alleviate Cd toxicity and cellular damage. Additionally, the antioxidant enzyme system of duckweed plays an important role in coping with Cd-induced oxidative stress.
Starch is a reserve that plants remobilize in response to abiotic stresses such as drought, high salinity, and harsh temperatures, releasing energy and sugars and thereby aiding in stress reduction [58]. Glycolysis and the tricarboxylic acid (TCA) cycle are the major pathways of starch metabolism and tend to be overexpressed, in part, under Cd stress, owing to a high energy demand (Figure 8). Excess alcohol synthesised during anaerobic respiration has toxic effects on plants [59]. The expression of ethanol biosynthesis genes was up-regulated in Cd-sensitive LCD, which might be partly responsible for the severe toxicity exhibited by this strain (Figure 8). Phenolic compounds (e.g., phenolic acids and flavonoids) are an important class of plant secondary metabolites that can scavenge ROS [60]. The phenylpropanoid biosynthetic pathway is activated under abiotic stress conditions (e.g., drought, heavy metals, and salinity), leading to the accumulation of various phenolic compounds [60]. In the present study, the expression of enzymes related to this pathway, such as PAL, 4CL, and CCR, was down-regulated under Cd stress. Similar results were reported in L. punctata 6001 [42]. The cell wall is thought to be a site of Cd binding and is involved in Cd accumulation in Sedum alfredii and rice [61,62]. In contrast, enzyme expression in the cellulose degradation pathway was down-regulated under Cd treatment in the two duckweed strains, suggesting that the duckweed cell wall may have been involved in Cd binding. Flavonoids protect plants against various biotic and abiotic stresses [63]. Flavonoid-related biosynthetic genes in HCD and LCD were down-regulated or remained constant, implying that duckweed flavonoid compounds did not function under Cd stress (Figure 8).
It has been shown that the expression of genes related to sulphur metabolism and glutathione metabolism is associated with Cd tolerance in plants [64]. 3′-phosphoadenosine 5′-phosphosulfate synthase and APR are involved in sulphate activation and its reduction to sulphide, and both enzyme genes were up-regulated under Cd stress (Figure 8). Similar results have been reported for Medicago sativa [64]. Sulphide is combined with O-acetylserine by the action of cysK to synthesise cysteine, and the expression of this enzyme was up-regulated according to the transcriptomic data. Glutathione is a reducing agent that helps remove ROS and is produced from glutathione disulphide (GSSG) by GPX and glutathione reductase (GSR) [65]. Glutathione peroxidase expression was up-regulated under Cd stress, implying that GSH and GSSG cycling may be enhanced to improve ROS scavenging in duckweed (Figure 8). The results showed that the two duckweed strains under Cd stress had similar expression levels of genes related to sulphur and glutathione metabolism. Glutathione is converted to R-S-GSH by glutathione S-transferase for Cd2+ chelation in duckweed, which further improves its tolerance to Cd (Figure 8).
Peroxisomes are metabolic organelles that are mainly involved in lipid metabolism, ether lipid synthesis, and ROS metabolism [66]. MPV17 is a protein that has been suggested to be involved in the metabolism of reactive oxygen species [67]. Wi et al. [68] found that the overexpression of the MPV17 gene in Arabidopsis increased resistance to stress. The expression of MPV17 was up-regulated in both duckweed strains, suggesting that MPV17 may be involved in Cd tolerance in duckweed. In HCD, the genes encoding CAT and PRDX also showed an enhanced expression, indicating that antioxidant enzymes may contribute to cell survival under Cd 2+ stress. The expression of the genes encoding antioxidant enzymes did not change in the LCD strain. In addition, in LCD, the genes encoding HACL1, ACOX, PDCR, and ACSL showed an enhanced expression during fatty acid oxidation, whereas their expression levels were not up-regulated in HCD (Figure 8). The results showed that different strains of duckweed responded differently to Cd stress, which may explain why HCD plants were damaged to a lesser extent than LCD plants.
Carbonic anhydrase is a ubiquitous metalloenzyme involved in respiration, calcification, and biosynthesis that readily binds metal ions at the active site [69]. Caricato et al. [70] found that CA activity and protein expression were enhanced in Mytilus galloprovincialis under Cd stress. This is in agreement with the results of the present study, where CA gene expression was significantly up-regulated in strain HCD under Cd stress and was much higher than that in strain LCD (Tables S3 and S4). Studies have shown that the solute carriers SLC4 and SLC26 superfamilies play an important role in the process of oxidative stress [71]. In this study, SLC26 expression in HCD cells was up-regulated in response to Cd stress. In LCD, both SLC4 and SLC26 were up-regulated. It has been shown that the ATP-binding cassette transporter (ABC transporter) was associated with Cd tolerance in a plant [72]. In the present study, the expression of ABC transporter protein superfamily genes was up-regulated under Cd stress in both duckweed strains. Among these, the expression of ABCB1 and ABCC2 was up-regulated in HCD. Similar results have been reported for Ophiopogon japonicas [73]. In addition, the expression of ABCA3, ABCC1, and ABCG2 was up-regulated in LCD (Figure 8; Tables S3 and S4). Heat shock proteins (HSPs) are a class of stress proteins synthesised by organisms exposed to abiotic stresses that have been shown to protect cells from oxidative damage and apoptosis triggered by Cd exposure [74]. HSPs were found to play a significant role in protecting plants from adverse environments [75]. HSP90 and HSP70 overexpression contribute to the prevention of Cd toxicity [76]. In total, 24 and 13 genes were up-regulated in HCD and LCD groups, respectively. The transcription factor (TF) WRKY is known to respond to different biotic and abiotic stresses [77]. In addition, WRKY overexpression was found to significantly promote the uptake and accumulation of Cd in poplars [78]. In our study, WRKY22 and WRKY29 were up-regulated in the HCD group and down-regulated in LCD, which indicated that WRKY TFs may play a unique role in Cd tolerance. Therefore, these may be important factors for the better adaptation of the HCD strain to Cd stress.
Cadmium is known to affect plant cell proliferation and differentiation, cell cycle progression, DNA repair, DNA synthesis, apoptosis, and other BPs in a variety of ways [18]. Therefore, Cd stress is one of the factors contributing to genomic instability in duckweed [79]. DNA repair mainly includes mismatch repair (MMR), nucleotide excision repair (NER), and base excision repair (BER) [80]. In the present study, the relevant unigenes involved in the three repair systems were significantly altered in response to Cd stress. The upregulation of PCNA, a key gene in DNA repair and cell cycle regulation, contributes to the deleterious effects of Cd exposure [81]. Proliferating cell nuclear antigen expression was up-regulated in the HCD group but remained unchanged in the LCD group (Figure 9). The expression of flap endonuclease-1 (Fen1), a flap endonuclease involved in cutting the base of single-stranded flaps, was down-regulated in LCD [82]. In addition, the expression of DNA ligase 1, which is involved in DNA replication, was up-regulated in the LCD strain, suggesting a possible mechanism for coping with the oxidative damage under Cd stress ( Figure 9). These results indicate that DNA replication and repair systems appear to have similar responses to Cd 2+ , suggesting that the maintenance of genomic stability plays a central role in resistance to a high mutagenicity triggered by Cd 2+ in plants. This may be because the two duckweed strains respond differently to Cd 2+ toxicity.
RNA- and protein-related BPs are seriously threatened due to the cytotoxicity of Cd2+ [83]. Gene expression in the rRNA, mRNA, and tRNA synthesis pathways associated with protein synthesis was altered under Cd stress (Figure 9). The rRNA, mRNA, and tRNA precursors are synthesised by RNA polymerases (Pol) I, II, and III, respectively. As shown in Figure 9, only A2 expression in the Pol I subunit of the HCD strain was up-regulated under Cd stress. The expression of other genes in the LCD strain remained unchanged. Based on the response of RNA Pol I, the maturation and translocation of rRNA in HCD appeared to be induced by Cd2+, ultimately favouring ribosome biosynthesis. Similar results were reported by Lv et al. [83] for Landoltia punctata. The expression of NPH2 transcripts related to ribosome processing was down-regulated in LCD (Figure 9). According to the RNA Pol II and III results, the genes involved in mRNA and tRNA synthesis remained essentially unchanged. The results showed that rRNA biosynthesis was enhanced in HCD, which facilitated the further expression of the duckweed genome. Considering that significant Cd-induced oxidative damage was found in LCD, we suggest that rRNA biosynthesis-related genes may be one of the factors of Cd tolerance in duckweed.
However, there may be a complex mechanism underlying the Cd2+ stress response during mRNA splicing and degradation. In eukaryotes, introns in pre-mRNAs are removed by spliceosomes. In this study, the transcript levels of DEGs involving spliceosomes were up-regulated in both duckweed strains after Cd2+ treatment (Figure 9). In addition, mRNA degradation in eukaryotes occurs via two mechanisms: first, hydrolytic shortening of the poly(A) tail, followed by 5′-end decapping and 5′→3′ exonuclease degradation; second, hydrolysis of the poly(A) tail, followed by 3′→5′ degradation by the exosome complex. HCD and LCD showed different response patterns for RNA degradation. In the HCD group, Csl4 and Rrp43 gene expression was up-regulated. In LCD, PABP1, which prevents mRNA degradation, was down-regulated. The results show that RNA degradation may be induced by the Cd treatment in both duckweed strains. The 48S initiation complex, ribosomes, and aminoacyl-tRNA, including modified tRNA, are the basic components of cellular protein factories. Protein production and ubiquitin-mediated protein hydrolysis change in response to Cd stress [83]. In HCD, the 48S initiation complex and ribosome biosynthesis were up-regulated, whereas they were down-regulated or almost unchanged in LCD. The ubiquitin 26S proteasome system plays an important role in hormone signalling, transcriptional regulation, and plant response to environmental challenges [84]. In HCD, the expression of APC/C, Cullin-Rbx E3, and single RING-finger-type E3 was down-regulated in response to Cd stress, except for the expression of the ubiquitin-conjugating enzyme E2, which was up-regulated. LCD showed different response mechanisms, indicating that Cd stress has its own specificity for the response mechanisms of Cd-tolerant and Cd-sensitive strains.
The omega-3 polyunsaturated fatty acid ALA is extracted from plant sources and has been shown to have anti-inflammatory and antioxidant activities [85]. Oral ALA was demonstrated to prevent Cd-induced oxidative stress, neuroinflammation, and neurodegeneration in the mouse brain [85]. In this study, the expression of the ALA metabolic pathway was largely down-regulated in Cd-tolerant HCD, suggesting that HCD may have more ALA and thus be better equipped to cope with Cd-induced oxidative stress compared to LCD. Conversely, the ALA metabolic pathway of Cd-sensitive LCD was largely up-regulated, and more ALA may have been consumed through this pathway. These results suggest that α-linolenic acid metabolism may be involved in the high Cd tolerance of duckweed. An alternative explanation for LCD is that greater oxidative damage caused a greater need to metabolize damaged lipids.
Although we found that many DEGs may be related to duckweed Cd tolerance, differences in read numbers, differences in rRNA expression, and uncertainty in RNA preparation can distort the ratios used to calculate differential expression, and TPM values cannot fully correct for these effects [86]. Therefore, it is necessary to verify the function of candidate genes through gene overexpression and gene knockout in future studies. The results of this study lay a foundation for the study of plant Cd tolerance.
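As a point of reference for the normalization caveat above, the sketch below shows the standard TPM calculation on hypothetical counts; it is illustrative only, is not the provider's pipeline, and the gene lengths and counts are invented.

```python
# Standard TPM calculation on hypothetical counts; illustrative only.
# It shows why unequal read numbers and rRNA content can distort ratios:
# TPM rescales within each library, so compositional shifts carry over.
def tpm(counts, lengths_bp):
    """counts: raw read counts per gene; lengths_bp: gene lengths in bp."""
    rpk = [c / (l / 1000.0) for c, l in zip(counts, lengths_bp)]  # reads per kilobase
    per_million = sum(rpk) / 1e6                                  # library scaling factor
    return [x / per_million for x in rpk]

counts = [500, 1200, 80]       # hypothetical genes A, B, C
lengths = [1500, 3000, 800]    # gene lengths in bp (hypothetical)
print([round(v, 1) for v in tpm(counts, lengths)])  # TPM values sum to 1e6
```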
Isolation and Culture of Duckweed
In this study, two duckweed strains with a Cd-hyper-enrichment capacity were used as experimental materials. Two duckweed strains, Lemna minor 0009 (HCD, a high-Cd-tolerance cultivar) and Lemna minor 0010 (LCD, a low-Cd-tolerance cultivar), were obtained from the Duckweed Germplasm Bank of the College of Life Sciences at Guizhou University. Before experimental treatment, a Hoagland's culture solution (containing 1.5% sucrose) was used to pre-culture the duckweed strains [87]. The duckweed plants were cultured at 25 °C under a 16/8 h light-dark cycle and light intensity of 5000 lx for 7 d. After the preculture, duckweed plants with good growth characteristics were selected for subsequent experimental treatments.
Experimental Design
For each strain, 1.2 g (fresh weight) of HCD or LCD was placed in a 13 × 8.2 × 5.3 cm box. To investigate the Cd tolerance mechanism of duckweed, a 0.5 mg·L−1 Cd2+ solution was prepared from CdCl2 (Aladdin, Shanghai, China) and used to treat the duckweed. HCD and LCD were treated with 0.5 mg/L of Cd2+ for different times at 25 °C under a 16/8 h light-dark cycle, 75% humidity, and light intensity of 5000 lx. Three biological replicates were prepared per treatment. The groups were distinguished as follows: HCD control group (CK_HCD, no Cd2+ added); HCD treatment group (Cd_HCD, 0.5 mg/L of Cd2+ added); LCD control group (CK_LCD, no Cd2+ added); and LCD treatment group (Cd_LCD, 0.5 mg/L of Cd2+ added) (Figure 11). Duckweed material was collected at 0 h, 12 h, 3 d, and 7 d to determine plant growth, chlorophyll content, Cd concentration, and antioxidant enzyme activity. The collected samples were immediately frozen in liquid nitrogen and kept at −80 °C for RNA-seq.
Determination of Growth Rate
Duckweed samples were collected at 12 h, 24 h, 3 d, and 7 d. After the treated duckweed samples were harvested, they were rinsed with ultrapure water and blotted using filter paper sheets. The fresh weights of duckweed samples were measured and recorded. Three biological replicates were used for each treatment, and each sample was cultivated in a separate box. The growth rate (GR) of duckweed was calculated according to Formula (1) [44]:
GR = W/T = (W7 − W0)/T (1)
where GR is the growth rate in g/d, W is the change in the fresh weight of duckweed after different treatment times (g), T is the treatment period (d), W0 is the fresh weight on the first day (g), and W7 is the fresh weight on day 7 (g).
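A minimal sketch of Formula (1) in code, using hypothetical fresh weights; it only restates the arithmetic given above.

```python
# Growth rate, Formula (1): GR = (W7 - W0) / T, in g/d.
def growth_rate(w0_g, w7_g, period_d):
    """Return the growth rate (g/d) from initial and final fresh weights."""
    return (w7_g - w0_g) / period_d

# Hypothetical example: 1.2 g initial fresh weight, 2.4 g after 7 d.
print(f"GR = {growth_rate(1.2, 2.4, 7.0):.3f} g/d")  # ~0.171 g/d
```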
Determination of Chlorophyll Content
After aspirating the water on the surface of the duckweed, 0.1 g of the sample was weighed into a 10 mL centrifuge tube and placed in a −20 °C freezer for 1 h. The tube was then removed, 2 mL of 95% ethanol preheated to 50 °C was added, and the tube was shaken thoroughly and placed in the dark at room temperature for 3 h. The supernatants were then collected. Absorbances of the chlorophyll pigments in the collected supernatants were measured at 663 and 645 nm using a full-wavelength Multiskan microplate photometer (Thermo Scientific Multiskan FC, Shanghai, China), and the chlorophyll content was calculated according to Equations (2)-(4).
where Chl a is the concentration of chlorophyll a (mg/L), Chl b is the concentration of chlorophyll b (mg/L), Chl is the total chlorophyll concentration (mg/L), A663 is the absorbance of the chlorophyll solution at 663 nm, and A645 is the absorbance of the chlorophyll solution at 645 nm. The chlorophyll concentrations in the extracts were then converted to chlorophyll content per gram of fresh leaves (mg/g).
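Because Equations (2)-(4) are not reproduced in the extracted text, the sketch below uses the widely cited Arnon-type coefficients for readings at 663 and 645 nm as an assumption; the paper's actual coefficients (e.g., for 95% ethanol extracts) may differ, so this is for illustration only.

```python
# Hedged sketch: chlorophyll content from A663/A645 readings.
# The Arnon-type coefficients are an ASSUMPTION standing in for the
# paper's Equations (2)-(4), which are missing from the extracted text.
def chlorophyll_mg_per_g(a663, a645, extract_volume_ml=2.0, fresh_weight_g=0.1):
    chl_a = 12.7 * a663 - 2.69 * a645      # mg/L in the extract (assumed coefficients)
    chl_b = 22.9 * a645 - 4.68 * a663      # mg/L in the extract (assumed coefficients)
    chl_total = chl_a + chl_b              # mg/L in the extract
    # Convert mg/L in the extract to mg per g of fresh tissue.
    factor = (extract_volume_ml / 1000.0) / fresh_weight_g
    return {"Chl a": chl_a * factor, "Chl b": chl_b * factor, "Total": chl_total * factor}

# Hypothetical absorbances; returned values are in mg/g fresh weight.
print(chlorophyll_mg_per_g(a663=0.52, a645=0.21))
```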
Determination of Cd Content in Duckweed Samples and Culture Media
The duckweed was rinsed sequentially with flowing tap water and ultrapure water, and the fresh weight was recorded after blotting dry with filter paper. The fresh duckweed samples were then placed in an oven at 60 °C to dry overnight until their weight was constant. Samples were ground into a powder and placed in test tubes for measurement. First, 0.1 g of sample powder was weighed into a digestion tube, and 2 mL of concentrated nitric acid was added and left overnight. Then, 4 mL of concentrated nitric acid was added and mixed thoroughly. Digestion progressed at 280 °C for 4 h (GB/T 23739-2009). A blank control was used for each digestion stage to eliminate possible errors. After digestion, the remaining digestion solution was cooled to room temperature in a digestion tube and washed with deionised water. Then, the volume was fixed at 50 mL for the analysis. For each treatment, 50 mL of the liquid medium was obtained and centrifuged at 3500 rpm for 10 min, and the supernatant was placed in a refrigerator at 4 °C for measurement. A flame atomic absorption spectrophotometer (Analytik Jena AG NovaAA 400P, Jena, Germany) was used to detect Cd [88,89].
The Cd concentration in duckweed was calculated according to Formula (5) [90], and the removal efficiency of Cd from water by duckweed was calculated according to Formula (6). Bioconcentration factors (BCFs), the ratios of heavy metal concentrations in plant tissues to those in the aqueous environment, are commonly used to evaluate the accumulation trends of heavy metals in organisms [91]; the BCF was calculated according to Formula (7).
M = (Ct − C0) × V/m (5)
where M is the Cd content in the sample per unit weight (mg/kg), Ct is the Cd content in the sample digestion solution (mg/L), C0 is the Cd content in the blank digestion solution (mg/L), V is the total volume of the sample digestion solution (mL), and m is the total amount of dry powder weighed during digestion (g).
R = (Ci − Cf)/Ci × 100% (6)
where R is the removal efficiency of Cd from water by duckweed, Ci is the initial Cd concentration (mg/L), and Cf is the residual Cd concentration after treatment (mg/L).
RBCF = Cp/Ch (7)
where RBCF is the bioconcentration factor, Cp is the concentration of Cd in the plant tissues (mg/kg), and Ch is the final concentration of Cd in the culture solution (mg/L).
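A minimal sketch of Formulas (5)-(7) with hypothetical numbers; it only restates the arithmetic implied by the variable definitions above.

```python
# Formulas (5)-(7): tissue Cd content, removal efficiency, and BCF.
def cd_content_mg_per_kg(ct_mg_l, c0_mg_l, volume_ml, dry_mass_g):
    """Formula (5): M = (Ct - C0) * V / m, evaluated here with V in L and m in kg."""
    return (ct_mg_l - c0_mg_l) * (volume_ml / 1000.0) / (dry_mass_g / 1000.0)

def removal_efficiency_pct(ci_mg_l, cf_mg_l):
    """Formula (6): R = (Ci - Cf) / Ci * 100 %."""
    return (ci_mg_l - cf_mg_l) / ci_mg_l * 100.0

def bioconcentration_factor(cp_mg_kg, ch_mg_l):
    """Formula (7): BCF = Cp / Ch."""
    return cp_mg_kg / ch_mg_l

# Hypothetical example values:
m = cd_content_mg_per_kg(ct_mg_l=0.45, c0_mg_l=0.01, volume_ml=50.0, dry_mass_g=0.1)
r = removal_efficiency_pct(ci_mg_l=0.5, cf_mg_l=0.1)
bcf = bioconcentration_factor(cp_mg_kg=m, ch_mg_l=0.1)
print(f"M = {m:.0f} mg/kg, R = {r:.0f} %, BCF = {bcf:.0f}")
```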
Determination of Antioxidant Enzyme Activity and Glutathione S-Transferase Activity
To prevent enzyme inactivation, we weighed 0.1 g of duckweed tissue that had been frozen in liquid nitrogen. Then, 1 mL of the extraction solution was added, and the material was homogenised in an ice bath. The supernatant was collected after centrifugation at 8000 rpm for 10 min at 4 °C and put on ice for measurement [92,93]. Superoxide dismutase (SOD, EC 1.15.1.1), peroxidase (POD, EC 1.11.1.7), catalase (CAT, EC 1.11.1.6), and glutathione S-transferase (GST, EC 2.5.1.18) activities were measured in 1 mL of the supernatant using specific kits according to the manufacturer's instructions (Solarbio, Beijing, China).
RNA Extraction and cDNA Library Construction
For the transcriptome analysis, HCD or LCD was treated with 0 and 0.5 mg/L of Cd2+ for 12 h, and duckweed samples were collected for total RNA extraction. Plant tissues were frozen in liquid nitrogen. Total RNA was extracted using a Total RNA Extractor (Trizol) kit (Sangon Biotech), and RNA concentration was measured using a Qubit 2.0 RNA assay kit (Life). RNA integrity and genomic contamination were checked by agarose gel electrophoresis (PCR instrument: T100™ Thermal Cycler). Using the 3′ polyA structure of messenger RNA and related molecular biology techniques, we processed the complete total RNA of 12 Lemna minor samples, including mRNA isolation, fragmentation (Hieff NGS™ MaxUp Dual-mode mRNA Library Prep Kit for Illumina®, YEASEN), double-stranded cDNA synthesis (low-temperature centrifuge, Thermo Scientific Sorvall Legend Micro 21R), cDNA fragment end modification, magnetic bead purification and fragment size selection (Hieff NGS® DNA Selection Beads, YEASEN), library amplification, and other steps. The recovered DNA was accurately quantified using a Qubit DNA Assay Kit to facilitate mixing in equal amounts at a 1:1 ratio for subsequent sequencing.
After testing and quality control, we obtained sequencing libraries suitable for paired-end 2 × 150 bp sequencing. Raw image data files were converted into raw sequenced reads with a CASAVA Base Calling analysis. A quality assessment of the raw sequenced data was performed using FastQC software. Relatively accurate clean reads were obtained with quality control using Trimmomatic software. The Lemna minor genome was used as the reference sequence (genome ID: 27408). After QC, the clean reads were aligned to the reference genome using HISAT2, and the alignment results were summarized using RSeQC. Samples treated with different Cd concentrations were collected and sent to Sangon Biotech (Shanghai) Co., Ltd. (Shanghai, China), where library construction and RNA-seq were performed for each sample separately on an Illumina HiSeq™ platform.
RNA-seq Data Processing
Raw reads of the transcriptome dataset were processed by Sangon Biotech (Shanghai) Co., Ltd. The quality-controlled sequences were mapped to the Lemna minor reference genome using HISAT2, and the alignment results were summarized using RSeQC. Gene function annotations were obtained from the following databases: Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), homologous-protein-clustering NCBI COG/KOG, UniProt, Ensembl, BioMart, protein-family PFAM, CDD, STRING, NCBI NT, and NCBI NR.
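The QC, trimming, and alignment steps described above can be scripted; the sketch below chains FastQC, Trimmomatic, and HISAT2 through Python's subprocess module. The file names, adapter path, thread count, index prefix, and trimming options are hypothetical illustrations of common usage, not the sequencing provider's exact pipeline.

```python
# Hedged sketch of the QC/trimming/alignment steps; all paths, the index
# prefix, and the option values are illustrative assumptions, and the
# tools are assumed to be installed and available on PATH.
import os
import subprocess

R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"   # hypothetical inputs
INDEX = "lemna_minor_index"                            # hypothetical HISAT2 index prefix

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

os.makedirs("qc_raw", exist_ok=True)

# 1. Raw-read quality assessment with FastQC.
run(["fastqc", "-o", "qc_raw", R1, R2])

# 2. Adapter/quality trimming with Trimmomatic in paired-end mode.
run(["trimmomatic", "PE", R1, R2,
     "R1_paired.fq.gz", "R1_unpaired.fq.gz",
     "R2_paired.fq.gz", "R2_unpaired.fq.gz",
     "ILLUMINACLIP:adapters.fa:2:30:10", "SLIDINGWINDOW:4:20", "MINLEN:36"])

# 3. Alignment of clean reads to the reference genome with HISAT2.
run(["hisat2", "-p", "8", "-x", INDEX,
     "-1", "R1_paired.fq.gz", "-2", "R2_paired.fq.gz",
     "-S", "sample.sam"])

# Downstream, the SAM file would be sorted/indexed and summarized
# (e.g., with RSeQC) before counting and differential expression analysis.
```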
Statistical Analysis
Statistical analyses were performed using IBM SPSS Statistics 26 and GraphPad Prism software. p-Values were calculated with a one-way analysis of variance (ANOVA), and the multiple t-test method was used for analytical comparison to determine the significance of differences between treatment means (p < 0.05). Data are means ± SE, replicated three times.
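A minimal sketch of the described analysis (one-way ANOVA followed by Holm-Sidak-corrected pairwise comparisons) using SciPy and statsmodels; the group names match the experimental design, but the replicate values are hypothetical placeholders.

```python
# One-way ANOVA followed by Holm-Sidak-corrected pairwise t-tests.
# Replicate values are hypothetical placeholders (three per group).
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {
    "CK_HCD": [1.00, 0.95, 1.05],
    "Cd_HCD": [0.62, 0.70, 0.66],
    "CK_LCD": [1.02, 0.98, 1.00],
    "Cd_LCD": [0.45, 0.50, 0.48],
}

# Overall one-way ANOVA across the four groups.
f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_overall:.4g}")

# Pairwise t-tests with Holm-Sidak correction for multiple comparisons.
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}, significant = {sig}")
```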
Conclusions
Our physiological and transcriptome investigations of two duckweed strains demonstrated significant differential effects of Cd stress on growth rate and chlorophyll content. A significant intra-species variation was observed in Cd tolerance and Cd removal efficiency; specifically, HCD showed a significantly higher removal efficiency than LCD. The antioxidant enzyme systems of the two duckweed strains were also affected differently under Cd stress, and greater Cd-induced oxidative damage was observed in LCD. GST activity was higher in the HCD strain than in the LCD strain at all time points. In addition, based on the transcriptomic analysis, there were 1608 and 2045 DEGs in the HCD and LCD duckweed strains, respectively, under Cd stress, with significant genotypic differences. Starch metabolism, sulphur metabolism, and ROS pathways were activated, and the expression of glutathione-synthesis-related genes was up-regulated to eliminate the effects of Cd-induced ROS. Gene expression related to the genetic central dogma was triggered within 12 h of Cd exposure. Genes involved in Cd uptake and transport, such as ABC transporter proteins, were actively expressed under Cd stress. Owing to time constraints, we did not perform functional validation of the candidate genes identified here. Nevertheless, these results improve our understanding of the potential mechanisms underlying Cd tolerance in duckweed at the transcriptomic level and lay the foundation for future research on Cd-tolerance-related pathways and genes.
Data Availability Statement: The data presented in this study are available on request from the corresponding author, subject to privacy or ethical restrictions.
Investigation of the cytokine response to NF-κB decoy oligonucleotide coated polysaccharide based nanoparticles in rheumatoid arthritis in vitro models
Introduction The transcription factor nuclear factor-kappa B (NF-κB) is highly involved in regulation of a number of cellular processes, including production of inflammatory mediators. Thus, this transcription factor plays a role in pathology of many diseases, including rheumatoid arthritis, an autoimmune disease hallmarked by an imbalance of pro and anti-inflammatory cytokines. Small nucleic acids with sequences that mimic the native binding site of NF-κB have been proposed as treatment options for RA; however due to low cellular penetration and a high degree of instability, clinical applications of these therapeutics have been limited. Methods Here, we describe the use of N-trimethyl chitosan-polysialic acid (PSA-TMC) nanoparticles coated with decoy oligodeoxynucleotides (ODNs) specific to transcription factor NF-κB (PSA-TMC-ODN) as a method to enhance the stability of the nucleic acids and facilitate increased cellular penetration. In addition to decoy ODN, PSA-TMC nanoparticles were loaded with RA therapeutic methotrexate (MTX), to assess the anti-inflammatory efficacy of a combination therapy approach. Two different in vitro models, a cell line based model as well as a primary RA cell model were used to investigate anti-inflammatory activity. One way ANOVA followed by Holm-Sidak stepdown comparisons was used to determine statistical significance. Results In general, free ODN did not significantly affect secretion of pro-inflammatory cytokines interleukin-6 (IL-6) and interleukin-8, (IL-8) while free MTX had variable efficacy. However, PSA-TMC-ODN and PSA-TMC-ODN-MTX resulted in significant decreases in the inflammatory mediators IL-6 and IL-8 in both cell models. In addition, PSA-TMC exhibited sufficient cellular uptake, as observed through fluorescence microscopy. Conclusions These results support our previous findings that PSA-TMC nanoparticles are an effective delivery vehicle for small nucleic acids, and effectively alter the pro-inflammatory state characteristic of RA.
Introduction
Rheumatoid arthritis (RA) is an autoimmune disease characterized by inflammation of the synovial tissue of joints. Over time, the infiltration of immune cells to the synovial lining leads to hyperplasia, increased vascular growth, and formation of a tumor-like tissue known as the pannus [1]. The physiology of a chronic inflammatory state eventually results in cartilage degradation and bone resorption. An imbalance of proinflammatory and antiinflammatory cytokines contributes to the state of chronic inflammation. Briefly, the levels of anti-inflammatory cytokines (interleukin (IL)-4, IL-10, and IL-13) present in the synovium are too low to combat the effects of proinflammatory cytokines (tissue necrosis factor alpha (TNFα), IL-1, IL-6, and IL-8) [2]. Of the cells present in the RA synovial lining, "macrophage-like" cells and activated synovial fibroblasts are accepted as the primary mediators of the proinflammatory/anti-inflammatory imbalance [3,4].
Nuclear factor kappa-light chain enhancer of activated B cells (NF-κB) is a transcription factor involved in the regulation of a variety of cellular processes, including growth, apoptosis, and inflammatory and immune responses [5]. NF-κB-dependent gene expression is known to play a critical role in the observed cytokine imbalance, as well as to contribute to increased inflammation in RA [6,7]. Under normal conditions, NF-κB is sequestered in the cytoplasm by means of a bound inhibitor known as inhibitor of NF-κB (IκB). External stimulation from inflammatory mediators, including IL-1β, leads to a signaling cascade that results in phosphorylation of the inhibitor, followed by dissociation of the NF-κB/IκB complex and subsequent nuclear translocation of NF-κB. Once inside the nucleus, NF-κB initiates transcription of proinflammatory cytokines, including IL-6 and IL-8, two cytokines highly involved in regulating inflammation in RA. IL-6 and IL-8 both possess NF-κB binding sites on their promotor regions, indicating they are highly regulated by NF-κB [8]. Transcription factor decoy oligonucleotides (ODNs) have the potential to reduce inflammation in RA by binding to NF-κB in the cytoplasm, preventing nuclear translocation, and mitigating transcription of proinflammatory proteins. The mechanism of decoy ODN is illustrated in Fig. 1.
Transcription factor decoy ODNs mimic the native DNA binding site of the transcription factor, but are only ~20 base pairs long and do not encode any genes. NF-κB decoy ODNs are desirable drug candidates because they provide a way to selectively regulate specific genes [9]. A number of reviews herald decoy ODNs as a potential future treatment for a variety of pathologies, including inflammatory and autoimmune disorders, such as RA [9][10][11][12]. However, despite promising potential for treatment, applications have been limited by low cellular penetration and a lack of stability of the nucleotides, which combined result in low overall bioavailability [13,14]. To overcome stability problems, chemical modifications such as phosphorothioate or methyl phosphonate are often applied to the nucleotide backbones [15]. While these modifications enhance stability, they do not necessarily lead to increased delivery efficiency, resulting in the need for a high dose and frequently repeated delivery. This is not a sustainable method for delivery, as phosphorothioate nucleic acids have been shown to have a concentration-dependent toxicity [16,17].
Methods such as viral vectors, cationic lipid formulations, and, more recently, cationic polymer formulations exist to overcome the barriers to nucleic acid delivery. In general, drug delivery systems for nucleic acids must have the following attributes: biocompatibility and biodegradability, reticuloendothelial system (RES) avoidance, nonimmunogenicity, cellular uptake capability, and cell or tissue specificity [18]. Viral vectors are associated with major drawbacks, including viral-induced immunogenicity, toxicity, mutation of the nucleic acid of interest with the viral DNA, and the potential for inactivation of the gene of interest due to recombination [19]. Lipofectamine reagents, marketed by Life Technologies (Carlsbad, CA, USA), have become the most referenced cationic lipidbased transfection reagent, with the claim of increased efficiency over other available reagents. However, these compounds often exhibit cytotoxicity and are prone to accumulation within the liver in vivo, leading to significant nucleic acid payload degradation [20,21]. Recent research has focused on the alternative use of cationic polymers for nucleic acid delivery. Poly(ethyleneimine) (PEI) is one of the most widely used cationic polymers for nucleic acid delivery; however, PEI exhibits considerable toxicity toward a variety of mammalian cells [22,23]. Delivery systems for RA, where the ultimate goal is to reduce inflammation, require materials that do not contribute to inflammation or the immune response and that exhibit low levels of toxicity. Therefore, an alternative delivery method to viral vectors, cationic lipids, and synthetic cationic polymers is needed.
We have recently described a nanoparticle system based on two natural polymers, N-trimethyl chitosan (TMC) and polysialic acid (PSA). This nanoparticle system is noncytotoxic, is nonimmunogenic, and has been shown to effectively deliver encapsulated disease-modifying anti-rheumatic drugs (DMARDs) and surface-coated NF-κB decoy ODNs when applied to in vitro models of RA and cystic fibrosis (CF), respectively [24,25]. In this manuscript, we report the use of PSA-TMC nanoparticles (NP) as a delivery system to combine treatment of a DMARD, methotrexate (MTX), and NF-κB decoy ODN.
(Fig. 1 legend: Proinflammatory cytokines, including IL-1β, activate the cell signaling pathway associated with NF-κB, including activation of IκK, phosphorylation and inactivation of IκB, and translocation of NF-κB to the nucleus. NF-κB decoy ODNs can prevent translocation of the transcription factor, as well as subsequent transcription of NF-κB-dependent genes. IκB inhibitor of NF-κB, IL interleukin, NF-κB nuclear factor kappa-light chain enhancer of activated B cells.)
Cell culture
SW982 cells were obtained from ATCC (Manassas, VA, USA) and grown in Dulbecco's modified Eagle's medium (DMEM; Fisher Scientific, Pittsburgh, PA, USA) supplemented with 10 % fetal bovine serum (FBS; Atlanta Biologicals, Atlanta, GA, USA) until confluent. Primary RA cells were isolated from synovial tissue obtained from two Caucasian RA patients, both women between the ages of 50 and 59. The use of human samples was approved by the Syracuse University Institutional Review Board (IRB). The tissue samples were obtained by Dr Timothy Damron at Community General Hospital (Syracuse, NY, USA) following written and informed consent by each patient, as required by an IRB-approved protocol. Tissue was isolated following a protocol outlined by Zimmerman et al. [27]. Briefly, the synovial tissue was minced finely and incubated at 37°C with 0.1 % Trypsin (Invitrogen, Carlsbad, CA, USA) in phosphate-buffered saline (PBS) for 30 minutes. Tissue was then digested for 2 hours in DMEM with 0.1 % Collagenase P. After digestion, tissue was filtered through a 100 μm filter. The resultant solution was centrifuged, and the pelleted cells were resuspended in DMEM with 10 % FBS, placed in a T-75 flask, and cultured at 37°C with 5 % CO2. After three passages, the cells were stained with CD44-FITC mAb (Santa Cruz Biotechnology, Santa Cruz, CA, USA) to confirm fibroblast cells. To supplement the two sets of primary cells obtained through synovial tissue isolation, human fibroblast-like synoviocytes (HFLS; lot numbers 2884 and 2956, female Caucasian) were obtained from Cell Applications (San Diego, CA, USA) and cultured in DMEM with 10 % FBS at 37°C with 5 % CO2.
Nanoparticle preparation and characterization
ODN-coated NP were prepared as described previously [25]. Briefly, 6.4 mg TMC (55 % quaternization) were dissolved in 3.0 ml of 0.3 % acetic acid in a glass vial. Meanwhile, 3.2 mg PSA and 1.0 mg TPP were dissolved in 2.0 ml DI (Deionized) water. To prepare nanoparticles loaded with MTX, 2.4 mg MTX were added to the aqueous PSA solution. The latter solution was sonicated for 10 minutes and then added drop-wise to the TMC solution with stirring. Stirring was continued at room temperature for 20 minutes. At this time, 10 μg ODN were added to the nanoparticle suspension. Stirring was continued for an additional 10 minutes to ensure uniform electrostatic adhesion of ODN to the nanoparticle surface, as well as complete dispersion. Upon completion of stirring, centrifugation at 3000 rpm for 15 minutes yielded a pellet of ODN-coated NP.
Nanoparticle size, zeta potential, and polydispersity index were determined using a Malvern Zetasizer NanoZS90 (Malvern Instruments, Malvern, UK). Following centrifugation, nanoparticles were resuspended at a concentration of 2 mg/ml in DI water and filtered through a 0.45 μm syringe filter. Samples were loaded into cuvettes or capillary cells for measurements at 25°C.
Determination of MTX loading
High-performance liquid chromatography (HPLC) was used to determine the amount of MTX loaded into ODN-coated nanoparticles. After nanoparticles were pelleted via centrifugation, supernatant samples were saved and analyzed using a Prominence Ultrafast Liquid Chromatography System (Shimadzu Instruments, Kyoto, Japan). Samples were run using a 93:7 (v/v) mixture of 50 mM ammonium acetate and acetonitrile mobile phase at a flow rate of 0.75 ml/minute with a 100 μl injection volume. The detection wavelength used was 210 nm. To determine the amount of MTX present based on peak area, a calibration curve of eight known concentrations (from 50 to 0.39 μg/ml) of MTX was constructed. PeakFit 4.2 (SYSTAT Software, San Jose, CA, USA) software was used to analyze the peak area.
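The loading values reported later follow from a mass balance on the supernatant: the unencapsulated MTX concentration is interpolated from the calibration curve, and efficiency and capacity are computed from the amount added. The calibration response, peak area, supernatant volume, and pellet mass in the sketch below are hypothetical; only the 2.4 mg MTX input comes from the preparation protocol.

```python
# Hedged sketch: indirect MTX loading determination by mass balance.
# Calibration response, peak area, supernatant volume, and pellet mass
# are hypothetical; only the 2.4 mg MTX input is taken from the protocol.
import numpy as np

# Linear calibration from known MTX standards (peak area vs. concentration).
std_conc = np.array([0.39, 0.78, 1.56, 3.13, 6.25, 12.5, 25.0, 50.0])  # ug/ml
std_area = 1.8e4 * std_conc + 250.0            # hypothetical detector response
slope, intercept = np.polyfit(std_conc, std_area, 1)

def conc_from_area(area):
    return (area - intercept) / slope          # ug/ml

mtx_added_mg = 2.4                              # from the nanoparticle preparation
supernatant_ml = 10.0                           # hypothetical recovered volume
nanoparticle_mg = 10.4                          # hypothetical pellet mass

free_mtx_mg = conc_from_area(576250.0) * supernatant_ml / 1000.0  # unencapsulated MTX
loaded_mtx_mg = mtx_added_mg - free_mtx_mg

loading_efficiency = 100.0 * loaded_mtx_mg / mtx_added_mg          # percent
loading_capacity = loaded_mtx_mg / nanoparticle_mg                 # mg MTX per mg NP
print(f"Efficiency = {loading_efficiency:.1f} %, capacity = {loading_capacity:.2f} mg/mg")
```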
In vitro efficacy of ODN-coated NP
SW982 cells or primary RASF cells were plated on 24-well plates at a density of 20,000 cells/well. Nanoparticles were prepared as described in Nanoparticle preparation and characterization. In addition to ODN-coated nanoparticles (NP-ODN), bare nanoparticles (NP), MTX-loaded nanoparticles (NP-MTX), ODN-coated, MTX-loaded nanoparticles (NP-ODN-MTX), and nanoparticles coated with a scrambled control ODN (SCO; NP-SCO) were prepared. As a control, MTX alone was prepared at a concentration of 1.0 mg/ml in DMEM media. After centrifugation, all nanoparticles were resuspended in serum-free DMEM at a concentration of 1.0 mg/ml. Then 500 μl of each treatment group was added to the 24-well plate in duplicate as follows: media alone (control); ODN alone; NP-ODN; NP; NP-SCO; NP-MTX; NP-ODN-MTX; and MTX alone. The complexes were removed and media was replaced after 4 hours to allow for normal growth conditions. The amount of ODN in each treatment group was held constant at 1 μg/ml, or approximately 500 ng/well. This concentration does not have an impact on cellular proliferation. Twenty-four hours after initial complex addition, inflammation was induced with the addition of 1.0 ng/ml IL-1β. This concentration has been shown to increase levels of IL-6 and IL-8 when administered to SW982 cells [28]. After incubation at 37°C for an additional 24 or 48 hours, supernatant samples were collected and stored at -80°C for analysis of IL-6 and IL-8.
Quantitative analysis of inflammatory cytokines
Enzyme-linked immunosorbent assay (ELISA) kits for IL-6 and IL-8 were purchased from Peprotech (Rocky Hill, NJ, USA) and run according to the manufacturer's instructions. Samples were run in duplicate, and each experiment was repeated independently at least three times.
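ELISA readouts are generally interpolated from a standard curve; a four-parameter logistic (4PL) fit is one common choice. The sketch below is a generic illustration with hypothetical standards and optical densities and is not taken from the kit manufacturer's instructions.

```python
# Hedged sketch: 4PL standard-curve fit and interpolation for ELISA data.
# Standard concentrations, absorbances, and the sample OD are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = lower asymptote, d = upper asymptote, c = mid-point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])  # pg/ml
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.90, 1.45, 1.95])             # hypothetical OD450

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 300.0, 2.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL to interpolate a sample concentration from its OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.75
print(f"Cytokine ~ {od_to_conc(sample_od, *params):.0f} pg/ml (hypothetical sample)")
```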
In vitro cellular uptake
To examine internalization of the nanoparticles, cellular uptake experiments were performed. Prior to nanoparticle synthesis, TMC was tagged with Alexa-Fluor 488 carboxylic acid, succinimidyl ester, and mixed isomers in dimethylsulfoxide (DMSO; 1 mg/ml) (Invitrogen, Grand Island, NY, USA). Then 25 mg TMC were dissolved in 4 ml of 0.1 M sodium bicarbonate buffer (pH 8.3), 500 μl AF 488 dye were added, and the solution was stirred for 1 hour at room temperature. Upon completion of stirring, the resultant material was dialyzed for 48 hours against water to ensure removal of unreacted dye. In addition, the amount of TMC used has an excess of reactive amine groups relative to amount of Alexa Fluor 488, and therefore the amount of unreacted dye was expected to be negligible. 1 H-NMR confirmed dye conjugation. Cells were plated on lysine-coated glass-bottom dishes (Mattek Corp, Ashland, MA, USA) at a density of 100,000 cells per dish 2 days prior to scheduled imaging to allow for adherence and confluence. ODN-coated NP were prepared using Alexa Fluor 488 tagged TMC, and Texas Red tagged ODN. On the day of imaging, sterile filtered NP, NP-ODN, and ODN alone were administered to the plated cells at a concentration of 1.0 mg/ml, 1.0 mg/ml, and 500 ng/ml, respectively. The concentrations used were well below the concentrations associated with cytotoxicity in order to avoid any changes in uptake due to activation of a cellular inflammatory response. Complexes were incubated with the cells for 45 minutes at 37°C prior to removal. The cells were washed three times with 1× PBS and imaged using a Nikon Eclipse Ti (Nikon Metrology, Brighton, MI, USA) inverted microscope.
Statistical analysis
IL-6 and IL-8 protein levels were expressed relative to an untreated, stimulated control group, with all data presented as mean ± standard deviation (SD) for all groups (n ≥3). One-way analysis of variance (ANOVA) followed by Holm-Sidak testing for multiple comparisons was performed to compare IL-6 and IL-8 protein secretion following treatment and inflammatory stimulation. All statistical tests were conducted with α = 0.05.
Nanoparticle loading and characterization
Our laboratory previously reported the use of NP for effective delivery of conventional, small molecule therapeutics, such as MTX and dexamethasone, as well as nucleic acid-based therapeutics, particularly decoy ODN [24,29]. As expected based on these prior studies, NP possessed a size close to 100 nm (115 nm) and a positive zeta potential (37 mV), while NP-ODN nanoparticles possessed a significantly larger size with a diameter of 159 ± 15 nm and a decrease in surface charge to 23 mV [24,25]. Furthermore, NP-ODN loaded with MTX led to another slight size increase, insignificantly larger than NP-ODN alone, with a diameter of 184 ± 5.6 nm while maintaining a positive zeta potential of approximately 33 ± 6.5 mV. All nanoparticle formulations possessed sizes between 100 and 200 nm, favorable for evading the RES in applications for drug delivery [30].
HPLC was performed to determine the amount of MTX loaded within the NP-ODN-MTX. Loading capacity and loading efficiency values of 0.20 mg MTX/mg nanoparticle and 86.7 %, respectively, were obtained. The high loading efficiency of MTX suggests that the majority of the drug is encapsulated prior to addition of ODN to the nanoparticle suspension, and therefore interactions between ODN and MTX are limited.
Effect of ODN-loaded and MTX-loaded NP on IL-6 and IL-8 secretion in RA in vitro models
Previously, we reported the use of a luciferase reporter assay to confirm that the reduction in inflammatory protein levels was in fact occurring due to decoy ODN interference with NF-κB [24]. In the current study, to initially determine efficacy of NF-κB decoy ODN-loaded and MTX-loaded NP, the SW982 cell line was used as a model of RA. The SW982 cell line has been shown to mimic activated RA synovial fibroblast cells with regards to the expression of inflammatory mediators, particularly when stimulated with 1 ng/ml IL-1β [31]. We have previously conducted cytotoxicity studies of NP formulations and concluded that NP, as well as NP-ODN, NP-MTX, and MTX alone, do not impact cellular proliferation at low concentrations and are therefore appropriate for this study [24,25]. In addition to results on cytotoxicity, we recently demonstrated that NP-ODN are effective at reducing inflammation in an in vitro model of CF [24].
To assess the bioactivity of NP coated with the NF-κB decoy ODN and/or loaded with MTX, the secretion of two potent inflammatory mediators, IL-6 and IL-8, by SW982 cells was investigated. Both of these proinflammatory mediators are directly influenced by NF-κB and play a major role in the inflammatory response in RA. IL-6 is a multifunctional cytokine, with the ability to regulate the immune response, inflammation, and hematopoiesis, and plays a crucial role in RA pathogenesis [32]. IL-8 was chosen as a representative chemokine and is responsible for recruiting immune cells to the synovium and contributing to the tumor-like pannus tissue. In addition, IL-8 is involved in upregulation of inflammation via paracrine signaling mechanisms in the RA synovium [33]. The mechanism of action of MTX in RA treatment and inflammatory activity is currently unresolved; however, the drug is believed to interfere with cell folate metabolism. Furthermore, several reports suggest MTX acts on NF-κB as an inhibitor [34]. We explored coadministration of NF-κB decoy ODN and MTX to observe any potential synergistic activity. IL-6 and IL-8 levels were examined in response to treatment with NP-ODN, NP-MTX, NP-ODN-MTX, ODN alone, and MTX alone using immunoassays.
The IL-6 secretion profile in response to different treatment groups is shown in Fig. 2. In general, we were interested in the IL-6 response of cells subjected to the different treatment groups in comparison with untreated control cells and in comparison with cells treated with ODN alone. At 24 hours (Fig. 2a), cells treated with NP-ODN-MTX displayed a significant reduction of IL-6 relative to untreated control cells. A decrease in IL-6 levels in comparison with untreated control cells was also observed following treatment with NP-ODN; however, this reduction was not great enough to be significant. At 48 hours (Fig. 2b), a significant decrease relative to both untreated control cells and cells administered ODN alone was observed following treatment with NP-MTX and NP-ODN-MTX. NP-ODN displayed trends at 48 hours similar to those at 24 hours. This treatment resulted in a decrease in IL-6 levels in comparison with both untreated control cells and ODN alone; however, the decrease was not great enough to be significantly different.
The IL-8 secretion profile from SW982 cells in response to treatment with different nanoparticle formulations is portrayed in Fig. 3. After 24 hours (Fig. 3a), NP-ODN-MTX resulted in a significant decrease in IL-8 levels compared with ODN administered alone. While the level of IL-8 in response to NP-ODN-MTX treatment was lower than the untreated control, the difference was not significant. At 48 hours (Fig. 3b), multiple significant decreases in IL-8 levels were observed. Cells treated with NP-ODN, NP-MTX, NP-ODN-MTX, and MTX alone all had IL-8 levels significantly lower than the untreated control cells and cells treated with ODN alone.
These results suggest that NP can be used to deliver decoy ODN and MTX, alone and simultaneously, to activated RA synovial fibroblasts. To further validate the ability of NP to serve as an effective treatment strategy for RA, in vitro experiments were also conducted with primary cells. Previous reports have noted discrepancies in cytokine production between cell line models and primary RASF cells [35]. Furthermore, a literature search revealed that immortalized cells, such as SW982, constitutively express the NF-κB pathway, indicating that they may be more susceptible to NF-κB interference than primary RASF cells [36].
(Figure legend: Results are expressed as fold changes of IL-6 levels relative to an untreated control (solid line at 1). One-way ANOVA followed by Holm-Sidak multiple comparisons testing was used to determine the impact of treatment on IL-6 secretion. *Significant difference from the control. †Significant difference from ODN alone. Data presented as mean ± SD (n = 3). IL interleukin, MTX methotrexate, NP PSA-TMC nanoparticles, ODN oligonucleotide.)
Primary RASF cell cytokine secretion was investigated following treatment with ODN, NP-ODN, NP-MTX, NP-ODN-MTX, and MTX. At 24 hours (Fig. 4a), a significant reduction in IL-6 secretion by the primary cells was observed in response to NP-ODN and NP-ODN-MTX in comparison with untreated control cells, as well as cells administered ODN alone. While cells treated with NP-MTX and MTX alone experienced a decrease in levels of IL-6, the decrease was not great enough to be considered significant. A lack of significant reduction of IL-6 secretion in response to NP-MTX and MTX alone is in accordance with several reports, described in further detail in the Discussion, stating that MTX does not have a direct effect on IL-6 levels in primary RASF cells. At 48 hours (Fig. 4b), although trends similar to those at 24 hours were seen, with reductions in IL-6 levels in response to NP-ODN, NP-MTX, and NP-ODN-MTX, significant reductions were not observed in response to any NP treatment.
The IL-8 secretion response of primary RA cells to NP treatments is depicted in Fig. 5. At 24 hours (Fig. 5a), despite similar reduction trends to those observed for primary IL-6 secretion and SW982 IL-8 secretion, the differences were not significant. At 48 hours (Fig. 5b), NP-ODN, NP-MTX, NP-ODN-MTX, and MTX alone all resulted in significant decreases in IL-8 secretion when compared with the untreated control. Furthermore, NP-ODN-MTX treatment resulted in a significant decrease when compared with ODN alone.
(Figure legend: Results are expressed as fold changes of IL-6 levels relative to an untreated control (solid line at 1). One-way ANOVA followed by Holm-Sidak multiple comparisons testing was used to determine the impact of treatment on IL-6 secretion. *Significant difference from the control. †Significant difference from ODN alone. Data presented as mean ± SD (n = 4). IL interleukin, MTX methotrexate, NP PSA-TMC nanoparticles, ODN oligonucleotide.)
Cellular uptake of NP-ODN
To facilitate visualization of carrier uptake and localization of NP and NP-ODN in vitro, TMC was modified with a green fluorescent tag (Alexa Fluor 488), and ODN was modified with a red fluorescent tag (Texas Red). Tagged NP, NP-ODN, and ODN were incubated with SW982 cells at 37°C for 45 minutes, and visualized. A composite image from the uptake experiments is shown in Fig. 6. NP-ODN demonstrated cellular uptake of both NP and ODN, while ODN alone did not appear to enter the cells.
Discussion
A major barrier in the advancement of nucleic acid therapies to achieving clinical relevance is the general inability of negatively charged nucleic acids to cross the negatively charged cell membrane. A number of positively charged carrier systems and transfection reagents have been explored to overcome this barrier; however, many of these are associated with toxicity, immunogenicity, and/or are highly variable based on cell type [37]. The NP carrier system presented here has been found to be noncytotoxic, while maintaining a positive surface charge [24,25,38]. We expect that the positive surface charge of the particles due to TMC facilitates interaction with negatively charged cell membranes, increasing ODN cellular uptake. Meanwhile, PSA has no known receptors in the body, making it an optimal choice as a carrier component, as this will probably allow for RES evasion and reduce the likelihood of inducing an immune response. Further, PSA has properties similar to polyethylene glycol (PEG), a polymer commonly used to extend circulation time via incorporation into nanocarrier systems or protein conjugates [25,[39][40][41]. In sum, we have reported herein a nanoparticle system based on natural polysaccharides, which is anticipated to reduce immunogenicity and enhance hydrophilicity. An NF-κB decoy ODN was chosen as the nucleic acid (NA) drug of choice for this study due to the known activity of NF-κB in RA pathology.
Under normal conditions, NF-κB is bound to an inhibitor in the cytoplasm. However, in response to an inflammatory stimulus, the inhibitor undergoes phosphorylation, leading to dissociation of the NF-κB/inhibitor complex and subsequent nuclear translocation of the transcription factor. For this study, we chose to quantify IL-6 and IL-8 as the representative cytokine and chemokine, respectively. In addition to other roles in the inflammatory response, IL-6 and IL-8 play a role in stimulation of vascular endothelial growth factor (VEGF), a growth factor linked to production of blood vessels [42]. The newly and hence typically rapidly formed blood vessels have larger pore sizes between the endothelial barrier than normal blood vessels, which can be exploited for drug delivery via the enhanced permeation and retention (EPR) effect [43]. Colloidal carrier systems between 100 and 200 nm can passively accumulate in areas associated with blood vessels with larger, leaky pores in the endothelial barrier; thus, the EPR effect is a means of passive targeting [30]. While RA pathology does not exhibit the retention aspect of this phenomenon, the enhanced permeation appears to be great enough to act as a passive targeting mechanism [44,45].
As NF-κB has a well-established involvement in RA, the transcription factor is an enticing target for drug candidates. However, as demonstrated here and in previous studies, administration of decoy ODNs alone results in low efficacy, and delivery with many available reagents results in high degrees of cytotoxicity [24]. As shown in Fig. 6, the use of PSA-TMC as a carrier system for decoy ODNs drastically increases the presence of the decoy within the cell, providing an increased amount of therapeutic at the site of action while maintaining cell health. With the use of the PSA-TMC carrier system, a lower amount of ODN can be administered, thereby decreasing potential adverse effects from the decoys themselves.
We have previously established increased anti-inflammatory bioactivity of an NF-κB decoy when administered via NP in comparison with administration without a delivery vehicle in a CF in vitro model [24]. Likewise, in the current study, decoy ODN efficacy was increased when administered via PSA-TMC to primary and cell line in vitro models of RA. SW982 cells yielded significantly decreased levels of IL-6 in response to treatment with NP-ODN-MTX at 24 and 48 hours and in response to NP-MTX at 48 hours only. As primary cells are isolated from different individuals, it is not unexpected to observe increased variability in cytokine expression and secretion when compared with cell line groups [46], as seen here.
With regards to the effect of MTX on cytokine modulation, there have been conflicting reports. Early reports by Loetscher et al. claimed that MTX is ineffective at mediating IL-8 production in RA [47], while Kraan et al. reported decreased IL-8 in synovial fluid after MTX treatment [48]. Similarly, Nishina et al. [49] recently reported MTX effectively reduced IL-6 plasma levels in RA patients, while Inoue et al. [50] claimed that MTX did not have an inhibitory effect on IL-6 production by RA synovial cells. Previous studies conducted by our group found MTX delivery alone to be inconsistent, providing further evidence that the therapeutic effects of MTX may not be manifested in changes in the cytokine milieu [25].
The cytokine results portrayed in Figs. 2, 3, 4 and 5 in response to MTX alone are reflective of the variable response observed among RA patients. In addition to unpredictable efficacy, MTX is associated with a number of severe, dose-dependent side effects, limiting the tolerable dosage level. Several reports advocate for combination therapy of DMARDs, particularly MTX, with biologic therapies. For example, a report by Goekoop-Ruiterman et al. [51] claimed increased clinical improvement in early stages of disease progression with combination therapies. Likewise, claims of increased efficacy of low-dose MTX combined with alternative therapies, such as the phosphodiesterase type 3 inhibitor cilostazol, have also been reported [52]. Indeed, primary RASF cells showed a significant response in IL-6 levels to treatment with NP-ODN-MTX and NP-ODN, but not NP-MTX or MTX alone. The primary cell model also resulted in a significant reduction of IL-8 in response to NP-ODN, NP-MTX, NP-ODN-MTX, and MTX in comparison with an untreated control; however, only NP-ODN-MTX resulted in a significant decrease in comparison with ODN delivery alone. These results suggest that the decoy ODN has the ability to act alone, as well as to enhance the efficacy of the DMARD MTX when delivered in combination. NP provide a delivery vehicle to safely enhance cellular uptake of ODN, as well as to encapsulate and carry MTX to the required site of action.
Conclusion
In this study, we obtained results furthering our claim that NP can be used to effectively deliver nucleic acid-based drugs. Furthermore, we showed that the combination of NF-κB decoy ODN and the DMARD MTX resulted in a reduction in inflammatory cytokines in both cell line and primary RASF models of RA in more instances than treatment with either therapy individually. To our knowledge, this is the first report investigating combination therapy of MTX with a decoy ODN. While NP have been used to administer both MTX and ODN separately in previous studies, this is the first time we have attempted to combine these two therapies, and we report successful modulation of inflammatory proteins in RA in vitro models [24,25]. Incorporating in vivo testing is necessary to determine both the safety and efficacy of PSA-TMC loaded with ODN and MTX, but this preliminary in vitro investigation provides strong evidence to support future studies.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2009-10-30T00:00:00.000
|
104243
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-9-384",
"pdf_hash": "ceb2a9315056820313deaa0ca1d32ee01cbd2fba",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:604",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "f1c264dc342018dd5e4d0f7ff8b4847dc054d115",
"year": 2009
}
|
pes2o/s2orc
|
Cooperation of decay-accelerating factor and membrane cofactor protein in regulating survival of human cervical cancer cells
Background Decay-accelerating factor (DAF) and membrane cofactor protein (MCP) are key molecules involved in cell protection against autologous complement, restricting the action of complement at critical stages of the cascade reaction. The cooperative effect of DAF and MCP on the survival of human cervical cancer cells (ME180) has not been demonstrated. Methods In this study we applied, for the first time, short hairpin RNA (shRNA) to knock down the expression of DAF and MCP with the aim of exploiting complement more effectively for tumor cell damage. Meanwhile, we investigated the cooperative effects of DAF and MCP on the viability, migration, and proliferation of ME180 cells. Results The results showed that shRNA inhibition of DAF and MCP expression enhanced complement-dependent cytolysis (CDC) by up to 39% for MCP and up to 36% for DAF, and the combined inhibition of both regulators yielded further additive effects in ME180 cells. Thus, the activities of DAF and MCP, when present together, are greater than the sum of the two proteins individually. Conclusion These data indicated that the combined DAF and MCP shRNA described in this study may offer an additional alternative to improve the efficacy of antibody- and complement-based cancer immunotherapy.
Background
Cervical cancer is the second most common cancer among women worldwide, with about 470,000 newly diagnosed cases and almost 250,000 deaths every year [1,2]. Cervical cancer is the leading cause of death from cancer in many low-resource countries [3]. The lack of preventive strategies, early diagnostic methods, and effective therapies to treat recurrent cervical tumors creates a pressing need to understand its pathogenesis and to identify molecular markers and targets for diagnosis as well as therapy [4]. Unlimited growth and metastasis are important traits of the tumor and the main causes of cancer-related death resulting from treatment failure in cervical cancer [5]. However, the factors that promote malignant transformation and growth in cervical carcinoma remain largely unknown.
The complement system has been characterised extensively, both biochemically and functionally. When complement components are deposited on host cells, complement regulating factors protect autologous cells from complement mediated cytotoxicity [6]. Antibody-mediated complement-dependent killing of tumor cells is not a very efficient effector mechanism, due to the overexpression of complement regulatory proteins on tumor cells, which are protected from complement attack in this way. Decay-accelerating factor (DAF) and membrane cofactor protein (MCP) function as cell surface regulators that serve to protect self cells from attack by autologous complement. The two proteins complement each other in that DAF acts to accelerate the decay of the classical and alternative C3 convertases (C4b2a and C3bBb) [7] while MCP functions as a cofactor for the factor I-mediated cleavage of cell-bound C3b and C4b [8]. Although both proteins have been studied extensively, little is known about whether and, if so, how they cooperate on the cervical cancer cell surface. Therefore, our present study aimed to investigate the expression of DAF and MCP and to assess the effect of these proteins on cervical cancer cell survival.
Reagents
Human cervical cancer cell line ME180 (the level of DAF and MCP in ME180 was the maximal expression [see Additional file 1]) was provided by the China Centre for Type Culture Collection. The short hairpin RNA (shRNA) was synthesized by Wuhan Genesil Biotechnology Co., Ltd (Wuhan, China). Lipofectamine 2000 was purchased from Invitrogen (Carlsbad, CA). The Phototope-HRP Western Blot Detection System, including anti-mouse IgG, HRP-linked antibody, biotinylated protein ladder, 20× Lumi-GLO Reagent and 20× Peroxide, was purchased from Cell Signaling Technology (Beverly, MA, USA). The mouse monoclonal antibodies against decay-accelerating factor (DAF), membrane cofactor protein (MCP) and β-actin were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The 3H-thymidine was provided by the isotope laboratory of Nanjing Medical University. Trizol reagent was purchased from Life Technologies (Gaithersburg, MD, USA). Human blood was obtained from healthy donors.
Tissue procurement and preparation
Human cervical cancer tissues were collected from 30 patients who underwent radical hysterectomy because of cervical carcinoma at Nanjing Maternity and Child Health Care Hospital between October 2004 and January 2007. Tumor specimens were obtained immediately after surgery. Local ethical approval was obtained before commencing this study and, as appropriate, tissue was collected with informed consent.
Human cervical cancer cell line culture and DNA transfection
The human cervical cancer cells were propagated in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% Fetal Calf Serum (FCS), 100 U/ml penicillin, 100 μg/ml streptomycin, and 10% non-essential amino acids at 37°C in a 5% CO2 incubator. Cells used in experiments were from passages 5 to 7. Lipofectin- and oligofectamine-mediated transfection of shRNA was performed according to the manufacturer's recommendations. Briefly, to transfect tumor cells in one well of a six-well plate, 500 pmol shRNA and 10 μl lipofectin were each diluted in 750 μl Opti-MEM. After preincubation of the lipofectin solution for 45 min at 37°C, both solutions were mixed and incubated for an additional 15 min at room temperature. The lipofectin/shRNA mixture was subsequently overlaid onto the cells and incubated for 2 h. Finally, 1 ml growth medium (20% FCS) per well was added for further cultivation of the tumor cells. Reporter gene activities were normalized to total protein, and all results represent the average of triplicate experiments.
Construction of DAF shRNA-expressing plasmid vector
The complementary oligonucleotides encoded a hairpin structure with a 19-mer stem derived from the target site. In this experiment, the targeted short hairpin RNA (shRNA) sequence for DAF was CTC CAC TGG ACA GAG CTG CC and that for MCP was GCG CGG CGC GGA AGA CGC TG. The two complementary domains were separated by a 9-bp loop sequence. Near the 3' end of the shRNA template was a 6-nucleotide poly(T) tract recognized as an RNA pol III termination signal. The 5' ends of the two oligonucleotides carried BamHI and Hind III restriction site overhangs. The DAF and MCP shRNA-expressing plasmid vectors were constructed using pGenesil-1 as the vector backbone. The synthesized and annealed shRNA was ligated into the BamHI and Hind III sites of the pGenesil-1 expression vector. At the same time, we chose an unrelated gene shRNA as a negative control.
Western blot analysis
The cells were collected with sample buffer. Whole cell lysates (50 μg) for each sample were subjected to electrophoresis in 10% SDS-polyacrylamide gels. Thereafter, the protein was blotted onto a PVDF membrane. Primary antibodies against DAF, MCP and β-actin were used according to the manufacturer's recommendations. After washing the membrane, the secondary antibody (HRP-conjugated anti-mouse IgG) was used for the detection of DAF, MCP and β-actin. The bands were visualized by the ECL detection system with 5 to 10 min exposure after washing the membrane. β-actin was used as a protein loading control.
C3-deposition
ME180 cells were incubated and subsequently sensitized by incubation with a rabbit polyclonal anti-ME180 antiserum (diluted 1/5) for 20 min on ice. ME180 cells were washed, resuspended at 3 × 10^6 cells/ml, and 50 μl of cells were mixed with 100 μl of C8-depleted serum (prepared by passage over an anti-C8 affinity column). After 1 h incubation at 37°C, complement activation was terminated by washing the cells once with ice-cold EDTA solution (20 mM EDTA/FACS buffer) and two additional times with FACS buffer. Flow cytofluorometric analysis, as described above, was used to quantify C3b binding.
Complement-mediated cytotoxicity
A radioactive cytotoxicity assay was used to measure complement-mediated cytotoxicity. ME180 cell transfectants were grown to 70% confluency on 100-mm culture plates. Cells were removed with 4 ml Versene and washed three times with PBS. A total of 1 × 10 7 cells then were suspended in 1 ml PBS containing 500 μCi 51 Cr, and the cell suspensions were incubated for 2 h at 37°C with occasional shaking. After washing three times with GVB-E, 5 × 10 5 labeled cells were incubated for 15 min at 4°C with rabbit anti-hamster lymphocyte serum (1:2) in 200 μl of GVB-E. Cells were washed three more times with GVB 2+ , resuspended to 10 5 cells/ml in GVB 2+ , and 100-μl aliquots of the cell suspension added to the wells of 96-well V-bottom plates. Then serial dilutions of normal human serum were added in 100-μl volumes in triplicate wells for each cell type. A volume of 100 μl of 1% Triton X-100 and buffer alone were included as controls for 100% release and for spontaneous release, respectively. The percent specific release was calculated from the formula: % specific release = [(measured release -spontaneous release)/ (100% release -spontaneous release)].
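For clarity, the percent specific release formula above can be expressed as a small helper function; this is a worked illustration only, the example counts are invented, and the factor of 100 simply converts the ratio to a percentage.

```python
# Percent specific release from a 51Cr release assay:
# (measured - spontaneous) / (100% release - spontaneous), reported as a percentage.
def percent_specific_release(measured, spontaneous, total_release):
    return 100.0 * (measured - spontaneous) / (total_release - spontaneous)

# Hypothetical triplicate means (cpm): buffer only 400, serum dilution 1500, Triton X-100 3200.
print(percent_specific_release(1500, 400, 3200))  # ~39.3 %
```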
Cell viability assay
We confirmed proliferative activity using the water-soluble tetrazolium salt (WST-1) assay (Roche Diagnostics, Mannheim, Germany). The WST-1 assay is a colorimetric method in which the dye intensity is proportional to the number of viable cells. Cells were seeded into 96-well microtiter plates at a concentration of 5 × 10^3 cells/well. After a 12-h incubation, cells were treated with different media for 48 h. After incubation, the cells were washed with PBS and the cell proliferation reagent WST-1 was added, followed by incubation for 4 h. Sample absorbance was analyzed with a bichromatic ELISA reader at 450 nm. All experiments were performed in triplicate with different passages of the ME180 cells.
In vitro migration assay
Cell migration was assayed using 24-mm diameter chambers with 8-μm pore filters (Transwell, 6-well cell culture). The ME180 cells were removed from the culture flasks and resuspended at 7.5 × 10^6 cells/ml in serum-free medium, and then 0.2 ml of the cell suspension was added to the upper chambers. The different media (0.5 ml) were added to the lower chambers. The chambers were incubated for 48 hours at 37°C in a humid atmosphere of 5% CO2/95% air. The filters were then fixed in 95% ethanol and stained with H&E. The upper surfaces of the filters were scraped twice with cotton swabs to remove non-migrated cells. The experiments were repeated in triplicate with different passages of the ME180 cells, and the migrated cells were counted microscopically (400×) in five different fields per filter.
Measurement of 3 H-thymidine incorporation ( 3 H-TdR)
ME180 were plated into 96-well plates and incubated overnight. Media were removed from the cells and replaced with 200 μl media, as described in the results section. The cells were incubated for 54 h, and then DNA synthesis was determined by 3 H-TdR for the final 18 h. The media were carefully removed and the cells were detached with 50 μl trypsin-EDTA. The cells were then harvested onto glass filters with a Tomtech cell harvester and the radioactivity retained on the dried filters was measured by the addition of 50 ml scintillation liquid and counted in a TopCount NxT scintillation counter. All experiments were performed in triplicate with different passages of the ME180 cells.
Statistical analysis
Most results are presented as means ± SD. Differences between data sets were tested for significance using Student's t-test, and p-values of less than 0.05 were considered significant (*p < 0.05; **p < 0.01; ***p < 0.001).
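A minimal sketch of how such comparisons could be computed, assuming replicate measurements for two conditions are available as arrays; the asterisk labels follow the thresholds stated above, and this is an illustration rather than the authors' actual analysis code.

```python
# Two-sample Student's t-test with the paper's significance labels.
from scipy import stats

def significance(group_a, group_b):
    p = stats.ttest_ind(group_a, group_b).pvalue
    label = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return p, label
```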
The complement regulatory protein expression and C3b deposition in human cervical cancer tissue
In order to investigate the relationship between complement regulatory proteins and complement-mediated cytotoxicity in human cervical cancer cells, the DAF and MCP expression and C3b deposition of 30 cases of human cervical cancer tissue and surrounding non-neoplastic tissue were analyzed in this experiment (Figure 1). The expression of DAF and MCP was significantly increased in human cervical cancer tissue compared with surrounding non-neoplastic tissue. Meanwhile, a significant decrease of C3b deposition could be demonstrated in human cervical cancer tissues. This finding suggested that DAF and MCP may play an important role in the survival of human cervical cancer cells.
Figure 1 The DAF and MCP expression levels and C3b deposition in cancer tissue. A: Relative DAF expression levels in human cervical cancer tissues (T) and surrounding non-neoplastic tissues (N). The expression of DAF mRNA and protein was measured by real-time PCR and Western blot, respectively; the DAF gene in human cervical cancer tissue was overexpressed. Results shown are the mean ± SD of three independent transfections (n = 3), each conducted in triplicate. B: Relative MCP expression levels in human cervical cancer tissues (T) and surrounding non-neoplastic tissues (N). The expression of MCP mRNA and protein was measured by real-time PCR and Western blot, respectively; the MCP gene in human cervical cancer tissue was overexpressed. Results shown are the mean ± SD of three independent transfections (n = 3), each conducted in triplicate. C: Deposition of opsonizing C3 split products in human cervical cancer tissues and surrounding non-neoplastic tissues (measured as the common C3b moiety). Data are presented as means ± SD of two independent experiments. Student's t-test: N versus T. **p < 0.01; *p < 0.05.
MCP and DAF work synergistically in preventing C3b deposition
In this experiment, MCP and DAF protein were studied together. As shown in Figure 2, MCP alone on the cell surface conferred minimal inhibition of C3b deposition (5.8%), as did a range of limiting DAF concentrations (0-11.3% inhibition). In contrast, inhibition increased dramatically when both proteins were incorporated into the cells (up to 53.3%). With the fixed limiting dose of MCP, inhibition was again dependent on the concentration of the DAF added.
Functional analysis of MCP and/or DAF shRNA-mediated inhibition of MCP and/or DAF expression
Because MCP and DAF are regulators of the early complement pathway, their knock-down was also expected to improve C3 opsonization of tumor cells. Therefore, C3 split product deposition (measured as C3b) was analysed on MCP and/or DAF-deficient tumor cells following complement activation. Tumor cells were transfected with negative shRNA, MCP shRNA, DAF shRNA and MCP shRNA+ DAF shRNA respectively. The results showed improved C3b opsonization upon MCP and DAF suppression, where down-regulation of MCP and DAF did enhance complement-mediated cytotoxicity significantly ( Figure 3A).
A further significant augmentation of complement-mediated cytotoxicity could be demonstrated in ME180 cells upon simultaneous transfection of MCP shRNA + DAF shRNA. Compared to the complement-mediated cytotoxicity following knock-down of MCP (the most potent single effect), combined inhibition of MCP and DAF expression further significantly enhanced the complement-mediated cytotoxicity of ME180 cells by 25% (Figure 3B).
The effect on human cervical cancer cell viability, migration and proliferation induced by down-regulation of MCP and/or DAF
To investigate the effect of DAF and MCP on human cervical cancer cells viability, migration and proliferation, ME180 was transfected with negative shRNA, MCP shRNA, DAF shRNA and MCP shRNA+ DAF shRNA respectively. Cell viability was determined by WST-1 assay. As shown in Figure 4A, MCP shRNA+ DAF shRNA can significantly decrease cell viability, compared with negative shRNA, meanwhile, the viability of cells in MCP shRNA group and DAF shRNA group had slight change compared with MCP shRNA+ DAF shRNA group. The results indicated that cooperative inhibition of DAF and MCP genes resulted in dramatically decreasing viability of ME180 cells.
To determine whether DAF and MCP are involved in the regulation of cell migration, ME180 cells were transfected with negative shRNA, MCP shRNA, DAF shRNA, or MCP shRNA + DAF shRNA. The number of migrated cells treated with MCP shRNA + DAF shRNA was significantly lower than that in the negative shRNA group (p < 0.01); silencing DAF and MCP synergistically can significantly decrease the migration of ME180 cells. The numbers of migrated cells also differed between the MCP shRNA and DAF shRNA groups and the MCP shRNA + DAF shRNA group (p < 0.05), which indicated an enhanced effect of combined DAF and MCP silencing on cell migration. As shown in Figure 4C, exposure to MCP shRNA + DAF shRNA decreased the proliferation of cervical cancer cells. There was an apparent decrease in DNA synthesis in ME180 cells exposed to MCP shRNA + DAF shRNA for 24 h after the initial manipulation. The effect of negative shRNA, MCP shRNA, and DAF shRNA exposure on cell proliferation was also determined. MCP shRNA + DAF shRNA significantly decreased the proliferation of cervical cancer cells compared with the negative shRNA, MCP shRNA, and DAF shRNA groups, respectively. As above, silencing MCP and DAF in a cooperative fashion can dramatically inhibit the proliferation of cervical cancer cells.
Discussion
The membrane-bound complement inhibitory proteins DAF and MCP are key members of the family of complement regulatory proteins. Due to the expression of membrane-bound complement regulatory proteins, complement deposition on neoplastic cells is limited and therefore not sufficient to induce potent tumour cell killing [9]. Silencing of complement regulatory proteins has been shown to sensitize tumour cells to complement attack [10]. Therefore, we applied an shRNA strategy to specifically inhibit the expression of DAF and MCP, aiming at better employment of complement for tumour cell destruction.
Figure 2 C3b deposition. Cooperative inhibition of C3b deposition by DAF and MCP is shown in this experiment. At the low levels tested, DAF had only minor effects on C3b deposition when present alone. However, when present with low levels of MCP, the inhibition dramatically increased. For each experiment, the data given are the mean ± SD, where n = 3.
Numerous studies have been performed on the complement regulatory proteins in primary tumors and in tumor cell lines, in an attempt to clarify their significance to cancer immunoresistance. The fact that most cancers, independent of their tissue origin, express at least two if not three complement regulatory proteins is perhaps not surprising considering the wide tissue distribution of DAF and MCP [11]. Li et al [12] examined colorectal and gastric carcinomas and osteosarcoma and found increased expression of DAF, whereas Kiso et al [13] found increased expression of both DAF and MCP in intestinal type gastric carcinoma. In ovarian cancer, DAF and MCP were more heterogeneously expressed, and resistance to complement correlated in these cells with high levels of DAF and MCP expression [14]. In this study, we detected the expression of DAF and MCP protein in cervical cancer tissue by Western blot. Increases in DAF and MCP protein were observed: the expression of DAF and MCP was significantly higher in human cervical cancer tissue than in surrounding non-neoplastic tissues. Meanwhile, a significant decrease of C3b deposition could be demonstrated in human cervical cancer tissues. This finding suggested that DAF and MCP may play an important role in the survival of human cervical cancer cells.
Although DAF and MCP each have been studied extensively, no investigations have focused on whether the two proteins interact in providing optimal protection of self cells from autologous complement despite the fact that nearly all cells express both proteins. Using the above experimental system with incorporated DAF and MCP in ME180 cells, we demonstrated that the two regulators work synergistically on the cell surface in preventing alternative pathway-mediated C3b deposition. The magnitude of the inhibition of C3b uptake in the presence of the two proteins compared to each protein individually was striking. At higher concentrations of the proteins, this cooperative inhibition reached 53-72% as compared to 0-11% for each protein when the proteins were incorporated individually at the same concentrations.
Functional studies were performed to evaluate the complement resistance of tumour cells after shRNA-mediated knock-down of DAF and/or MCP expression. The most striking finding was that CDC increased by about 67% after combined inhibition of DAF and MCP; increases in CDC of 39% and 36% were apparent in MCP- and DAF-deficient ME180 cells, respectively. In contrast to the combined DAF and MCP shRNA approach, silencing DAF or MCP alone had only a weak effect on ME180 cells, and negative shRNA had no effect at all. It therefore appears that, in human cervical tumours, synergistic silencing of DAF and MCP may more effectively enhance complement-mediated cytotoxicity.
The expression of DAF and MCP was significantly increased in human cervical cancer tissue. This finding suggested that DAF and MCP may play important roles in C3b deposition, complement-mediated cytotoxicity, and the survival of human cervical cancer cells. According to our study (Figure 4A, B, and C), cooperative inhibition of the expression of the DAF and MCP genes might decrease the viability, migration, and proliferation of ME180 cells. DAF and MCP may protect the tumor from accidental injury by activated complement and also confer resistance on cancer cells. It seems that DAF and MCP promote cell viability, migration, and proliferation, even though the exact mechanism should be studied further. In future trials, we should provide a detailed analysis of complement regulatory protein expression on HPV-infected and non-infected human cervical cancer cells. We should also examine HPV-infected and non-infected premalignant cervical lesions and primary cervical squamous carcinomas to determine whether changes in the expression of these proteins are associated with the development of cervical disease.
Figure 3 C3b deposition and complement-mediated cytotoxicity. A: Deposition of opsonizing C3 split products on DAF- and/or MCP-deficient tumour cells (measured as the common C3b moiety). Tumor cells were transfected with negative shRNA, MCP shRNA, DAF shRNA, or MCP shRNA + DAF shRNA, as indicated in the diagram. Following complement activation by polyclonal tumor-specific antibodies, deposited C3b molecules were quantified by flow cytometry. Data are shown from one representative experiment (of three). Negative shRNA, MCP shRNA, DAF shRNA versus MCP shRNA + DAF shRNA: **p < 0.01; MCP shRNA, DAF shRNA versus negative shRNA: *p < 0.05. B: Complement-dependent cytolysis of DAF- and MCP-deficient ME180 cells. Tumor cells were transfected with negative shRNA, MCP shRNA, DAF shRNA, or MCP shRNA + DAF shRNA, as indicated in the diagram; 72 h later, complement-dependent cytolysis of ME180 cells was measured. Data are presented as means ± SD of triplicates of one representative experiment (of three). DAF shRNA, MCP shRNA, MCP shRNA + DAF shRNA versus negative shRNA: **p < 0.01, ***p < 0.001; MCP shRNA, DAF shRNA versus MCP shRNA + DAF shRNA: *p < 0.05.
Figure 4 Human cervical cancer cell viability, migration, and proliferation. ME180 cells (3 × 10^4/ml) were treated with negative shRNA, MCP shRNA, DAF shRNA, or MCP shRNA + DAF shRNA. (A): Cell viability was detected by WST-1 assay; absorbance was analyzed with a bichromatic ELISA reader at 450 nm. (B): Migration was measured by Transwell assay; migrated cells were counted microscopically (400×) in five different fields per filter. (C): 3H-thymidine incorporated into DNA over the last 18 h of the final incubation. Results are mean ± SD from three independent experiments. **p < 0.01 versus MCP shRNA + DAF shRNA; #p < 0.05 versus MCP shRNA + DAF shRNA.
Conclusion
In conclusion, we were able to identify potent shRNAs for efficient down-regulation of DAF and MCP expression on tumour cells. Knock-down of both surface regulators clearly sensitized tumour cells to complement attack, as shown by analysis of complement-mediated cytotoxicity and by investigation of C3b deposition. The present data indicated that synergistic silencing of DAF and MCP significantly decreased human cervical cancer cell viability, migration and proliferation. These data indicate that the combined DAF and MCP shRNA described in this study may offer an additional alternative to improve the efficacy of antibody- and complement-based cancer immunotherapy in the future.
|
v3-fos-license
|
2020-08-27T09:05:46.408Z
|
2020-08-20T00:00:00.000
|
221979772
|
{
"extfieldsofstudy": [
"Computer Science",
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/12/17/2694/pdf",
"pdf_hash": "a18a776b5dd1e73a2f57a71db35634136ff82689",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:605",
"s2fieldsofstudy": [
"Computer Science",
"Geology"
],
"sha1": "52436b513797d5801cd8561de850afd622c28499",
"year": 2020
}
|
pes2o/s2orc
|
An Effective Lunar Crater Recognition Algorithm Based on Convolutional Neural Network
Lunar crater recognition plays a key role in lunar exploration. Traditional crater recognition methods are mainly based on human observation, usually combined with classical machine learning methods. These methods have some drawbacks, such as lacking an objective criterion. Moreover, they can hardly achieve desirable recognition results for small or overlapping craters. To address these problems, we propose a new convolutional neural network termed effective residual U-Net (ERU-Net) to recognize craters from lunar digital elevation model (DEM) images. ERU-Net first detects crater edges in lunar DEM data. Then, it uses template matching to compute the position and size of craters. ERU-Net is based on U-Net and uses the residual convolution block instead of the traditional convolution, which combines the advantages of U-Net and the residual network. In ERU-Net, the size of the input image is the same as that of the output image. Since our network uses residual units, the training process of ERU-Net is simple, and the proposed model can be easily optimized. ERU-Net obtains better recognition results when its network structure is deepened. The method targets the rim of the crater, and it can recognize overlapping craters. In theory, our proposed network can recognize all kinds of impact craters. In lunar crater recognition, our model achieves high recall (83.59%) and precision (84.80%) on DEM. The recall of our method is higher than those of other deep learning methods. The experimental results show that it is feasible to exploit our network to recognize craters from the lunar DEM.
Introduction
The Moon is the closest celestial body to Earth and the natural satellite of Earth.There are a lot of impact craters on the surface of the Moon.Impact craters and highlands constitute the typical lunar landform.While the Moon lacks common terrestrial geological processes due to atmosphere, wind, and water, impact craters can be well preserved.The study of lunar landform and lunar rock structure via impact craters on the lunar surface is of great significance to the investigation of the origin and evolutionary history of the Moon [1].The impact crater can be divided into two principal categories: the main crater and secondary crater.Secondary craters are small ones that are formed by the ejecta falling down the planet surface where big impact craters are generated.Hence, secondary craters tend to cluster in a discrete ray around the main crater.A large number of secondary craters can be found on the Moon, Mars, and other celestial bodies, where there is nearly no weathering.Secondary craters around the small main crater are usually so small that are hardly recognized, which is still a difficulty in the crater recognition.At present, the main method of crater recognition depends on human observation.The advantage of the human observation method is that it can accurately classify various types of impact craters.However, the method of human observation is very time-consuming.It can be used to deal with image and video data.But human observation is not suitable for massive lunar exploration data, particularly massive lunar digital elevation model (DEM) images.Moreover, this method lacks a uniform objective criterion.Different people have different recognition results for the same lunar landform image.Even the same person may have different recognition results for the same lunar landform image at different times.The differences of experts reach on crater recognition of up to 45% [2].How to quickly and accurately recognize lunar craters is still a challenging problem in the field of lunar exploration.
In recent years, in order to obtain more information on impact craters and reduce the consumption of human resources, a number of researchers have begun to design crater recognition algorithms.In 2005, Kim [3] proposed a method to automatically extract the crater features on Mars.The whole process consists of three stages: the focus stage, location stage, and debugging stage.This method can classify salient craters well.However, it has high computational complexity and cannot well recognize the craters with the complex distribution.Goran [4] proposed a new crater recognition algorithm based on fuzzy edge detectors and Hough transform on the DEM of Mars.In 2011, Ding [5] presented a universal and practical framework that used the boosting and transfer learning method to recognize sub-kilometer craters.Yan Wang [6] integrated the improved sparse kernel density estimator into the boosting algorithm and proposed a sparse boosting method.This method is used to automatically recognize sub-kilometer craters in high-resolution images.Although the sparse boosting method has low computational complexity, it can hardly classify the overlapping impact craters, and the obscure craters are easily regarded as the background.Besides, M.J. Galloway [7] proposed an automatic crater extraction method in 2015.This method uses the Hough transform, Canny edge detection algorithm, and the non-maximum suppression (NMS) algorithm to perform the crater recognition.Kang [8] proposed an algorithm to extract and identify crater on CCD stereo camera images and associated DEM data.The algorithm is based on 2-D and 3-D features, which extract geometric features from optical images, and finally, get the result by using 3-D features from DEM data.Vamshi [9] used an object-based method to detect craters from topographic data.The method uses high-resolution image segmentation to create objects and then shape and morphometric criteria to extract craters from objects.Zhou [10] extracted slope of aspect values at crater rims and applied the morphological method to get true crater shapes.In [11], Rodrigo Savage used the Bayesian method to analyze the shape of sub-kilometer craters in high-resolution images.This work has developed a parametric model using the diameter of the crater, the height of the crater edge, edge eccentricity, and direction, etc. to represent craters.Min Chen [12] detected lunar craters from lunar DEM based on topographic analysis and mathematical morphology.The algorithm can detect dispersal craters and connected craters, but it is not suitable for detecting overlapping craters because of the irregular distribution of the sink points of those craters.Due to the shortcomings of mathematical morphology, this algorithm performs worse on craters with incomplete edges.Chen Yang [13] used a deep and transfer learning method to identify and estimate the age of crater in Chang'E data and provided a two-stage craters detection method.In 2020, Lemelin [14] used the wavelet leaders method to get a near-global and local isotropic characterization of the lunar roughness from the SLDEM2015 digital elevation model.Almost all of the above methods have considered craters as a whole, which may fail to deal well with the incomplete and complex impact craters.In order to address this problem, we pay more attention to the edges of the impact craters, recognize them, and then get their location and size in this work.
Recently, deep learning has achieved great success in the field of computer vision.The algorithm based on the convolution neural network (CNN) is excellent in solving the problems of object detection, image segmentation, and image classification, etc.The CNN is a kind of feedforward multi-layer neural network with convolutional computation and deep structure.It is one of the representative algorithms of deep learning [15,16].CNN reduces the data dimensions and extracts data features by convolution, pooling, and other operations step by step.It adjusts the convolution weight in the training process.In 2015, Jonathan Long et al. [17] proposed the fully convolutional network (FCN) in which the full connection layer of CNN was replaced by the convolution layer.The output of FCN is the spatial domain mapping rather than the probability of categories.Thus, FCN transforms the image segmentation problem into an end-to-end image processing problem.Both the input and output of the network are images.It takes less than 0.2 s to segment a typical image.FCN has become a very important work in the field of semantic segmentation.After FCN, many models based on FCN (U-Net [18], R-FCN [19], SegNet [20]) have also achieved good results in image segmentation, object detection, and other applications.In [18], Olaf Ronneberger improved FCN and proposed a new semantic segmentation network called U-Net that was first applied to biomedical image segmentation.U-Net is an efficient neural network that can achieve good performance in neuronal structure segmentation.He et al. [21,22] proposed a deep residual network (ResNet), which sufficiently used the image information but did not add additional parameters.The problem of the degeneration in deep CNN is successfully solved by ResNet.
Lunar landform images can be roughly grouped into two categories.One category is the ordinary optical image.Another category is the DEM dataset.Since the DEM dataset is hardly affected by the illumination and camera angle, our work has performed crater recognition on DEM.Note that directly using image recognition methods mentioned above [5][6][7] cannot achieve desirable recognition results.In order to develop a crater recognition approach for the lunar DEM data, we exploit deep learning theory to propose a semantic segmentation network structure termed effective residual U-Net (ERU-Net) that borrows the ideas of U-Net and ResNet.It can detect the edge of craters well and perform better when the depth of network layers increases.ERU-Net uses U-Net as the basic network structure in which the traditional convolution is replaced by the residual convolution.In the proposed crater recognition algorithm, we firstly train an ERU-Net model to detect the edge of the craters.Then, we use a matching algorithm to get the crater location information from the edge detection result.Finally, we verify whether the extracted craters are real craters.Different from traditional machine learning methods, our method exploits the full convolution network to classify each pixel in the image.Even if the impact crater is incomplete in the lunar image, the edge of the impact crater can be accurately recognized by using the convolution network.
Experimental Data
We use the lunar digital elevation model (DEM) images marked by NASA [23]. The reason for using the DEM dataset is to avoid the influence of illumination and viewing angle on the experimental results. The size of the image is 92160 × 30720 pixels, spanning 0° to 360°E and 60°S to 60°N. The DEM has a resolution of 256 pixels/degree or 118 m/pixel, as shown in Appendix A. In theory, a fully convolutional neural network can accept images of any size, but it is limited by the size of GPU graphics memory. Therefore, we clip the lunar DEM image into a number of images of size 256 × 256. Then, we use Cartopy [24], a Python package, to transform each image into an orthographic projection in order to minimize image distortion; this avoids craters appearing non-circular at high latitudes. After the training data are generated, the corresponding labeled data need to be generated. Because impact craters are mostly circular, we draw circles in a 256 × 256 blank image as the edge labels of impact craters, based on the lunar crater position information provided by Head [25] and Povilaitis [26]. Head provided a dataset of lunar craters larger than 20 km in diameter, obtained from high-resolution altimetric measurements of the Moon. Povilaitis provided a dataset of smaller craters, 5 to 20 km in diameter, using the CraterTools extension [27]. The total number of craters in Head's dataset is 5186, and in Povilaitis's dataset it is 19,337. The validation dataset and test dataset are generated in the same way. The area from which testing images are generated is different from that of the training images, which ensures that the testing images are not within the training and validation datasets. From Figure 1, it can be found that the labeled dataset is not complete, because craters with a radius smaller than 5 are not included in the labels. For example, some small and shallow craters are missed, and even some obvious craters are not labeled. In general, increasing the amount of training data yields better recognition results. We generate 30,000 images as a training dataset and 3000 images as a validation dataset to train CNNs in the experiments. Then, we evaluate the model performance on a test dataset containing 3000 images. The size of the windows we crop is 500 to 6500 pixels, which we then resize to 256 × 256. The experimental data therefore cover a resolution range from 230 m/pixel to 2996 m/pixel, or 10 pixels/degree to 131 pixels/degree. As shown in the bottom right corner of each panel in Figure 1, the four example images have different resolutions.
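The data preparation step (cropping a window from the global DEM, resizing it to 256 × 256, and rasterizing catalogued crater rims as a binary edge mask) could look roughly like the sketch below. The Cartopy reprojection step is omitted, and the function name, the catalogue format, the use of OpenCV, and the 5-pixel minimum rim radius are assumptions rather than the authors' code.

```python
# Hypothetical sketch: crop a DEM window, resize it to 256x256, and draw catalogued
# crater rims (already converted to crop-pixel coordinates) into a binary edge mask.
import numpy as np
import cv2

def make_sample(dem, x0, y0, crop_size, craters_px, out_size=256, min_radius_px=5):
    window = dem[y0:y0 + crop_size, x0:x0 + crop_size].astype(np.float32)
    image = cv2.resize(window, (out_size, out_size))

    scale = out_size / float(crop_size)
    mask = np.zeros((out_size, out_size), dtype=np.uint8)
    for cx, cy, r in craters_px:                 # crater centre and radius in crop pixels
        if r * scale >= min_radius_px:           # very small rims are not drawn
            cv2.circle(mask, (int(cx * scale), int(cy * scale)),
                       int(round(r * scale)), color=1, thickness=1)
    return image, mask
```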
Network Architecture
In this subsection, we design a new network called effective residual U-Net (ERU-Net), which uses U-Net [18] as the network infrastructure and combines the characteristics of the residual blocks in ResNet [21], which can simplify network training and sufficiently use image information. The network structure of ERU-Net is shown in Figure 2.
As shown in Figure 2, the whole network is divided into three parts, i.e., the encoder, bridge, and decoder parts [28].The encoder part encodes the input image into the tensor form, shrinks the size of the feature map, and increases the number of feature map channels.The bridge part connects the encoder and decoder parts.The decoder part changes the image features to pixel-level images, enlarges the size of the feature map, and reduces the number of feature map channels.The lunar landform images are input into the encoder part, and the last layer of the network outputs the lunar crater edge prediction results (see the first row in Figure 3).The encoder part contains three residual units [29], each of which is formed of a convolution layer and a residual convolution block.Each residual unit in the encoder is followed by a pooling layer to reduce the size of the feature map.There is a residual unit in the bridge part.The decoder part contains three residual units.Before each residual unit in the decoder, a deconvolution layer is added to enlarge the feature map.Then, the enlarged feature map obtained by each deconvolution is concatenated with the feature map of the same size and channels in the encoder part.
Unlike deep residual U-Net [29], our ERU-Net uses 3 × 3 convolution to adjust the number of network channels before the residual unit, which can simplify the identity mapping procedure in the residual convolution [21].The input and output sizes of each residual convolution are identical in ERU-Net.Therefore, they can be directly added without 1 × 1 convolution to adjust channel number and, consequently, can speed up the network training processing.In addition, deep residual U-Net uses 3 × 3 convolution with 2 strides to shrink the size of the feature map and exploits the up-sampling operation to enlarge the feature map size.In the proposed ERU-Net, the max-pooling operation is used for down-sampling, and 3 × 3 deconvolution with 2 strides is used for up-sampling.
Loss Computation
The loss function is used to estimate the inconsistency between predicted results and the ground-truth in machine learning [32].The smaller the loss value, the better the robustness of the model.
The essence of the crater network prediction is to determine whether each pixel is on the crater edge; it is essentially a binary classification problem. Here, the loss function used in the ERU-Net training process is the binary cross-entropy (BCE) loss [33,34]:

$L_{BCE} = -\sum_i \left[ t_i \log p_i + (1 - t_i) \log (1 - p_i) \right]$ (1)

where $p_i$ is the network's prediction for pixel $i$ in the ERU-Net output, and $t_i$ is the label of this pixel in the ground-truth. The loss of an image is the sum of the losses over all pixels. If the difference between the predicted image and the labeled image is large, the loss will be greater.
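A small NumPy check of Equation (1) is given below; in practice the framework's built-in binary cross-entropy loss would be used, which differs from this sum only by averaging over pixels.

```python
# Per-image BCE of Equation (1): p is the sigmoid output, t the binary rim label,
# both 256x256 arrays; clipping avoids log(0).
import numpy as np

def bce_image_loss(p, t, eps=1e-7):
    p = np.clip(p, eps, 1.0 - eps)
    return -np.sum(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))
```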
Crater Extraction
The image predicted by the network cannot directly provide crater information, such as the location information of craters.It is necessary to find out the location and size of the potential impact craters in the prediction result.Most impact craters are circular.We use the match template algorithm in scikit-image [35] that is an image processing package to match the crater edge.The match template algorithm is a simple and direct algorithm.Firstly, we draw a circle with radius Ra in the blank image as a template, and the prediction result of the network is regarded as the target image.Then, the match template algorithm is used to get matching results.In the match template algorithm, we need a matching threshold to eliminate unreliable matching results.This threshold is 0.5, and the radius Ra ranges from 5 to 40 pixels in this work.Due to the existence of many overlapping craters, we find that the detection of rings in segmentation results using Hough transform is not so effective because it takes more time than a match template algorithm when setting a small center distance value.It is believed that this matching method is more accurate than other methods, i.e., Hough transform, and Canny edge detection [36].
In [37], researchers generated templates from LOLA (lunar orbiter laser altimeter) track data; their templates are crater images. Our match template algorithm is not the same as that in [37]: our templates are obtained by drawing circles rather than by finding craters in segmentation results, and the authors of [37] matched impact craters directly from the original LOLA track image. We may get multiple candidate impact craters at one location, so we need to choose the most suitable crater for that location and use NMS to filter out the other candidates.
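A hedged sketch of this extraction step is given below: ring templates with radii from 5 to 40 pixels are correlated with the predicted edge map, peaks above the 0.5 threshold become candidates, and a simple greedy, distance-based suppression stands in for the NMS step (the exact suppression rule used by the authors is not specified here).

```python
# Illustrative crater extraction from an ERU-Net-style edge map using ring
# template matching (scikit-image) followed by a greedy overlap suppression.
import numpy as np
from skimage.draw import circle_perimeter
from skimage.feature import match_template, peak_local_max

def extract_craters(edge_map, r_min=5, r_max=40, threshold=0.5):
    candidates = []
    for r in range(r_min, r_max + 1):
        template = np.zeros((2 * r + 3, 2 * r + 3), dtype=np.float32)
        rr, cc = circle_perimeter(r + 1, r + 1, r)      # one-pixel ring template
        template[rr, cc] = 1.0
        corr = match_template(edge_map.astype(np.float32), template, pad_input=True)
        for y, x in peak_local_max(corr, threshold_abs=threshold):
            candidates.append((x, y, r, corr[y, x]))

    # Keep the best-scoring candidate among overlapping detections.
    candidates.sort(key=lambda c: c[3], reverse=True)
    kept = []
    for x, y, r, score in candidates:
        if all((x - kx) ** 2 + (y - ky) ** 2 > max(r, kr) ** 2 for kx, ky, kr, _ in kept):
            kept.append((x, y, r, score))
    return kept
```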
After extracting the impact craters, we need to determine whether each crater is correctly recognized. Given a lunar image I, $(x_i, y_i)$ is the position of a crater $c_i$ extracted from I by our network, where $x_i$ is the latitude of the crater and $y_i$ is its longitude. Let $r_i$ be the radius of the crater $c_i$. $(\hat{x}_j, \hat{y}_j)$ is the position of the ground-truth crater corresponding to $c_i$, where $\hat{x}_j$ is the latitude of this ground-truth crater and $\hat{y}_j$ is its longitude; its radius is denoted as $\hat{r}_j$. If Equations (2) and (3) are satisfied, the recognized crater is a correct crater; otherwise, it is considered a false crater. Denote $D_{x,y}$ as the longitude and latitude error threshold, and $D_r$ as the radius error threshold. In the experiments, we set $D_{x,y} = 2.0$ and $D_r = 1.0$.

$\left( (x_i - \hat{x}_j)^2 + (y_i - \hat{y}_j)^2 \right) / \min(r_i, \hat{r}_j)^2 < D_{x,y}$ (2)

$\left| r_i - \hat{r}_j \right| / \min(r_i, \hat{r}_j) < D_r$ (3)
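A direct transcription of this matching test follows; the positional criterion uses the reconstructed form of Equation (2) above (position error normalized by the smaller radius), since the original rendering of that equation is incomplete.

```python
# Check whether a detected crater (x, y, r) matches a ground-truth crater
# (x_gt, y_gt, r_gt) under the thresholds D_xy = 2.0 and D_r = 1.0.
def is_correct_detection(x, y, r, x_gt, y_gt, r_gt, d_xy=2.0, d_r=1.0):
    min_r = min(r, r_gt)
    position_ok = ((x - x_gt) ** 2 + (y - y_gt) ** 2) / min_r ** 2 < d_xy
    radius_ok = abs(r - r_gt) / min_r < d_r
    return position_ok and radius_ok
```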
Evaluation Method
The evaluation method used in the general semantic segmentation task is IOU (intersection over union) [38] of the network.The IOU score is the standard performance measure for the semantic segmentation.However, IOU is not suitable for the crater recognition task because we need to extract craters from prediction results.Therefore, we use the evaluation method commonly used in machine learning as follows.
Let P be the precision and R be the recall [39], which are commonly used to evaluate the performance of a classification model in machine learning. The precision and recall are, respectively, computed in Equations (4) and (5):

$P = T_p / (T_p + F_p)$ (4)

$R = T_p / (T_p + F_n)$ (5)

The craters satisfying Equations (2) and (3) are marked as correctly recognized craters (blue circles in Figure 3d), and the total number of these craters is denoted as $T_p$. Craters in the prediction result that do not match any ground-truth crater are marked as newly discovered craters [36] (green circles in Figure 3d), and their total number is denoted as $F_p$. The number of unrecognized craters in the ground-truth crater set is denoted as $F_n$ (red circles in Figure 3d). $T_p + F_n$ is the number of ground-truth craters, and $T_p + F_p$ is the number of predicted craters.
Generally speaking, when the precision is high, the recall is often low, and vice versa. To balance the influence of precision and recall, the F-score ($F_\beta$) [39] is used to measure the classification performance of the model. When recall is more important, set β > 1; when precision is more important, set β < 1; when recall and precision are equally important, set β = 1. $F_\beta$ is defined as follows:

$F_\beta = (1 + \beta^2) \cdot P \cdot R / (\beta^2 \cdot P + R)$ (6)

With β = 2 this becomes

$F_2 = 5 \cdot P \cdot R / (4 \cdot P + R)$ (7)

Note that the lunar crater catalogue is not completely annotated. That is, many truly existing craters are not marked in the ground-truth; when the network detects such craters, they are counted against it as false positives. Some examples of these craters are shown in Figure 1, in which they are denoted by the red circles in the first row. In [40], the Airbus ship detection competition was evaluated using the F2-score; the ground-truth data are incomplete in that competition, which therefore pays more attention to the recall measure. In our crater recognition, the crater information is provided by Head [25] and Povilaitis [26], and the labels of the training dataset are incomplete too. Like [40], we also pay more attention to the recall measure. In this experiment, we set β = 2 and choose the F2-score as the main measure of network performance.
In the previous section, newly discovered craters are those found only by the network prediction. The false-positive rate of crater recognition in this paper is therefore also called the discovery rate (DR) [36]. We use two discovery rates. The first discovery rate (DR_1) is the ratio of newly discovered craters to all recognized craters, and the second discovery rate (DR_2) is the ratio of newly discovered craters to all impact craters (recognized and unrecognized). They are defined as follows:

DR_1 = F_p / (T_p + F_p) (8)

DR_2 = F_p / (T_p + F_p + F_n) (9)

Besides the above measures, the accuracy of the position and size of the recognized craters can also be used to evaluate the performance of the model. We calculate the latitude error (Error_la), longitude error (Error_lo), and radius error (Error_r) of the recognized craters with the following formulas:

Error_lo = abs(lo_p − lo_t) × 2 / (r_p + r_t) (10)

Error_la = abs(la_p − la_t) × 2 / (r_p + r_t) (11)

Error_r = abs(r_p − r_t) × 2 / (r_p + r_t) (12)

where lo_p is the longitude of the predicted crater and lo_t the longitude of the corresponding true crater, la_p is the latitude of the predicted crater and la_t the latitude of the corresponding true crater, and r_p is the radius of the predicted crater and r_t the radius of the corresponding true crater.
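The discovery rates and the normalized position and size errors can be computed directly from the matched pairs. The code below is again an illustrative sketch rather than the authors' implementation, and it assumes at least one predicted crater:

```python
def discovery_rates(t_p, f_p, f_n):
    """Discovery rates of Equations (8) and (9)."""
    dr1 = f_p / (t_p + f_p)         # new craters among all recognized craters
    dr2 = f_p / (t_p + f_p + f_n)   # new craters among all impact craters
    return dr1, dr2


def position_size_errors(pred, true):
    """Errors of one matched pair, normalized by the mean radius (Equations 10-12)."""
    (lo_p, la_p, r_p), (lo_t, la_t, r_t) = pred, true
    scale = 2.0 / (r_p + r_t)
    return (abs(lo_p - lo_t) * scale,   # longitude error
            abs(la_p - la_t) * scale,   # latitude error
            abs(r_p - r_t) * scale)     # radius error
```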
Results
The experimental data in this paper are lunar digital elevation model images from NASA [41]. Our method is compared with the CNN proposed by Aris et al. [36], deep residual U-Net [29], and D-LinkNet [42], by adjusting the number of training images, modifying the number of starting filters, and deepening the network structure. We use Keras [33] as the deep learning framework. All models are trained on one NVIDIA GTX1080 GPU with 8 GB of onboard memory.
Training
We use the Python-based deep learning framework Keras [33] to construct the network model and the Adam optimizer [43] to optimize it. The learning rate is 0.0001 and the dropout rate is 0.1. Three images are trained per batch, and the number of training epochs is 5.
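For readers who wish to reproduce this configuration, the following Keras snippet is a minimal, runnable sketch. The single-convolution stand-in model and the random arrays are placeholders for the actual ERU-Net and the DEM training data, while the optimizer, learning rate, dropout rate, batch size, and epoch count follow the values given above.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in model: one conv layer with a sigmoid output in place of the full ERU-Net.
inputs = keras.Input(shape=(256, 256, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.Dropout(0.1)(x)                      # dropout rate 0.1, as in the text
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # lr = 0.0001
              loss="binary_crossentropy")

# Random placeholder data standing in for DEM images and crater-edge masks.
x_train = np.random.rand(9, 256, 256, 1).astype("float32")
y_train = (np.random.rand(9, 256, 256, 1) > 0.9).astype("float32")

model.fit(x_train, y_train, batch_size=3, epochs=5)  # batches of 3 images, 5 epochs
```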
Performance Comparison of Networks
In this section, we conducted experiments with different networks (ERU-Net, deep residual U-Net [29], the network proposed by Aris [36], D-LinkNet [42], and ERU-Net-2, i.e., ERU-Net whose residual unit contains two Res-Blocks). The number of training images is 30,000, the number of validation images is 3000, and the number of test images is 3000. Each algorithm was run randomly five times. Crater recognition results of the different networks are shown in Figure 4.
In order to demonstrate the performance of our algorithm, we evaluate the trained models on the 3000 test images using the evaluation method described in Section 2.4. The recognition result of each algorithm is shown in Table 1. It reports the mean recall (Equation (4)), precision (Equation (5)), F2-score (Equation (7)), discovery rates (DR1, Equation (8); DR2, Equation (9)), the longitude error (Lo-err), the latitude error (La-err), and the radius error (Ra-err) (Equations (10)-(12)) obtained by the networks. For the first five measures, higher values indicate better recognition results; for the last three, lower values indicate better performance. The number of initial filters for all algorithms in Table 1 is 112. It can be seen from Table 1 that ERU-Net has a higher recall and discovery rate than the network proposed by Aris [36]; the recall of our method is about 5% higher than that of Aris' method. ERU-Net-2 has more parameters than ERU-Net because ERU-Net-2 has two Res-Blocks per residual unit. The number of parameters of deep residual U-Net is almost equal to that of ERU-Net. As seen in Table 1, only DR1 of deep residual U-Net is higher than that of ERU-Net, while its other measures are lower. For Aris' Net, except for the precision measure, our method outperforms this network on the key measures of lunar crater recognition. Among all compared approaches, our approach achieves the best overall recognition performance. As shown in Figure 5, we draw the P-R curves of ERU-Net, ERU-Net-2, and Aris' method, with match-template thresholds ranging from 0.2 to 0.8. When the threshold is increased, the recall of the model increases and the precision decreases. ERU-Net-2's performance is close to that of Aris' method, and ERU-Net's performance is better than the others.
Fewer Training Data
In order to verify the learning ability of our proposed model, we changed the number of training images and retrained the network models. The number of training images is 5000, and our model and Aris' network were each run randomly five times. The results are shown in Table 2. For each algorithm, this table reports the mean recall, precision, F2-score, discovery rates, latitude and longitude errors, and radius error. It can be concluded from Table 2 that the precision and recall of ERU-Net trained on 5000 images are higher than those of the network proposed by Aris et al. The highest precision is obtained by ERU-Net-2, which is about 4% higher than that of Aris' network. The recall of ERU-Net is 3.3% higher than that of Aris' net. ERU-Net-2 achieves the highest F2-score of the three models. Comparing Tables 1 and 2 shows that the recognition results obtained with 30,000 training images are better than those obtained with 5000 images. Moreover, our proposed method is well suited to small-scale datasets.
Deepening Network Structure
In order to investigate whether adding residual blocks can effectively prevent performance degradation, we deepen ERU-Net, deep residual U-Net [29], and Aris' Net [36]. The number of residual units in the encoder and decoder parts is three in ERU-Net and deep residual U-Net; likewise, the number of convolution units in the encoder and decoder parts is three in Aris' Net. We deepen ERU-Net and deep residual U-Net by adding one residual unit to the encoder and decoder parts, respectively, and deepen Aris' Net [36] in the same manner. Before deepening, the depth of ERU-Net, deep residual U-Net, and Aris' Net is four; after deepening, the depth of these networks is five. Considering the GPU memory size and the time costs, we set the number of initial filters to 56 in this experiment. The number of lunar images for training is 30,000, and each network model is trained five times.
Table 3 reports the recognition results after deepening the network structures. It can be seen from Table 3 that the recall of ERU-Net increases by 1.8% and its F2-score increases by 1.5% after deepening the network. In the deeper structure, the recall of deep residual U-Net increases by 1.2% and its precision increases by 1.7%, but its F2-score decreases by 1.4%. Note that the recall of the network in [36] decreases by 1.5%, its precision increases by nearly 1.7%, and its F2-score decreases by 0.9% after deepening. Therefore, compared with deep residual U-Net and Aris' Net, ERU-Net achieves the best recognition performance after deepening the network structure. The reason is that ERU-Net uses residual convolution blocks instead of traditional convolution blocks, which are easier to train and generalize better. The combination of Res-Blocks and batch normalization helps to solve the problem of gradient diffusion and to exploit the classification information within images. The residual structure [21] can be used to address the performance degradation of deep convolution neural networks under extreme depth conditions; it is simple and can improve recognition performance. ERU-Net achieves a better recognition result when the network structure is deepened. Although deep residual U-Net also uses residual convolutions, it achieves a lower F2-score and precision in this experiment than our network. The number of network parameters of ERU-Net with 112 initial filters (denoted ERU-Net-112) is 23.74 million, and its recognition results are shown in Table 1. In Table 3, our network has 5.9 million parameters when it has 56 initial filters (denoted ERU-Net-56). The recall of ERU-Net-112 is 3.8% higher, and its F2-score 0.5% higher, than those of ERU-Net-56. Hence, more initial filters lead to more network parameters and better recognition results.
Crater Distribution
The DEM images used in the experiments of Sections 3.2.1-3.2.3 are randomly selected examples and are not necessarily fully representative of the algorithm's ability to recognize craters in areas with different crater densities. In this section, we therefore discuss the recognition results for different density levels. We divide the crater density into three levels: high, medium, and low, as described below. A 256 × 256 image is classified as low density if it contains fewer than 20 craters larger than 5 pixels in diameter, medium density if it contains between 20 and 100 such craters, and high density if it contains more than 100. The experimental results are shown in Table 4. We find that when the number of craters increases, the recall decreases and the precision increases. In Table 4, our ERU-Net achieves higher recall and precision than U-Net at the high and medium levels. At the low level, the recall of ERU-Net is higher than that of U-Net, while its precision is 3.5% lower.
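The density levels can be assigned with a small helper such as the following illustrative sketch (the function and argument names are ours):

```python
def density_level(crater_radii_px, min_diameter_px=5):
    """Classify a 256 x 256 tile by the number of craters larger than 5 px in diameter."""
    n = sum(1 for r in crater_radii_px if 2 * r > min_diameter_px)
    if n < 20:
        return "low"
    if n <= 100:
        return "medium"
    return "high"
```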
In this experiment, we show that our method also yields better recognition results in dense areas than the other methods.

To obtain the best model of our ERU-Net, we spent five GPU days training ERU-Net with 112 initial filters on 50,000 training images. The depth of ERU-Net is four, and three images are trained per batch. After each epoch, we evaluate the trained model on 3000 test images. The evaluation results are shown in Tables 5 and 6. They show that as training proceeds, the precision tends to increase steadily. However, as shown in Table 5, the recall decreases between the 25th and 30th epochs; therefore, the best model appears between epochs 25 and 30. Table 6 shows that the recall and F2-score reach their maximum at the 28th epoch. The recall of our model based on ERU-Net with 112 initial filters reaches 83.59%, and its precision 84.80%, on the 3000 test images.
Discussion
The proposed method achieves high recall and precision in Moon crater recognition and is a practical algorithm for recognizing impact craters. In the previous sections, we reviewed existing methods and compared them with ours. In this section, we discuss our method from three perspectives: first, the image semantic segmentation method; second, the crater extraction method; and finally, our future work and directions for improving impact crater recognition.
Discussion: Experiment Result
As shown in Section 3.2, the impact crater recognition algorithm we propose achieves good recognition results. The recall of our algorithm is improved by 5% compared with Aris' method [36]. We propose an image segmentation network called effective residual U-Net, whose segmentation ability on the crater recognition task is better than that of deep residual U-Net [29], the network proposed by Aris [36], and D-LinkNet [42]. ERU-Net has a strong learning ability and is suitable for small-scale datasets. After deepening the network structure, our method achieves a 2% recall improvement, deep residual U-Net shows only a 1.2% improvement, and U-Net shows a 1.5% reduction. For impact craters with different density distributions, our method achieves good recognition results. The reason our method outperforms the others is that it produces better segmentation results; the network we designed has a stronger learning ability.
Discussion: Image Segmentation
Our method uses an image segmentation approach to extract the edges of impact craters from the lunar DEM and thereby recognize impact craters. We use residual convolution blocks, dropout, batch normalization, and other techniques to improve U-Net. Our results suggest that using residual convolutions improves the segmentation performance of the network. With a small amount of training data, the learning ability of the network with residual convolutions is strong. When the number of network layers is increased, the use of residual convolutions yields better segmentation results, whereas the segmentation results of the network without residual convolutions degrade after deepening. Nevertheless, this does not mean that more residual convolutions always lead to better segmentation; for example, the result of ERU-Net-2 is not as good as that of ERU-Net when the number of initial filters is 56.
In general, the residual structure alleviates the problems of vanishing and exploding gradients in deep convolution networks, since it uses skip connections and identity mappings to enhance the learning ability of the network.
Discussion: Crater Extraction
In this work, we use a template matching algorithm to extract impact craters from the segmentation results, which performs better than alternative algorithms (Canny edge detection, Hough transform). It can extract small craters lying inside larger ones, and smaller thresholds can be used to identify stacked impact craters. The Canny edge detector and Hough transform have difficulty with these situations; they are better suited to independent, well-separated impact craters.
Conclusions
In this paper, an effective crater recognition algorithm is proposed to recognize craters in the lunar digital elevation model. This research will help lunar researchers identify impact craters more quickly and map the Moon's topography. We use image segmentation and template matching to address the problem of overlapping crater recognition and propose a new deep learning method, ERU-Net, which is based on U-Net and residual convolutions, to perform lunar crater recognition. This work demonstrates that using the proposed ERU-Net to recognize craters from the lunar digital elevation map is effective and feasible. Compared with other deep learning methods applied to lunar crater recognition, ERU-Net performs best. Moreover, our network achieves better recognition results when the network structure is deepened. In addition, ERU-Net can also be used in other image segmentation tasks.
In future work, we will continue our research in three areas. First, we will improve the skip-connections of ERU-Net following [44][45][46][47] or apply instance segmentation methods [48,49] to recognize craters directly; we will also combine our method with traditional classification methods, such as sparse representation algorithms [50], for impact crater recognition on other celestial bodies. Second, multi-information fusion is a problem worth attention: we will try our method on ordinary photographic images and look for a way to combine DEM features with photographic image features to recognize impact craters. Third, we will seek or design better impact crater extraction methods to deal with more complex stacked craters and improve the extraction results.
We evaluate model performance on the test dataset containing 3000 images. The size of the cropped images ranges from 500 to 6500 pixels, and we then resize them to 256 × 256. The experimental data therefore cover a range of resolutions, from 230 m/pixel to 2996 m/pixel (10 pixels/degree to 131 pixels/degree). As shown in the bottom right corner of each panel in Figure 1, the four images have different resolutions.
Figure 1. Original images and labeled images. (a) The original image. (b) The labeled image, yielding the ground-truth crater set used here. The red circles in the original images mark impact craters with obviously missing labels, and the yellow circles mark impact craters with incorrect labels. The bottom right corner shows the scale bar of the labeled DEM image, i.e., the real-world size to which one pixel corresponds.
Figure 3. Crater prediction and extraction results by ERU-Net. (a) The ground-truth labels of lunar images; blue circles denote the ground-truth craters. (b) CNN (convolution neural network) prediction results. (c) Crater extraction results from the prediction results. (d) The recognition results of our method compared with the ground truth: blue circles denote correctly recognized craters, red circles denote unrecognized craters, and green circles denote new craters predicted by our network.
Figure 4. Crater recognition results of different networks. (a) The original lunar DEM (digital elevation model) data. (b) Ground-truth images; blue circles denote ground-truth craters. (c) The recognition results of our proposed ERU-Net. (d) The recognition results of ERU-Net with two Res-Blocks. (e) The recognition results of deep residual U-Net [29]. (f) The recognition results of the network designed by [36]. In (c-f), red circles denote unrecognized craters, blue circles denote correctly recognized craters, and green circles denote new craters predicted by the network.
Table 1 .
The recognition results of different network models on 30,000 training images (bold in the table means the best results on this evaluation index).
Table 2 .
Recognition results of different network models on 5000 training images (bold in the table means the best results on this evaluation index).
Table 3 .
The recognition results of models after deepening the network (bold in the table means the best results on this evaluation index).
Table 4 .
The recognition results in different distributions (bold in the table means the best results on this evaluation index).
Table 5 .
The recognition result between 15 and 30 training time (bold in the table means the best results on this evaluation index).
Table 6 .
The result between 26 and 29 training time (bold in the table means the best results on this evaluation index).
|
v3-fos-license
|
2019-03-12T13:07:17.958Z
|
2015-10-27T00:00:00.000
|
74559978
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://sljid.sljol.info/articles/10.4038/sljid.v5i2.8080/galley/5852/download/",
"pdf_hash": "6d15329a8c54aed9b39be536acc294defbce012b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:606",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d34cf49d55a3a19778bb4505adc018ffb2210477",
"year": 2015
}
|
pes2o/s2orc
|
Clinico-pathological correlation and outcome analysis of disseminated histoplasmosis treated with conventional Amphotericin B
Introduction Disseminated histoplasmosis is a treatable, common opportunistic infection in HIV-infected people and is not uncommon in others in a tropical country like India. The objective of our study was to evaluate the clinico-pathological correlation and treatment outcome of disseminated histoplasmosis treated with conventional Amphotericin B in an endemic area. Material and methods This was a retrospective observational study of twenty-two cases of disseminated histoplasmosis admitted to a tertiary care hospital from January 2009 to December 2012 and treated with Amphotericin B followed by oral Itraconazole therapy for one year. Treatment outcomes, including relapse and mortality, were analyzed in January 2014. Results Histoplasmosis was diagnosed mostly in patients with advanced HIV illness (72%), with a mean CD4 count of 63.43/µl. Tuberculosis and diabetes were other co-morbid illnesses, and histoplasmosis was less common among immunocompetent patients (9%). Fifty percent of the patients presented with cutaneous lesions along with systemic manifestations, while 27% had only mucocutaneous lesions. Adrenal histoplasmosis (18%) was common in HIV-negative subjects. HIV-positive patients showed an excellent response to Amphotericin B followed by Itraconazole therapy. In 27% of HIV-positive patients, the disease manifested as IRIS (immune reconstitution inflammatory syndrome). Relapse was seen in 2 (9%) patients. One year after completion of therapy, 16 patients were cured, 3 patients (13.6%) had died in the early part of treatment, and one was lost to follow-up. Treatment response in HIV-infected patients was excellent, but long-term maintenance Itraconazole therapy was unavoidable (12-32 months). Conclusions It is concluded that early diagnosis and treatment with conventional Amphotericin B followed by Itraconazole can prevent death from an otherwise fatal disease like disseminated histoplasmosis. Adrenal histoplasmosis was common among the HIV-negative population. Extensive follow-up is required to identify early relapse, which may need further prolongation of therapy for cure.
Introduction
Histoplasma capsulatum is a dimorphic fungus and the commonest cause of systemic mycosis. 1 The organism is endemic in the Ohio and Mississippi river basin areas of the United States and is also reported from parts of Central and South America, Europe, Africa and Asia. Histoplasmosis is an infrequently reported disease in India, and only sporadic cases from different regions of this country have been reported in the literature. 2 Infection is caused by inhalation of microspores from contaminated soil, which remains potentially infectious for many years and is most often found near bat and bird habitats. 3 The organism is of low human virulence and manifests clinically only when host immune responses allow persistent parasitisation of macrophages. 4 Disseminated histoplasmosis (DH) runs a varied clinical course, ranging from an asymptomatic self-healing illness to acute or chronic pulmonary disease and an acute to chronic progressive form, depending upon the immune status of the host, the virulence of the organism and the size of the inoculum. Acute progressive disseminated illness, which presents with abrupt onset of fever, organomegaly, lymphadenopathy and cytopenia without any granulomatous response, is usually fatal. The subacute variety follows a relentless course with focal lesions and organomegaly. The chronic form has an indolent course with focal lesions, organomegaly and an effective cell-mediated immune response. Its clinical features overlap with those of other systemic illnesses of tropical countries such as tuberculosis, leishmaniasis, sarcoidosis and malignancies. Diagnosis therefore requires a high index of suspicion, recognition of the common modes of presentation and familiarity with the diagnostic tests.
The aim of our study was to analyze treatment responsiveness, outcome and mortality in terms of age, sex, type of presentation, associated illness, CD4 cell count, clinical manifestations, laboratory and radiographic investigations in diagnosed cases of disseminated histoplasmosis (DH) in a tertiary care hospital.
Materials and methods
We retrospectively analyzed the recorded data of patients diagnosed with DH admitted to Carmichael Hospital for Tropical Diseases, a tertiary referral center in Eastern India, from January 2009 to December 2012 and followed up till December 2013. We defined DH using the following criteria.
a) The presence of extrapulmonary H. capsulatum, with or without granuloma, demonstrated using the Periodic acid-Schiff (PAS) or Grocott's methenamine silver (GMS) stain on aspirated material and biopsy specimens (skin, oral lesion and palatal mucosa, adrenal gland). b) Positive culture (tissue/bone marrow) on Sabouraud dextrose agar media. In some patients the organism was isolated from more than one site.
HIV was diagnosed by standard ELISA method according to NACO protocol.
We defined acute illness if symptoms were present for 2 weeks or less, sub-acute if symptoms lasted between 2 and 6 weeks, and chronic if symptoms lasted more than 6 weeks. Routine laboratory investigation records, including complete haemogram, fasting and postprandial blood sugar, liver function tests, urea, creatinine and LDH, were monitored. Imaging studies, including chest X-ray, ultrasonography of the abdomen and CECT of the abdomen (if needed), were done. In HIV-positive patients, CD4 counts and combination antiretroviral therapy (cART) records were monitored from the Anti-Retroviral Therapy (ART) Centre. Institutional ethical clearance was obtained.
Results and Analysis
In our study, twenty-two patients were diagnosed with DH (male:female ratio 19:3). DH was diagnosed by demonstration of the organism in fine-needle aspiration cytology (FNAC) and/or biopsy material in 19 patients, by fungal culture of aspirated material from cutaneous lesions in 4, and by bone marrow culture in 3 patients. Four patients were diagnosed by cytology/biopsy using high-dose contrast-enhanced computed tomography (CECT)-guided FNAC of the adrenal gland (Table 1). Sixteen patients had HIV infection, 4 were diabetic, 5 had pulmonary tuberculosis, and 8 had extrapulmonary tuberculosis. Two patients were immunocompetent and had no co-morbid conditions. On examination, a few non-tender papulo-nodular, crusted skin lesions were present on the face in 50% of patients, with oral and mucosal ulcers in 6 (27.2%) patients. There was hepatosplenomegaly in 7 (31.8%) patients, splenomegaly in 2 (9.1%) patients, and lymphadenopathy in 4 (18.1%) patients (Table 3). DH was diagnosed in 10 HIV-positive patients before the commencement of ART; in 6 patients, the disease manifested after commencement of ART and was documented as IRIS. Anemia and leucopenia were found in 21 patients, and raised LDH was seen in 12 patients.
The total treatment duration with Amphotericin B followed by oral Itraconazole for non-HIV patients was one year. In the HIV-infected group, however, 2 patients died in the first week after initiation of treatment; as these patients died within the first week of diagnosis, the induction phase of Amphotericin B treatment could not be completed. A third patient, who had a very low baseline CD4 count (<100/µl) before initiation of treatment, died just after completing the course of Amphotericin B injections. Clinical well-being was seen in thirteen patients from the second week of treatment, with gradual disappearance of skin lesions. Ten patients with baseline CD4 counts <100/µl required prolonged Itraconazole maintenance therapy: 4 patients needed 12-16 months, 5 patients 16-24 months, and one patient 32 months of treatment. One patient was lost to follow-up at the 14th month. Two patients with baseline CD4 counts of 100-200 cells/µl were treated for 12-14 months until their CD4 counts rose and maintained a steady value of >150/µl for 6 months (Table 4).
Four patients (with diabetes and tuberculosis) were treated with Amphotericin B and Itraconazole for one year. All of them experienced minor side effects of Amphotericin B; one diabetic patient developed hypokalemia and nephrotoxicity (raised creatinine), which were managed conservatively. The two immunocompetent patients with no comorbidities were treated for twelve months; in these two patients, disappearance of cutaneous lesions was seen 3 months after initiation of therapy, although a full year of therapy was given to all patients. Two patients who had features of Addison's disease were successfully treated for one year with Itraconazole plus corticosteroid and mineralocorticoid supplementation.
Relapse occurred in 2 patients, who were treated with oral Itraconazole for one more year. In one of them, the disease was reactivated (with reappearance of clinical symptoms and laboratory confirmation) 1.5 years after completion of therapy. HIV-positive patients were treated with combination antiretroviral drugs according to the NACO guideline. Diabetes and tuberculosis were managed simultaneously along with histoplasmosis.
Discussion
This was one of the few large studies conducted in India, where previous studies were sporadic. 5,6 In our study, HIV was the commonest associated illness (72.7%), although the overall prevalence was low, representing 0.85% of all admitted HIV patients. A previous study by McKinsey DS et al in 1989 showed a high prevalence rate (5-32%) of histoplasmosis among HIV-positive patients. 7 The low prevalence in our study is probably due to late diagnosis and underdiagnosis by bone marrow culture. 8 Body weakness, fever and weight loss were the commonest symptoms in the study group. Similar to previous studies in other countries, 50% of our patients presented with cutaneous manifestations. 9 We found that only 2 (9%) immunocompetent patients and 27% of the entire study group had oro-mucosal lesions. M. Harnalikar et al (2012) also reported a similar finding. 10 Souja Filho FJ et al, in a previous report in 1995, showed that DH is rare in immunocompetent persons and is usually associated with the severe disseminated form of histoplasmosis. 11 Though adrenal gland histoplasmosis is a part of disseminated infection, none of the HIV-positive patients showed significant clinical and radiological adrenal enlargement. Adrenal involvement was noted in 4 HIV-seronegative patients, and Addison's disease developed in 2 patients. Adrenal involvement in histoplasmosis resulting in adrenal insufficiency is an uncommon presentation. 12 Addison's disease typically occurs with extensive destruction of both adrenal glands during infection by the organism; in the early stages of destruction, it typically presents as a chronic fatigue syndrome. Angeli A et al (1991) showed that the adrenal glands were enlarged and that there were signs of granulomatous inflammation within the adrenals. 13 Adrenal function recovers following cure with antifungal therapy, unlike that seen with treatment of tuberculosis, where lifelong cortisol replacement is needed. 6 The duration of treatment for adrenal histoplasmosis is not clear from existing information, and treatment regimens advocate antifungals given for periods ranging from 6 months to 2 years. 14 Deepak Kothari et al (2013) reported persistence of H. capsulatum in adrenal biopsy material 7 years after treatment with Itraconazole for nine months and suggested that prolonged therapy with regular reviews of adrenal morphology and histology is needed. 14 We too saw one of our patients with adrenal histoplasmosis relapse one year after completion of treatment. Another immunocompetent patient developed signs and symptoms of relapse 1.5 years after completion of therapy.
Similar to McKinsey DS et al (1997), we found that very low CD4 counts (mean CD4 value 63/µL), along with endemicity, may be possible risk factors for DH in advanced HIV illness. 15 Recent studies show that for DH patients with moderately severe or severe manifestations, liposomal Amphotericin B is more effective and less toxic than the deoxycholate formulation of Amphotericin B. Patients in our study were treated with the low-cost deoxycholate formulation of Amphotericin B for 10 days (hospital supply) followed by Itraconazole therapy for one year (according to IDSA guidelines). Since ART was introduced, the clinical and immunologic status of HIV-infected patients has dramatically improved. We also found that the cure rate was high among treated HIV-infected patients, and treatment was discontinued when CD4 counts steadily rose above 150/µl. Mitchell Goldman et al (2004) showed that discontinuation of antifungal maintenance therapy appeared to be safe in HIV-infected patients who achieved an adequate immunological response to antiretroviral therapy (CD4 >150/µl). 16 DH manifested as IRIS in 6 (27%) HIV patients in the current study; IRIS occurred within two months of ART initiation and was documented by a raised CD4 count and the presence of granuloma. Breton et al (2006) also demonstrated IRIS in an HIV-positive population. 16 Acute progressive DH in AIDS has a high mortality rate. Three patients presented with very advanced illness of the acute variety and died in the early weeks after diagnosis, before completing the induction phase of therapy with Amphotericin B. Relapse of disease was seen in 2 of our patients, who required prolonged Itraconazole therapy. Prolonged suppression (secondary prophylaxis) with once-daily Itraconazole is recommended for patients who experience relapse or have irreversible immunosuppression. 18
Conclusion
Treatment with liposomal Amphotericin B is known to be superior in terms of safety, toxicity and mortality for the treatment of disseminated histoplasmosis. However, conventional Amphotericin B is also highly effective in both immunocompromised and immunocompetent patients and can be used for its management in resource-poor settings, including India.
Limitations
As our institution is a tertiary hospital to which moderate to severe cases are referred from different parts of Eastern India, the exact number of infected individuals, particularly in endemic areas, could not be assessed. We could not estimate serum or urinary histoplasma antigen levels due to the unavailability of the test.
|
v3-fos-license
|
2021-09-23T05:13:55.025Z
|
2021-07-15T00:00:00.000
|
237592371
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.cell.com/article/S2329050121001182/pdf",
"pdf_hash": "1137b6c1dfa1115cb35296a2d0951771d906b4c8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:607",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "1137b6c1dfa1115cb35296a2d0951771d906b4c8",
"year": 2021
}
|
pes2o/s2orc
|
High throughput screening of novel AAV capsids identifies variants for transduction of adult NSCs within the subventricular zone
The adult mammalian brain entails a reservoir of neural stem cells (NSCs) generating glial cells and neurons. However, NSCs become increasingly quiescent with age, which hampers their regenerative capacity. New means are therefore required to genetically modify adult NSCs for re-enabling endogenous brain repair. Recombinant adeno-associated viruses (AAVs) are ideal gene-therapy vectors due to an excellent safety profile and high transduction efficiency. We thus conducted a high-throughput screening of 177 intraventricularly injected barcoded AAV variants profiled by RNA sequencing. Quantification of barcoded AAV mRNAs identified two synthetic capsids, peptide-modified derivative of wild-type AAV9 (AAV9_A2) and peptide-modified derivative of wild-type AAV1 (AAV1_P5), both of which transduce active and quiescent NSCs. Further optimization of AAV1_P5 by judicious selection of the promoter and dose of injected viral genomes enabled labeling of 30%–60% of the NSC compartment, which was validated by fluorescence-activated cell sorting (FACS) analyses and single-cell RNA sequencing. Importantly, transduced NSCs readily produced neurons. The present study identifies AAV variants with a high regional tropism toward the ventricular-subventricular zone (v-SVZ) with high efficiency in targeting adult NSCs, thereby paving the way for preclinical testing of regenerative gene therapy.
INTRODUCTION
The adult brain has long been considered a tissue with no regenerative capacity, partly due to the absence of pluripotent cells. In the late 1990s, a reservoir of neural stem cells (NSCs) with the potential to generate glia and neuronal progeny was identified in the adult mammalian brain. 1,2 The largest reservoir of NSCs in rodents is located along the walls of the lateral ventricles, the so-called ventricular-subventricular zone (v-SVZ). The potential of these NSCs to produce different glia and neuronal subtypes has been demonstrated by lineage-tracing studies. [3][4][5] NSCs become activated to provide progeny for tissue homeostasis but also in the context of traumatic brain injury. [6][7][8][9][10][11] However, the ability to activate NSCs declines markedly with age, 12 hampering repair of the brain. This fairly limited endogenous regenerative capacity calls for new strategies to specifically target and genetically modify adult NSCs within the natural environment of the brain.
Many different viral and transgenic approaches have been developed in the past to manipulate adult NSCs and their progeny. 13 For a long time, onco-retroviruses and lentiviruses that integrate their genomes into the host cellular chromatin were the tools of choice. However, limitations of integrating viruses, 14 such as insertional mutagenesis, 15,16 gradual silencing of the inserted transgene, 17,18 and the fact that not all nondividing cells are equally transduced in vivo, 19 hamper their use for targeting especially quiescent (q)NSCs within the v-SVZ. Over the last few years, the non-enveloped adeno-associated viral (AAV) vectors have taken center stage as a gene-delivery vehicle for human gene therapy, with two gene therapeutic approaches having gained regulatory approval for commercial use in patients, Glybera (uniQure) and Luxturna (Novartis), and with a large number of AAV gene therapeutic strategies, including in the CNS, under clinical development, as reviewed in Hocquemiller et al., 20 Deverman et al., 21 Foust et al., 22 and Wang et al. 23 AAVs are small virus particles belonging to the dependoviruses within the Parvoviridae family, with a capsid diameter of ~22 nm that sterically limits their genome to ~4.7 kb. 24 The original AAV genome consists of only two genes, rep and cap; the cap gene encodes the structural proteins of the AAV capsid, whereas the rep gene is involved in several processes ranging from transcription initiation to packaging of the AAV genome. For vector production, these genes are commonly delivered in trans and thus can be easily modified. [25][26][27][28][29][30][31][32][33] Over the last decades, hundreds of AAV isolates were identified in various species, with an interestingly high homology regarding their capsid protein amino acid sequences, e.g., up to 99% for the primate isolate AAV1 compared to the human isolate AAV6. 34 Favorable safety profiles combined with the ability to mediate long-term transgene expression and to efficiently target many different human tissues are major assets that make AAVs a preferred technology. 25,[35][36][37][38] Nonetheless, specific targeting of NSCs in the v-SVZ has remained challenging to date. Whereas the most efficient wild-type (WT) serotype, AAV9, shows high transduction efficiency upon intravenous and intracranial injection, it mainly targets neurons and astrocytes, but not NSCs. 22,[39][40][41] Just recently, the power of structure-guided DNA shuffling was used to develop the newly engineered AAV variant SCH9, which was able to target cells in the v-SVZ including NSCs. 42 However, to date, the usefulness of AAV vectors for transduction of stem cells remains debated, mainly based on conflicting reports concerning their transduction efficiency, as reviewed. 43 The variable regions of the viral protein (VP), which is encoded by the cap gene, are involved in receptor binding and antibody recognition, and thus modifications thereof can be used to guide targeting of specific cell types. Engineering of the AAV capsid for optimization of organ, region, or cell specificity can be achieved by methods such as random cap gene mutation, DNA family shuffling, or peptide display, combined with in vivo selection. 42,[44][45][46][47][48][49][50] Most recently, barcoding of double-stranded encapsidated DNA and next-generation sequencing (NGS) were shown to allow for high-throughput screening of AAV capsid libraries. 51,52
Taking these advances as a platform, we here apply such barcoded AAV libraries by intracerebroventricular injection into the adult rodent brain in order to find an optimal candidate for transducing NSCs of the v-SVZ. By using a combination of NGS, immunohistochemistry (IHC), flow cytometry, and mathematical modeling, we validate transduction of the NSCs within the v-SVZ and of their neurogenic lineage by the novel AAV capsid peptide-modified derivative of WT AAV1 (AAV1_P5).
RESULTS
To identify AAV capsids able to transduce NSCs in the v-SVZ with the highest transduction efficiency possible, we performed an NGS-based high-throughput screening of 177 different barcoded AAV capsid variants. These AAV variants comprise 12 AAV WTs, 94 newly generated peptide display mutants based on these WTs, and 71 chimeric capsids generated through DNA family shuffling. Among the synthetic capsids are 24 previously published benchmarks, with the remaining ones being generated as described in Materials and methods, in Table S4, and in greater detail in Weinmann et al. 53 To assess the performance of individual AAVs, the capsid variants were uniquely barcoded with a 15-nucleotide (nt)-long random DNA sequence and packaged into an AAV vector expressing a cytomegalovirus (CMV) promoter-controlled eYFP (enhanced yellow fluorescent protein) that harbors the barcode in its 3′ untranslated region (UTR). At 7 days post-injection (dpi), qNSCs and active NSCs (aNSCs), as well as other cell populations of the v-SVZ, including transient amplifying progenitors (TAPs), neuroblasts, astrocytes, oligodendrocytes, and ependymal cells, were analyzed by fluorescence-activated cell sorting (FACS) as previously described 6,12,54 (Figures S1A and S1B; Tables S2 and S3). Finally, RNA libraries from the different cell populations were generated for NGS analysis (Figure 1A). In parallel, additional mice were sacrificed at 7 dpi for detection of the eYFP reporter in the v-SVZ. Efficient transduction of cells in the v-SVZ by both AAV libraries was confirmed by detecting the expression of the eYFP reporter along the ventricular walls (Figure 1B). Already at 7 dpi, a few eYFP-positive (eYFP+) cells had migrated to the olfactory bulb (OB) and were detected in the core and granular cell layer (GCL; Figure 1C), indicating that the AAV vector was retained along the lineage and did not prevent migration.
For AAV mRNA analysis, capsids were ranked within each sorted cell population by the relative expression of their cognate barcodes, normalized by their frequency within library #1 and library #3. Overall capsid rankings of the 71 capsids shared by both libraries revealed the same top candidates and correlated strongly (Spearman's rank correlation r = 0.84, p < 0.01) (Figure 1D). Furthermore, we did not find a significant association between barcode guanine-cytosine (GC) content and frequency in either library (Figures S2L and S2M and Materials and methods), indicating that barcode composition did not bias the screening readout.

Figure 1. In vivo screening to identify AAV capsids that specifically target the v-SVZ. (A) Schematic illustration of the experimental outline of the in vivo screening, including the markers used to sort cells of the NSC lineage (see Figure S1 for the sorting strategy).

Two synthetic capsids, AAV9_A2 and AAV1_P5, emerged as the lead candidates of the screen (Figures S2F and S2K). These two lead candidates clearly outperformed the well-established AAV2 and AAV9 WT capsids across all v-SVZ cell populations (Figures 1K and 1L), as well as the parent WT AAV1. Taken together, our study successfully identified AAV capsids that are highly region specific for the v-SVZ, probably due to their inability to migrate out of this region, as reported for the SCH9 variant. These candidates exhibited a higher efficiency in targeting both aNSCs and qNSCs than established WT AAV variants in the v-SVZ in vivo.
One potential application of gene therapy is to genetically modify freshly isolated cells and transplant them back into the donor. Hence, to identify the capsid with the fastest transduction rate of isolated NSCs, we assessed the expression dynamics of the WT serotypes AAV2 and AAV9 (AAV2_WT and AAV9_WT), AAV9_A2, and AAV1_P5 in NSCs in vitro. To detect viral transduction of targeted cells and their progeny, we took advantage of the recombination of pairs of loxP sites by the Cre recombinase (Cre/loxP) system and engineered the AAVs to express a CMV immediate enhancer/β-actin (CAG) promoter-controlled Cre recombinase fused to GFP (CAG_Cre::GFP). We decided to use the CAG promoter to assess the performance of these capsids, since this promoter proved to outperform other promoters for in utero electroporation of embryonic neural progenitors. 55 Subsequently, we transduced primary-cultured NSCs from B6-Gt(ROSA)26Sortm14(CAG-tdTomato)Hze (tdTomato-flox [TdTom-flox]) mice with these 4 candidates (Figure 2A). Cre-fused GFP and cytoplasmic tdTomato were detected via immunocytochemistry at days 1, 3, 5, and 7 post-transduction (dpt) (Figures 2A and 2B). Interestingly, whereas all capsids showed a similar number of transduced cells at 7 dpt (Figures 2C and 2D), AAV1_P5 exhibited the fastest transduction kinetics (Figure 2C), already showing labeling at day 1 (Figures S3A and S3B).
Next, we investigated whether the newly identified AAV capsids AAV1_P5 and AAV9_A2 also target v-SVZ cells in vivo. To this end, we individually injected 10⁹ vgs of AAV9_A2, AAV1_P5, or the well-established AAV9_WT and AAV2_WT, all containing the CAG_Cre::GFP construct, into tdTomato-flox mice (Figure 2E). Notably, at 7 dpi, the tropism toward the v-SVZ differed strongly between the tested capsids (Figures 2F and 2G). AAV2_WT and in particular AAV9_WT targeted many cells outside of the v-SVZ, especially in the medial and dorsal walls of the lateral ventricles, whereas the striatum was not targeted (Figures 2F and 2G and data not shown). In contrast to the WT capsids, AAV1_P5 and AAV9_A2 demonstrated a significantly higher tropism toward the v-SVZ (Figure 2I). AAV1_P5 showed the most unique tropism, with 98% of all tdTomato-labeled cells lying along the v-SVZ. In addition, overall transduction rates also differed among the four capsids. AAV1_P5 and AAV9_A2 exhibited the fastest kinetics and the most robust rate of transduction, with AAV1_P5 transducing the largest number of cells at 5 dpi compared to the other capsids (Figure 2J). The overall number of transduced NSCs became similar at 7 dpi for all capsids except AAV2_WT (Figure 2J). Nevertheless, AAV9_WT mostly targeted cells lying outside of the ventricular wall that we clearly identified as neurons based on their morphology. This is in line with previous reports showing a high transduction efficiency of AAV9 for neuronal cells. 22,56 By contrast, AAV1_P5 and AAV9_A2 exhibited a selective tropism for the v-SVZ, mainly targeting NSCs/TAPs (SOX2+/GFAP+/−/S100B−) as well as ependymal cells (SOX2+/S100B+; Figures 2H and 2J).
Along the wall of the v-SVZ, ependymal cells are organized in a so-called pinwheel architecture with NSCs in the center. 57 Within these structures, ependymal cells outnumber NSCs, explaining why AAV1_P5 and AAV9_A2 transduce more ependymal cells overall. A recent report using single-cell transcriptomics and fate mapping of ependymal cells demonstrates their inability to generate progeny even after growth factor administration or brain injury. 58 This ensures that progeny labeled with AAV1_P5 or AAV9_A2 stems from NSCs. However, manipulated ependymal cells communicate with neighboring NSCs and might indirectly change the progeny of these NSCs. To address this, strategies to de-target ependymal cells, for example by using an NSC-specific promoter or a microRNA (miRNA)-regulated viral vector, 59,60 might be of use; the latter would require a screening for ependymal cell-specific miRNAs. Taken together, our data identify AAV1_P5 and AAV9_A2 as capsids with a selective tropism for the v-SVZ that efficiently target NSCs in vivo.
In order to test the ability of direct AAV1_P5-transduced NSCs to generate progeny, freshly isolated NSCs from tdTomato-flox mice were transduced with AAV1_P5 expressing Cre recombinase under the control of a CMV promoter (CMV_Cre). Thereafter, transduced cells were transplanted into the v-SVZ of C57BL/6N WT mice ( To fully characterize the identity of AAV1_P5-transduced cells in the v-SVZ and the OB, as well as to address potential changes arising from AAV transduction itself, we profiled transduced and untransduced cells from the same mouse by single-cell RNA sequencing (scRNA-seq). To this end, 3-month-old eYFP-reporter mice (B6- and Tlx-CreERT2-YFP mice 62 ) were injected with 10 9 vgs/mouse AAV1_P5 harboring the CMV_Cre construct. Upon transduction, Cre recombinase causes the excision of a transcription terminator upstream of eYFP, which leads to eYFP expression. Transduction also causes excision of the neomycin resistance (NeoR) gene ( Figure 3A, top). 37 dpi, we isolated cells from the v-SVZ and other brain regions as schematically depicted in Figure 3A. More precisely, we isolated labeled cells of the v-SVZ and the striatum, rostral migratory stream (RMS), and OB, here referred to as rest of the brain (RoB).
To capture the remaining unlabeled cells of the NSC lineage in the v-SVZ, we also isolated GLAST + v-SVZ cells (see Figures 3A and S4EÀS4G for the proportion of cell populations). Two samples of two pooled mice each were subjected to scRNA-seq. Initial inspection of the resulting 4,572 single-cell transcriptomes revealed a segregation of proliferating cells as indicated by the expression of the proliferation marker protein KI67 (MKI67) and canonical markers of G2/M and S phase ( Figures S4H and S4I). After mitigating the effects of phase heterogeneity by regression, we obtained a continuous trajectory ranging from NSCs to late NBs (LNBs)/immature neurons ( Figure 3B). This lineage progression is characterized by downregulation of glia markers, followed by increased expression of ribosomal genes and cell-cycle genes, and finally upregulation of neuron differentiation genes. 6 Visualizing the expression of representative genes from Llorens-Bobadilla et al. 6 recapitulated the same transcriptional progression in our dataset ( Figure 3C). Only few eYFP + off-target cells (sample #1: 9.7%; sample #2: 2.7%) were captured, consisting of mostly ependymal cells ( Figure 3B). We found that cells isolated from RoB are located at the very end of this trajectory, as expected ( Figure S4J).
Next, we sought to distinguish labeled (eYFP + NeoR-negative [NeoR À ]) cells from unlabeled (eYFP À NeoR-positive [NeoR + ]) cells in our single-cell transcriptomes ( Figures 3D and S4K). As expected ( Figure 3A, top), eYFP-expressing cells mostly do not express NeoR, and vice versa, cells expressing NeoR mostly do not express eyfp. Only very few cells express both eyfp and NeoR (samples #1 and #2: 1.4% and 3.7%), possibly due to incomplete Cre-mediated excision. Transcripts of the viral Cre-recombinase, however, were rarely detected and mostly in early stages of the lineage but notably, also in very few cells at the end of the lineage, indicating an overall very low expression that prevents estimation of the dilution of viral transcripts along the lineage ( Figure S4L). The floxed genes, eyfp and NeoR, exhibited higher expression than the Cre transcript. eyfp was more readily detected than NeoR, but ultimately, both genes suffered from the usual "dropout" in scRNA-seq, i.e., the failure to capture and/or detect transcripts. 63 For a substantial fraction of cells, neither NeoR nor eyfp was detected. The fraction of such undistinguishable cells was larger in cells with fewer total detected transcripts such as qNSCs and LNBs ( Figures 3D and 3E). To overcome this issue and estimate AAV1_P5 transduction efficiency while accounting for single-cell transcriptomes. Most cells form one continuous trajectory from qNSCs to early NBs (ENBs; mostly from v-SVZ) and late NBs (LNBs)/immature neurons (mostly from rest of brain). Few off-target cells including Ep cells and others (gray) were captured. (C) Mean relative gene expression of NSC lineage markers from Llorens-Bobadilla et al. 6 and Ep cell markers from Shah et al. 58 total transcript count per cell and the likely different expression strengths of eyfp and NeoR, we employed maximum likelihood estimation ( Figure 3F and Materials and methods). LNBs (mostly from eYFP + -sorted RoB) and ependymal cells (GLAST À ) were used as controls since we know that almost all of these cells are transduced. Overall, we estimated a high transduction efficiency ranging from 46% to 93% for the cell types of the v-SVZ lineage and estimated 92% to 100% transduction in cells used as controls.
Lastly, we assessed whether the transduced cells show transcriptomic differences arising from the viral transduction itself. Both eYFP− and eYFP+ aNSCs and TAPs showed high expression of commonly used G2/M-phase marker genes (Figure 3G), which suggests that transduction with AAV1_P5 does not affect proliferation. Differential gene-expression analysis between eYFP+ cells and eYFP− cells (Figure 3H) identified only 18 differentially expressed genes (Table S5), indicating that AAV1_P5 transduction affects their transcriptome only mildly. Furthermore, we did not find any concerted upregulation of viral response genes in this comparison or when comparing eYFP+ cells to eYFP− NeoR+ cells (Figure S4M) or to naive v-SVZ lineage cells from Kalamakis et al. 12 (Figure S4N). In conclusion, we have combined scRNA-seq with lineage tracing using AAV1_P5 and found that transduction does not affect the expression of proliferation markers and overall only minimally affects the transcriptomic readout.
We next tested whether the transduction efficiency could be further optimized by the selection of promoter and number of injected vgs per mouse. To this end, we now packaged the CMV_Cre construct into the AAV1_P5 capsid and injected either 10⁹ vgs per mouse as in Figures 2E–2J or an increased concentration of 10¹⁰ vgs per mouse into tdTomato-flox mouse brains (Figure S5A). In all conditions, tdTomato-labeled cells were detected at high numbers in the v-SVZ, confirming specific v-SVZ targeting by the AAV1_P5 capsid (Figures S5B–S5D). Transduction of cells was over 60 times higher with the CMV_Cre construct (319.9 cells per section) (Figure S5D) than with CAG_Cre (4.8 cells per section) (Figure 2J) when injecting 10⁹ vgs per mouse. By increasing the number of injected vgs from 10⁹ to 10¹⁰, we were able to further increase the number of labeled cells (Figure S5D), including NSCs/TAPs and ependymal cells (Figures S5F and S5G). However, the increased viral load also moderately increased the proportion of labeled cells located outside of the v-SVZ (Figure S5E).
We finally assessed the neurogenic function of transduced NSCs in vivo. To this end, we quantified the number of transduced NSCs in the v-SVZ and their neuronal progeny in the OB. 10¹⁰ vgs/mouse of AAV1_P5 harboring the CMV_Cre construct were injected into the lateral ventricles of tdTomato-flox mice, and at 35 dpi, the number of labeled NSCs in the v-SVZ and of OB interneurons was assessed (Figure 4A). We observed a high heterogeneity in the number of labeled cells, probably due to differences in the injection site. It should be noted that the given coordinates are always relative to the average brain of a WT mouse. Therefore, even small differences in the volume or orientation of the ventricle, or a slight inclination of the head within the stereotactic frame, are potential sources of variability in the injection site. One set of animals exhibited a lower number of labeled cells in the SVZ and OB than the other (Figure 4B). Although a trend toward a reduced number of NSCs/TAPs at 35 dpi was detectable, NSCs still remained in the v-SVZ at this late time point (Figure 4C), suggesting that AAV1_P5 also targeted qNSCs.
To estimate the extent of targeting of the NSC compartment, we took advantage of our previously developed mathematical modeling framework for stem cell dynamics in the v-SVZ. 12 First, we extended our previously established model and calibrated it to the experimentally observed dynamics of TAPs and OB neurons (see Supplemental material [Mathematical modeling]). Instead of fitting the model to average cell counts across mice, we subdivided the data into two groups, with higher and lower labeling, as animals with high labeling in the v-SVZ exhibited a much higher number of labeled cells in the OB than animals with lower labeling (Figures 4D and 4E). When the model was fitted to the data, assuming that viral transduction does not affect cell kinetics and that the observed heterogeneity comes from different numbers of initially labeled NSCs and TAPs, it indicated that approximately 57% of NSCs are labeled in the high-label group and 26% of NSCs in the other group (see Supplemental material). Moreover, the model indicates that in the low-labeled group barely any TAPs would be labeled at the initial time point, whereas in the other group a higher number of TAPs are initially labeled.
Finally, we employed our model to address whether the observed labeling would arise from direct targeting of qNSCs, aNSCs, or both.
To this end, we simulated two scenarios where either only qNSCs or only aNSCs are targeted (Figure 4F). Our simulation indicates that the ratio of labeled qNSCs to aNSCs reaches the same value in both scenarios after approximately 4 days, due to transitions between the quiescent and active state. Altogether, comparison of the model fit to the data is in line with the hypothesis that the number of initially transduced NSCs and TAPs differs between the two groups and that the cell dynamics exhibited by transduced cells are comparable to those of non-transduced cells.

To validate the model prediction of the labeling efficiency of the AAV1_P5 vector, we performed a FACS quantification experiment to directly assess the percentage of NSCs and progeny that is labeled by the virus at 8 dpi (Figure 4G). 5-month-old TiCY mice were injected with 10⁹ vgs/mouse of AAV1_P5 harboring the CMV_Cre construct. FACS quantification analysis was performed as described previously (Figures S3C, S3D, S6A, and S6B), and the results showed 30.46% labeling efficiency for NSCs (Figure 4H; mean eYFP+ percentage of both samples), which is close to the 26% labeling efficiency predicted by the mathematical model (Supplemental material). The model also showed a good fit when applied to the FACS quantification experiment performed to choose the best candidate between AAV1_P5 and AAV9_A2. Moreover, the prediction of a high-labeling group was validated by the observed labeling rate in the single-cell transcriptomics analysis (see Supplemental material).
DISCUSSION
Altogether, in this study, we have performed barcode-based in vitro and in vivo high-throughput screenings of two libraries of WT and engineered AAV capsids. 53 Targeting of NSCs, and especially qNSCs, had previously been demonstrated only in the hippocampal dentate gyrus with the capsid AAV r3.45 64 and the African green monkey isolate AAV4, 65 as well as recently in the v-SVZ using the newly engineered AAV variant SCH9. 42 Here, we have identified two lead candidates for efficient targeting of NSCs ex vivo and in vivo. We particularly characterized the novel capsid AAV1_P5 as highly region specific at targeting cells of the v-SVZ layer, including ependymal cells and NSCs, by IHC, FACS quantification, and scRNA-seq. We moreover show by IHC and scRNA-seq that NSCs targeted with AAV1_P5 were not noticeably affected in their migration and transcriptome and readily generated OB neurons. Furthermore, we demonstrate that the engineered capsid AAV1_P5 also labels qNSCs. We propose that qNSC labeling can not only be achieved by direct targeting of qNSCs but also indirectly through transduction of aNSCs that would later give rise to qNSCs. Indeed, based on mathematical modeling of FACS counts, we predict that labeled cells redistribute between those states within less than 1 week. Therefore, the initial labeling proportion of qNSCs to aNSCs is not crucial when stem cell dynamics are observed on a longer timescale.
AAV1_P5 clearly targets cells in the v-SVZ. Which molecular mechanism leads to efficient targeting of v-SVZ cells by AAV1_P5 is unknown. It was previously shown that the SCH9 variant binds heparan sulfate proteoglycans and galactose, both of which are present on NSCs in the v-SVZ. 42 AAV1_P5 may act via a similar mechanism that would lead to a specific tropism for v-SVZ cells, but other molecular mechanisms are also possible. For instance, AAV1_P5 may be unable to migrate deeply into the ventricular wall, which would favor transduction of NSCs, or it may be that AAV1_P5 has properties that favor its survival or activity in the cerebrospinal fluid. To date, there are only a few cases where such mechanisms underlying altered viral properties of synthetic AAV capsids have been successfully elucidated. [66][67][68][69] One example is the use of the αvβ8 integrin as a receptor for a keratinocyte-specific AAV2. 66 Another example was reported by several labs that have recently identified an interaction of AAV-PHP.B (a peptide-modified AAV9) with the glycosylphosphatidylinositol (GPI)-linked protein LY6A. [67][68][69] Other than these, however, the receptors or interactions that are targeted by peptide-engineered or shuffled AAV variants typically remain enigmatic, as do the intracellular mechanisms underlying their novel features. Hence, the identification of the receptor for AAV1_P5 will be the subject of future studies. In that work, it will also be interesting to study whether AAV1_P5 interacts with other host cell factors that have been identified over the years as critical for transduction with WT capsids, such as the widely used AAV receptor AAVR 70 or intracellular elements such as the proteasome. 71 As a proof of concept, we show that AAV1_P5 labeling can be combined with scRNA-seq to characterize the transcriptomes of NSCs and their progeny from different brain regions. Surprisingly, the number of transduced off-target cells in this experiment (Figure 3B) was much lower than in our previous FACS-based experiments (Figure 2J). A possible explanation is that the main source of off-target cells, ependymal cells, is hard to detect in scRNA-seq experiments: a previous study 72 isolated 9,804 cells from the v-SVZ without marker preselection, and only 46 of them were ependymal cells. As a result, the low off-target percentages reported in Figure 3B should only be expected in scRNA-seq experiments. Our method of AAV1_P5 labeling followed by scRNA-seq paves the way for more complex lineage-tracing experiments in vivo. Recent studies have used CRISPR-Cas9-induced genomic scars combined with scRNA-seq to enable clonal lineage tracing in embryonic development. 73,74 AAVs could be used to induce genomic scars in specific cells at specific time points to enable clonal lineage tracing in adult tissues. We used our scRNA-seq data to further corroborate our assessment that NSCs are efficiently targeted and remain functional after transduction. Future studies using electrophysiology are required to assess whether the progeny generated by transduced NSCs is fully functional and able to integrate into the neuronal circuits of the OB.
Finally, we identified the combination of the CMV promoter and the AAV1_P5 capsid as ideally suited to efficiently transduce NSCs in the v-SVZ. Our finding that the CMV promoter outperforms the CAG promoter differs from previous studies overexpressing plasmids via in utero electroporation in the mouse brain. 75,76 We also found that an increased viral load resulted in higher labeling efficiency, as expected, but at the cost of some regional specificity. This trade-off must be considered when designing future experiments; e.g., when targeting cells outside of the v-SVZ must be absolutely avoided, it is advisable to inject a lower amount of vgs. We conclude that the CMV promoter should be preferred over CAG when using AAV1_P5, injecting 10¹⁰ vgs per mouse or, alternatively, 10⁹ when regional specificity is crucial.
Future experiments will be needed to unravel and understand the mechanisms governing the properties of our candidates. Altogether, we believe that our study opens tantalizing avenues to genetically modify NSCs in their in vivo environment for the treatment of CNS disorders or brain tumors.
Animals
In this work, the mouse lines C57BL/6N, TdTomato-flox, and TiCY were used. All mice were male and were age matched to 8 weeks, except for TiCY mice, which were 5 months old (for FACS quantification) and 3 months old (for scRNA-seq). Animals were housed in the animal facilities of the German Cancer Research Center (DKFZ) at a 12-h dark/light cycle with free access to food and water.
All animal experiments were performed in accordance with the institutional guidelines of the DKFZ and were approved by the "Regierungspräsidium Karlsruhe" (Germany).
AAV vector production
The production of the AAV-barcoded library was done as previously published 77,78 with some modifications: 159 distinct barcodes were inserted into the 3′ UTR of a YFP reporter under the control of a CMV promoter and encoded in a self-complementary AAV genome. Each of the barcodes was assigned to one AAV capsid from a total of 183 variants, which are described in more detail in the accompanying manuscript by Weinmann et al. 53 Altogether, this library production included 12 AAV WTs (AAV1 to AAV9, AAVrh.10, AAVpo.1, and AAV12), 94 peptide display mutants, and 71 capsid chimeras, which were created by DNA family shuffling. Synthetic capsids had previously been isolated in specific tissues or in our recent screens of AAV libraries in cultured cells, mouse liver tissue, or muscle. 79 These synthetic capsids include a set of 12 AAV serotypes that were previously modified by insertion of over 20 different peptides in exposed capsid loops and that were recently characterized in established or primary cells. 79 In the work of Weinmann et al., 53 the producer cell pellet was resuspended in buffer (pH 8.5) and immediately frozen at −80 °C. In total, 5 freeze-thaw cycles were performed with the cell pellet prior to sonication for 1 min 20 s. The cell lysate was treated with Benzonase (75 U/mL; Merck) for 1 h at 37 °C, followed by a centrifugation step at 4,000 × g for 15 min. CaCl2 was added to a final concentration of 25 mM, and the solution was incubated for 1 h on ice, followed by centrifugation at 10,000 × g for 15 min at 4 °C. The supernatant was harvested, and a 1/4 volume of a 40% polyethylene glycol (PEG 8000; BioChemica) and 1.915 M NaCl (Thermo Fisher Scientific) solution was added prior to incubation for 3 h on ice. After centrifugation for 30 min at 2,500 × g and 4 °C, the pellet was dissolved overnight in resuspension buffer (50 mM HEPES [Gibco], 0.15 M NaCl [Thermo Fisher Scientific], and 25 mM EDTA [Sigma]). The solution was then centrifuged for 30 min at 2,500 × g and 4 °C, and the supernatant was mixed with cesium chloride (CsCl; Sigma) to a final concentration of 0.55 g/mL. The refractive index was adjusted to 1.3710 using additional CsCl or buffer, as needed. Next, the vector particles were purified using CsCl gradient density centrifugation. Fractions with a refractive index of 1.3711 to 1.3766 comprising DNA-containing AAV particles were pooled and dialyzed against 1× PBS with a Slide-A-Lyzer dialysis cassette according to the manufacturer's instructions (Thermo Fisher Scientific). Subsequently, the samples were concentrated by using an Amicon Ultra Centrifugal Filter (Millipore; 100,000 nominal molecular weight limit [NMWL], used to retain the viral particles) following the manufacturer's instructions. The volume of the samples was reduced to 250–300 µL. AAV vectors were finally aliquoted and stored at −80 °C.
The production of the AAV1_P5_YFP and AAV9_A2_YFP viruses for the FACS analysis experiment was done as described above, with the only modification that the vectors were purified using two iodixanol gradients. Of note, the barcoded AAV library construct as well as the YFP construct were engineered as double-stranded AAV vectors. The constructs for CAG_Cre::GFP and CMV_Cre were engineered as single-stranded AAV vectors.
AAV vector titration
AAV vectors were titrated using quantitative real-time PCR as described in Senís et al. 80 For the CAG_Cre::GFP construct, the primers and probe GFP_forward (fwd), GFP_reverse (rev), and GFP_probe were used, whereas Cre_fwd, Cre_rev, and Cre_probe were used for the CMV_Cre construct (Table S1). The qPCR was performed on a C1000 Touch Thermal Cycler equipped with a CFX384 Real-Time System (Bio-Rad) with the following conditions: initial melting for 10 min at 95 °C, followed by 40 cycles of denaturation for 10 s at 95 °C and annealing/extension for 30 s at 55 °C. A standard curve was considered reliable when the coefficient of determination (R²) was greater than 0.985.
Stereotactic injection
AAV vectors were stereotactically injected into the lateral ventricle by using the following coordinates calculated relative to bregma: anterior-posterior (AP) −0.5 mm, medio-lateral (ML) −1.1 mm, dorso-ventral (DV) 2.4 mm. Mice received either 10⁹ or 10¹⁰ vgs/mouse in a total volume of 10 µL. The AAV libraries were stereotactically injected into the lateral ventricle by using the following coordinates calculated relative to bregma: AP −0.5 mm, ML −1.1 mm, DV 2.4 mm. Mice received 4 × 10¹⁰ vgs/mouse in a total volume of 2 µL. Ex vivo-manipulated cells (7,000 FACS events) were injected into two areas of the v-SVZ using the following coordinates calculated relative to bregma: AP 0.7 mm, ML 1.6 mm, DV 2 mm and AP 0 mm, ML 1.7 mm, DV 2 mm.
Cell isolation and in vitro cultivation
The lateral v-SVZ was micro-dissected as a whole mount as previously described. 81

Single-cell transcriptomic profiling by 10× Chromium 3′ sequencing

Stereotactic injection, single-cell suspension preparation, and sorting

3-month-old TiCY mice were stereotactically injected into the lateral ventricle with 10⁹ vgs of the AAV1_P5_Cre vector. After 5 weeks of chase time, the mice were sacrificed, and the SVZ, striatum, RMS, and OB were isolated. The latter three tissues were pooled in a single tube and named RoB. From these tissues, a single-cell suspension was prepared as described before (Cell isolation and in vitro cultivation). From the SVZ, the sorted cells were eYFP+ cells (O4/CD45/Ter119-negative, eYFP+) and, among the eYFP-negative (eYFP−) cells, only GLAST+ cells. From the RoB, only eYFP+ cells were sorted. The total number of sorted events for the 2 days of the experiment was 12,000 for SVZ cells and 5,800 for cells of the RoB. Two TiCY mice were pooled for each sorting day. All of the cells were sorted into a volume of 50 µL of 10% fetal calf serum (FCS) in PBS, of which 45 µL was used for loading the Chromium Next GEM Chip G.
Library preparation, sequencing, and mapping
One library per sorting day was prepared by following the manufacturer's protocol (Chromium Next GEM Single Cell 3′ v3.1) and sequenced on a NovaSeq 6000 (PE 100, S1 flow cell).
In order to quantify eYFP and NeoR (NeoR/kanamycin resistance gene) expression, entries for these transgenes were manually added to the FASTA and Gene Transfer Format (GTF) files of the mouse reference genome mm10-3.0.0 provided by 10X Genomics. scRNA-seq reads were pseudoaligned and further processed with kallisto|bustools 83,84 to generate a gene × barcode count matrix.
Computational analysis of scRNA-seq data
Cell barcodes with less than 1,500 unique molecular identifiers (UMIs) or more than 15% mitochondrial reads were filtered out, and the remaining cells were further analyzed in Scanpy v.1.5.1. 85 We used Scanpy to calculate G2/M- and S-phase scores for all cells, based on their expression of G2/M- and S-phase marker genes from Tirosh et al. 86 These scores were then regressed out of the count data to reduce the influence of the cell cycle on clustering.
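As a point of reference, the filtering and cell-cycle regression described above can be expressed with standard Scanpy calls. This is a minimal sketch, not the authors' script (their code is linked under Data and code availability); the input file name and the (truncated) marker-gene lists are placeholders.

```python
# Sketch of QC filtering and cell-cycle regression with Scanpy; file name and
# marker-gene lists are illustrative placeholders.
import scanpy as sc

adata = sc.read_h5ad("svz_counts.h5ad")                 # hypothetical count matrix

# QC: drop barcodes with <1,500 UMIs or >15% mitochondrial reads
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
sc.pp.filter_cells(adata, min_counts=1500)
adata = adata[adata.obs["pct_counts_mt"] < 15].copy()

# Normalize and log-transform before scoring cell-cycle phases
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Score S and G2/M phases with marker genes (e.g., the Tirosh et al. lists)
s_genes = ["Mcm5", "Pcna", "Tyms"]                      # truncated placeholder list
g2m_genes = ["Mki67", "Top2a", "Ccnb2"]                 # truncated placeholder list
sc.tl.score_genes_cell_cycle(adata, s_genes=s_genes, g2m_genes=g2m_genes)

# Regress the phase scores out to mitigate cell-cycle effects on clustering
sc.pp.regress_out(adata, ["S_score", "G2M_score"])
sc.pp.scale(adata, max_value=10)
```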
The first 50 principal components of 3,324 highly variable genes were used for 2D visualization with Uniform Manifold Approximation and Projection (UMAP; n_neighbors = 35) and cell clustering with the Leiden algorithm (resolution = 0.5). Cell clusters were assigned to cell types based on the expression of NSC lineage marker genes previously described in Kalamakis et al. 12 and Llorens-Bobadilla et al. 6 and ependymal cell markers from Shah et al. 58 (Figure 3C). To identify the location of cells from RoB, kernel density estimates of cell density in the 2D UMAP space were calculated for both samples. Since sample #1 contains more RoB cells, and sample #2 contains more v-SVZ cells, we subtracted both densities to highlight cells that most likely stem from RoB (orange cells in Figure S4H).
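The embedding, clustering, and RoB density-difference steps map onto the following Scanpy/SciPy sketch; parameter values are the ones quoted above, while object and column names are assumptions.

```python
# Sketch of embedding, Leiden clustering, and the per-sample density subtraction;
# assumes `adata` was prepared as in the previous snippet and has a "sample" column.
import scanpy as sc
from scipy.stats import gaussian_kde

sc.pp.highly_variable_genes(adata, n_top_genes=3324)
sc.tl.pca(adata, n_comps=50, use_highly_variable=True)
sc.pp.neighbors(adata, n_neighbors=35, n_pcs=50)
sc.tl.umap(adata)                                   # 2D visualization
sc.tl.leiden(adata, resolution=0.5)                 # cell clustering

# Kernel density estimates in UMAP space for each sample; subtracting them
# highlights cells most likely stemming from the rest of brain (RoB).
xy = adata.obsm["X_umap"].T
d1 = gaussian_kde(adata[adata.obs["sample"] == "sample1"].obsm["X_umap"].T)(xy)
d2 = gaussian_kde(adata[adata.obs["sample"] == "sample2"].obsm["X_umap"].T)(xy)
adata.obs["rob_density_diff"] = d1 - d2             # positive: enriched in sample #1
```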
In order to estimate transduction efficiency from scRNA-seq data, we used the following model, based on the usual approach of modeling RNA-seq counts by the NB distribution: For non-transduced cells, we assume that they express NeoR such that an expected fraction μ_R of all of their mRNA transcripts originates from this gene. For each individual cell j, the actual expression strength q_j^R of the gene varies around this expectation according to a gamma distribution with mean μ_R and variance α_R·μ_R. The observed number of UMIs is then modeled as a Poisson variable: k_j^R | q_j^R ∼ Pois(s_j·q_j^R), where s_j is the total UMI count for cell j, summed over all genes. Marginalizing out q_j^R, we find k_j^R to follow an NB distribution with mean s_j·μ_R and dispersion α_R. As we are looking at a non-transduced cell, the UMI count k_j^Y for eYFP is, of course, zero.
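One plausible way to turn this description into a maximum-likelihood estimate of the transduced fraction is sketched below. The mixture structure (transduced cells express eyfp and no NeoR, non-transduced cells the converse), the NB size parameterization, and the optimizer choice are illustrative assumptions rather than the authors' exact implementation, which is available in the linked repository.

```python
# Mixture-model MLE sketch for the transduced fraction p; inputs are numpy arrays
# of per-cell eyfp UMIs, NeoR UMIs, and total UMI counts s.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom


def nb_logpmf(k, mean, r):
    """NB log-pmf with mean `mean` and size r (variance = mean + mean**2 / r)."""
    p = r / (r + mean)
    return nbinom.logpmf(k, r, p)


def neg_log_likelihood(params, k_yfp, k_neo, s):
    logit_p, log_mu_y, log_mu_r, log_r_y, log_r_r = params
    p = 1.0 / (1.0 + np.exp(-logit_p))              # transduced fraction
    mu_y, mu_r = np.exp(log_mu_y), np.exp(log_mu_r)
    r_y, r_r = np.exp(log_r_y), np.exp(log_r_r)

    # transduced: eyfp ~ NB(s*mu_y), NeoR must be absent (count == 0)
    ll_trans = np.where(k_neo == 0, nb_logpmf(k_yfp, s * mu_y, r_y), -np.inf)
    # non-transduced: NeoR ~ NB(s*mu_r), eyfp must be absent (count == 0)
    ll_untrans = np.where(k_yfp == 0, nb_logpmf(k_neo, s * mu_r, r_r), -np.inf)

    ll = np.logaddexp(np.log(p) + ll_trans, np.log(1.0 - p) + ll_untrans)
    return -np.sum(ll)


def estimate_transduced_fraction(k_yfp, k_neo, s):
    # Cells expressing both transgenes (rare, incomplete excision) are excluded
    # here; handling them would need an extra mixture component.
    keep = ~((k_yfp > 0) & (k_neo > 0))
    k_yfp, k_neo, s = k_yfp[keep], k_neo[keep], s[keep]
    x0 = np.array([0.0, np.log(1e-4), np.log(1e-4), 0.0, 0.0])
    res = minimize(neg_log_likelihood, x0, args=(k_yfp, k_neo, s),
                   method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x[0]))          # estimated transduced fraction
```

Because zero counts have positive probability under the NB components, dropout cells (neither transgene detected) contribute to both mixture terms rather than being discarded, which is the point of the model.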
Differential gene expression was assessed by summing UMI counts of cells within a group to yield pseudobulk samples for testing in DESeq2 v.1.29.7. 87 eYFP+ cells were tested against both eYFP− cells and eYFP− NeoR+ cells. Testing eYFP+ versus eYFP− has the advantage of greater statistical power due to higher cell numbers, but some eYFP− cells may be transduced cells with eYFP dropout. Thus, we performed both comparisons, yielding similar results. To account for the unequal distribution of eYFP+ and eYFP− cells along the lineage (Figure S4H), pseudobulk groups were formed per cluster and sample, and the cluster identity was added as a covariate in DESeq2.
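The DE test itself runs in DESeq2 (R); the sketch below only illustrates the preceding pseudobulk aggregation step, with hypothetical data-frame and column names.

```python
# Sum raw UMI counts per (cluster, sample, label) group to build pseudobulk
# samples; cluster identity can then be supplied to DESeq2 as a covariate.
import pandas as pd

def make_pseudobulk(counts: pd.DataFrame, meta: pd.DataFrame) -> pd.DataFrame:
    """counts: cells x genes raw UMI matrix; meta: per-cell 'cluster', 'sample',
    'label' (eYFP+ / eYFP- / eYFP- NeoR+) annotations sharing the same index."""
    groups = meta.groupby(["cluster", "sample", "label"]).groups
    pseudo = {name: counts.loc[idx].sum(axis=0) for name, idx in groups.items()}
    return pd.DataFrame(pseudo)        # genes x pseudobulk-sample matrix for DESeq2
```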
To enable comparison of v-SVZ cells from Kalamakis et al. 12 with our eYFP+ cells, both datasets were integrated with Seurat's SCTransform integration workflow 88 using our cells as reference. The integrated dataset was clustered, and differential expression was assessed as above, using the shared clusters as covariate. Genes with the Gene Ontology (GO) term "GO: 0009615-response to virus" were highlighted.
FACS analysis of AAV-injected mice
FACS analysis for testing the transduction efficiency of the candidate viruses was performed by two methods. The first method consisted of injecting 5-month-old TiCY mice with the AAV1_P5_Cre virus; after 8 days, SVZ and OB cells were FACS analyzed (Figures 4G and 4H). In the second method, we injected 2-month-old C57BL/6N mice with AAV1_P5_YFP and AAV9_A2_YFP viruses and analyzed them after 6 days (Figures S3C and S3D).

RNA isolation and cDNA synthesis

RNA was isolated by using the PicoPure RNA Isolation Kit (Thermo Fisher Scientific). For RNA isolation of in vitro-transduced cells, 1,500 cultured NSCs per set were lysed in 100 µL extraction buffer. For isolation of FACS in vivo-transduced cells, batches of 500 cells or fewer were generated and lysed in 100 µL extraction buffer. Up to 6 batches (2,500 cells) were obtained per set, depending on the cell type (Tables S2 and S3). The cell-containing extraction buffer was incubated for 30 min at 42 °C, and the lysate was frozen at −80 °C to increase the amount of isolated RNA. The cell lysate was mixed 1:1 with 70% ethanol, and RNA was extracted according to the guidelines of the PicoPure RNA Isolation Kit (Thermo Fisher Scientific). RNA was dissolved in 11 µL nuclease-free H2O. The cDNA synthesis was performed as described in Picelli et al. 89 by using a locked nucleic acid-template switch oligo (TSO) (Table S1) and either 14 cycles for in vitro-cultured NSCs or 15 cycles (>300 cells per batch) or 16 cycles (<300 cells per batch) for FACS in vivo-transduced cells for the cDNA enrichment step. After purification 89 using AMPure XP beads (Beckman Coulter), cDNA was dissolved in 10 µL H2O.
Barcode amplification PCR and NGS library preparation
Barcodes were PCR amplified by using 10 ng cDNA as input material. For this purpose, the PCR primers barcode_forward (Bar_fwd) and barcode_reverse (Bar_rev), which bind upstream and downstream of the 15-bp-long barcodes within the corresponding cDNA, were designed, and Phusion High-Fidelity DNA Polymerase (Thermo Fisher Scientific) was used according to its manual in combination with 10 mM dNTPs (Thermo Fisher Scientific) (Table S1). The PCR was performed on a T100 Thermal Cycler (Bio-Rad) with the following conditions: initiation for 30 s at 98 °C, followed by 35 cycles of denaturation for 10 s at 98 °C and annealing/extension for 20 s at 72 °C, and a final step for 5 min at 72 °C. The result was a 113-bp-long PCR amplicon that includes the barcode with its 15-bp-long random DNA sequence. The PCR amplicon was purified with AMPure XP beads (Beckman Coulter) 89 with a bead:sample ratio of 0
Microscopy and cell quantification
All images were acquired with a Leica TCS SP5 Acousto-Optical Beam Splitter (AOBS) confocal microscope equipped with a UV diode 405 nm laser, an argon multiline (458–514 nm) laser, a helium-neon 561 nm laser, and a helium-neon 633 nm laser. Images were acquired as multichannel confocal stacks (z-plane distance 3 µm) in 8-bit format by using a 20× or 40× oil-immersion objective at a resolution of 1,024 × 1,024 and 200 Hz. For quantification of the v-SVZ and total brain sections, tile scans of the whole ventricle or the whole coronal brain section were acquired with a total z-stack size of 25 µm. To quantify the OB, tile scans of the whole OB covering the tissue thickness were acquired. For stained cells from in vitro culture, 4–9 fields of view were imaged. For representative images (2,048 × 2,048 resolution, 100 Hz), the maximum intensity of a variable number of z planes was stacked to generate the final z projections. Representative images were cropped, transformed to RGB color format, and assembled into figures with Inkscape (inkscape.org). For cell quantification, ImageJ (NIH) was used, including the Cell Counter plug-in to navigate through the z stacks. To quantify cells in the OB, the volume of the OB was calculated by multiplying the entire area of every OB section (including the glomerular layer [GLL]) by the entire z-stack size. Cubic micrometers were then converted to cubic millimeters, and cell counts are given as cells per cubic millimeter of OB. To elucidate the labeling efficiency of the different AAV variants in the total v-SVZ (medial, dorsal, and lateral wall of the lateral ventricle), cells were counted on 25-µm-thick coronal sections and are given as cells per 25-µm section. Mainly NSCs located in the lateral wall of the ventricle generate OB neurons during homeostasis. Since a particular area of the lateral v-SVZ supplies cells to a particular volume of the OB, cell numbers were counted for the mathematical modeling in the lateral v-SVZ only. The length of the lateral ventricular wall was measured in a coronal section and multiplied by the z-stack size (25 µm) to estimate the area of the lateral v-SVZ. Afterward, cells in the lateral v-SVZ were counted and normalized to the lateral v-SVZ area. Data are given as cells per cubic millimeter.
NGS screening of barcoded AAV capsid variants: computational analysis

NGS samples were sequenced and demultiplexed by the DKFZ Genomics and Proteomics Core Facility using bcl2fastq 2.19.0.316. This resulted in two (paired-end) FASTQ files per sample. Each FASTQ consists of reads resulting from the targeted barcode amplification and up to 50% PhiX DNA that was spiked in to increase library complexity.
Each AAV variant is associated with a unique 15-mer barcode sequence. To quantify the most successful AAV, we simply counted how often each barcode occurred in each FASTQ file, bearing in mind the following pitfalls: (1) Barcode sequences might occur outside of the amplicon by chance, e.g., in the PhiX genome. (2) Barcodes might have sequencing errors.
(3) Barcodes occur on the forward and reverse strand.
To circumvent issues (1) and (2), we opted for a strategy where we only count barcodes matching the expected amplicon structure. This was achieved with a regular expression matching the expected amplicon context (see the sketch below).

Assigning barcodes to AAV capsids

Raw 15-mer counts were further processed in R. Most observed 15-mers matched a known barcode exactly (library #1: 74%; library #3: 87%), which allowed us to assign them to a unique AAV variant. The remaining 15-mer counts were added to the counts of the closest known barcode, allowing for a maximum of two mismatches.
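The counting and assignment logic can be sketched as follows. The flanking sequences are placeholders (the actual amplicon pattern is not reproduced in the text above), and the original processing was done in shell/R rather than Python; the ≤2-mismatch assignment is implemented here as a simple Hamming-distance search.

```python
# Count 15-mer barcodes only when they appear in the expected amplicon context
# (guards against chance matches, e.g. in PhiX), search both strands, and assign
# inexact 15-mers to the closest known barcode within two mismatches.
import gzip
import re
from collections import Counter

LEFT_FLANK, RIGHT_FLANK = "ACGTACGTAC", "GTCAGTCAGT"      # hypothetical flanks
AMPLICON_RE = re.compile(LEFT_FLANK + r"([ACGT]{15})" + RIGHT_FLANK)
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def count_barcodes(fastq_gz: str) -> Counter:
    counts = Counter()
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 != 1:                         # sequence lines only
                continue
            seq = line.strip()
            for read in (seq, revcomp(seq)):       # forward and reverse strand
                for bc in AMPLICON_RE.findall(read):
                    counts[bc] += 1
    return counts

def assign_to_known(counts: Counter, known: list[str], max_mm: int = 2) -> Counter:
    assigned = Counter()
    for bc, n in counts.items():
        if bc in known:
            assigned[bc] += n
            continue
        dist, best = min((sum(a != b for a, b in zip(bc, k)), k) for k in known)
        if dist <= max_mm:
            assigned[best] += n                    # fold into closest known barcode
    return assigned
```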
Normalization
Each sequenced sample corresponds to one tube with up to 500 FACS-sorted cells. To down-weight samples with lower cell numbers, barcode counts were scaled by the respective number of FACS events (usually 500; Table S2). Barcode counts of the same cell type and biological replicate (termed "sets") were then summed. The AAV libraries used for transduction contain slightly unequal proportions of AAV variants, which means that some AAV variants may have an advantage due to an increased starting concentration. To remedy this problem, barcode counts were further scaled by their abundance in the transduction library (as determined by Weinmann et al. 53) (Table S6), so that barcode counts corresponding to more frequent AAV capsids were decreased and vice versa.
To account for sequencing depth of the individual samples, normalized barcode counts were divided by the total number of valid barcodes in that sample, yielding normalized barcode proportions. A potential source of bias is that amplicons with different barcodes may have different RT-PCR efficiencies. A previous study 49 on ten barcoded AAV variants found no such bias, but nonetheless, we evaluated one possible source of bias, barcode GC-content, in our own data. We found no significant association between barcode GC-content and mean barcode proportion across all samples in either library (Figures S2L and S2M).
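Condensed into code, the normalization chain looks roughly like the sketch below; the summation of technical replicates into "sets" is omitted for brevity, and the data-frame layout and file names are assumptions.

```python
# Normalize barcode counts: scale by FACS events, correct for unequal barcode
# abundance in the injected library, then convert to per-sample proportions.
import pandas as pd

def normalize_barcode_counts(counts: pd.DataFrame,
                             facs_events: pd.Series,
                             library_abundance: pd.Series) -> pd.DataFrame:
    """counts: barcodes x samples raw counts; facs_events: sorted cells per sample
    (usually 500); library_abundance: relative frequency of each barcode in the
    injected AAV library."""
    scaled = counts.div(facs_events, axis=1)             # down-weight small samples
    corrected = scaled.div(library_abundance, axis=0)    # correct library input bias
    return corrected.div(corrected.sum(axis=0), axis=1)  # proportions per sample
```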
Identification of candidate AAVs with high transduction efficiency
To identify the most promising AAV variants, AAVs were ranked by the mean normalized barcode proportion within and across cell types (Figures 1D–1J). AAV1_P5 and AAV9_A2 performed consistently well across replicates of both experiments and were selected for further validation.
Mathematical modeling
A detailed description on how the mathematical modeling was developed is given in Supplemental material.
Statistics
Statistical analyses were performed with R v.4.0.2 using one-way ANOVA followed by Tukey's honest significant difference (HSD) post hoc test unless otherwise noted. Tukey's HSD p values were corrected for multiple testing with the Benjamini-Hochberg procedure. The homogeneity of variance assumption of ANOVA was assessed with Levene's test, and the normality assumption was assessed with the Shapiro-Wilk normality test. The respective p values are indicated in the figure legends. Figures were plotted with the R package ggplot2 and SigmaPlot 12.5.
Data and code availability
All sequencing data are available at the NCBI Gene Expression Omnibus (GEO) under GEO: GSE145172.
All scripts used in the analysis are available at https://github.com/ LKremer/AAV-screening.
Insight into the CBL and CIPK gene families in pecan (Carya illinoinensis): identification, evolution and expression patterns in drought response
Background: Calcium (Ca2+) serves as a ubiquitous second messenger and plays a pivotal role in signal transduction. Calcineurin B-like proteins (CBLs) are plant-specific Ca2+ sensors that interact with CBL-interacting protein kinases (CIPKs) to transmit Ca2+ signals. CBL-CIPK complexes have been reported to play pivotal roles in plant development and response to drought stress; however, limited information is available about the CBL and CIPK genes in pecan, an important nut crop.

Results: In the present study, a total of 9 CBL and 30 CIPK genes were identified from the pecan genome and divided into four and five clades based on phylogeny, respectively. Gene structure and distribution of conserved sequence motif analysis suggested that family members in the same clade commonly exhibited similar exon-intron structures and motif compositions. The segmental duplication events contributed largely to the expansion of the pecan CBL and CIPK gene families, and Ka/Ks values revealed that all of them experienced strong negative selection. Phylogenetic analysis of CIPK proteins from 14 plant species revealed that CIPKs in the intron-poor clade originated in seed plants. Tissue-specific expression profiles of CiCBLs and CiCIPKs were analysed, presenting functional diversity. Expression profiles derived from RNA-Seq revealed distinct expression patterns of CiCBLs and CiCIPKs under drought treatment in pecan. Moreover, coexpression network analysis helped to elucidate the relationships between these genes and identify potential candidates for the regulation of drought response, which were verified by qRT–PCR analysis.

Conclusions: The characterization and analysis of CBL and CIPK genes in the pecan genome could provide a basis for further functional analysis of CiCBLs and CiCIPKs in the drought stress response of pecan.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12870-022-03601-0.
domain but transmit Ca2+ signals to their downstream targets via their interactors [3].
The CBL protein generally contains an important structural component consisting of four elongation factor hand (EF-hand) motifs for calcium binding. CBL proteins function in various biological processes by activating their downstream targets, CBL-interacting protein kinases (CIPKs), also called sucrose non-fermenting 1 (SNF1)-related kinases (SnRK3s) [4]. CBL and CIPK form the CBL-CIPK complex to function in different signalling pathways in plants [5]. CIPK proteins are composed of a serine/threonine kinase domain at the N-terminus and a NAF/FISL domain at the C-terminus [6]. The autoinhibitory NAF domain is conserved, with 24 amino acid residues, and the activation of plant CIPKs is mediated via the interaction of this domain with CBL proteins [7].
The CBL and CIPK families are two plant-specific gene families, and family members have been identified in numerous species. For example, 10 CBL and 26 CIPK family members were identified in Arabidopsis, and 10 CBL and 33 CIPK members were found in rice [1,8]. Ten CBL and 27 CIPK family members were found in Populus trichocarpa, a model woody plant [9,10]. The fact that CIPKs consistently outnumber CBLs in these species suggests that one CBL may interact with one or more CIPK proteins [11].
Drought is one of the major abiotic stresses that affect plant growth, development, and productivity [12]. The increase in the frequency of drought, together with the decrease in soil moisture, is one of the future challenges facing our society [13]. The mechanisms by which plants respond to drought have been studied at different levels, including epigenetic regulation, metabolic changes, and molecular mechanisms [14]. Numerous genes are upregulated or downregulated under drought, and some of them have been functionally analysed in model plants such as Arabidopsis and rice [15]. CBL-CIPK signalling pathways have been reported to play important roles in responding to various environmental stresses, including drought stress. Multiple CBL and CIPK genes can be induced under drought stress in grapevine and tea plants [16,17]. AtCIPK1 participates in the defence response to drought stress by interacting with both AtCBL1 and AtCBL9, and cbl1 and cipk1 mutant plants are hypersensitive to drought stress [18].
Pecan (Carya illinoinensis [Wangenh.] K. Koch) is an economically important nut tree of the Carya genus that is native to the United States and Mexico and is now cultivated on six continents [19]. Pecan nuts, the seeds of C. illinoinensis, are a rich source of unsaturated fatty acids, vitamins, and numerous bioactive constituents and are a commonly consumed snack [20]. The production of pecan nuts was approximately 300 million pounds in the United States in 2020, with a value of approximately $400 million (https://www.nass.usda.gov/). Pecan nut productivity is commonly affected by various biotic and abiotic stresses. Recently, the availability of a chromosome-level genome assembly of pecan has allowed us to characterize the pecan CBL and CIPK gene families, and transcriptome data have helped to analyse their expression patterns in different tissues and under drought stress [21]. In the current work, nine CBL and 30 CIPK genes in pecan were identified, and their evolutionary relationships and duplication events were also analysed. The expression patterns of the two gene family members in various tissues and in response to drought stress were investigated. The precise annotation of the two gene families is the first step toward fully understanding their roles in the pecan drought response.
Identification of the CBL and CIPK gene families in pecan
To identify the CBL and CIPK proteins, all pecan (Carya illinoinensis) cv. Pawnee protein sequences were downloaded from the Phytozome database v13 (https://phytozome-next.jgi.doe.gov/) [22]. Ten AtCBL and 26 AtCIPK protein sequences from Arabidopsis were retrieved from The Arabidopsis Information Resource (TAIR) database (https://www.arabidopsis.org/). The full-length AtCBL and AtCIPK sequences were aligned with MEGA 7 software (https://www.megasoftware.net) [23]. Then, the alignments were used to build hidden Markov model (HMM) profiles using the hmmbuild program in HMMER v3.3.2 (http://www.hmmer.org/), and we searched the two HMM profiles against the pecan genome using HMMER with an E-value <1e-10 [24]. The candidate proteins were further examined using Pfam (http://pfam.xfam.org/) and SMART software (http://smart.embl-heidelberg.de/) to confirm the presence of key domains [25]. The EF-hand motif was used for verification of CiCBL family members, while both the kinase and NAF domains were selected for verification of CiCIPK proteins.
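For readers who want to reproduce the E-value filtering step, a small parser for HMMER's tabular output is sketched below; it assumes hmmsearch was run with --tblout, and the file name is a placeholder. Domain confirmation against Pfam/SMART is a separate step.

```python
# Parse a hmmsearch --tblout table and keep hits with a full-sequence E-value
# below the 1e-10 cutoff used above.
def parse_hmmsearch_tblout(path: str, evalue_cutoff: float = 1e-10) -> list[str]:
    hits = []
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):              # skip comment/header lines
                continue
            fields = line.split()
            target, full_seq_evalue = fields[0], float(fields[4])
            if full_seq_evalue < evalue_cutoff:
                hits.append(target)
    return hits

candidate_cbls = parse_hmmsearch_tblout("pecan_vs_CBL_hmm.tblout")   # hypothetical file
```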
The molecular weights (MWs) and isoelectric points (pIs) of CiCBL and CiCIPK proteins were calculated using the ExPASy program (https://web.expasy.org).
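The same quantities (plus the GRAVY values reported in the Results) can be computed locally with Biopython's ProtParam module instead of the ExPASy web server; this is an alternative route, not the authors' workflow, and the FASTA file name is a placeholder.

```python
# Compute MW, theoretical pI, and GRAVY for each protein in a FASTA file.
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

for record in SeqIO.parse("CiCBL_CiCIPK_proteins.fasta", "fasta"):
    pa = ProteinAnalysis(str(record.seq).replace("*", ""))
    print(record.id,
          round(pa.molecular_weight(), 2),    # MW in Da
          round(pa.isoelectric_point(), 2),   # theoretical pI
          round(pa.gravy(), 3))               # grand average of hydropathicity
```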
Analysis of gene structure and conserved motifs
The coding sequences and genomic DNA sequences of CiCBL and CiCIPK genes were collected to analyse structural features using TBtools v1.09832 [31].
The amino acid sequences of CiCBLs and CiCIPKs were used to predict the conserved motifs with the MEME online tool (http://meme-suite.org/tools/meme) [32].
Chromosomal locations of CBL and CIPK genes in pecan
The chromosomal locations of CiCBL and CiCIPK genes were retrieved from the phytozome database, and the chromosomal images were visualized using the Circos program in Tbtools software [31].
Gene duplication and selection pressure analysis
All of the CBL and CIPK protein sequences from the pecan genome were searched against themselves using NCBI-BLAST 2.7.1+ [33]. Collinearity analyses of the CBL and CIPK gene families were performed using MCScanX (Multiple Collinearity Scan toolkit) software (http://chibba.pgml.uga.edu/mcscan2/) [34].
To detect the selection pressure acting on the duplication events, the CDSs of the CiCBL and CiCIPK genes were aligned using ClustalW v2.0 software [35]. Then, the synonymous substitution rate (Ks) and nonsynonymous substitution rate (Ka) of tandem and segmental duplication events were calculated using TBtools software, and the selection pressure was evaluated by the Ka/Ks ratios.
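A small helper implementing the Ka/Ks interpretation applied in the Results (Ka/Ks < 1 negative, > 1 positive, = 1 neutral selection) is shown below; the input table and column names are hypothetical, standing in for the TBtools output.

```python
# Classify duplication events by their Ka/Ks ratio.
import pandas as pd

def classify_selection(ka: float, ks: float) -> str:
    if ks == 0:
        return "undefined (Ks = 0)"
    ratio = ka / ks
    if ratio < 1:
        return "negative (purifying) selection"
    if ratio > 1:
        return "positive selection"
    return "neutral evolution"

dups = pd.read_csv("cbl_cipk_duplications.csv")      # hypothetical Ka/Ks table
dups["selection"] = [classify_selection(a, s) for a, s in zip(dups["Ka"], dups["Ks"])]
```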
Plant materials, growth conditions and sample collection
Leaf, mature pistillate and staminate flower, young fruit, and seed samples were collected from nine randomly selected nine-year-old healthy pecan trees of the 'Pawnee' cultivar at the experimental station of Nanjing Forestry University, which is located in Jurong City, China (119°9′6″E, 31°52′45″N). Ten-month-old seedlings propagated from pecan seeds (collected from 'Pawnee' trees) served as rootstocks. Mature staminate flowers were collected in April, leaves and mature pistillate flowers were harvested in May, young fruits were sampled at 60 days after flowering in July, and seeds were collected in October 2019. Root samples were collected from three-month-old pecan seedlings that were propagated from seeds harvested from 'Pawnee' trees. All the samples were harvested on sunny days (8:00 to 10:00 am).
For drought treatment, 'Pawnee' grafted seedlings were used in this study. The commercial pecan cultivar 'Pawnee' was selected as the scion, and patch budding was applied for grafting at the experimental station of Nanjing Forestry University in August 2019. Pecan seedlings propagated from seeds (harvested from 'Pawnee' trees in October 2018) were used as rootstocks. The grafted plants were placed in 12-L plastic containers containing a soil mixture of peat, vermiculite, and perlite (5:3:2 by volume). After 12 months, the grafted pecan seedlings were moved to a climate chamber with a photoperiod of 14 h of light at 24 °C/10 h of dark at 22 °C and 60-70% relative humidity. Distilled water was applied twice every week. The seedlings were grown under well-watered conditions for 1 month; they were then moved to a growth chamber at 24/22 °C day/night temperatures with a 14/10-h photoperiod, and water was withheld for 15 days. Pecan leaves were collected at 0, 3, 6, 9, 12, and 15 d during the drought treatment.
The harvested tissue samples were frozen in liquid nitrogen and stored at − 70 °C until RNA was isolated. Each sample was collected from at least three plants, and three biological repetitions were carried out for each treatment.
Measurement of proline content and SOD activity
The free proline content was detected according to the ninhydrin method, and the absorbance was measured at 520 nm [36]. Superoxide dismutase (SOD) activity was measured using a Total SOD Assay Kit (A001-1, Nanjing Jiancheng Bioengineering Institute, Nanjing, China) following the manufacturer's instructions.
RNA isolation and qRT-PCR analyses
Harvested tissue samples were ground to powder in liquid nitrogen. Total RNA was extracted from various tissues using TRIzol reagent (Invitrogen, Carlsbad, USA) following the manufacturer's instructions. Genomic DNA was removed using a DNase I kit (Qiagen, Hilden, Germany), and RNA quality was detected with an Agilent 2100 bioanalyzer (Agilent Technologies, CA, USA) and quantified using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Wilmington, USA).
For qRT-PCR (quantitative real-time PCR) analysis, 1 μg of total RNA was used to synthesize first-strand cDNA using the PrimeScript RT Reagent Kit (Takara, Dalian, China). qRT-PCR was performed on an ABI 7500 Real Time PCR system (Applied Biosystems™, Foster City, USA) using TB Green™ Premix Ex Taq™ II (TaKaRa, Shiga, Japan). Specific primers for the CBL and CIPK genes were designed using the IDT PrimerQuest online tool (https://sg.idtdna.com/PrimerQuest/Home/Index). An actin gene (CiPaw.03G124400) that was used in previous studies was applied as an internal control for normalization [37]. Relative quantification of CBL and CIPK genes was determined by the 2^−ΔΔCt method [38]. The PCR cycling conditions were as follows: initial denaturation at 95 °C for 30 s, then 40 cycles of 95 °C for 5 s and 60 °C for 15 s.
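Written out explicitly, the 2^−ΔΔCt calculation proceeds as in the sketch below; the Ct values are invented purely for illustration.

```python
# Relative expression by the 2^-ddCt method, normalized to the actin reference
# (CiPaw.03G124400) and to the untreated control sample (0 d).
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    d_ct_sample = ct_target - ct_actin              # normalize to actin, treated sample
    d_ct_control = ct_target_ctrl - ct_actin_ctrl   # normalize to actin, control (0 d)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. target Ct 24.1 vs actin 18.0 at 15 d, and 27.5 vs 18.2 at 0 d:
fold = relative_expression(24.1, 18.0, 27.5, 18.2)  # ~9.2-fold up-regulation
```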
Transcriptome analysis
cDNA libraries were constructed and sequenced using an Illumina NovaSeq 6000 platform by Gene Denovo Biotechnology Co. (Guangzhou, China). The FPKM (fragments per kilobase of transcript per million mapped reads) values of the CBL and CIPK genes were calculated by RSEM software to quantify gene expression levels [39]. Sequence data have been uploaded to the NCBI (National Center for Biotechnology Information) database (https://www.ncbi.nlm.nih.gov/) under the accession numbers GSE179336 and PRJNA799663.
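For reference, the FPKM definition in its simplest form is shown below; RSEM itself works with effective transcript lengths and expected counts, so this is a simplification, and the numbers are illustrative.

```python
# Fragments per kilobase of transcript per million mapped fragments.
def fpkm(fragments: int, gene_length_bp: int, total_mapped_fragments: int) -> float:
    return fragments * 1e9 / (gene_length_bp * total_mapped_fragments)

# e.g. 500 fragments on a 2,000-bp transcript in 20 million mapped fragments:
print(fpkm(500, 2000, 20_000_000))   # 12.5
```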
Coexpression analysis
The Pearson correlation coefficient (PCC) values between different gene pairs of the two family members were calculated with SPSS Statistics 24 based on the expression data of CiCBLs and CiCIPKs during pecan response to drought treatment. A coexpression network was then built to investigate the relationship between CiCBL and CiCIPK genes based on PCCs, and the network was then visualized with Cytoscape version 3.8.2 software [40].
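The correlation step was performed in SPSS, but the same edge list can be produced programmatically; the sketch below uses SciPy, with illustrative cutoffs and file names, and its output can be imported directly into Cytoscape.

```python
# Pairwise Pearson correlations between CiCBL/CiCIPK expression profiles across
# the drought time points, exported as an edge list for Cytoscape.
import pandas as pd
from scipy.stats import pearsonr

def coexpression_edges(expr: pd.DataFrame, r_cutoff: float = 0.8,
                       p_cutoff: float = 0.05) -> pd.DataFrame:
    """expr: genes x time-points expression matrix (e.g., FPKM at 0-15 d)."""
    edges = []
    genes = expr.index.tolist()
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            r, p = pearsonr(expr.loc[g1], expr.loc[g2])
            if abs(r) >= r_cutoff and p <= p_cutoff:
                edges.append((g1, g2, round(r, 3), p))
    return pd.DataFrame(edges, columns=["source", "target", "PCC", "p_value"])

edges = coexpression_edges(pd.read_csv("cbl_cipk_drought_fpkm.csv", index_col=0))
edges.to_csv("coexpression_edges.csv", index=False)    # import into Cytoscape
```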
Subcellular localization assays
To investigate the localization of selected CiCIPKs, the coding regions of CiPaw.01G129000, CiPaw.07G161900 and CiPaw.13G065400 were amplified using Pfu DNA polymerase (TransGen, Beijing, China), and the PCR products were cloned upstream of a GFP (green fluorescent protein) gene into the pBWA(V)HS vector driven by the 35S promoter (BioRun, Wuhan, China) to generate the constructs. The protoplasts were extracted from 3-week-old Arabidopsis seedlings and transformed according to the polyethylene glycol (PEG) method [41]. The transformed protoplasts were incubated at 24 °C for 15-18 h, and then the fluorescence signal was determined under a Nikon C2-ER confocal scanning microscope (Nikon, Kyoto, Japan). Each construct was tested and imaged in at least four protoplasts.
Statistical analysis
Statistical analyses were carried out using SPSS 24 software. The results were shown as the means ± SE (standard errors) of three biological replicates. One-way ANOVA and Duncan's multiple range test were selected to compare the significance of differences (P < 0.05).
Genome-wide investigation of CiCBL and CiCIPK genes
After screening the pecan genome and confirming domains, a total of 9 CBLs and 30 CIPKs were identified (Additional file 1: Table S1). Sequence analyses of the two gene family members showed that the full-length CiCBL genes ranged from 2,129 to 13,131 base pairs (bp), and the CiCIPK genes ranged from 1,296 to 24,147 bp. The CiCBL proteins consisted of 179-252 amino acids (aa) and the CiCIPK proteins of 398-492 aa, with the relative molecular weights (MWs) of CBLs ranging from 21,121.18 to 29,077.27 Da, while the MWs of CIPKs ranged from 45,423.42 to 55,346.56 Da. Interestingly, the isoelectric points (pIs) of CBL proteins were conserved in pecan, varying from 4.6 to 4.88, while the pIs of CIPKs varied from 5.67 to 9.39, and 86.67% (26/30) of them had high pIs (pI > 7). All 9 CBL and 30 CIPK proteins were hydrophilic, with negative grand average of hydropathicity (GRAVY) values. Detailed information for the two gene family members is listed in Additional file 1: Table S1.
Gene structural and conserved domain analysis of the pecan CBL and CIPK families
The evolutionary relationships of pecan CBLs and CIPKs were investigated according to phylogenetic trees built using pecan and Arabidopsis protein sequences. The nine CiCBLs were divided into four clades (I-IV), with Clade IV containing the largest number of members (four). The thirty CiCIPK proteins were divided into five clades (A-E), and Clade E contained 16 CIPKs (Fig. 1). The classification results were consistent with previous reports in Arabidopsis [17,42].
Furthermore, the conserved motifs of CBL and CIPK proteins from pecan and Arabidopsis were visualized (Fig. 1). Ten different motifs with variable lengths were identified in CBLs from pecan and Arabidopsis, and all these proteins contained motifs 1, 3, and 4. Most genes in the same clade exhibited similar motif patterns; for example, CiPaw.02G141100 and CiPaw.02G141200 in Clade I both possessed motif 10, and the four CiCBLs in Clade IV possessed motif 5 (Fig. 1A). The detailed sequence information of the 10 motifs is provided in Additional file 1: Table S2. Twenty conserved motifs were identified in CIPK proteins, and a similar phenomenon also occurred in the pecan CIPK gene family (Fig. 1B). Motifs 5, 13, 7, 4, 2, 1, and 3 at the N-terminus made up the catalytic kinase domain. Additionally, motif 9 was the NAF motif, which plays a key role in interacting with CBL proteins (Additional file 1: Table S3) [4].
The gene structure analyses showed that each CiCBL gene contained multiple introns, ranging from 7 to 9, while the intron numbers of AtCBLs ranged from 6 to 9, suggesting that the intron numbers are relatively conserved (Fig. 1A). In contrast, we found that pecan CIPK genes could be divided into two main clusters: an intron-poor cluster (<4 introns per gene) and an intron-rich cluster (>10 introns per gene) [43,44]. All the intron-poor CiCIPK genes, which contained 0-2 introns, clustered in clade E, whereas the intron-rich members, containing 11-16 introns, grouped into the other four clades (A-D) (Fig. 1B).
Chromosomal location and duplication events among pecan CBL and CIPK genes
Genome chromosomal location analyses revealed that 9 CiCBLs were distributed on six out of the sixteen chromosomes. Chromosome 11 contained the largest number of CiCBL genes, with three, chromosome 2 had two, and the other four were located on chromosomes 5, 9, 10, and 12 (Fig. 3). The 30 CiCIPK genes were unevenly mapped on 12 pecan chromosomes, except chromosomes 5, 6, 11, and 12. Specifically, chromosome 10 contained the maximum of 5 CiCIPK genes, followed by chromosomes 1 and 9, which both contained 4. In contrast, chromosomes 3, 4, 15 and 16 had only one CiCIPK gene (Fig. 3). These results suggested that genetic variations occurred during the evolutionary process of pecan.
Furthermore, MCScanX was applied to analyse the duplication events in the pecan CBL and CIPK gene families. In the pecan CBL gene family, seven segmental duplication events with 8 genes and one tandem duplication event were found, suggesting that most CiCBL genes were generated from segmental duplication (Additional file 1: Table S4). In addition, twelve segmental duplication events with 20 CiCIPKs were identified (Additional file 1: Table S4). The above results showed that segmental duplication events played a central role in the evolution of CiCBL and CiCIPK genes.
The synonymous (Ks) and nonsynonymous (Ka) substitution rates of the duplication events were further calculated, and the Ka/Ks ratio could be used to explore the selection pressures influencing sequence divergence (Additional file 1: Table S4). A value of Ka/Ks < 1 indicates negative selection, Ka/Ks > 1 indicates positive selection, and Ka/Ks = 1 indicates neutral selection. Amino acid replacements that increase fitness are retained by positive selection, whereas replacements that reduce fitness are removed by negative selection [37]. The Ka/Ks values of the segmental and tandem duplication events from the pecan CBL and CIPK gene families ranged from 0.05 to 0.45, showing that these genes might have experienced strong negative selection (Additional file 1: Table S4).

To explore the evolution of the two families, CBL and CIPK proteins were further identified from 14 plant species (including J. regia, P. trichocarpa, and S. purpurea). In total, 106 CBL and 283 CIPK candidate proteins were identified from the 14 genomes and applied to construct phylogenetic trees of the CBL and CIPK families (Fig. 4). The numbers of CBLs in the 14 examined species ranged from 1 (B. braunii) to 14 (S. purpurea), while the numbers of CIPKs varied from 1 (B. braunii) to 35 (O. sativa, S. purpurea). We found that the number of CBLs was commonly smaller than that of CIPKs among these species, except in M. polymorpha, which contained 3 CBL and 2 CIPK members (Additional file 1: Table S5). Phylogenetic trees of CBLs and CIPKs were built after sequence alignment analysis. For the phylogeny of the CBL gene family (Fig. 4A), the phylogenetic tree was clearly divided into four clades, which was consistent with our previous results in Fig. 1A. Interestingly, CBL members in Clades I and II were all from seed plants, and CBLs from the moss and liverwort were all grouped in Clades III and IV. For the CIPK gene family (Fig. 4B), the phylogenetic tree was classified into five clades, which was also consistent with previous results (Fig. 1B). The intron-poor CIPK genes were all categorized in Clade E and first appeared in A. trichopoda, the basal angiosperm (Fig. 4B). However, CIPK genes in the moss, liverwort, and lycophyte all contained multiple introns and were grouped in Clades C and D.
Expression profiling of CiCBL and CiCIPK genes in different pecan tissues
The expression profile of a gene might help to elucidate its biological function. To determine the temporal and spatial expression levels of the pecan CBL and CIPK genes, we investigated the expression profiles of CiCBLs and CiCIPKs using RNA-Seq data of six different tissues of pecan, including seeds, roots, leaves, young fruits, staminate flowers, and pistillate flowers (Fig. 5). Among the nine CiCBL genes, three Clade IV members, CiPaw.09G110800, CiPaw.10G083600 and CiPaw.12G140900, exhibited similar expression patterns in different tissues, and the remaining member, CiPaw.11G207000, was highly expressed in all six tissues (Fig. 5A). Other CiCBLs showed tissue-specific expression patterns. Surprisingly, CiPaw.02G141100 and CiPaw.02G141200 in Clade I both showed high expression levels in the staminate flower, indicating that they might function in the development of staminate flowers in pecan. CiPaw.05G196600 had high expression levels in leaf and seed tissues; however, CiPaw.11G119600 was expressed at low levels in all detected tissues.
The expression data of thirty CiCIPK genes in six tissues showed that different genes exhibited different expression patterns (Fig. 5B). Two CIPK genes (CiPaw.15G175200 and CiPaw.16G114100) generated from segmental duplication both exhibited high expression levels in all tissues. The majority of CIPK genes exhibited tissue-specific expression patterns, suggesting that they play roles in different biological processes. For example, CiPaw.10G060300 was highly expressed only in staminate flowers, and CiPaw.08G037800 exhibited high expression levels in leaves. CiPaw.09G218800, CiPaw.01G192800, and CiPaw.04G172400 showed relatively high expression levels in the root, seed, and staminate flower tissues.
Expression patterns and coexpression networks of CiCBL and CiCIPK genes in response to drought
Drought is one of the most important environmental stresses inhibiting plant growth [14]. Proline plays a positive role in the plant response to drought, and it accumulated significantly in pecan seedlings under drought, especially after treatment for 6 d (Additional file 2: Fig. S2A). The accumulation of reactive oxygen species (ROS) is stimulated in plants under drought stress, resulting in oxidative stress. Plants are protected against the negative effects of ROS by a complex antioxidant system including SODs, which play a crucial role in the removal of ROS [46]. To investigate the SODs responsible for the scavenging of ROS, SOD activity was measured in pecan seedlings after drought treatment, and we found that it also increased significantly (Additional file 2: Fig. S2B).
The CBL-CIPK module has been reported to be involved in the response to environmental stresses, especially drought stress [42,47]. To investigate the roles of CiCBLs and CiCIPKs in the response to drought, their expression profiles were analysed using RNA-Seq datasets. In total, expression data under drought stress were available for 9 CBL and 29 CIPK genes (Fig. 6). Two CiCBL genes (CiPaw.02G141100 and CiPaw.11G119600) showed low expression levels under drought treatment, and CiPaw.05G196600 and CiPaw.11G207000 gradually decreased, showing their lowest expression levels after 15 days of drought application (Fig. 6A). In contrast, CiPaw.09G110800 and CiPaw.11G071800 were gradually upregulated by drought at different time points. The twenty-nine CiCIPK genes showed various expression patterns in pecan subjected to drought stress (Fig. 6B). Six CiCIPKs were expressed at low levels, while three genes were downregulated. Most of the remaining CIPK genes were upregulated, ten of which gradually increased and peaked at 15 days of drought application.
A coexpression network was further built to explore the mutual relationships of pecan CBL and CIPK genes by analysing their transcript levels under drought. Within the network, 34 nodes (9 CBLs and 25 CIPKs) and 172 edges were found, and 17 nodes contained more than ten edges, suggesting that these nodes were tightly correlated (Additional file 2: Fig. S3). CiPaw.11G071800 contained the largest number of edges, connecting to 5 CBLs and 15 CIPKs. Of the 172 coexpression events, 105 exhibited significantly positive correlations, and the remaining 67 exhibited significantly negative correlations.
To validate the reliability of the RNA-Seq data, qRT-PCR was applied to analyse the expression levels of pecan CBL and CIPK genes under drought stress. Based on the RNA-Seq and coexpression results, 5 CiCBL and 10 CiCIPK genes were selected for further confirmation after drought stress was imposed (Fig. 7). The specific primers used in the study are listed in Additional file 1: Table S6. As shown in Fig. 7A, the expression patterns of the five CiCBL genes were consistent with the previous RNA-Seq results (Fig. 6). Four CiCBLs were downregulated, while CiPaw.11G071800 was significantly induced after 12 and 15 d of drought treatment. The expression patterns of 90% of the CiCIPKs were consistent between the RNA-Seq and qRT-PCR results, except for CiPaw.14G052000, which exhibited different expression patterns (Fig. 7B). Unlike the CiCBLs, most CIPK genes were enhanced by drought treatment, especially after 15 d of treatment, indicating that they might play roles in the response to drought. For example, CiPaw.01G129000 was gradually upregulated under drought stress and displayed the highest expression level at 15 days after treatment, which was more than 40-fold higher than that of the control (0 d).
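For readers unfamiliar with how qRT-PCR fold changes such as the >40-fold induction of CiPaw.01G129000 are typically derived, the sketch below applies the widely used 2^(-ΔΔCt) calculation with actin (CiPaw.03G124400) as the reference gene. The Ct values are hypothetical, and the paper does not state which quantification formula was used, so this is an assumed illustration only.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by 2^(-ΔΔCt), normalized to the reference gene
    and to the 0 d control sample."""
    delta_treated = ct_target - ct_ref            # ΔCt of the drought sample
    delta_control = ct_target_ctrl - ct_ref_ctrl  # ΔCt of the 0 d control
    return 2 ** (-(delta_treated - delta_control))

# Hypothetical Ct values for CiPaw.01G129000 at 15 d of drought vs. 0 d,
# with actin (CiPaw.03G124400) as the reference gene.
fc = fold_change(ct_target=22.0, ct_ref=18.0,
                 ct_target_ctrl=27.5, ct_ref_ctrl=18.1)
print(f"fold change ≈ {fc:.0f}")   # ≈ 42 with these invented Ct values
```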
Subcellular localization of CiCIPKs
The subcellular localization of a protein might help to reveal its function. According to the expression (Figs. 6 and 7) and coexpression analyses (Additional file 2: Fig. S3), three CiCIPK genes (CiPaw.01G129000, CiPaw.07G161900, and CiPaw.13G065400) that coexpressed with more than ten genes were induced gradually by drought treatment and showed the highest expression levels after 15 d of drought. These CiCIPKs were selected as candidate genes, and the coding regions of the three genes were fused to the green fluorescent protein (GFP) gene. The results revealed that the control GFP was distributed throughout the cell, whereas the CiPaw.01G129000-GFP fluorescence signal was only found in the cytoplasm, suggesting that the CiPaw.01G129000 protein was localized to the cytoplasm. Moreover, the CiPaw.07G161900-GFP and CiPaw.13G065400-GFP fusion proteins were detected in both the cytoplasm and nucleus (Fig. 8).
Discussion
The role of calcium in stress signal transduction has been well established in plants [48]. CBLs are plant-specific Ca2+ sensors that interact with and activate CIPKs to transduce Ca2+ signals [4]. To date, CBL and CIPK family members have been identified in model plants and major crops, including Arabidopsis, canola, rice, and poplar [8,9,49]. However, very little is known about these two families in pecan.
In the current study, 9 CiCBL and 30 CiCIPK members were found in the pecan genome (Fig. 1). Similar to the CBL proteins in Arabidopsis and canola [49], four EF-hand motifs were found in each of the nine CiCBL proteins (Fig. 2). As a type of Ca2+ sensor, CBL proteins are able to bind Ca2+ through EF-hand motifs [10]. In addition, two CiCBL proteins start with a conserved N-terminal myristoylation site, which might play a role in membrane association of the CBL-CIPK complex through a calcium-myristoyl switch [50]. CIPK proteins commonly possess an N-terminal kinase domain and a conserved C-terminal NAF domain, and all CiCIPKs contained these two domains (Fig. 1B). CBLs bind to the short NAF domain, releasing it from the kinase domain and switching the kinase into an active conformation [51].

Fig. 7 Expression analysis of CiCBL and CiCIPK genes in pecan exposed to drought stress. Expression patterns of 5 CiCBL (A) and 10 CiCIPK (B) genes were analysed using qRT-PCR. The actin gene (CiPaw.03G124400) was selected as the reference gene. Lowercase letters represent significant differences (P < 0.05) according to Duncan's multiple range test. Error bars indicate the means ± SE obtained from three biological replicates.

Comparison of the numbers of CBL and CIPK genes in pecan with those in other sequenced plant genomes revealed that the two gene families have expanded multiple times in evolutionary history (Additional file 1: Table S5) [52]. In the green alga Botryococcus braunii, the CBL and CIPK families each contain only one member, and the family sizes increased quickly with evolution towards seed plants. Gene duplication commonly plays a key role in the expansion of a gene family during evolution and contributes to environmental adaptation through the acquisition of new gene functions in plants [53]. Segmental/whole-genome duplications and tandem duplications contribute to the evolutionary expansion of gene families [54]. In pecan, segmental duplication occurred in both CiCBLs and CiCIPKs and contributed largely to the expansion of the two families, while only one tandem duplication event was detected in the CBL family (Fig. 3). CIPKs resulting from both segmental and tandem duplications were found in Arabidopsis, rice, maize, and grape [16,43]. In contrast, only tandem duplication events were detected in grape CBLs, and tandem duplications played major roles in the Medicago CBL and CIPK families [16,55]. Negative selection removes deleterious mutations and prevents functional divergence, whereas positive selection favours the accumulation and spread of new advantageous mutations throughout a population [56]. The Ka/Ks ratios of all the segmental and tandem duplication events in pecan CBLs and CIPKs showed that they were driven by negative selection (Additional file 1: Table S4). Gene duplication can lead to the accumulation of degenerative mutations, and negative selection stabilizes the duplicated genes by removing the deleterious variations that arise, indicating that the CiCBLs and CiCIPKs under negative selection are functionally conserved [57]. The CBL and CIPK genes of Medicago have also undergone strong negative selection pressure during evolution [55].
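As a worked illustration of the selection analysis mentioned above, the snippet below classifies duplicated gene pairs by their Ka/Ks ratio (values below 1 indicate negative/purifying selection, values above 1 indicate positive selection). The Ka and Ks numbers are placeholders, not the values reported in Additional file 1: Table S4.

```python
def selection_pressure(ka, ks):
    """Classify a duplicated gene pair by its Ka/Ks ratio."""
    if ks == 0:
        return "undetermined (Ks = 0)"
    ratio = ka / ks
    if ratio < 1:
        return f"negative (purifying) selection, Ka/Ks = {ratio:.2f}"
    if ratio > 1:
        return f"positive selection, Ka/Ks = {ratio:.2f}"
    return "neutral evolution, Ka/Ks = 1.00"

# Placeholder Ka/Ks values for two duplicated pairs named in the text
pairs = {
    ("CiPaw.15G175200", "CiPaw.16G114100"): (0.12, 0.45),   # segmental pair
    ("CiPaw.02G141100", "CiPaw.02G141200"): (0.08, 0.30),
}
for (gene_a, gene_b), (ka, ks) in pairs.items():
    print(f"{gene_a} / {gene_b}: {selection_pressure(ka, ks)}")
```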
Fig. 8 Subcellular localization of the CiPaw.01G129000-GFP, CiPaw.07G161900-GFP and CiPaw.13G065400-GFP fusion proteins (GFP, chlorophyll, bright-field and merged channels).

According to the phylogenetic analysis, pecan CBLs and CIPKs were classified into four and five clades, respectively, which is consistent with previous findings in grape and pepper (Fig. 4) [16,58]. Gene structures such as exon-intron organization and intron number are imprints of evolution in several gene families [59]. Eukaryotic genes can be divided into intron-rich, intron-poor (fewer than four introns) and intronless (no introns) classes, and genes of early-diverging plant species are mostly intron-rich [60]. The intron numbers in CiCBLs were very conserved, ranging from 7 to 9; however, the CiCIPKs were clearly divided into an intron-rich cluster (clades A, B, C, and D) and an intron-poor cluster (clade E) (Fig. 1). This pattern of gene structure in CIPK genes is widespread in Arabidopsis, canola, grape, and soybean [1,16,42,49]. Interestingly, the intron-poor cluster members of the CIPK gene family first appeared in seed plants, while CIPKs in algae, moss, spikemoss, and fern all possess multiple introns (Fig. 4) [5,42]. These results suggest that intron loss and gain events functioned in the evolution of plant CIPK genes. Notably, clustering of expression data suggested that Arabidopsis CIPK genes can be induced by environmental stresses, and some CIPK genes in the two clusters shared similar expression patterns in response to these stresses [43]. For example, three CIPKs (CIPK8, CIPK21 and CIPK24) in the intron-rich cluster and two (CIPK6 and CIPK13) in the intron-poor cluster are all involved in the regulation of salt stress [61][62][63][64][65]. For CBLs, proteins in the same clade have high sequence similarities (Fig. 2). When fused with GFP, three clade IV CBLs (CBL2, CBL3 and CBL6) were localized at the tonoplast, while CBL1 and CBL9 in clade III were both detected at the plasma membrane [66]. Moreover, CBL1 and CBL9 both function in K+ homeostasis under low-K+ conditions by interacting with CIPK23 in Arabidopsis [67]. These two CBLs also play a crucial role in pollen germination and tube growth [68]. CBL2 and CBL3 are coexpressed with CIPK21 to respond to salt stress by regulating ion and water homeostasis [63]. Although genes in the same clade are often assumed to be functionally equivalent, CIPKs and CBLs with high sequence similarities may show distinct functions; for example, CBL1 showed distinctly different functions from CBL9 in response to ABA treatment [69].
Drought is a major environmental stress that profoundly affects the growth, development, and productivity of plants. CBL-CIPK complexes have been previously proven to play important roles in drought stress signalling pathways [42]. Our RNA-Seq data under drought treatment revealed that the expression levels of some CiCBLs and CiCIPKs were drastically changed (Fig. 6). Most CiCBL genes were downregulated in response to drought, except CiPaw.11G071800, which was significantly induced (Fig. 7A). Orthologous AtCBL9 from Arabidopsis has been shown to be essential for the drought stress response. AtCBL9 was involved in the drought-induced abscisic acid (ABA) production process, which was highly inducible by drought and treatments with the plant hormone ABA, and the cbl9 mutant seedlings were more sensitive to drought [70]. Unlike CiCBLs, the majority of CiCIPK genes were upregulated under drought stress (Fig. 6B), which was further validated by qRT-PCR (Fig. 7B). As protein kinases, CIPKs play central roles in the response to drought in plants, and RNA-Seq results of drought treatment in grapevine revealed that the expression levels of many kinase genes changed, including CIPKs [71]. Interestingly, most CIPKs in soybean and cassava were drought-responsive genes, especially family members belonging to the intron-poor clade [42,72]. AcCIPK18, an intronless CIPK gene in pineapple, is a positive regulator of drought stress, and the overexpression lines showed significantly stronger drought tolerance than wild-type plants [73]. CaCIPK3 is an intronless gene in pepper that participates in the response to various stresses, including drought. Knockdown of CaCIPK3 improves sensitivity to drought, while overexpression of CaCIPK3 increases drought tolerance by enhancing the activities of antioxidant defence systems [47]. The CcCBL1-CcCIPK14 complex positively regulates drought tolerance through modulation of flavonoid biosynthesis in pigeon pea. CcCIPK14-overexpressing (CcCIPK14-OE) plants had enhanced drought tolerance, but this phenomenon was reversed in CcCBL1-RNAi CcCIPK14-OE double transgenic plants [74].
In conclusion, a total of 9 CBLs and 30 CIPKs were annotated in pecan at the genome scale and were classified into four and five clades, respectively. The gene structure and conserved domains, chromosome locations, duplication events, evolutionary relationships, and expression patterns of CiCBL and CiCIPK genes were investigated. The results indicated that CiCBLs and CiCIPKs might function in development and drought response in pecan. Overall, this research provides important information concerning pecan CBLs and CIPKs, and will be helpful for further functional investigation.
|
v3-fos-license
|
2017-07-30T18:52:34.037Z
|
2016-11-01T00:00:00.000
|
41112669
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4360/8/11/402/pdf",
"pdf_hash": "17f3fbe673b0be7cb2dc1caed94b1859bcea7ba4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:614",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Medicine"
],
"sha1": "17f3fbe673b0be7cb2dc1caed94b1859bcea7ba4",
"year": 2016
}
|
pes2o/s2orc
|
Patterned Fibers Embedded Microfluidic Chips Based on PLA and PDMS for Ag Nanoparticle Safety Testing
A new method to integrate poly-dl-lactide (PLA) patterned electrospun fibers with a polydimethylsiloxane (PDMS) microfluidic chip was successfully developed via lithography. Hepatocyte behavior under static and dynamic conditions was investigated. Immunohistochemical analyses indicated good hepatocyte survival under the dynamic culture system with effective hepatocyte spheroid formation in the patterned microfluidic chip vs. static culture conditions and tissue culture plate (TCP). In particular, hepatocytes seeded in this microfluidic chip under a flow rate of 10 μL/min could re-establish hepatocyte polarity to support biliary excretion and were able to maintain high levels of albumin and urea secretion over 15 days. Furthermore, the optimized system could produce sensitive and consistent responses to nano-Ag-induced hepatotoxicity during culture. Thus, this microfluidic chip device provides a new means of fabricating complex liver tissue-engineered scaffolds, and may be of considerable utility in the toxicity screening of nanoparticles.
Introduction
In recent years, nanotechnology has experienced dramatic developments leading to its wide application in areas such as solar energy, cosmetics, food production, and drug delivery [1]. However, the potential toxicity of nanoparticles remains a concern; they readily enter the body and relocate within metabolically active organs, with the liver representing the main organ for nanoparticle accumulation and biotransformation. Although many in vivo models have been developed to evaluate the potential toxicity of nanoparticles, improved methods are needed to decrease the numbers of animals required for testing along with the experimental expenses, and to improve the accuracy, comparability, and reproducibility of experimental results. However, previous studies have shown that many in vitro toxicity testing methods fail to identify the hazards of nanoparticles, as primary hepatocytes rapidly lose their morphology and specific functions-such as the activity of phase I and phase II enzymes and the production of plasma proteins-under traditional incubation methods [2]. In contrast, it has been demonstrated that hepatocytes cultured as spheroids are able to promote cell-cell communication and gap junctions, form bile canaliculi, and express specific transporters. Thus, many methods have been used to create spatial variations and topologies to mediate cell aggregates; however, these have exhibited difficulty in mimicking the exact microenvironment of hepatocytes, combating poor oxygen and nutrient diffusion [3,4].
To support long-term maintenance of hepatocytes, researchers have used perfusion culture systems to supply transport proteins and large molecules to cultured hepatocytes [5]. However, these systems all show inherent limitations such as high shear, lack of physiological relevance, requirement for high cell numbers, and low viability of hepatocytes [6]. To overcome these restrictions, many miniaturized perfusion systems have been developed to better mimic the in vivo situation, which is an important consideration for liver tissue engineering and pharmacological drug screening. In addition, miniaturized perfusion systems offer a very well-suited microenvironment for cultured cells, allowing the control of features such as culture media, velocity of the cell culture fluid, and liquid concentration [7].
In native liver tissues, hepatocytes are arranged into a single cordlike structure and separated by adjacent sinusoids [8]. Many researchers have attempted to mimic the microenvironment of hepatocytes, including oxygen content, nutrient delivery, metabolite removal, and shear stress. Sudo et al. cultured primary hepatocytes and rat/human microvascular endothelial cells (MVECs) on each sidewall of a collagen-gel scaffold between two microfluidic channels, demonstrating that hepatocytes and MVECs could exchange secreted proteins by diffusional transport under flow conditions [9]. Zhang et al. reported that primary hepatocytes on a galactosylated microporous collagen-gel membrane could preserve the original morphology of hepatocytes and maintained high levels of urea production and cytochrome P450 activity for 14 days [10]. Lee et al. also developed a 3D liver-on-a-chip platform to investigate the interaction of hepatocytes and hepatic stellate cells. Notably, this liver chip was supplied with a continuous flow of medium through osmotic pumping, which played an important role in the formation of tight hepatocyte spheroids and thereby improved liver-specific function over a long duration [11]. However, these systems for hepatocyte cultivation comprise 2D monolayer cultures, and cells embedded in such substrates are not reflective of 3D cultivation conditions [12]. Additionally, they provide only limited long-term hepatocyte cultures for investigating protective drug effects and the underlying mechanisms of drug toxicity [13]. Although several 3D scaffolds have been developed to facilitate the establishment of microfluidic channels to maintain hepatocyte function, remaining obstacles include the inability of many scaffolds to offer a defined stiffness for hepatocytes, the lack permeability for the transport of large macromolecules, and hampered cell-cell interactions [14].
Electrospun fibers have received increasing attention over last several decades as a component of cell culture scaffolding; in particular, polymer fibers have offered several advantages in mimicking the size scale of fibrous extracellular matrix (ECM) and providing a very high fraction of surface available for cell interaction. As microscale interconnected pores are essential to transport oxygen and nutrient supplies for cell growth and are beneficial for the adhesion and viability of primary hepatocytes [3], electrospun fibers have emerged as promising candidates and useful substrates in the field of hepatic tissue engineering. Several researchers have shown that the incorporation of electrospun fibrous mats into microfluidic chips could be widely used in cell culture. For example, Lee et al. developed a polydimethylsiloxane-based microfluidic chip and evaluated the feasibility of human bone marrow-derived mesenchymal stem cell culture under perfusion conditions [15]. Patric et al. utilized lithography methods to integrate patterned electrospun fiber pads with microfluidic networks, yielding good attachment and growth of fibroblasts [16]. In addition, microfluidic networks have also been shown to efficiently generate spatially and temporally defined liquid microenvironments. Furthermore, electrospun fiber-based microfluidic networks exhibit the potential for wide use in bioassays as well. Because electrospun fibrous mats possess high specific surface area and macro-and mesoporous structures, they exhibit high levels of protein absorbance. For example, Liu et al. assembled electrospun nanofibrous polyvinylidene fluoride (PVDF) membranes into a polydimethylsiloxane (PDMS) chip and found that the composite microfluidic system increased the levels of protein adsorption and improved the sensitivity in immunoassays [17].
Despite such advances, the integration of microfluidic chips with fibrous mat systems has not been reported for use in hepatocyte culture. Poly-DL-lactide (PLA) is a synthetic polyester; its hydrophobic nature and linear structure give it excellent spinnability during electrospinning. Furthermore, it is biocompatible, biodegradable, and nontoxic. In this study, we developed and evaluated a method to integrate PLA micropatterned fibers with microfluidic networks. Primary hepatocytes were cultured in micropatterned fibrous mats embedded in 3D microfluidic chips, and the behaviors of the hepatocytes were investigated to measure the effects of the interplay of topography, substrate surface properties, and soluble microenvironments. The underlying hypothesis of this work is that the optimized perfused culture system might offer multifunctional benefits and provide a biomimetic hepatocyte culture environment, thus enabling long-term maintenance of hepatocytes with high degrees of liver-specific characteristics. Here, the utility of the optimized perfused cultures in ascertaining the toxic effects of silver nanoparticles (nano-silver; Ag NPs) was tested in comparison with determinations from conventional culture-based cytotoxicity assays. The system described in this study thereby represents an important platform for studying or controlling phenotype expression in tissue engineering and for toxicity screening of nanoparticles.
Fabrication of Microfluidic Chips
Microfluidic chips were fabricated by standard soft lithography. Briefly, the microchannel structure was designed using L-edit (Tanner EDA, Wilsonville, OR, USA) and produced with a chrome photomask using an E-beam mask lithography system (Mark 40, CHA Industries, Fremont, CA, USA). The mask was utilized to photo pattern 400 µm thick spin coated SU-8 (SU-8 2050, MicroChem Co., Newton, MA, USA) onto silicon wafers (Tianjin Institute of Semiconductors, Tianjin, China). The fabricated mold was then silanized with 1H, 1H, 2H, 2H-perfluorooctyltrichlorosilane (78560-45-9, AlfaAesar, Ward Hill, MA, USA) in a desiccator for more than 30 min at room temperature to prevent undesired bonding of PDMS to the mold. PDMS prepolymer with 1:10 (v/v) curing agent to base ratio was poured on the mold and cured for 2 h at 95 °C to obtain the final microchannel structure.
A biopsy punch with an inner diameter of 1.0 mm was used to punch holes through the PDMS for the inlet and outlet ports. The microchannels of the microfluidic chip were 200 µm wide and approximately 3 mm long. The height of the channels was 400 µm throughout the whole network. The PDMS device was then cured in a 60 °C oven for more than 2 h to promote the bonding and ensure full curing of the PDMS to enhance cell compatibility.
Fabrication of Patterned Fibers
Previous studies have shown that patterned fibers with a width of 200 µm and a maximum thickness of approximately 100 µm could provide a platform to maintain the viabilities and functions of hepatocytes [18]. To form fluid channels across the patterned fibers and reconstruct the perfused engineered liver, microchannels sized 200 µm wide and 400 µm high were chosen and designed. The patterned collector was constructed on a glass template patterned with an electrically conductive circuit as previously described [4]. In brief, an insulating glass substrate was coated with a silver layer by direct-current (DC) sputtering (Sunicoat 594L, Sunic System Ltd., Sokcho, Korea), coated with a second layer of photoresist (MicroChem Inc., Newton, MA, USA), then covered with the photomask containing parallel strips of 200 µm in width, and exposed using a lithography machine (Suss Mircotec MA6, SUSS MicroTec, Garching, Germany). As the exposed regions became soluble, the glass substrate was rinsed to remove the photoresist followed by etching away silver in the exposed area. After removing the rest of the photoresist, the glass patterned collector was obtained as a collector for the electrospinning process. PLA was dissolved in chloroform at 15 wt %. The polymer solution was added into a 2 mL syringe attached with a metal capillary shaped for clinical use. A syringe pump (Zhejiang University Medical Instrument Company, Hangzhou, China) was used to feed the polymer solution to maintain a steady flow at 1 mL/h from the capillary outlet. The distance between the capillary tip and patterned collector was set to about 15 cm and the electrospinning voltage was controlled within 20 kV using a high-voltage statitron (Tianjin High Voltage Power Supply Company, Tianjin, China) to obtain the patterned PLA fibers.
Construction of Perfusion Systems
To ensure that the patterned fibrous mats were placed in the center of the probing channel, the PDMS channel was aligned manually using a stereomicroscope. The bottom layer was aligned and irreversibly bonded with the top layer using 3M Scotch Tape [17], and short Teflon tubing (Genetec AB, Västra Frölunda, Sweden) was glued (Elastosil A07, Wacker Silicones AG, München, Germany) on the inlet and outlet ports. They were later used to connect the microfluidic chip to the pump. The pump setup used was designed to control the flow rate for each inlet and was assembled in our lab. After the assembly of the chip, rhodamine B fluorescent dye was perfused by the pump system into the microchannel to determine whether the system exhibited problems of liquid leakage. We also used the pump system to place a drop of solution on one inlet of the channels and to remove solutions from these microchannels by applying vacuum at the outlets.
Cell Culture and Microscopic Investigations
Prior to use in cell culture, the bonded microfluidic channel was sterilized with 75% ethanol for 10 min and washed extensively with Dulbecco's Modified Eagle Medium (DMEM). Primary hepatocytes were isolated from the livers of adult SD rats using a two-step collagenase perfusion procedure [19]. Male SD rats weighing 120-150 g were purchased from Sichuan Dashuo Biotech Inc. (Chengdu, China) and all animal protocols were approved by the University Animal Care and Use Committee. Only hepatocytes with viability of >90%, as determined by the trypan blue exclusion assay, were used. A primary hepatocyte suspension (500 µL of 1 × 10³ cells/µL) in medium (DMEM with 10% (v/v) fetal bovine serum (FBS)) was loaded into the microchannel from the inlet and the entire process was monitored under a microscope. The cells were allowed to attach for 10 h without any flow; subsequently, perfusion with fresh medium from the inlets was established at a different flow rate for each channel. The perfusion system was then placed in an incubator with 5% CO₂ at 37 °C for 15 days. For analysis, at each time point, the microchannel was washed with phosphate-buffered saline (PBS) and the cells were fixed with 4% formaldehyde for 2 h at 4 °C under static conditions. The viability of hepatocytes cultured within the device was analyzed using a live/dead assay as described previously. Briefly, the microfluidic system was incubated with 50 mM calcein-AM and 25 mg/mL propidium iodide for 30 min at 37 °C in the dark, then the sample was washed extensively with PBS and observed under a confocal laser scanning microscope (CLSM, Olympus FV1000S, Tokyo, Japan).
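As a quick sanity check on the numbers above, the sketch below works out the cells loaded per channel from the stated seeding volume and density and shows how a live/dead viability percentage would be obtained from calcein-AM and propidium iodide counts; the image counts themselves are hypothetical.

```python
# Seeding volume and density come from the text; the live/dead counts below
# are invented values for a single image field.
seed_volume_ul = 500
density_cells_per_ul = 1e3
print(f"cells loaded per channel ≈ {seed_volume_ul * density_cells_per_ul:.0e}")

live_cells, dead_cells = 412, 38           # calcein-AM (live) vs. PI (dead)
viability = 100 * live_cells / (live_cells + dead_cells)
# compare with the >90% trypan blue criterion applied before seeding
print(f"viability ≈ {viability:.1f}%")
```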
To investigate the feasibility of assembly of microfluidic chips and patterned fibers, the system was incubated with 1 mg/mL rhodamine B solution overnight. After washing with distilled water, the microfluidic chips with embedded patterned fibers were observed by CLSM under the excitation and emission wavelengths of 550 and 620 nm for rhodamine B. The patterning features of the electrospun fibrous mats and microfluidic chips were observed using an optical microscope (Nikon Eclipse TS100, Tokyo, Japan). The morphologies of the electrospun fibers were investigated using a scanning electron microscope (SEM, FEI Quanta 200, Eindhoven, The Netherlands) equipped with a field-emission gun and Robinson detector after 2 min of gold coating to minimize the charging effect. The spheroid sizes were evaluated after processing the CLSM images by Image-Pro 6.0 (Media Cybernetics Inc., Bethesda, MD, USA) [3]. Briefly, CLSM images were taken by z-stack scanning with a step size of 5 µm. The images were stacked and reconstructed by Image-Pro 6.0, the desired spheroid for counting was selected, and counting was carried out by clicking on the fluorescein localization in spheroid in the image. The total area covered by calcein-AM-containing cells were displayed on the image.
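The authors quantified spheroid sizes with Image-Pro 6.0 on reconstructed z-stacks; as a rough open-source analogue (an assumption, not the original pipeline), the sketch below thresholds a maximum-intensity projection and reports equivalent diameters with scikit-image.

```python
import numpy as np
from skimage import filters, measure

def spheroid_diameters(projection, pixel_size_um, min_area_px=50):
    """Equivalent diameters (µm) of bright objects in a maximum-intensity
    projection of a calcein-AM z-stack."""
    mask = projection > filters.threshold_otsu(projection)
    labels = measure.label(mask)
    return [2 * np.sqrt(region.area / np.pi) * pixel_size_um
            for region in measure.regionprops(labels)
            if region.area >= min_area_px]    # drop small debris (assumed cut-off)

# Synthetic test image: one circular "spheroid" of radius 40 px
yy, xx = np.mgrid[:256, :256]
projection = ((yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2).astype(float)
diameters = spheroid_diameters(projection, pixel_size_um=1.25)
print(f"mean spheroid diameter ≈ {np.mean(diameters):.0f} µm")   # ≈ 100 µm
```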
Assessment of Hepatocyte Function
Albumin and urea secretion were analyzed by measuring the concentration of albumin and urea in the media. After culturing for 3, 7, 11, and 15 days, the albumin secretion was measured by enzyme-linked immunosorbent assay (ELISA) using a commercial rat albumin quantitation kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Urea levels were measured with commercially available kits (Nanjing Jiancheng Bioengineering Institute) based on its specific reaction with diacetylmonoxime. Absorbance was measured with a Quant microplate spectrophotometer (Elx-800, BioTek Instrument Inc., Winooski, VT, USA) and standard curves were generated using purified rat albumin or urea dissolved in culture media. All functional data were normalized to 10⁶ cells. The biliary excretion of hepatocytes was determined by FDA staining following procedures described previously [3]. Briefly, hepatocytes cultured within the device were incubated in culture media containing 3 mg/mL FDA at 37 °C for 1 h. After extensive washes with PBS, samples were observed by CLSM under the excitation and emission wavelengths of 488 and 533 nm, respectively. These images were processed by Image-Pro 6.0 to quantify the fluorescein localization in the intercellular sacs between hepatocytes. The biliary excretion of an FDA-staining image was indicated by the ratio of the area of excreted fluorescein in the intercellular sacs to the total area covered by FDA-containing cells; more than five original images were randomly chosen for each sample.
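The biliary excretion index defined above is simply an area ratio; the sketch below shows the calculation on toy boolean masks (a hypothetical example, not the Image-Pro workflow used in the study).

```python
import numpy as np

def biliary_excretion_index(canaliculi_mask, fda_mask):
    """Area of fluorescein confined to intercellular sacs divided by the
    total area covered by FDA-positive cells (both inputs boolean arrays)."""
    total = np.count_nonzero(fda_mask)
    return np.count_nonzero(canaliculi_mask & fda_mask) / total if total else 0.0

# Toy masks: a 60 x 60 px FDA-positive patch crossed by a thin canalicular band
fda_mask = np.zeros((100, 100), dtype=bool)
fda_mask[20:80, 20:80] = True
canaliculi_mask = np.zeros_like(fda_mask)
canaliculi_mask[48:52, 20:80] = True
print(f"biliary excretion index ≈ "
      f"{biliary_excretion_index(canaliculi_mask, fda_mask):.2f}")
```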
Nanoparticle-Induced Hepatotoxicity Testing
In order to obtain a homogeneous dispersion of Ag NPs (no color change and no precipitation), the Ag NPs were stably diluted with deionized water using physical mixing and sonication several times, and then sterilized by passing the solution through a 0.22-micron microfilter. Finally, Ag NPs (120 µg/mL) were prepared in DMEM without serum [20]. After the hepatocytes were exposed to nanoparticles for 24 h, the various toxicity endpoints were evaluated in control and nanoparticle-exposed cells.
The membrane damage of the Ag NP-exposed cells was assessed by measuring the activity of lactate dehydrogenase (LDH) in the cells and media using a commercial diagnostic kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) as previously described [4]. Briefly, after culturing for 7 and 15 days, the media were collected and centrifuged. The activity of the LDH released from the cytosol of damaged cells was assessed using the LDH kit and the optical density at 450 nm was measured using an Elx-800 spectrophotometer (BioTek Instrument Inc., Winooski, VT, USA). The total LDH release of cells was determined after lysis in 0.8% polyoxyethylene octyl phenyl ether (Triton™ X-100) at 37 °C for 45 min. The LDH leakage after Ag NP treatment for 24 h was normalized to the value without Ag NP treatment.
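The LDH readout described above reduces to a ratio of released to total activity, then a comparison against the untreated control; the sketch below illustrates the arithmetic with invented absorbance values, and the exact normalization in the kit protocol may differ.

```python
def ldh_fraction(released_od, total_od):
    """LDH activity released to the medium as a fraction of the total
    (medium plus Triton X-100 lysate)."""
    return released_od / total_od

# Invented optical densities for one culture day
treated = ldh_fraction(released_od=0.62, total_od=1.10)   # 24 h with Ag NPs
control = ldh_fraction(released_od=0.18, total_od=1.05)   # without Ag NPs
print(f"leakage: treated {treated:.0%}, control {control:.0%}, "
      f"treated/control ≈ {treated / control:.1f}x")
```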
Statistical Methods
All the data presented are expressed as the mean ± standard deviation (SD), and analysis of variance (ANOVA) was used for statistical evaluation of the data. A probability value (p) of less than 0.05 was considered to be statistically significant.
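A minimal sketch of this statistical treatment, assuming three replicates per condition and using SciPy's one-way ANOVA; the albumin values are invented and serve only to show the test and the p < 0.05 criterion.

```python
from scipy.stats import f_oneway

# Invented albumin values (µg/10^6 cells/day), three replicates per condition
albumin = {
    "static":    [8.1, 7.6, 8.4],
    "5 uL/min":  [22.9, 23.6, 23.0],
    "10 uL/min": [30.2, 31.5, 29.8],
    "20 uL/min": [18.7, 19.9, 19.1],
}
f_stat, p_value = f_oneway(*albumin.values())
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g} ({verdict})")
```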
Assembly of PDMS Chip with PLA Patterned Fibers
Figure 1 shows the schematic illustration of the fabrication process of a patterned fiber-embedded microfluidic chip. The device was constructed using two layers: the bottom layer with patterned fibers and a channel in the top layer covering the patterned fiber chambers. Firstly, the top layer was fabricated from PDMS by standard soft lithography methods. Briefly, microfluidic networks were drawn with Tanner L-Edit software and printed with high resolution on a photomask by an E-beam mask lithography system. The convex rectangular master mold was replicated using SU-8 followed by UV exposure, and a PDMS layer with a rectangular channel structure was created by separating the cured PDMS from the SU-8 master mold (Figure 1a). The bottom layer of patterned PLA fibers was obtained using a patterned collector as described previously, with some modifications [3]. Briefly, an insulating glass substrate was coated with a layer of positive photoresist, then covered with the photomask and exposed using a lithography machine. A silver layer was then deposited on the glass substrate by DC sputtering. After removing the rest of the photoresist, the glass substrate, bearing a micropatterned silver circuit, served as a collector for the electrospinning process (Figure 1b). Finally, the top and bottom layer chambers were aligned and bonded via 3M Scotch Tape [17], which was necessary for creating the coaxial-flow channels without clogging (Figure 1c). The 3D schematic illustration of the patterned fiber-embedded microfluidic chip is shown in Figure 1d.
Optimization of Patterned Fiber-Embedded Microfluidic Chips
Previous studies have shown that patterned fibers with a width of 200 µm could provide a platform to maintain the viability and function of hepatocytes [4]. To form a fluid channel across the patterned fibers and reconstruct the perfused engineered liver, microchannels sized 200 µm wide and 400 µm high were designed. Figure 2a shows an image of the patterned fiber-embedded microfluidic chip. The device contained four rectangular microchannels 10.0 mm long and 200 µm wide; parallel microchannels were separated by a 3.0 mm gap (Figure 2b). The topography of the patterned fibrous mats was close to that of the microchannels and the thickness of the PLA fibrous mats was approximately 100 µm, which did not block the flow (Figure 2c). Figure 2d shows the image of a single rectangular shape of patterned fibers. Owing to electrostatic forces, patterned fibers showed a similar topological structure and dimension as the collector configuration. In addition, Coulombic interactions between the nanofibers and collector induced fibers to arrange along the strip, causing the fiber to be directionally arranged as shown in Figure 2e. To evaluate whether the system showed problems of liquid leakage, rhodamine B fluorescent dye was perfused from two side microchannels into the patterned fibrous mat-embedded microfluidic chip at a velocity of 10 µL/min, which yielded no evidence of rhodamine B leakage (Figure 2f). We then continued to increase the velocity of rhodamine B, and after injecting rhodamine B into the microchannel with a syringe at over 20 µL/min, the fluorescent dye exhibited leakage from one microchannel to another. Therefore, we chose velocities of 5, 10, and 20 µL/min for the subsequent experiments.
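As a back-of-the-envelope check (not reported in the paper), the sketch below estimates the Reynolds number in the 200 µm × 400 µm channel at the three perfusion rates, assuming the medium behaves like water at 37 °C; the values come out far below the laminar-turbulent transition, consistent with well-defined coaxial flow in the channels.

```python
def reynolds_number(flow_ul_per_min, width_m=200e-6, height_m=400e-6,
                    rho=993.0, mu=7.0e-4):
    """Re for a rectangular channel, assuming a water-like medium at 37 °C."""
    q = flow_ul_per_min * 1e-9 / 60.0                    # µL/min -> m^3/s
    area = width_m * height_m
    d_h = 2 * width_m * height_m / (width_m + height_m)  # hydraulic diameter
    return rho * (q / area) * d_h / mu

for rate in (5, 10, 20):
    print(f"{rate:>2} uL/min: Re ≈ {reynolds_number(rate):.2f}")
```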
Hepatocyte Viability
Calcein-AM and propidium iodide were used to examine hepatocyte viabilities under different flow rates (0, 5, 10, and 20 µL/min). After 3 days of culture, it was observed that all hepatocytes were dead (red and yellow signal) on tissue culture plates (TCP) (Figure 3). The number of living hepatocytes on the fibers was apparently higher than that of TCP, because hepatocytes on nanofibers generally retain preferential cell bioactivity during a 3-day culture period. However, some hepatocytes started to lose activity (yellow and yellow) under the flow rate of 0 µL/min (no flow), and we could observe abundant dead hepatocytes in the patterned fiber-embedded microfluidic chip as hepatocytes under static conditions rapidly disaggregate from the lack of nutrients. Therefore, cell viability was significantly lower in hepatocytes under static conditions than under conditions of flow, and fewer dead hepatocytes were detected under the three flow rates than static conditions after 3 days of culture. In particular, dead cells were rarely detected under the flow rate of 10 µL/min compared with other rates, demonstrating that medium-flow positively affects the viability of spheroids during the initial culture period. Death and shedding of hepatocytes occurred continuously when exposed to the flow rate of 20 µL/min. These findings are consistent with a report by Tilles et al., who showed that whereas increasing the perfusion flow rate enhances the delivery of nutrients and removal of waste, an extremely high flow rate might induce excessive wall shear stress that is detrimental to hepatocytes [21].
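To give a feel for the shear levels being discussed, the sketch below makes a rough parallel-plate estimate of the wall shear stress at each flow rate. The formula, the assumed medium viscosity, and the neglect of the fiber mat inside the channel are all simplifications, so the numbers are order-of-magnitude only and are not values from the study.

```python
def wall_shear_stress(flow_ul_per_min, width_m=200e-6, height_m=400e-6,
                      mu=7.0e-4):
    """Parallel-plate estimate tau = 6*mu*Q/(w*h^2), ignoring the fiber mat."""
    q = flow_ul_per_min * 1e-9 / 60.0        # µL/min -> m^3/s
    return 6 * mu * q / (width_m * height_m ** 2)

for rate in (5, 10, 20):
    tau = wall_shear_stress(rate)
    print(f"{rate:>2} uL/min: tau ≈ {tau:.3f} Pa ({10 * tau:.2f} dyn/cm^2)")
```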
Time Course of Hepatocyte Size Distribution
To determine the effects of flow rates on the size of spheroids, CLSM images were taken of hepatocyte spheroids formed at different culture conditions (summarized in Figure 4a). The size of hepatocytes without flow was essentially unchanged from day 7 to day 15; with death and shedding of hepatocytes occurring continuously throughout the culture period, the number of adhered hepatocytes was reduced (Figure 4a). Hepatocytes spontaneously formed spheroids after 3 days of culture in the patterned fiber-embedded microfluidic chip, wherein the hepatocyte size slightly increased with flow rates and times. In addition, fewer dead cells were detected at the flow rates of 5 and 20 µL/min compared with the others. The seeded hepatocyte aggregates perfused at the middle flow rate of 10 µL/min showed good cell bioactivity without death of inner hepatocytes during the 15-day culture period, exhibiting a compact polyhedral spheroidal morphology similar to 3D morphologies observed in vivo (Figure 4b). This indicated that the appropriate flow rate through the device exerted a marked effect on maintaining aggregation and spheroid formation. As shown in Figure 4a, spheroids at the flow rate of 20 µL/min became irregular and started to disaggregate at 15 days, whereas those at the flow rate of 10 µL/min maintained their smooth surface (reflecting tighter junctions) throughout the entire culture period, demonstrating quite substantial differences between the two rates during long-term culture (Figure 4b).

Figure 4c summarizes the sizes of the few spheroids formed after hepatocytes were cultured on static systems. Hepatocytes self-aggregated and invariably formed into spheroids when exposed to low flow rates. When hepatocytes were cultured under the flow rate of 5 µL/min, some cells formed spheroids with a size of approximately 70 µm, which was close to the results shown in Figure 4a. These findings indicated that an apparent increase in the spheroid size occurred with the increase of flow rate and flow time [21]. The majority of hepatocytes were incorporated into spheroids with an average diameter of 105.33 ± 4.54 µm after 15 days of culture at the flow rate of 10 µL/min; however, hepatocyte spheroids formed at the flow rate of 20 µL/min showed small amounts of spheroid disaggregation, significantly lower than that observed at the flow rate of 10 µL/min (p < 0.05). Previous studies have shown that the size of aggregates was primarily determined by the flow rate and that primary hepatocytes were particularly sensitive to the shear stress induced by the flow dynamics of the microfluidic environment [22]. In our study, the flow rate of 20 µL/min might have accelerated the detachment and the death of the cells inside the device after 11 days.
Evaluation of Hepatocyte Function
The hepatocyte spheroid is considered to sustain cell viability for extended culture periods and maintain high levels of liver-specific functions [23]. Figure 5 summarizes the albumin secretion and urea synthesis of hepatocytes on different substrates. Hepatocytes cultured on TCP and patterned fibers rapidly lost their functional characteristics, and the level of albumin and urea production decreased throughout 15 days of culture. In addition, no significant differences were observed between patterned fibers without chip and the flow rate of 0 µL/min. Compared with patterned fibers without chip, the albumin and urea secretion of cells under the flow rate of 0 µL/min showed around 65% and 45% decreases after 15 days, indicating that patterned fibers do not represent a determining factor affecting the maintenance of liver function in the patterned fibrous mat-based microfluidic chip system. Although the albumin production of hepatocytes under dynamic conditions showed a slight decrease throughout 15 days of incubation compared to that of static culture (p < 0.05) (Figure 5a), hepatocytes under the flow rate of 5 µL/min sustained a significantly higher level at 23.14 ± 2.55 µg/10⁶ cells/day. As shown in Figure 5b, similar results were noted for urea synthesis, for which hepatocytes under the flow rate of 5 µL/min also showed a slight decrease to about 19.47 ± 1.92 µg/10⁶ cells/day after 15 days of incubation. Of the three flow rates, hepatocytes cultured at the flow rate of 10 µL/min exhibited the greatest degree of albumin secretion and urea synthesis throughout the culture period, which was about 10- and 15-fold higher, respectively, than that of hepatocytes cultured on TCP for 15 days (p < 0.05). However, the albumin and urea production of primary hepatocytes perfused at 20 µL/min was lower than that of hepatocytes cultured at the flow rate of 10 µL/min, similar to the trend of viability and spheroid formation of the cultured hepatocytes.
Bile Canaliculi Formation by Hepatocytes
Hepatocyte membrane polarity is evidenced by extended 3D bile canalicular structures, which are responsible for transport functions and drug metabolism [24]. In our study, the biliary excretory function of hepatocyte spheroids was investigated by incubation with FDA, which enters cells via passive diffusion and is hydrolyzed by intracellular esterases into fluorescein prior to excretion by bile canaliculi. Hepatocytes cultured under static systems exhibited subtle unconnected dots in the apical domain between adjacent hepatocytes, illustrating limited bile canaliculi structure formation (Figure 6a). Conversely, strong fluorescence signals were detected along the border of hepatocytes in all dynamic systems, indicating that those hepatocytes maintained their ability to uptake chemicals and efflux bile acid [25]. In particular, in hepatocytes cultured under the flow rate of 10 µL/min, the fluorescein appeared at the intercellular borders of hepatocytes and formed a tightly fused 3D morphology. Fluorescence quantitation, as shown in Figure 6b, further validated the observed differences, indicating that the dynamic conditions significantly enhanced the formation of bile canaliculi (p < 0.05), and that the flow rate of 10 µL/min exhibited significantly stronger biliary excretory function than the other rates (p < 0.05).
Toxicity Testing
The toxic effects of Ag NPs were evaluated quantitatively utilizing the LDH assay, which detects the amount of LDH that leaks out through the plasma membrane of damaged cells. Figure 7 summarizes the relative LDH leakage from cultured hepatocytes on days 7 and 15 after treatment with 120 µg/mL Ag NPs for 24 h. Notably, a constant decrease in the relative LDH leakage was detected for hepatocytes under static conditions after Ag NP treatment. This indicated that under static conditions, the hepatocytes could not form spheroids and thus could not maintain their functions over a long period, resulting in a relative insensitivity to nanotoxicity. In contrast, the ability of hepatocytes under dynamic conditions to form compact spheroids led to their viabilities being significantly compromised compared to those of cells grown on TCP and patterned fibers throughout the study (p < 0.05). Furthermore, the consistency of the nanotoxicity response of hepatocytes under the flow rate of 10 µL/min on day 7 and day 15 was also notable; specifically, no apparent change in the relative LDH leakage from hepatocytes under this flow rate was observed following Ag NP treatment on days 7 (55.62% ± 4.53%) and 15 (54.48% ± 4.41%). This phenomenon indicated that hepatocytes under this flow rate condition were more sensitive to the Ag NP-induced toxicity than were cells under other growth conditions.
Discussion
Most previous bioreactors and microfluidic chips have been designed to form hepatocyte spheroids by providing the beneficial effects of nutrients and oxygen as well as shear stress [26]. However, these systems involved 2D monolayer hepatocyte cultures. In the current study, we aimed to develop a 3D fluidic cultivation system for long-term hepatocyte culture. In the native environment, hepatocytes are arranged into a single cordlike structure and are separated by sparse ECM [27]. To establish cell-cell and cell-ECM communication similar to that in native liver structures, scalable and readily available patterned fibrous mats and microfluidic chips were used in the 3D system.
In the current microfluidic system, the thickness of the patterned fibers was about 100 µm, which was lower than that of the probing channel and did not affect the microstructures of the fibers. The process of assembly essentially comprised liquid exchange and rendered the patterned fiber-embedded microfluidic chips highly compact. The morphology of patterned fibers was also considered critical for hepatocytes, as preliminary studies demonstrated that the crossing-points between fibers and fiber alignment might be important for increasing the initial attachment and spreading of hepatocytes. After patterned fibers were deposited on the collector, the adhesive forces remained between the patterned fibers and the collector such that it was not necessary to peel off the patterned fibers from the glass. Instead, the PDMS and the glass could be adhered together using a piece of Scotch Tape, which tightly adhered to the PDMS slab by van der Waals forces; notably, this adherence was crucial to the performance of the system. Unlike the other methods for chip assembly, our technique has several advantages: simple and inexpensive fabrication, no leakage, reversible sealing, and easy manipulation. Furthermore, the direction of patterned fibers aligned with the direction of flow, thus forcing the cells to migrate uniaxially, thereby concentrating the cells in close proximity to one another and likely accelerating aggregate formation. Accordingly, the hepatocyte aggregates on patterned fibers exhibited extensive cell-cell contact and tight junctions, and preserved their function and cell activity better than cells exposed to TCP over a 15-day culture period. Therefore, the inclusion of both channels and fibers allows this system to offer a large potential for use in miniaturized tissue engineering and high-throughput drug screening.
In addition, although a continuous supply of fresh medium was provided to the hepatocytes during the culture time, we noted that more dead cells occurred on TCP and on patterned fibers without chips under static culture, owing to limited diffusion. This finding was consistent with previous reports that microfluidic chips provide a hydrodynamic environment and induce shear stress, and that nutrients can reach the hepatocytes to significantly promote cell adhesion and, accordingly, to better maintain the cell phenotype [28]. One notable aspect of our system was that few dead cells were observed under the flow rate of 10 µL/min, which might be attributed to the balanced fluid movement of oxygen and nutrient exchange [29]. Conversely, the total dead cell numbers of hepatocytes cultured at flow rates of 5 and 20 µL/min increased. This phenomenon might be due to the extremely slow and fast flow, respectively, with the former offering an insufficient supply of nutrients and oxygen to the cells and the latter subjecting hepatocytes to high laminar shear stress [30]. Generally, it is considered that the flow rate should be equal to the critical perfusion rate to guarantee that cells receive sufficient nutrients and oxygen in dynamic culture systems [31].
Notably, previous studies have shown that the size of the aggregates is crucial to hepatocyte function [32]. We further found that different levels of flow influenced the size distribution of the hepatocytes. During the first 72 h after seeding, small aggregates (40-50 µm) were formed; these clusters grew in size during the following 2 days, with the size of hepatocytes cultured under flow conditions gradually increasing as a result of aggregation through cell-cell interactions and fluidic features. Hepatocyte spheroids under flow conditions became larger than those under static conditions and visibly increased in size between day 7 and day 15, whereas the size of hepatocytes under static conditions remained the same during the culture time, as was confirmed by immunostaining (Figure 4a). This result demonstrated the effect of continuous nutrient, oxygen, and cytokine transport and of the removal of metabolic wastes caused by interstitial flow [33]. Figure 4b shows that hepatocyte spheroids at flow rates of 5 and 20 µL/min increased gradually from day 1 (34.52 ± 2.48 µm and 42.78 ± 2.27 µm) to day 11 (61.06 ± 2.82 µm and 75.47 ± 3.04 µm), whereas over the next 4 days they increased only marginally (71.08 ± 3.02 µm and 76.86 ± 3.15 µm), possibly because hepatocytes at these flow rates were subject to significant mass-transfer limitations [34]. Although previous studies have shown that the central hepatocytes become hypoxic when the aggregate diameter is 100 µm or more [35], in the current study, hepatocytes cultured at the optimal flow rate of 10 µL/min formed spheroids with a maximum size of 105.33 ± 4.54 µm.
To the best of our knowledge, perfusion systems further enhance hepatocyte function under dynamic culture conditions compared to that observed under static culture [36]. In the current study, patterned fibers and microfluidic chips were able to recapitulate cellular microenvironments in vitro on a microscale. The patterned fibers in the perfusion system provided ample adhesive sites and secure shields for hepatocytes to endure shear stress, the cell arrangement could be easily manipulated, and the cells readily formed spheroids. Accordingly, albumin and urea secretion in all dynamic culture groups exhibited higher yields than those obtained during static culture (Figure 5). Such marked maintainability is likely explained by the higher cell vitality and number of hepatocyte spheroids arising under dynamic culture. Bile canaliculi formation represents one of the key phenotypes of hepatocyte polarization, ensuring the efficient polarized transport of hepatic tissues. Compared to hepatocytes cultured under the static system, functional bile canaliculi networks were obviously present in the hepatocyte spheroids formed under dynamic culture [37]. Especially under the flow rate of 10 µL/min, bile canaliculi were well-developed, and the respective hepatocytes formed multiple junctions within spheroids. The ultrastructures of such bile canaliculi have been shown to facilitate the diffusive delivery of culture media to the center of hepatic spheroids, which is an important parameter affecting cell morphology, function, and physiological responsiveness [38]. Therefore, the patterned fibrous mat-based microfluidic chip likely represents a useful tool for fundamental studies of hepatic functions in physiological conditions and in more complex drug-transport studies.
Based on previous studies, exposure to 120 µg/mL Ag NPs was used to model nanoparticle toxicity in order to test the sensitivity, consistency, and feasibility of the system [39]. Owing to the well-maintained metabolic functions of the hepatic spheroids, cells under dynamic culture exhibited higher sensitivity to Ag NP-induced toxicity than those grown under static conditions on both day 7 and day 15 (Figure 7). It has been extensively reported that sphere-shaped hepatic clusters are not only able to mimic the original morphology of the liver, but also provide functional hepatocytes in vitro [40]. Notably, the hepatocytes grown in the platform utilized in the current study could form physiologically relevant spheroids, potentially allowing the extension of the current level of cytotoxicity assessment from the cellular to the tissue level, thereby reducing prediction error and improving prediction accuracy. In addition, following the treatment of hepatocytes with Ag NPs for 24 h, approximately 50% of the cells grown under perfusion culture had died. This phenomenon was similar to previous findings such as those by Firouz et al., who reported an Ag NP IC50 value (the nanoparticle concentration causing 50% cell death) of 121.7 µg/mL [20]. Furthermore, the viability of hepatocytes under the flow rate of 10 µL/min after Ag NP treatment on day 7 and day 15 was similar, indicating that this flow rate achieved consistent, reliable, and high predictive values, and that it also extended the functional period up to 15 days. In contrast, hepatocytes under other growth conditions exhibited highly variable cell viability results at these time points following Ag NP treatment. Overall, however, hepatocytes that exhibited lower functioning on day 15 appeared to be more vulnerable to Ag NP-induced toxicity than those with poor function at 7 days (Figure 7), likely because of the absence of key physiological processes such as the transport of NPs via homotypic cell-cell interactions [41]. These results demonstrate the feasibility of hepatocytes cultured in this system as a potential in vitro testing model for the prediction of nanotoxicity, although further cellular and molecular investigations would be needed for a better understanding of the underlying mechanisms.
Conclusions
In this study, a new method to integrate patterned electrospun fibers within a microfluidic chip was developed and successfully demonstrated. The combined effects of microfluidic channels and patterned fibers on hepatocyte behavior were studied. Hepatocytes cultured in this system under an optimized flow condition exhibited restored hepatocyte polarity and biliary excretion, and maintained liver-specific functions over at least 15 days. Furthermore, hepatocytes under the flow rate of 10 µL/min produced sensitive and consistent Ag NP toxicity responses at different time points, demonstrating the feasibility of the patterned fiber-embedded microfluidic chips as a potential in vitro screening model to evaluate the potential toxicity of nanoparticles.

Author Contributions: Yaowen Liu developed the original idea and the protocol, abstracted and analyzed data, wrote the manuscript, and is guarantor. Shuyao Wang and Yihao Wang contributed to the development of the protocol, abstracted data, and prepared the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2015-09-18T23:22:04.000Z
|
2015-09-01T00:00:00.000
|
956636
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0136542&type=printable",
"pdf_hash": "0391f1f0058e755a6c3f6fcb99fd5d0454111328",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:618",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "0391f1f0058e755a6c3f6fcb99fd5d0454111328",
"year": 2015
}
|
pes2o/s2orc
|
Differential Ratios of Omega Fatty Acids (AA/EPA+DHA) Modulate Growth, Lipid Peroxidation and Expression of Tumor Regulatory MARBPs in Breast Cancer Cell Lines MCF7 and MDA-MB-231
Omega 3 (n3) and Omega 6 (n6) polyunsaturated fatty acids (PUFAs) have been reported to exhibit opposing roles in cancer progression. Our objective was to determine whether different ratios of n6/n3 (AA/EPA+DHA) FAs could modulate the cell viability, lipid peroxidation, total cellular fatty acid composition and expression of tumor regulatory Matrix Attachment Region binding proteins (MARBPs) in breast cancer cell lines and in non-cancerous MCF10A cells. Low ratios of n6/n3 (1:2.5, 1:4, 1:5, 1:10) FAs decreased the viability and growth of MDA-MB-231 and MCF7 significantly compared to the non-cancerous cells (MCF10A). Contrarily, higher n6/n3 FA ratios (2.5:1, 4:1, 5:1, 10:1) decreased the survival of both the cancerous and non-cancerous cell types. Lower ratios of n6/n3 selectively induced LPO in the breast cancer cells, whereas the higher ratios induced it in both cancerous and non-cancerous cell types. Interestingly, compared to higher n6/n3 FA ratios, lower ratios increased the expression of the tumor suppressor MARBP SMAR1 and decreased the expression of the tumor activator Cux/CDP in both breast cancer and non-cancerous MCF10A cells. Low n6/n3 FAs significantly increased SMAR1 expression, which resulted in the activation of p21WAF1/CIP1 in MDA-MB-231 and MCF7, the increase being ratio dependent in MDA-MB-231. These results suggest that an increased intake of n3 fatty acids in our diet could help both in the prevention as well as the management of breast cancer.
Introduction
Breast cancer is the most common malignancy and one of the leading causes of cancer-related deaths in women worldwide [1,2]. Several factors have shown promise in reducing breast cancer incidence rates, wherein change in lifestyle, especially diet, has proven to be the most popular measure. The role of nutrition in the prevention of cancer has been well established, and it has been shown to suppress the transformative, hyper-proliferative and inflammatory processes that initiate carcinogenesis [3].
During the past few years, there has been a wealth of information regarding the role of long chain polyunsaturated fatty acids (LCPUFAs) in health and disease [4][5][6][7]. n3 FAs such as ALA (Alpha-linolenic acid) [8], EPA (Eicosapentaenoic acid) [9] and DHA (Docosahexaenoic acid) [10] have been reported to exhibit anti-cancer activity, whereas n6 PUFAs such as linoleic acid (LA) and arachidonic acid (AA) [11][12][13] have been reported to contribute towards the development of cancer. EPA and DHA are essential fatty acids, which the human body cannot synthesize and which thus should be obtained from the diet. AA, EPA and DHA occur in the diet in animal tissue lipids [14]. Fish oil is highly rich in EPA and DHA and has been suggested for different populations due to its health benefits [15]. EPA and DHA together have been recommended in various conditions such as cardiovascular disease (CVD), coronary heart disease (CHD), Alzheimer's disease, postpartum and bipolar depression, and rheumatoid arthritis, as well as during pregnancy, lactation and infancy, and even in cancer [15]. In our recent study, we found that supplementation of fish oil capsules, containing EPA:DHA in the ratio of 1.5:1, in breast cancer patients undergoing chemotherapy significantly improved their serum antioxidant levels as well as quality of life parameters [16].
Several studies have reported an inverse correlation between the ratio of n6/n3 fatty acids (FAs) and the risk of developing breast cancer [17,28,29]. The consumption of n6 PUFAs has increased considerably in recent times. The current western diet has an n6/n3 ratio ranging from 20-25/1, compared to the ratio of 1/1 that was prevalent in the diet of our ancestors [30]. High n6/n3 ratios favor the formation of pro-inflammatory eicosanoids from LA [31], which leads to the development of various disorders including cancer [32]. In vivo studies using corn oil (n6 FA) and its different ratios with fish oil (n3 FA) (n6/n3 ratios: 1/1, 1/1.5, 1/9) [33,34] have established the antineoplastic potential of n3 PUFAs in breast cancer as well as in colon cancer (n6/n3 ratios: 1/1, 1/2.5) [35,36]. A few other studies, as reviewed in [37], have reported protective effects of varying n6/n3 ratios in breast cancer. However, to our knowledge, the effect of equal (1/1), low (1/2.5, 1/4, 1/5, 1/10) and high (2.5/1, 4/1, 5/1, 10/1) ratios of n6/n3 PUFAs on cell viability, lipid peroxidation and total cellular fatty acid composition has not been studied in detail in breast cancer cell lines. In addition, we are for the first time reporting the modulation of tumor regulatory MARBPs (nuclear matrix associated Matrix Attachment Region binding proteins) such as SMAR1 (scaffold/matrix attachment region binding protein 1) and Cux/CDP (CCAAT-displacement protein/cut homeobox) in response to different n6/n3 FAs. Interestingly, the expression of the cell cycle regulatory protein p21WAF1/CIP1 was modulated in MDA-MB-231 and MCF7 cells treated with different n6/n3 FA ratios. This is the only in vitro study showing the effect of high and low ratios of n6/n3 FAs on cellular mechanisms in cancerous and non-cancerous cells.
Conjugation of BSA with Omega Fatty Acids
Eicosapentaenoic acid (EPA), Docosahexaenoic acid (DHA) and Arachidonic acid (AA) were dissolved in absolute ethanol and stored at -20°C. The fatty acids were conjugated with delipidated, endotoxin-free serum albumin (3 mM) to give a 10 mM stock, with a ratio of fatty acids to BSA of 3/1 [8,39]. The conjugated omega fatty acids were incubated at 37°C for 30 min in a CO2 incubator and stored at -20°C; before use, they were diluted to the required concentration with 10% DMEM.
Cell viability study by MTT dye reduction assay
The cell viability was determined by MTT assay in the breast cancer cell lines (MCF7 and MDA-MB-231) in the presence of different concentrations of n3 and n6 fatty acids and compared with the non-cancerous immortalized MCF10A cell line. The cells were seeded at a density of 2 × 10⁴ cells/well in 96-well plates and grown for 24 h. The cells were treated with different ratios (1/1, 2.5/1, 4/1, 5/1 and 10/1) of n3 (EPA and DHA) and n6 (AA) fatty acids. Table 1 shows the various ratios of fatty acids that were used in the experiment. After 24 h, the medium was removed, MTT solution (5 mg/ml) was added to each well and the cells were cultured for another 4 h at 37°C in a 5% CO2 incubator. The formazan crystals formed were dissolved in 90 μl of SDS-DMF (20% SDS in 50% DMF) [40]. After 15 min, the amount of colored formazan derivative was determined by measuring the optical density (OD) at 570 nm (OD 570-630 nm) using a BMG FLUOstar Omega microplate reader.
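The survival percentages reported in the Results are expressed relative to the untreated control. As a rough illustration of that calculation (not the authors' own script), a minimal R sketch is given below; the treatment labels and OD readings are invented, and the background correction (OD570 − OD630) follows the description above.

```r
# Hypothetical sketch: percent viability from background-corrected MTT readings.
# All values and labels below are illustrative, not measured data.
od <- data.frame(
  treatment = c("UC", "UC", "UC", "1:10", "1:10", "1:10"),
  od570     = c(0.92, 0.95, 0.90, 0.55, 0.58, 0.53),
  od630     = c(0.08, 0.07, 0.08, 0.07, 0.08, 0.07)
)

od$corrected <- od$od570 - od$od630                      # OD 570-630 nm
uc_mean      <- mean(od$corrected[od$treatment == "UC"]) # untreated control

od$viability_pct <- 100 * od$corrected / uc_mean         # % of untreated control
aggregate(viability_pct ~ treatment, data = od, FUN = mean)
```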
Cell growth analysis
The assay was performed as described previously [22]. Briefly, MCF7, MDA-MB-231 and MCF10A cells were seeded at a density of 2 × 10⁴ cells/well in 24-well plates in triplicate. The next day, the cells were treated with different concentrations of n3 and n6 FA ratios for 24 h, as shown in Table 2. The cells were harvested and counted for viability using the trypan blue dye exclusion method.
Lipid peroxidation assay
All the cells were seeded at a density of 2 × 10⁴ cells/well in a black 96-well plate and incubated at 37°C in a CO2 incubator. The next day, the cells were treated with various n3 and n6 FA ratios and incubated in a CO2 incubator at 37°C for 24 h. The following day, the medium was removed and the cells were washed with 1X PBS and incubated with the fluorescent indicator cis-parinaric acid (CPNA) (Invitrogen) as described previously [41]. Briefly, the cells were incubated with 10 μM cis-parinaric acid for 60 min at 37°C in the dark. 20 μM tert-butyl hydroperoxide (TBHP), incubated with the CPNA dye, was kept as a positive control. After incubation, the cells were washed twice with 1X PBS. 100 μl of 1X PBS was added to each well, and the fluorescence readings were immediately taken on a FLUOstar Omega multi-mode microplate reader (BMG LABTECH) with maximum excitation and emission wavelengths of 320 nm and 420 nm, respectively. Decreased fluorescence reflects increased lipid peroxidation.
Cellular fatty acid analysis
Breast cancer cell lines were plated at a density of 8 × 10⁵ cells/well in 6-well plates and incubated in 10% DMEM supplemented with n3 and n6 fatty acids for 24 h. The procedure for fatty acid analysis used in our study was adapted from Manku et al. [42]. Trans-esterification of total cellular lipid fractions was carried out with hydrochloric acid-methanol using a Perkin Elmer gas chromatograph (SP 2330, 30 m capillary column, Supelco, PA, USA). Briefly, cell pellets were dissolved in 500 μL of 1X PBS and mixed with 4 ml of methanol/HCl/BHT (94.7/5.3/0.0005, v/v/w) in a 15-ml screw cap vial. The vials were sealed and incubated at 80°C for 2 h and then cooled on ice for 30 min. The total methylated fatty acids were extracted by adding 2 ml hexane (2N), and the layers were separated by centrifugation in a swinging rotor at 3000 rpm for 15 min at room temperature. The hexane layers were carefully removed and collected in a separate vial. The hexane extract was completely dried by passing argon gas and stored at -20°C until analyzed. The methylated fatty acids were re-suspended in 100 μL of chloroform, and 1 μL was injected into the GC column. Helium was used as the carrier gas at 1 ml/min. The oven temperature was held at 150°C for 10 min, programmed to rise from 150 to 220°C at 10°C/min, and held at 220°C for 10 min. The detector temperature was 275°C, and the injector temperature was 240°C. The column was calibrated by injecting the standard fatty acid mixture in approximately equal proportions. The data were recorded and the peaks were identified according to the retention times of the standard fatty acids (Sigma, USA) run under identical conditions. Individual fatty acids were expressed as a relative percentage of the total analyzed fatty acids.
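Since each fatty acid is reported as a relative percentage of the total analyzed fatty acids, that conversion amounts to normalizing the identified GC peak areas. A small illustrative R sketch follows; the fatty acid names and peak areas are invented, not measured values.

```r
# Hypothetical sketch: relative percentage of each fatty acid from GC peak areas.
peaks <- data.frame(
  fatty_acid = c("16:0", "18:1n9", "18:2n6 (LA)", "20:4n6 (AA)",
                 "20:5n3 (EPA)", "22:6n3 (DHA)"),
  area = c(3500, 2100, 1800, 900, 650, 700)   # invented peak areas
)
peaks$relative_pct <- round(100 * peaks$area / sum(peaks$area), 2)
peaks
```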
Data analysis
The data analysis was done using GraphPad Prism 5 (San Diego, USA). The results have been presented as mean±SEM. Data were analyzed using analysis of variance (ANOVA) to compare the means. A significant F test was followed by a post hoc Tukey's multiple comparison test. The Kruskal-Wallis test was used wherever the homogeneity of variance test failed. For the comparisons between the effects of low and high n6/n3 FAs, data were analyzed by unpaired Student's t-test. The level of significance used was p<0.05.
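The same analysis pipeline can be reproduced outside GraphPad; the sketch below shows an equivalent workflow in R under the assumptions that the data sit in a long-format data frame and that the group labels are placeholders (the numbers are simulated, not the study's data).

```r
# Equivalent sketch of the described statistics in R (not the authors' script).
set.seed(1)
dat <- data.frame(
  ratio = factor(rep(c("UC", "1:1", "1:10", "10:1"), each = 3)),
  value = c(rnorm(3, 100, 3), rnorm(3, 95, 3), rnorm(3, 60, 3), rnorm(3, 45, 3))
)

fit <- aov(value ~ ratio, data = dat)   # one-way ANOVA comparing the means
summary(fit)                            # significant F test ...
TukeyHSD(fit)                           # ... followed by Tukey's post hoc test

kruskal.test(value ~ ratio, data = dat) # fallback when homogeneity of variance fails

# unpaired t-test contrasting a low with a high n6/n3 ratio
low_high <- droplevels(subset(dat, ratio %in% c("1:10", "10:1")))
t.test(value ~ ratio, data = low_high)
```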
Results
Differential ratios of n6 and n3 FAs regulated the viability and proliferation of breast cancer cells

We first analyzed the effect of low and high ratios of n6 and n3 fatty acids (AA/EPA+DHA) on the regulation of viability and growth of the breast cancer cell lines MDA-MB-231 and MCF7. The study was preceded by the following preliminary tests to determine the appropriate concentrations of individual EPA, DHA and AA that would be used in the different n6/n3 ratios. It was found that up to a 280 μM concentration, the individual fatty acids (EPA, DHA and AA) were almost non-toxic to MCF10A, HEK 293 and HaCaT (Fig 1), as well as to MDA-MB-231 and MCF7 (Fig 2).
To determine the effective ratios of AA/EPA+DHA that would decrease cell viability, we tested two different concentrations, 200 and 280 μM, of total fatty acids, wherein the ratios of n6 (AA) and n3 (EPA and DHA) FAs that were used are given in Tables 1 and 2, respectively. We initially kept the total FA concentration (of each n6/n3 ratio) at 200 μM (Table 1) and treated the cells with the different ratios. After performing the MTT assay, we didn't find any effect on the cell viability of MDA-MB-231 and MCF7 (Table 3). So, we increased the total FA concentration to 280 μM (Table 2), which was found to affect the cell viability of breast cancer cells significantly (Fig 3), and thus further experiments were performed with 280 μM concentrations of total FAs. In the experiments, EPA:DHA was maintained at a 1.5:1 ratio based on the composition of standard commercial fish oil supplements [15] as well as our recent report [16].
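For readers without access to Tables 1 and 2, the individual fatty acid concentrations follow directly from the stated design: a fixed total (assumed here to be 280 μM for every ratio, as indicated above) with EPA:DHA held at 1.5:1. The short R sketch below works through that arithmetic; it is illustrative only, not a reproduction of the published tables.

```r
# Illustrative arithmetic: splitting a fixed total FA dose into AA, EPA and DHA
# for a given n6:n3 ratio, with EPA:DHA kept at 1.5:1 (assumed total of 280 uM).
fa_mix <- function(n6_part, n3_part, total = 280, epa_to_dha = 1.5) {
  aa  <- total * n6_part / (n6_part + n3_part)   # n6 share (AA)
  n3  <- total - aa                              # n3 share (EPA + DHA)
  epa <- n3 * epa_to_dha / (epa_to_dha + 1)
  dha <- n3 - epa
  round(c(AA = aa, EPA = epa, DHA = dha), 1)
}

fa_mix(1, 10)   # the 1:10 (n6:n3) condition
fa_mix(10, 1)   # the 10:1 condition
```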
It was observed that at an equal ratio (1:1) of n6/n3 fatty acids, MDA-MB-231, MCF7 and MCF10A showed up to 84, 80 and 92% survival (p<0.05) compared to the untreated control (UC) cells, respectively (Fig 3). However, with decreasing n6/n3 FA ratios, a significant ratio-dependent decrease in cell survival of only the breast cancer cell lines (MDA-MB-231 and MCF7) (Fig 3A and 3B) was observed, with almost no reduction in the cell viability of MCF10A (p<0.05) (Fig 3C). It was observed that compared to the equal ratio (1:1) of n6/n3, the 1:10 ratio showed a significant decrease in the viability of the breast cancer cells MDA-MB-231 (p<0.001) and MCF7 (p<0.01). On the other hand, with increasing ratios of n6/n3 fatty acids, a ratio-dependent decrease in cell survival was observed not only in the breast cancer cells (Fig 3A and 3B) but also in MCF10A (Fig 3C). Similar results were obtained in the non-cancerous transformed cell lines HaCaT and HEK293, wherein low n6/n3 did not affect their viability compared to higher ratios (S1 Fig). Next, we evaluated the effect of different ratios of n6/n3 FAs on the proliferation of both cancerous and non-cancerous cells by the trypan blue dye exclusion method. Compared to the untreated control cells, a significant reduction in cell growth was observed in all the cell lines after treatment with the 1:1 ratio of n6/n3 fatty acids (Fig 4). However, low n6/n3 FA ratios selectively reduced the proliferation of only the breast cancer cells (Fig 4A and 4B) and not MCF10A (Fig 4C) compared to the untreated control cells. At the 1:10 ratio, there was a profound decrease in the proliferation of the breast cancer cells (Fig 4A and 4B) (p<0.001). On the other hand, increasing ratios of n6/n3 induced a ratio-dependent decrease in the proliferation of both the breast cancer (Fig 4A and 4B) and non-cancerous cell lines (Fig 4C). Similar results were obtained in HaCaT and HEK293 cells, wherein low ratios didn't affect their proliferation compared to higher ratios (S2 Fig).
Thus, compared to lower ratios of n6/n3, higher ratios decreased the viability and proliferation of both the breast cancer and non-cancerous cell lines.
Varying n6/n3 FA modulated lipid peroxidation in breast cancer cells
It was observed that with low n6/n3 FA ratios, there was a ratio-dependent decrease in cis-parinaric acid fluorescence intensity, which is inversely proportional to lipid peroxidation levels [41]. Thus, there was an increase in LPO activity in the breast cancer cells compared to the untreated control cells, the increase being more pronounced in MDA-MB-231 (Fig 5A) compared to MCF7 (Fig 5B). However, there was no lipid peroxidation in MCF10A (Fig 5C). Contrarily, increasing n6/n3 fatty acid ratios induced a significant increase in LPO not only in the breast cancer cells (Fig 5A and 5B) but also in MCF10A (Fig 5C) compared to the untreated control cells (p<0.001). Interestingly, the 1:1 ratio of n6/n3 didn't induce any LPO in the treated cell lines. It was interesting to note that irrespective of the increasing n6/n3 ratios, the increase in LPO remained constant at ~57% to ~60% in MDA-MB-231 from 2.5/1 to 10/1 (Fig 5A), and at ~25% to 29% in MCF7 cells from 2.5/1 to 5/1 and 56% for 10/1 (Fig 5B). A similar trend was observed in HaCaT and HEK293 cells, wherein low n6/n3 didn't induce any LPO in the cells whereas high n6/n3 induced LPO at all the ratios (S3 Fig). Interestingly, the 1:5 and 1:10 ratios of n6/n3 induced a significant increase in LPO in both MDA-MB-231 and MCF7 (p<0.001) (Fig 5A and 5B). Tert-butyl hydroperoxide (TBHP) was used as a positive control in the experiment. Thus, low n6/n3 FA ratios specifically induced LPO in breast cancer cells without affecting the non-cancerous cells.
Omega 3 fatty acids regulated the expression of MAR binding tumor regulatory proteins
We wanted to evaluate the effect of varying n6/n3 ratios on the expression of MAR binding proteins since they are recently considered as primary diagnostic and/or prognostic markers [43]. We found that in both MDA-MB-231 and MCF7, all the ratios of n3 and n6 FAs significantly increased the expression of the tumor suppressor MARBP SMAR1, compared to the untreated control cells (Fig 6A-6D). The expression of SMAR1 was significantly increased in MDA-MB-231 treated with low n6/n3 ratios (Fig 6A) compared to those treated with high n6/n3 (Fig 6B) (p<0.05). In MCF7, SMAR1 expression was increased after treatment with low n6/n3; however, with high n6/n3, the expression of SMAR1 was not significantly reduced (p = 0.069) (Fig 6C and 6D). Interestingly, there was a significant decrease in the expression of the tumor activator MARBP Cux/CDP at low n6/n3 FA ratios in both MDA-MB-231 and MCF7, compared to the untreated control cells (Fig 6A and 6C, respectively) (p<0.05). On the other hand, the higher ratios of n6/n3 (from 4/1 up to 10/1) were found to significantly increase the expression of Cux/CDP in both MDA-MB-231 and MCF7 (Fig 6B and 6D, respectively) compared to the untreated control cells (p<0.05). Among all the tested n6/n3 ratios, the 1:10 ratio showed a profound decrease in Cux/CDP expression in both the breast cancer cell lines. In MCF10A, compared to the untreated control cells, SMAR1 expression was increased at all the low n6/n3 ratios, the increase being more significant at the 1:4, 1:5 and 1:10 ratios (Fig 6E). However, high n6/n3 FAs, particularly the 5:1 and 10:1 ratios, decreased the expression of SMAR1 significantly (Fig 6F). Surprisingly, the 1:1 and 1:2.5 low n6/n3 ratios showed an increase in the expression of Cux/CDP, which then decreased at the 1:4, 1:5 and 1:10 n6/n3 ratios (Fig 6E). On the contrary, high n6/n3 increased the expression of Cux/CDP more significantly at the 4:1, 5:1 and 10:1 ratios (Fig 6F).
Differential ratios of n6/n3 regulate p21WAF1/CIP1 expression in breast cancer cells

It was found that there was a significant increase in p21WAF1/CIP1 expression in MDA-MB-231 (Fig 7A) and MCF7 (Fig 7C) cells treated with low n6/n3 FAs, the increase being ratio dependent in MDA-MB-231 (Fig 7B; Fig 7D for MCF7). Thus, activation of SMAR1 by n3 fatty acids led to the activation of p21WAF1/CIP1, which could regulate cell growth. Both cell lines showed efficient incorporation of the respective fatty acids (EPA, DHA and AA), since supplementation of each fatty acid increased its own level in MDA-MB-231 and MCF7 cells (Fig 8). In MDA-MB-231, compared to the untreated control cells, there was an increase in EPA and DHA content at the low n6/n3 FA ratios, which decreased with increasing ratios (Fig 8A). Interestingly, there was a reduction of AA in the presence of low n6/n3 FAs, which increased at the higher ratios (Fig 8A) compared to both the untreated control cells and the 1/1 ratio of n6/n3.
Similarly, in MCF7 cells, compared to the untreated control cells, there was an increase in EPA and DHA content at low n6/n3 ratios, with a concomitant decrease at the higher ratios (Fig 8B). An interesting observation was that MCF7 cells showed high AA at all the FA ratios compared to the untreated control cells (Fig 8B), as reported earlier [44]. However, compared to the 1/1 ratio, with lower n6/n3 ratios there was a decrease in AA, and a non-significant increase at the higher ratios (Fig 8B). Nevertheless, compared to the untreated control cells, there was a significant increase in AA levels.
We further analyzed the effect of the different ratios on the total n6/n3 FA and EPA+DHA/AA ratios in both MDA-MB-231 and MCF7 (Fig 9). In MDA-MB-231, the untreated control cells as well as those treated with the 1:1 ratio of n6/n3 showed a high content of total n6/n3 FA (Fig 9A). However, when cells were treated with low ratios of n6/n3, there was a significant decrease in total n6/n3, which remained almost constant at the 1:2.5, 1:4 and 1:5 ratios, with a further decrease at the 1:10 ratio (Fig 9A). In MCF7, the untreated control cells didn't have a high total n6/n3 (Fig 9B). However, at the 1:1 ratio, there was a significant increase in total n6/n3, which decreased with lower ratios of n6/n3 (Fig 9B). At higher n6/n3 ratios, there was an increase in total n6/n3 in both MDA-MB-231 and MCF7 (Fig 9A and 9B).
In MDA-MB-231, EPA+DHA/AA was found to increase at low n6/n3 compared to the untreated control cells or the 1:1 ratio, the increase being greatest at 1:10 (Fig 9A). In MCF7, EPA+DHA/AA was found to increase compared to the 1:1 ratio. Interestingly, the untreated control cells had a higher EPA+DHA/AA that decreased at the 1:1 ratio, but it returned to its original level at the 1:5 ratio of low n6/n3 and increased significantly at the 1:10 ratio (Fig 9B). At high n6/n3, EPA+DHA/AA decreased significantly compared to the untreated control and 1:1 ratios (Fig 9A and 9B).
Among all the ratios tested, 1:10 ratio of low n6/n3 showed significant increase in EPA and EPA+DHA/AA compared to untreated control and 1:1 ratio in both the breast cancer cell lines. On the other hand, compared to 1:1 ratio of low n6/n3, AA and total n6/n3 were significantly decreased at 1:10 ratio.
Discussion
Various sources of information suggest that human beings have evolved on a diet with a ratio of omega-6 to omega-3 essential fatty acids (EFA) of 1 [32]. However, nowadays the ratio has drastically increased due to a change in dietary pattern, with a high propensity towards a westernized diet that has an n6/n3 ratio of 15/1-16.7/1 [45,46]. Thus, in the present study, for the first time, high and low ratios of n6/n3 FAs were used to mimic the diets of different populations [32,47] in vitro, to evaluate their effect on cellular responses. The equal ratio (1:1) of n6/n3 was chosen to mimic the ancestral diet; the 4:1 and 5:1 ratios were chosen to mimic the Japanese and Indian rural diets, respectively; and 10:1 was chosen to mimic the US, UK and European dietary pattern, which shows an excessive intake (between 15 and 16) of n6 FAs [32,47]. The 1:2.5 ratio of n6/n3 FAs was chosen based upon previous research in animal models [35,36,48,49]. Moreover, our common diet generally consists of varying levels of n3 and n6 fatty acids, and thus it becomes essential to systematically analyze the effects of varying ratios of these fatty acids on the growth regulation of breast cancer cells. In the current study, we found that low ratios of n6/n3 fatty acids preferentially killed the breast cancer cells, modulated their lipid peroxidation and critically controlled the expression of tumor regulatory MARBPs.
Low n6/n3 FAs were preferentially cytotoxic to the breast cancer cells and not to the non-cancerous cells. Contrarily, higher ratios affected the viability of all the cell lines, similar to chemotherapeutic drugs, which cannot differentiate between normal and cancerous cells. Various studies have reported that PUFAs such as AA, GLA, EPA and DHA are differentially metabolized by normal and tumor cells as well as by drug-sensitive and drug-resistant cells [50][51][52][53][54]. This explains how different ratios of n6/n3 exhibited differential effects on the cell viability of different cell lines. Normal cells metabolize PUFAs to produce cytoprotective lipids such as lipoxins, resolvins and protectins, whereas cancerous cells generate toxic hydroperoxy fatty acids [50,51]. Earlier studies have reported unesterified arachidonic acid as a signal for the induction of apoptosis in cells. Exogenous AA has been shown to induce apoptosis in colon cancer and other cell lines including HEK 293 [55]. This could be one of the reasons why high n6/n3 ratios could affect the viability and proliferation of all the cell types compared to the lower ratios.
We have previously reported that ALA increased lipid peroxidation in both the breast and cervical cancer cell lines with a simultaneous decrease in cell proliferation [8]. A similar trend was observed in the breast cancer cell types treated with different n6/n3 ratios. The decrease in cell proliferation was found to positively correlate with the increase in lipid peroxidation levels. AA has been reported to generate more thiobarbituric acid-reactive material (TBARM) [56] thereby inducing more cytotoxicity compared to DHA. Interestingly, MCF7 cells (ER/PR positive), showed less lipid peroxidation compared to MDA-MB-231 (ER/PR negative), in response to fatty acid treatment. The reason could be that estrogen is known to increase the resistance towards ROS generation [57,58]. Recently, it was shown that supplementation of 1/1 ratio of fish/corn oil in rats, induced with colon cancer, decreased ROS, thioredoxin reductase (TrxR) and apoptosis with a concomitant increase in the antioxidant activity [48] in the initiation phase. The absence of LPO in cells treated with 1/1 ratio of n6/n3 FAs, observed in the current report, could be probably due to generation of antioxidants that neutralized the free radicals. The cytotoxic action of omega 3 fatty acids seems to depend on their ability to increase free radical generation and lipid peroxidation [50,52,59] that damage a variety of enzymes, proteins and DNA, thereby, leading to cell death. Thus, a right balance between n6 and n3 fatty acids is highly important in regulating the tumor growth.
Both the breast cancer cell lines showed a significant increase in total n3 FAs with a corresponding decrease in total n6 FAs in low n6/n3 compared to higher ratios. The ratio of EPA +DHA/AA also increased in low n6/n3 FA and decreased in higher ratios in both the cell types. Interestingly, MCF7 showed higher percentage of AA compared to that of MDA-MB-231. Estrogens have been known to promote the metabolic conversion of PUFAs more rapidly and thus synthesis of AA and DHA from their precursors may be enhanced through an ER-dependent pathway [60,61]. Moreover, treatment of the cells with n3 PUFAs decreased the levels of AA more in MDA-MB-231 than in MCF7, which has been reported earlier [44]. High fat diet (primarily in the form of n6 PUFAs) has been known to be one of the factors that elevates estrogen (E2) levels and its circulating levels during pregnancy has been reported to increase the risk of developing breast cancer [62].
In the present report, we are for the first time showing that omega fatty acids modulate the expression of MARBPs such as SMAR1 and Cux/CDP. MARBPs mediate the attachment of MARs, AT-rich DNA sequences, to the nuclear matrix (NM), resulting in the organization of genomic DNA into topologically independent loop domains that are implicated in transcription, replication, repair, recombination, demethylation and chromatin accessibility [63,64]. Changes in chromatin structure could modulate gene expression, resulting in genomic instability that may lead to cellular transformation and malignant outgrowth. Thus, MARBPs that control chromatin organization play a key role in cancer progression. Aberrant expression of MARBPs such as PARP, CUTL1, HMG (I/Y), RUNX1-3, SATB1, SATB2, PcG (polycomb group of proteins), SAFB1/2 and SMAR1 has been implicated in several cancers [65][66][67]. SMAR1, a tumor suppressor MARBP, is down-regulated in the majority of cancers including breast cancer [64,68], as well as in MCF7 and MDA-MB-231 [69]. Cux/CDP/CUTL1/Cux-1 is another MARBP that is significantly increased in high-grade carcinomas, and its expression is inversely correlated with breast cancer survival [70]. Cux/CDP has been shown to enhance tumor cell invasion and migration, besides causing malignancies in several organs and cell types [70,71]. After treatment of the breast cancer cells with low n6/n3 ratios, there was an enhanced expression of SMAR1 with a simultaneous decrease in the expression of Cux/CDP. Interestingly, the endogenous level of Cux/CDP was quite high in the untreated control cells and was reduced after treatment with low n6/n3 ratios. The expression of SMAR1 has been reported to be inversely correlated with that of Cux/CDP in breast cancer [64]. It was interesting to note that MCF10A cells showed increased expression of SMAR1 when treated with low n6/n3 ratios and decreased expression in the presence of high ratios. Moreover, Cux/CDP was decreased in the presence of low n6/n3 ratios and increased at higher ratios. Surprisingly, the 1:1 and 1:2.5 low n6/n3 FA ratios increased the expression of Cux/CDP, which then decreased in cells treated with the 1:4, 1:5 and 1:10 n6/n3 ratios (Fig 6E). The up-regulated Cux/CDP expression could be explained by its role in the development of organs such as the brain, limb, lung and kidney, and in cell differentiation, adhesion, motility and invasiveness [72,73]. A recent report has shown that ectopic expression of CDP/Cux in MCF10A cells stimulated cell migration and invasion capacity compared to non-expressing cells [74]. These results suggest that if non-cancerous cells are exposed to higher concentrations of n6 fatty acids, they may tend to acquire a cancerous phenotype through deregulation of tumor marker proteins. Thus, regulation of MARBPs by omega 3 fatty acids could be a potential therapeutic target for the regulation of cancer growth.
It is known that SMAR1 activates p53 and induces p21 expression that in turn regulates G1 or G2 checkpoints of the cell cycle [63]. In this context, we found that in untreated breast cancer cells, SMAR1 and p21WAF1/CIP1 expressions were highly reduced compared to that of Cux/CDP. However, low n6/n3 FAs not only induced expression of SMAR1 but also activated cell cycle regulatory protein, p21WAF1/CIP1, which in turn regulated breast cancer growth. Contrarily, high n6/n3 ratios not only increased CDP/Cux but also decreased SMAR1 and p21WAF1/CIP1 expressions, thereby promoting breast cancer growth.
Our results show that differential ratios of n6/n3 fatty acids can significantly affect the growth kinetics of breast cancer cells and regulate chromatin modulatory proteins. Even though the 1:10 ratio of n6/n3 FAs shows the most promising results, the overall data indicate that even a minimal increase in n3 FAs in our diet could regulate the cellular machinery, resulting in improved health.
Conclusion
Varying ratios of n6 and n3 fatty acids in the diet can modulate the intrinsic signal transduction mechanisms that in turn regulate cell growth. Even though it is quite impossible to revert to our ancestral diet, efforts can be made to reduce the ratio of n6/n3 fatty acids in our diet. By modifying the fatty acid composition of the cells, many aspects of cancer cell metabolism could be regulated. Thus, the risk of cancer could be reduced by restricting the intake of n6 fatty acids in our diet.

S2 Fig. HaCaT and HEK293 cells were treated with different ratios of n6 (AA) and n3 (EPA+DHA) FAs for 24 h; the next day, the number of viable cells was counted using the trypan blue dye exclusion assay. Data are presented as mean±SEM of three independent experiments, each conducted in triplicate, with significance levels indicated relative to UC and to each of the other ratios. (TIF)

S3 Fig. HaCaT and HEK293 cells were treated with different n6 (AA) and n3 (EPA+DHA) ratios for 24 h; the next day, lipid peroxidation was analyzed using cis-parinaric acid and plotted as percentage fluorescence intensity (decreased cis-parinaric acid fluorescence is proportional to increased lipid peroxidation). Data are presented as mean±SEM of three independent experiments, each conducted in triplicate, with significance levels indicated relative to UC and to each of the other ratios. (TIF)
|
v3-fos-license
|
2018-06-21T13:03:54.736Z
|
2018-06-04T00:00:00.000
|
46935644
|
{
"extfieldsofstudy": [
"Medicine",
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.3758/s13428-018-1046-3.pdf",
"pdf_hash": "409c813e2aae520f6d46529b06e77291d8bdaedd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:619",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "52938c658ab468830da0ad00252ca62ebcc05e37",
"year": 2018
}
|
pes2o/s2orc
|
Testing moderator hypotheses in meta-analytic structural equation modeling using subgroup analysis
Meta-analytic structural equation modeling (MASEM) is a statistical technique to pool correlation matrices and test structural equation models on the pooled correlation matrix. In Stage 1 of MASEM, correlation matrices from independent studies are combined to obtain a pooled correlation matrix, using fixed- or random-effects analysis. In Stage 2, a structural model is fitted to the pooled correlation matrix. Researchers applying MASEM may have hypotheses about how certain model parameters will differ across subgroups of studies. These moderator hypotheses are often addressed using suboptimal methods. The aim of the current article is to provide guidance and examples on how to test hypotheses about group differences in specific model parameters in MASEM. We illustrate the procedure using both fixed- and random-effects subgroup analysis with two real datasets. In addition, we present a small simulation study to evaluate the effect of the number of studies per subgroup on convergence problems. All data and the R-scripts for the examples are provided online.
In the first stage of the analysis, the correlation matrices from the individual studies are combined into a pooled correlation matrix with a random- or fixed-effects model. In the second stage of the analysis, a structural equation model is fitted to this pooled correlation matrix. Several alternative models may be tested and compared in this stage. If all variables were measured on a common scale across studies, analysis of covariance matrices would also be possible (Cheung & Chan, 2009). This would allow researchers to study measurement invariance across studies. In this paper we focus on correlation matrices, although the techniques that are discussed are directly applicable to covariance matrices.
Researchers often have hypotheses about how certain parameters might differ across subgroups of studies (e.g., Rosenbusch, Rauch, and Bausch (2013)). However, there are currently no straightforward procedures to test these hypotheses in MASEM. The aims of the current article are therefore: 1) to provide guidance and examples on how to test hypotheses about group differences in specific model parameters in MASEM; 2) to discuss issues with regard to testing differences between subgroups based on pooled correlation matrices; and 3) to show how the subgroup models with equality constraints on some parameters can be fitted using the metaSEM (Cheung, 2015b) and OpenMx packages (Boker et al., 2014) in R (R Core Team, 2017).
Specifically, we propose a follow-up analysis in which the equality of structural parameters across studies can be tested. Assuming that there are hypotheses on categorical study-level variables, the equality of specific parameters can be tested across subgroups of studies. In this way, it is possible to find a model in which some parameters are equal across subgroups of studies and others are not. More importantly, it helps researchers to identify how study-level characteristics can be used to explain differences in parameter estimates.
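To make the two-stage workflow concrete, a minimal sketch with the metaSEM package is given below. It assumes a hypothetical list of correlation matrices (my_cor_list) and vector of sample sizes (my_n), and a simple one-factor model with four indicators; it is not the authors' script, and the full syntax for the examples in this article is provided in the online material.

```r
library(metaSEM)   # provides tssem1() and tssem2() (Cheung, 2015b)

## 'my_cor_list' (a list of correlation matrices) and 'my_n' (a vector of
## sample sizes) are placeholders for the user's own data.

## Stage 1: pool the correlation matrices under a random-effects model
stage1 <- tssem1(Cov = my_cor_list, n = my_n, method = "REM", RE.type = "Diag")
summary(stage1)

## Stage 2: one-factor model with four indicators in RAM notation.
## A holds the factor loadings (paths from the latent variable, column 5).
A <- create.mxMatrix(c(0, 0, 0, 0, "0.3*l1",
                       0, 0, 0, 0, "0.3*l2",
                       0, 0, 0, 0, "0.3*l3",
                       0, 0, 0, 0, "0.3*l4",
                       0, 0, 0, 0, 0),
                     type = "Full", nrow = 5, ncol = 5, byrow = TRUE, name = "A")
## S holds the residual variances and the factor variance (fixed at 1).
S <- create.mxMatrix(c("0.2*e1",
                       0, "0.2*e2",
                       0, 0, "0.2*e3",
                       0, 0, 0, "0.2*e4",
                       0, 0, 0, 0, 1),
                     type = "Symm", byrow = TRUE, name = "S")
## F selects the four observed variables.
Fmat <- create.Fmatrix(c(1, 1, 1, 1, 0), name = "Fmat")

stage2 <- tssem2(stage1, Amatrix = A, Smatrix = S, Fmatrix = Fmat,
                 diag.constraints = TRUE)
summary(stage2)
```

The same calls apply to a path model; only the A, S, and F matrices change.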
Methods to model heterogeneity in meta-analysis
With regard to how to handle heterogeneity in a meta-analysis, two dimensions (or approaches) can be distinguished (e.g., Borenstein, Hedges, Higgins, and Rothstein (2009)). The first dimension concerns whether to apply a fixed- or a random-effects model, while the second dimension is about whether or not to include study-level moderators. Two classes of models can be differentiated: the fixed-effects model and the random-effects model. The fixed-effects model allows conditional inference, meaning that the results are only relevant to the studies included in the meta-analysis. The random-effects model allows for unconditional inference to studies that could have been included in the meta-analysis, by assuming that the included studies are samples of a larger population of studies (Hedges & Vevea, 1998).
The fixed-effects model (without moderators) usually assumes that all studies share the same population effect size, while the fixed-effects model with moderators assumes that the effects are homogeneous after taking into account the influence of moderators. The random-effects model assumes that the differences across studies are random. The random-effects model with moderators, known as a mixed-effects model, assumes that there will still be random effects after the moderators are taken into account.
Methods to model heterogeneity in MASEM
The above framework from general meta-analysis is also applicable to MASEM. Table 1 gives an overview of the suitability, and the advantages and disadvantages, of using different combinations of fixed- versus random-effects MASEM, with or without subgroups. Case 1 represents an overall analysis with a fixed-effects model. Fixed-effects models are very restrictive (i.e., the number of parameters to be estimated is relatively small), which makes them easy to apply. However, homogeneity of correlation matrices across studies may not be realistic, leading to biased significance tests (Hafdahl, 2008; Zhang, 2011).
One way to account for heterogeneity is by estimating between-study heterogeneity across all studies in the random-effects approach (Case 2 in Table 1). By using a random-effects model, the between-study heterogeneity is accounted for at Stage 1 of the analysis (pooling correlations), and the Stage 2 model (the actual structural model of interest) is fitted on the averaged correlation matrix. Under the random-effects model, study-level variability is considered a nuisance. An overall random-effects analysis may be the preferred choice when moderation of the effects by study-level characteristics is not of substantive interest (Cheung & Cheung, 2016).
Subgroup analysis is more appropriate than an overall random-effects analysis in cases where it is of interest to determine how the structural models differ across levels of a categorical study-level variable (Cases 3 and 4 in Table 1). In a subgroup analysis, the structural model is fitted separately to groups of studies. Within the subgroups, one may use random- or fixed-effects modeling (Jak, 2015). Fixed-effects subgroup analysis is suitable if homogeneity of correlations within the subgroups is realistic. Most often, however, heterogeneity within subgroups of studies is still expected, and fixed-effects modeling may be unrealistic. In such cases, random-effects subgroup analysis may be the best choice. A possible problem with a random-effects subgroup analysis is that the number of studies within each subgroup may become too small for reliable results to be obtained.
We focus on the situation in which researchers have an a priori idea of which study-level characteristics may moderate effects in the Stage 2 model. That is, we do not consider exploratory approaches, such as using cluster analysis to find homogeneous subgroups of studies (Cheung & Chan, 2005a).
Besides the random-effects model and subgroup analysis, Cheung and Cheung (2016) discuss an alternative approach to addressing heterogeneity in MASEM, called "parameter-based MASEM". Since this approach also has its limitations, and discussing them is beyond the scope of the current work, we refer readers to their study for more details. We focus on TSSEM, in which subgroup analysis is the only option to evaluate moderator effects.
Currently used methods to test hypotheses about heterogeneity in MASEM
Table 1 Fixed- versus random-effects subgroup analysis: suitability, advantages, and disadvantages

Fixed-effects subgroup analysis. Use if: there is a specific hypothesis about subgroups, and homogeneity within subgroups is realistic. Advantages: 1) small number of parameters; 2) sometimes the only option (e.g., with a small number of studies); 3) possibility to test subgroup differences in parameters. Disadvantages: 1) only allows for conditional inference; 2) need to dichotomize a continuous moderator; 3) biased parameter estimates if homogeneity does not hold.

Random-effects subgroup analysis. Use if: there is a specific hypothesis about subgroups, and homogeneity within subgroups is not realistic. Advantages: 1) accounts for additional heterogeneity within subgroups; 2) allows for unconditional inference; 3) possibility to test subgroup differences in parameters. Disadvantages: 1) large number of parameters (larger than without subgroups); 2) need to dichotomize a continuous moderator; 3) number of studies per subgroup might become too small.

A disadvantage of the way subgroup analysis is commonly applied is that all Stage 2 parameters are allowed to be different across subgroups, regardless of expectations about differences in specific parameters. That is, differences in parameter estimates across groups are seldom tested in the structural model. For example, Rosenbusch et al. (2013) performed a MASEM analysis on data from 83 studies, testing a model in which the influence of the external environment of firms on performance levels is mediated by the entrepreneurial orientation of the firm. They split the data into a group of studies based on small sized firms and medium-to-large sized firms, to investigate whether the regression parameters in the path model are moderated by firm size. However, after fitting the path model to the pooled correlation matrices in the two subgroups, they compared the results without using any statistical tests. Gerow et al. (2013) hypothesized that the influence of intrinsic motivation on individuals' interaction with information technology was greater when the technology was to be used for hedonistic applications than for practical applications. They fitted the structural model to a subgroup of studies with hedonistic applications, a subgroup of studies with practical applications, and a subgroup of studies with a mix of applications. However, to test for differences between the subgroups, they performed t-tests on the four pooled Stage 1 correlation coefficients in the subgroups, ignoring the estimates in the actual path models altogether. These approaches are not ideal because researchers cannot test whether some of the parameters, those that may be of theoretical interest, are significantly different across groups.
More often than using subgroup analysis, researchers address the moderation of effect sizes using standard meta-analysis techniques on individual effect sizes, before they conduct the MASEM analysis. They use techniques such as meta-regression or ANOVA-type analyses (Lipsey & Wilson, 2001). Independent of the moderation effects, the MASEM is then performed using the full set of studies. Examples of this practice can be found in Drees and Heugens (2013), Earnest, Allen, and Landis (2011) and Jiang, Liu, Mckay, Lee, and Mitchell (2012). A disadvantage of this approach is that moderation is tested on the correlation coefficients, and not on specific parameters in a structural equation model. Most often, this is not in line with the hypothesis of interest. For example, the moderator hypotheses of Gerow et al. (2013) were about the direct effects in the path model but not about covariances and variances. Although subgroup analysis to test heterogeneity has previously been conducted (see Haus et al. 2013), we think that instructions regarding the procedures are needed because most researchers who apply MASEM still choose to address issues of moderation outside the context of MASEM.
Overview of this article
In the next sections, we briefly introduce fixed-and randomeffects TSSEM and propose a follow-up analysis to address heterogeneity using subgroup analysis. We discuss some issues related to testing the equality of parameters using pooled correlation matrices. Next, we illustrate the procedure using an example of testing the equality of factor loadings across study-level variables of the Hospital Anxiety and Depression Scale (HADS) with data from Norton, Cosco, Doyle, Done, and Sacker (2013) as well as with an example of testing moderation by socio-economic status (SES) in a path model linking teacher-child relations to engagement and achievement (Roorda, Koomen, Spilt, & Oort, 2011). To facilitate the use of the proposed procedure, detailed reports of the analyses, including data and R-scripts, are provided online at www.suzannejak.nl/masem code. Finally, we present a small simulation study to evaluate the effect of the number of studies included in a MASEM analysis on the frequency of estimation problems.
Fixed-effects TSSEM
The fixed-effects TSSEM approach was proposed by Cheung and Chan (2005b). They performed a simulation study comparing the fixed-effects TSSEM approach to two univariate approaches (Hunter & Schmidt, 2015; Hedges & Olkin, 1985) and the multivariate GLS approach (Becker, 1992, 1995). They found that the TSSEM approach showed the best results with respect to parameter accuracy and false positive rates of rejecting homogeneity.
Stage 1
In fixed-effects TSSEM, the correlation matrices in the individual studies are assumed to be homogeneous across studies, all being estimates of one common population correlation matrix. Differences between the correlation matrices in different studies are assumed to be solely the result of sampling error. The model that is fitted at Stage 1 is a multigroup model in which all correlation coefficients are assumed to be equal across studies. Fitting this model to the observed correlation matrices in the studies leads to an estimate of the population correlation matrix P_F, which is correctly estimated if homogeneity indeed holds.
Stage 2
In Stage 2 of the analysis, weighted least squares (WLS) estimation (Browne, 1984) is used to fit a structural equation model to the estimated common correlation matrix from Stage 1. The proposed weight matrix in WLS estimation is the inverse of the asymptotic variance-covariance matrix of the Stage 1 estimates of P_F, i.e., W_F = V_F^(-1) (Cheung & Chan, 2005b). These weights ensure that correlation coefficients that are based on more information (on more studies and/or studies with larger sample sizes) get more weight in the estimation of the Stage 2 parameters. The Stage 2 analysis leads to estimates of the model parameters and a χ2 measure of fit.
Random-effects TSSEM

Stage 1
In random-effects TSSEM, the population effect sizes are allowed to differ across studies. The between-study variability is taken into account in the Stage 1 analysis. Estimates of the means and the covariance matrices in random-effects TSSEM are obtained by fixing the sampling covariance matrices to their known values (through definition variables; see Cheung (2015a)) and using full information maximum likelihood to estimate the vector of means, P_R, and the between-studies covariances, T^2 (Cheung, 2014).
Stage 2
Fitting the Stage 2 model in the random-effects approach is not very different from fitting the Stage 2 model in the fixed-effects approach. The values in W_R from a random-effects analysis are usually larger than those obtained from a fixed-effects analysis, because the between-studies covariance is added to the construction of the weight matrix. This results in relatively more weight being given to smaller studies, and larger standard errors and confidence intervals, than with the fixed-effects approach.
Using subgroup analysis to test parameter heterogeneity
The basic procedure for subgroup analysis comprises separate Stage 1 analyses for the subgroups. The Stage 1 analyses may be in the fixed-effects framework, hypothesizing homogeneity within subgroups, or in the random-effects framework, assuming that there is still substantive betweenstudy heterogeneity within the subgroups. In a subgroup MASEM analysis, it is straightforward to equate certain parameters across groups at Stage 1 or Stage 2 of the analysis. The differences in the parameters across groups can be tested using a likelihood ratio test by comparing the fit of a model with across-groups equality constraints on certain parameters with a model in which the parameters are freely estimated across groups.
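As a sketch of this procedure, and again assuming the placeholder objects and Stage 2 matrices from the earlier example plus a hypothetical study-level factor called moderator, the subgroup analysis amounts to running both stages separately per subgroup:

```r
library(metaSEM)

## 'my_cor_list', 'my_n', and the study-level factor 'moderator' are placeholders;
## A, S, and Fmat are the Stage 2 matrices defined in the earlier sketch.
grp <- split(seq_along(my_cor_list), moderator)   # study indices per subgroup

stage1_by_group <- lapply(grp, function(idx)
  tssem1(Cov = my_cor_list[idx], n = my_n[idx], method = "REM", RE.type = "Diag"))

## The same Stage 2 model fitted in each subgroup; here all parameters are
## freely estimated across subgroups.
stage2_by_group <- lapply(stage1_by_group, function(s1)
  tssem2(s1, Amatrix = A, Smatrix = S, Fmatrix = Fmat, diag.constraints = TRUE))

lapply(stage2_by_group, summary)
```

Imposing across-group equality constraints on selected parameters requires combining the subgroup models into a single multigroup model (for example via OpenMx's mxModel and mxFitFunctionMultigroup), as the R-scripts provided online illustrate; the constrained and unconstrained models can then be compared with the likelihood ratio test described above.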
Testing heterogeneity in Stage 1 parameters
Although we focus on testing differences in Stage 2 parameters, in some situations it may be interesting to test the equality of the pooled correlation matrices across subgroups. In order to test the hypothesis that the correlation matrices from a fixed-effects subgroup analysis, P_F, are equal across subgroups g, one could fit a model with the constraint P_F,g1 = P_F,g2. Under the null hypothesis of equal correlation matrices across groups, the difference in the -2 log-likelihoods of the models with and without this constraint asymptotically follows a chi-square distribution with degrees of freedom equal to the number of constrained correlation coefficients. Similarly, one could perform this test on the averaged correlation matrices from a random-effects Stage 1 analysis. With random-effects analysis, it may additionally be tested whether the subgroups differ in their heterogeneity covariance matrices T²_g. When the researcher's hypotheses are directly about Stage 2 parameters, one may skip testing the equality of the correlation matrices across subgroups. Testing the equality of the between-studies covariance matrices may still be useful to reduce the number of parameters to be estimated in a random-effects analysis. This issue is discussed further in the general discussion.
Testing heterogeneity in Stage 2 parameters
For ease of discussion, we suppose that there are two subgroups. Given the two Stage 1 pooled correlation matrices in the subgroups g, say, P_g, a structural model can be fitted to the two matrices. For example, one could fit a factor model in both groups:

P_g = Λ_g Φ_g Λ_g^T + Θ_g,

where, with p observed variables and k common factors, Λ_g is a full p by k matrix with factor loadings in group g, Φ_g is a k by k symmetrical matrix with factor variances and covariances in group g, and Θ_g is a p by p symmetrical matrix with residual (co)variances in group g. The covariance structure is identified by setting diag(Φ_g) = I. Since the input is a correlation matrix, the constraint diag(P_g) = I is required to ensure that the diagonals of P_g are always ones during estimation.
In order to test the equality of factor loadings across groups, a model can be fitted in which Λ_g1 = Λ_g2. Under the null hypothesis of equal factor loadings, the difference in chi-squares between the model with Λ_g1 = Λ_g2 and the model in which Λ_g1 and Λ_g2 are freely estimated asymptotically follows a chi-square distribution with degrees of freedom equal to the difference in the number of freely estimated parameters. If the difference in chi-squares is considered significant, the null hypothesis of equal factor loadings is rejected.
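The chi-square difference test itself is straightforward to compute once the two nested models have been fitted. The following Python sketch shows the calculation with hypothetical fit statistics (the constrained and free chi-squares and degrees of freedom are invented for illustration).

```python
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_free, df_free, alpha=0.05):
    """Likelihood ratio / chi-square difference test for nested models."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p, p < alpha  # last entry: reject equality?

# Hypothetical fit: free loadings vs. loadings equated across two subgroups
print(chi_square_difference(chisq_constrained=250.0, df_constrained=140,
                            chisq_free=210.0, df_free=126))
```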
The approach of creating subgroups with similar study characteristics and equating parameters across groups is suitable for any structural equation model. For example, in a path model, it may be hypothesized that some or all direct effects are different across subgroups of studies, but variances and residual variances are not. One could then compare a model with equal regression coefficients with a model with freely estimated regression coefficients to test the hypothesis. Also, the subgroups approach can be applied using fixed-effects or random-effects analyses.
Issues related to testing equality constraints based on correlation matrices in TSSEM
Structural equation models are ideally fitted on covariance matrices. In MASEM, and meta-analysis in general, it is very common to synthesize correlation coefficients. One reason for the synthesis of standardized effect sizes is that different studies may use different instruments with different scales to operationalize the variables of interest. The analysis of correlation matrices does not pose problems when the necessary constraints are included (Bentler & Savalei, 2010;Cheung, 2015a). However, it should be taken into account that fitting models to correlation matrices with TSSEM implies that all parameter estimates are in a standardized metric (assuming that all latent variables are scaled to have unit variances, which is recommended in TSSEM (Cheung, 2015a)).
When we compare models across subgroups in TSSEM, we are thus comparing parameter estimates that are standardized with respect to the observed and latent variables within the subgroups (Cheung, 2015a; Steiger, 2002). This is not necessarily a problem; sometimes it is even desirable to compare standardized coefficients (see Kwan and Chan (2011)). For example, van den Boer, van Bergen, and de Jong (2014) tested the equality of correlations between three reading tasks across an oral and a silent reading group. However, it is important to be aware of this issue and to interpret the results correctly. Suppose that a standardized regression coefficient of variable y on variable x, β*_yx, is compared across two subgroups of studies, g1 and g2. The standardized direct effects in the subgroups are given by

β*_yx,g1 = β_yx,g1 (σ_x,g1 / σ_y,g1) and β*_yx,g2 = β_yx,g2 (σ_x,g2 / σ_y,g2),

where β represents an unstandardized regression coefficient, β* represents a standardized regression coefficient, and σ represents a standard deviation. In the special case that the standard deviations of x and y are equal within subgroups, in each subgroup the standardized coefficient is equal to the unstandardized coefficient, and the test of H0: β*_yx,g1 = β*_yx,g2 is equal to the test of H0: β_yx,g1 = β_yx,g2. In fact, this not only holds when the standard deviations of the variables are equal in the subgroups, but in general when the ratio of σ_x over σ_y is equal across subgroups. For example, when σ_x and σ_y in group 1 are respectively 2 and 4, and σ_x and σ_y in group 2 are respectively 1 and 2, the standardized regression coefficient equals the unstandardized coefficient times .5 in both groups. In this case, a test of the equality of the standardized regression coefficients will lead to the same conclusion as a test of the unstandardized regression coefficients.
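The numeric case above can be checked directly. In the short Python sketch below, the unstandardized coefficient of 0.30 is a hypothetical value; the standard deviations are the ones used in the example (2 and 4 in group 1, 1 and 2 in group 2), so the standardized coefficient is 0.5 times the unstandardized one in both groups.

```python
# Standardization of a regression coefficient: beta_std = beta_unstd * sd_x / sd_y
def standardize(beta_unstd, sd_x, sd_y):
    return beta_unstd * sd_x / sd_y

beta_g1 = beta_g2 = 0.30                  # hypothetical equal unstandardized effects
print(standardize(beta_g1, sd_x=2, sd_y=4))   # 0.15
print(standardize(beta_g2, sd_x=1, sd_y=2))   # 0.15 -> equality holds on both scales
```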
However, in most cases the ratio of standard deviations will not be exactly equal across groups. Therefore, when testing the equality of regression coefficients in a path model, one has to realize that all parameters are in a standardized metric. The conclusions may not be generalizable to unstandardized coefficients. Whether the standardized or the unstandardized regression coefficients are more relevant depends on the research questions (Bentler, 2007). In the context of meta-analysis, standardized coefficients are generally preferred (Cheung, 2009;Hunter & Hamilton, 2002).
In a factor analytic model, several methods of standardization exist. Parameter estimates may be standardized with respect to the observed variables only, or with respect to the observed variables and common factors. In MASEM, it is recommended that the common factors be identified by fixing their variances to 1 (Cheung, 2015a). All results obtained from a MASEM analysis on correlation matrices are thus standardized with respect to the observed variables and the common factors. As a consequence of this standardization, the residual variances in Θ are effectively not free parameters, but the remainder diag(I) - diag(Λ Φ Λ^T) (Cheung, 2015a).
Similar to path analysis, when testing the equality of factor loadings across subgroups in MASEM, the results may not be generalizable to unstandardized factor loadings, due to across-group differences in the (unknown) variances of the indicators and common factors. Moreover, if all standardized factor loadings are set to be equal across groups, this implies that all standardized residual variances are equal across groups. Note that although one may be inclined to denote a test of the equality of factor loadings a test of weak factorial invariance (Meredith, 1993), this would strictly be incorrect, as weak factorial invariance pertains to the equality of unstandardized factor loadings.
Examples
In this section, we present two examples of the testing of moderator hypotheses in MASEM using subgroup analysis. Example 1 illustrates the testing of the equality of factor loadings using factor analysis under the fixed-effects model (Case 1 and 3 from Table 1). Example 2 illustrates the testing of the moderation of direct effects using path analysis under the random-effects model (Case 2 and 4 from Table 1). The R-syntax for the examples can be found online (http://www.suzannejak.nl/masem code).
Introduction
The HADS was designed to measure psychological distress in non-psychiatric patient populations (Zigmond & Snaith, 1983), and is widely used in research on distress in patients. The instrument consists of 14 items: the odd-numbered items are designed to measure anxiety and the even-numbered items are designed to measure depression. The items are scored on a 4-point scale. Some controversy exists regarding the validity of the HADS (Zakrzewska, 2012). The HADS has generally been found to be a useful instrument for screening purposes, but not for diagnostic purposes (Mitchell, Meader, & Symonds, 2010). Ambiguous results regarding the factor structure of the HADS led to a meta-analytic study by Norton et al. (2013), who gathered correlation matrices of the 14 HADS items from 28 published studies. Using meta-analytic confirmatory factor analysis, they found that a bi-factor model that included all items loading onto a general distress factor and two orthogonal anxiety and depression factors provided the best fit to the pooled data. Of the 28 studies evaluated by Norton et al., 10 considered non-patient samples and 18 were based on patient samples. As an illustration we will test the equality of factor loadings across studies based on patient and non-patient samples.
Analysis
All models were fitted using the metaSEM and OpenMx packages in the R statistical platform. First, we fitted the Stage 1 and Stage 2 models with a fixed-effects model to the total set of studies (illustrating Case 1 from Table 1). The Stage 1 analysis using the fixed-effects model involved fitting a model to the 28 correlation matrices in which all correlation coefficients were restricted to be equal across studies. Misfit of this model would indicate inequality of the correlation coefficients across studies. Stage 2 involved fitting the bi-factor model that Norton et al. (2013) found to have the best fit to the data (see Fig. 1).
Next, two subgroups of studies were created, one group with the 10 non-patient samples and the other with the 18 patient samples (illustrating Case 3 from Table 1). First, the Stage 1 analyses were performed in the two groups separately, leading to two pooled correlation matrices. Then, the factor model without equality constraints across subgroups was fitted to the data. Next, three models in which the factor loadings of the general distress factor, anxiety factor and depression factor respectively were constrained to be equal across patient and non-patient samples were tested. If the equality constraints on the factor loadings led to a significantly higher chi-square statistic, the (standardized) factor loadings would be considered to differ across groups.
Exact fit of a proposed model is rejected if the χ 2 statistic is found to be significant. Exact fit will rarely hold in MASEM, due to the large total sample size. Therefore, as in standard SEM, it is common to use approximate fit to assess the fit of models. Approximate close fit is associated with RMSEA-values under .05, satisfactory approximate fit with RMSEA-values under .08, and bad approximate fit is associated with RMSEA-values larger than .10 (MacCallum, Browne, and Sugawara, 1996). In addition to the RMSEA, we will evaluate the CFI (Bentler, 1990) and the standardized root mean squared residual (SRMR). CFI-values above .95 and SRMR-values under .08 are considered satisfactory (Hu & Bentler, 1999). For more information about the calculation and use of fit-indices in SEM we refer to Schermelleh-Engel et al. (2003).
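For readers unfamiliar with these indices, the Python sketch below shows the standard single-group formulas for the RMSEA and CFI. These are textbook formulas rather than a description of the exact computations in metaSEM/OpenMx (multigroup Stage 1 models, for example, involve an additional scaling by the number of groups), and the total sample size used in the example call is an illustrative value only.

```python
import math

def rmsea(chisq, df, n_total):
    """Single-group RMSEA: sqrt(max(chisq - df, 0) / (df * (n_total - 1)))."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n_total - 1)))

def cfi(chisq, df, chisq_baseline, df_baseline):
    """CFI based on the target model and the independence (baseline) model."""
    num = max(chisq - df, 0.0)
    den = max(chisq - df, chisq_baseline - df_baseline, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical call with an illustrative total N close to the HADS meta-analysis
print(round(rmsea(chisq=2101.48, df=63, n_total=21277), 3))   # ~0.039
```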
Overall Stage 1: Testing homogeneity and pooling correlation matrices
The Stage 1 model did not have exact fit to the data, χ²(2,457) = 10,400.04, p < .01. Approximate fit was acceptable according to the RMSEA (.064, 95% CI: [.063; .066]), but not according to the CFI (.914) and SRMR (.098). Based on the CFI and SRMR, one should not continue to fit the structural model, or one should use random-effects modeling. However, in order to illustrate the modeling involved in Case 1, we will continue with Stage 2 using overall fixed-effects analysis. Table 2 shows the pooled correlation matrix based on the fixed-effects Stage 1 analysis.

Fig. 1 The bi-factor model on the HADS items

Overall Stage 2: Fitting a factor model to the pooled correlation matrix

Norton et al. (2013) concluded that a bi-factor model showed the best fit to the data. We replicated the analyses and found that, indeed, the model fit is acceptable according to the RMSEA (χ²(63) = 2,101.48, RMSEA = .039, 95% CI RMSEA: [.037; .040], CFI = .953, SRMR = .033). The parameter estimates from this model can be found in Table 3 (note: est = parameter estimate, lb = lower bound, ub = upper bound; General, Anxiety, and Depression refer to the factor loadings associated with these factors; θ refers to residual variance). All items loaded substantially on the general factor, and most items had smaller loadings on the specific factors. Contrary to expectations, Item 7 has a negative loading on the anxiety factor.

Subgroup Stage 1: Testing homogeneity and pooling correlation matrices within subgroups

Although the model with a common correlation matrix does not have acceptable fit in the patient group, indicating that not all heterogeneity is explained by differentiating patient and non-patient samples, we continue with the Stage 2 analysis as an illustration of the procedure when the interest is in Case 3 (see Table 1).
Subgroup Stage 2: Testing equality of factor loadings
The fit of the models with freely estimated factor loadings and with equality constraints on particular sets of factor loadings can be found in Table 4 (note: Δdf and Δχ² refer to the difference in df and χ² in comparison with Model 1). The RMSEAs of all models indicated close approximate fit. However, the χ²-difference tests show that the factor loadings cannot be considered equal for any of the three factors. Figure 2 shows a plot of the standardized factor loadings and their 95% confidence intervals in the patient group (red) and the non-patient group (grey); the absolute value of the loading of Item 7 on the Anxiety factor is shown. For the majority of the items, the factor loadings are higher in the patient group than in the non-patient group.
Discussion
We found that the factor loadings of the bi-factor model on the HADS differed across the studies involving patients versus studies involving non-patients. The items were generally found to be more indicative of general distress in the studies with patient samples than in the studies with non-patient samples. A possible reason for this finding is that the HADS was developed for use in hospital settings, and thus was designed for use with patients. In practice, researchers may continue with the analysis by testing the equality of individual factor loadings across subgroups. For example, the factor loading of Item 2 from the Depression factor seems to differ more across groups than the other factor loadings for this factor. Such follow-up analyses may give more insight into specific differences across subgroups. However, it is advisable to apply some correction on the significance level, such as a Bonferroni correction, when testing the equality of several parameters individually.
A problem with these data is that the HADS is scored on a 4-point scale, but the analysis was performed on Pearson product moment correlations, assuming continuous variables. This may have led to underestimated correlation coefficients. Moreover, it would have been informative to analyze covariance matrices rather than correlation matrices, enabling a test on weak factorial invariance. However, the standard deviations were not available for most of the included studies.
We used fixed-effects overall and subgroup analysis, although homogeneity of correlation matrices did not hold. Therefore, it would have been more appropriate to apply random-effects analysis. However, due to the relatively large number of variables and the small number of studies, a random-effects model did not converge to a solution. Even the most restrictive model with only a diagonal T 2 that was set to be equal across subgroups did not solve this problem.
The results that were obtained should thus be interpreted with caution, as the Type 1 errors may be inflated. The next example shows random-effects subgroup-analysis, which may be the appropriate framework in most cases.
Introduction
In this example we use random-effects subgroup analysis to test moderation by SES in a path model linking teacher-child relations to engagement and achievement. Children with low SES are often found to be at risk of failing in school and dropping out (Becker & Luthar, 2002). According to Hamre and Pianta (2001), children at risk of failing in school may have more to gain from an ability to adapt to the social environment of the classroom than children who are doing very well at school. Therefore, it can be expected that the effects of teacher-child relations may be stronger for children with lower SES. Roorda, Koomen, Spilt, and Oort (2011) performed a meta-analysis on correlation coefficients between measures of positive and negative teacher-student relations, engagement, and achievement. They used univariate moderator analysis, and found that all correlations were larger in absolute value for studies with relatively more students with low SES. In the current analysis, we will test the moderation of the specific effects in a path model. We will use 45 studies reported by Roorda et al. (2011) and Jak, Oort, Roorda, and Koomen (2013), which include information about the SES of the samples.
Analysis
First we will perform a random-effects Stage 1 and Stage 2 analysis on the total sample of studies (representing Case 2 from Table 1). Next, we split the studies into two subgroups based on SES (representing Case 4 from Table 1). We will fit the hypothesized path model (see Fig. 3) to a group of studies in which the majority of the respondents were indicated to have low SES (24 studies), and a group of studies for which the majority of the sample was indicated with high SES (21 studies). Note that SES is a continuous moderator variable in this case (percentages). We split the studies in two groups based on the criterion of 50% of the sample having low SES. Then, we test the equivalence of the direct effects across groups by constraining the effects to be equal across subgroups. Using a significance level of .05, if the χ 2 statistic increased significantly given the increased degrees of freedom when adding equality constraints across groups, one or more of the parameters would be considered significantly different across groups. Note that dichotomizing a continuous variable is generally not advised. In this example we dichotomize the moderator in order to illustrate subgroup analyses. Moreover, in TSSEM, the analysis of continuous moderator variables is not yet well developed.
Results
Overall Stage 1: Random-effects analysis

The pooled correlations based on the random-effects analysis can be found in Table 5. When a random-effects model is used, an I² value may be calculated for each correlation coefficient. It can be interpreted as the proportion of the total variance of an effect size that is due to between-study variance (Higgins & Thompson, 2002). The I² values (above the diagonal) show that there is substantial between-studies variability in the correlation coefficients, ranging from .79 to .94.
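One common way to express I² in this setting is as the between-study variance relative to the total of the between-study variance and a typical within-study sampling variance; the exact computation used by metaSEM may differ in detail. The Python sketch below uses hypothetical variance values chosen only to produce an I² in the range reported above.

```python
def i_squared(tau2, typical_v):
    """I^2 as the proportion of total variance due to between-study variance.
    tau2: between-study variance of an effect size;
    typical_v: typical within-study sampling variance."""
    return tau2 / (tau2 + typical_v)

# Hypothetical values: a correlation with tau2 = 0.02 and typical_v = 0.003
print(round(i_squared(0.02, 0.003), 2))   # ~0.87, i.e., ~87% between-study variance
```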
Overall Stage 2: Fitting a path model
We fitted a path model to the pooled Stage 1 correlation matrix, in which positive and negative relations predicted achievement indirectly, through engagement. Exact fit of this model was rejected (χ 2 (2) = 11.16, p <.05). However, the RMSEA of .013 (95% CI = [.006 ; .020]) indicated close approximate fit, as well as the CFI (.966) and SRMR (.045). Table 6 shows the parameter estimates and the associated 95% confidence intervals. All parameter estimates were considered significantly different from zero, as zero is not included in the 95% confidence intervals. The indirect effects of positive and negative relations on achievement were small, but significant. Although the model shows good fit on the averaged correlation matrix, this analysis provides no information about whether SES might explain the between-study heterogeneity. Subgroup analysis is used to test whether the parameters differ across studies with different levels of average SES.
Subgroup Stage 1: Random-effects analysis

Different pooled correlation matrices were estimated in the group of studies with low SES and the group of studies with high SES (see Tables 7 and 8). The proportions of between-studies variance (I²) within the subgroups are smaller than they were in the total sample, indicating that SES explains part of the between-study heterogeneity.
Subgroup Stage 2: Testing moderation of effects by SES
The hypothesized path model showed acceptable approximate fit, but not exact fit, in the low-SES group, χ²(2) = 6.28, p < .05, RMSEA = .013 (95% CI = [.002; .026]), CFI = .978, SRMR = .041, as well as in the high-SES group, χ²(2) = 9.50, p < .05, RMSEA = .015 (95% CI = [.006; .025]), CFI = .936, SRMR = .0549. The fit of the unconstrained baseline model, with which the fit of the models with equality constraints will be compared, is equal to the sum of the fit of the models in the two subgroups. Therefore, the χ² and df against which the constrained models will be tested are df = 2 + 2 = 4 and χ² = 6.28 + 9.50 = 15.78. Constraining the three direct effects in the path model to be equal across subgroups did not lead to a significant increase in misfit, Δχ²(3) = 5.18, p = .16. Therefore, the null hypothesis of equal direct effects across subgroups is not rejected.
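The arithmetic in this test can be reproduced directly from the numbers quoted above; the short Python sketch below only restates that calculation and confirms the reported p-value of about .16.

```python
from scipy.stats import chi2

# Unconstrained baseline: the sum of the two subgroup models
chisq_base = 6.28 + 9.50      # = 15.78
df_base = 2 + 2               # = 4

# Constraining the three direct effects adds 3 df and 5.18 chi-square points
d_chisq, d_df = 5.18, 3
print(round(chi2.sf(d_chisq, d_df), 2))   # ~0.16, matching the reported p-value
```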
Discussion

In this example we tested whether the direct effects in a path model linking teacher-child relations to engagement and achievement were moderated by SES. The subgroup analysis showed that the null hypothesis stating that the effects are equal in the low-SES and high-SES populations cannot be rejected. Note that non-rejection of a null hypothesis does not imply that the null hypothesis is true. It could also mean that our design did not have enough statistical power to detect an existing difference in the population.
Simulation study
It is often necessary to create subgroups of studies, because an overall analysis will mask differences in parameters across subgroups. For example, if the population regression coefficient is 0.20 for Subgroup 1, and 0.30 for Subgroup 2, an analysis of all of the studies together will result in an estimated regression coefficient of between 0.20 and 0.30. This means that the effect will be overestimated for Subgroup 1 and underestimated for Subgroup 2. Subgroup analysis will lead to better parameter estimates in the subgroups. However, creating subgroups may lead to small numbers of studies within each subgroup. In combination with having twice as many parameters to be estimated as with an overall analysis, small numbers of studies will likely result in estimation problems such as non-convergence. Convergence is an important issue, because researchers will be unable to present any meaningful results of the MASEM analysis without having a converged solution. In order to evaluate the effect of the number of studies within each subgroup on the frequency of estimation problems, we conducted a small simulation study.
Data generation and conditions
We generated data from two subgroups, in which one regression coefficient differed by .10 points across subgroups in the population. Next, we fitted the correct model to the two subgroups separately, as well as to the combined data. We expected that, due to the larger number of studies, the percentage of converged solutions would be larger for the overall analysis than for the subgroup analyses, and that the estimation bias in the manipulated effect would be smaller in the subgroup analysis (because the regression coefficient is allowed to be different in each subgroup). The data-generating model was based on the results from Example 2. The population values in Subgroup 1 were: β31 = .265, β32 = -.307, β43 = .288, and ψ31 = -.329. The between-studies variance used to generate random correlation matrices was based on Example 2. In Subgroup 2, all population values were identical to the values in Subgroup 1, except for β43, which was .388 (.10 larger than in Subgroup 1). We generated data with k = 22, k = 44, k = 66, or k = 88 studies per subgroup, with sample sizes of n = 200 for each study. For each condition we generated 2000 meta-analytic datasets.
In each condition we fitted the correct model to the two subgroups separately, as well as to the subgroups combined. We restricted the between-studies covariance matrices to be diagonal, in order to reduce the number of parameters to be estimated. In practice, this restriction is often applied (Becker, 2009). We evaluated the percentage of converged solutions, the relative bias in the estimate of β43, and the relative bias in the standard error of β43 across methods and conditions. The relative percentage of estimation bias for β43 was calculated as 100 × (the mean of the β43 estimates across replications - the population value of β43) / the population value of β43. We regarded estimation bias of less than 5% as acceptable (Hoogland & Boomsma, 1998). The relative percentage of bias in the standard error of β43 was calculated as 100 × (SE(β43) - SD(β43)) / SD(β43), where SE(β43) is the average standard error of the β43 estimates across replications, and SD(β43) is the standard deviation of the parameter estimates across replications. We considered the standard errors to be unbiased if the relative bias was smaller than 10% (Hoogland & Boomsma, 1998).
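The two bias measures defined above translate directly into a few lines of code. The Python sketch below implements them and applies them to simulated replication results (the estimates, standard errors, and the slight bias built into them are hypothetical and serve only to show the calculation).

```python
import numpy as np

def relative_bias(estimates, true_value):
    """100 * (mean estimate - true value) / true value."""
    return 100.0 * (np.mean(estimates) - true_value) / true_value

def relative_se_bias(std_errors, estimates):
    """100 * (mean SE - SD of estimates) / SD of estimates."""
    sd = np.std(estimates, ddof=1)
    return 100.0 * (np.mean(std_errors) - sd) / sd

# Hypothetical replication results for beta_43 with population value 0.288
rng = np.random.default_rng(1)
est = rng.normal(0.30, 0.05, size=200)    # slightly biased parameter estimates
se = rng.normal(0.05, 0.005, size=200)    # reported standard errors
print(relative_bias(est, 0.288), relative_se_bias(se, est))
```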
Convergence

Figure 4a shows the convergence rates for all conditions. As expected, the analysis of the total dataset resulted in more converged solutions than the subgroup analysis in all conditions. In addition, convergence rates increased with the number of studies. However, the convergence rates were generally low. For example, with 22 studies per subgroup (the condition similar to that of our Example 2), only 43% of the datasets led to a converged solution with the overall analysis, while only around 30% converged with the subgroup analysis. With small numbers of studies per subgroup (smaller than 44), most analyses are expected not to result in a converged solution.

Fig. 4 Convergence, parameter bias, and standard error bias for overall and subgroup analysis with a group difference of 0.10 in β43. Note: The results in panels B and C are based on only those replications that led to a converged solution for all three analyses. The numbers of replications used are 141, 188, 246, and 300 for k = 22, k = 44, k = 66, and k = 88, respectively.
Bias in parameter estimates
We evaluated the parameter bias in β43 only for the replications in which all three analyses converged.¹ The results are presented in Fig. 4b. The percentage of estimation bias was not related to the number of studies or to sample size. As expected, the overall analysis resulted in underestimation for Subgroup 1 and overestimation for Subgroup 2, while the subgroup analysis led to unbiased parameter estimates. Although the difference in the population value was only 0.10, the percentages of relative bias exceeded the cutoff of 5% in all conditions for the overall analysis. For parameters that did not differ across subgroups, all analyses yielded unbiased estimates.

¹ Consequently, the numbers of replications used to calculate the bias were 141, 188, 246, and 300 of the 2000 replications for k=22, k=44, k=66, and k=88, respectively. We have also calculated the bias using all converged solutions per method (resulting in larger, but different, numbers of replications being used for different analyses). This approach leads to very similar results, and identical conclusions.
Bias in standard errors
The relative bias in standard errors was around 10% in all conditions for the overall analysis. With the subgroup analysis, the standard error estimates were more accurate, with a bias of between roughly -5% and 5% in all conditions. The results are presented in Fig. 4c. The standard errors of the parameters that did not differ across subgroups were unbiased for all analyses.
Conclusion on the simulation study
The simulation study showed that convergence is a serious potential problem when applying random-effects MASEM. Moreover, the likelihood of non-convergence occurring increases with smaller numbers of studies, such as with a subgroup analysis. However, if the model converges, the subgroup analysis will lead to better parameter estimates and standard error estimates in cases where a difference in the population coefficient is present, even if the population difference is small. In order to increase the likelihood of obtaining a converged solution, it is recommended that as many studies as possible be included.
General discussion
We proposed subgroup analysis to test moderation hypotheses on specific parameters in MASEM. We illustrated the approach using TSSEM. The subgroup analysis method that was presented is not restricted to TSSEM. One could just as easily apply the subgroup analysis to pooled correlation matrices obtained with univariate approaches (Hunter & Schmidt, 2015; Hedges & Olkin, 1985) or the multivariate GLS approach (Becker, 1992, 1995). However, based on earlier research comparing these approaches (Cheung & Chan, 2005b; Jak & Cheung, 2017), univariate approaches are not recommended for MASEM. Creating subgroups of studies to test the equality of parameters across groups is a useful approach, but it may also lead to relatively small numbers of studies within each subgroup. Given the large number of parameters involved in random-effects modeling, the number of studies may become too small for a converged solution to be obtained, as was the case in our Example 1. One way to reduce the number of parameters is to estimate the between-study heterogeneity variances but not the covariances among the random effects, i.e., to restrict T² to be diagonal. In practice, this restriction is often needed (Becker, 2009). We applied this constraint to the two subgroups in the second example and in the simulation study.
In the simulation study, we found that even with a diagonal heterogeneity matrix, random-effects subgroup modeling is often not feasible due to convergence problems. In practice, researchers may therefore have no other option than to apply fixed-effects modeling instead of random-effects modeling. However, ignoring between-study heterogeneity is known to lead to inflated false positive rates for significance tests (Hafdahl, 2008; Zhang, 2011). Researchers should therefore be careful when interpreting the results of significance tests in cases where heterogeneity exists but a fixed-effects model is applied. Collecting more studies to be included in the meta-analysis is preferable over switching to a fixed-effects model.
A limitation of the subgroup analysis to test moderation is that the moderator variables have to be categorical. In the second example, we split the studies into two groups based on the percentage of respondents with high SES in the study. By dichotomizing this variable we throw away information and lose statistical power. Indeed, contrary to our findings, the univariate metaregression analyses reported by Roorda et al. showed significant moderation by SES. However, these analyses did not take into account the multivariate nature of the data, and tested the moderation of the correlation coefficients and not of the regression coefficients. Future research is needed to develop methods to include study-level variables as continuous covariates in TSSEM.
Concluding remarks
In the current paper we presented a framework to test hypotheses about subgroup differences in meta-analytic structural equation modeling. The metaSEM and OpenMx code and R functions used in the illustrations are provided online, so that researchers may easily adopt the proposed procedures to test moderator hypotheses in their MASEM analyses. The simulation study showed that increasing the number of studies in a random-effects subgroup analysis increases the likelihood of obtaining a converged solution.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons. org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
|
v3-fos-license
|
2020-12-31T09:02:41.251Z
|
2020-12-30T00:00:00.000
|
234379592
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journal.uinsgd.ac.id/index.php/ja/article/download/6461/pdf",
"pdf_hash": "32c9619a14e95ab11f3140f3ddf99f8b303b0572",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:620",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "88a0f3d4d733ddc902cc99d36953f6e12f10225a",
"year": 2020
}
|
pes2o/s2orc
|
WATER SAVING TECHNOLOGY PACKAGE TO IMPROVE SHALLOT PRODUCTIVITY FOR SMALLHOLDER FARMERS IN EASTERN INDONESIA PAKET TEKNOLOGI HEMAT AIR UNTUK MENINGKATKAN HASIL BAWANG MERAH BAGI PETANI KECIL DI INDONESIA TIMUR
Dryland use for shallot cultivation has great potential in West Nusa Tenggara (NTB) Province, Indonesia. However, its utilization faces various obstacles such as low soil fertility, limited water availability, and high pest and disease attacks. Currently, farmers apply flood and furrow irrigation methods for shallot cultivation in NTB Province, which may not be suitable on dryland, especially on coarse-textured soils. The purpose of this study was to obtain a package of water-saving technology to increase the productivity of shallots in the dryland of NTB. Three technology packages were tested as treatments in a Randomized Block Design: A (Trichoderma sp., bio-urine liquid fertilizer, sprinkler irrigation); B (bio-urine liquid fertilizer, furrow irrigation); and C (farmer practice), involving farmer group members from planning to evaluation of the technology packages being tested. The amount of water used was measured using a water meter. The results showed that package A achieved the highest shallot yield at 31.6 tons ha-1, which was 14% and 45% higher compared to packages B and C, respectively. Package A was also able to save irrigation water by 62.1% and 95.8% compared to packages B and C, respectively. Thus, sprinkler irrigation not only increases shallot yield but also saves irrigation water.
INTRODUCTION
West Nusa Tenggara (NTB) province is the third-largest national producer of shallots (Allium ascalonicum L.), with a harvest area of 11,518 ha. The largest shallot cultivation area is located in Bima District with 8,027 ha, followed by East Lombok District with 1,156 ha (Badan Pusat Statistik, 2015). Although the potential land for the development of shallot cultivation is huge, up to 118,241 ha, currently only 6.32% has been used (Nazam et al., 2012).
Extensification of shallots is increasing from year to year due to the surge in demand and the profitable selling price for farmers. The expansion of the shallot cultivation area mostly occupies dryland, which covers more than 80% of the total land of NTB (Badan Pusat Statistik, 2015). The development of shallot cultivation in dryland is a strategic policy to reduce poverty in NTB, because most of the poor live in dryland areas and their livelihoods rely on agricultural activities.
The main constraints on the development of shallot cultivation on dryland are water scarcity as well as pest and disease attacks. Furthermore, farmers usually irrigate shallots by furrow and flood irrigation methods, which wastes water and labour. Sprinkler irrigation is a relatively efficient method, because it saves water and allows irrigation water to be applied accurately according to plant needs (Li & Rao, 2003; Undang, 2004). However, the sprinkler irrigation system has not been studied in detail and demonstrated at the farmer level; farmers rarely use sprinklers to irrigate shallots. In addition, farmers do not irrigate shallot plants based on crop needs and the right timing, but irrigate every day to keep the soil moist. Thus, it is very necessary to study and demonstrate an effective and efficient irrigation system that is economically profitable, socially acceptable, and technically easy to apply.
Components of water-saving technology for shallot cultivation are sufficiently available from research elsewhere (Roy, 2014; Sumbayak & Susila, 2018; Vickers et al., 2015; Yenus, 2013). However, most of these components have not been assembled and combined into technology packages that are ready to be adopted by farmers at the field level. So far there is no location-specific technology package that considers the amount of irrigation water to be applied in shallot cultivation on dryland. With such a technology package, shallot productivity could be increased by at least 20%. The purpose of this study was to determine the productivity of shallot with a water-saving technology package in the dryland of NTB.
MATERIAL AND METHODS
The experiment was conducted from June to August 2018 at Labuhan Lombok Village, Pringgabaya sub-district, East Lombok District (-8.51157, 116.65475). The site has been regarded as a dryland and semi-arid climate agroecosystem and centre of shallot production in NTB.
The technology packages studied consisted of three packages as treatments (Table 1). Package A was a technology package based on the results of technology component experiments carried out in 2017, especially the use of Trichoderma sp., biopesticides, bio-urine, sprinkler irrigation, and irrigation timing based on soil moisture testing equipment. Package B was similar to package A but with furrow irrigation and without Trichoderma sp., and package C was conventional farmer practice. The experiment was conducted through a participatory research approach, carried out on farmers' land and fully involving farmers and extension agents from the planning stage to the evaluation of the performance of each technology package. Each farmer applied the three technology packages on at least 0.2 ha each; thus, the total land used in this experiment was about 2 ha. Data on the amount of water given during the plant growth period were recorded with a water meter to determine water efficiency and water savings in each irrigation treatment. For packages A and B, irrigation water was applied when the soil moisture content had reached 14.8%, measured using a soil moisture tester. This soil moisture limit corresponds to the point at which 20% of the available water is still present in the soil before reaching the permanent wilting point. For package C, irrigation water was applied based on farmer practice, but the amount and timing of irrigation were still recorded.
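As a minimal sketch of this irrigation trigger, the snippet below computes a threshold of the form PWP + 0.20 × (FC - PWP). The field capacity (FC) and permanent wilting point (PWP) values used here are hypothetical, chosen only so that the threshold reproduces the 14.8% quoted in the text; the study itself does not report these values.

```python
# Hypothetical soil water constants (volumetric %), chosen to reproduce 14.8%
FIELD_CAPACITY = 26.0        # assumed FC
WILTING_POINT = 12.0         # assumed PWP
REMAINING_FRACTION = 0.20    # irrigate when 20% of available water remains

def irrigation_threshold(fc, pwp, remaining_fraction):
    """Soil moisture level at which irrigation is triggered."""
    return pwp + remaining_fraction * (fc - pwp)

def should_irrigate(measured_moisture_pct, fc=FIELD_CAPACITY, pwp=WILTING_POINT):
    return measured_moisture_pct <= irrigation_threshold(fc, pwp, REMAINING_FRACTION)

print(irrigation_threshold(FIELD_CAPACITY, WILTING_POINT, REMAINING_FRACTION))  # ~14.8
print(should_irrigate(14.5))   # True -> apply irrigation
```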
Every activity in shallot cultivation in the field was recorded in the form of farm record-keeping by each farmer co-operator. This was done to ensure that all of the technology packages were applied by the farmer co-operators.
Agronomic parameters observed in this study included plant height and number of leaves, observed on 15 plant samples at 20 days after sowing (DAS), 40 DAS, and at harvesting time, as well as fresh and dry weight of the yield. Data were analysed using t-tests comparing treatments A-B, A-C, and B-C.
Climate and water irrigation
The Sandubaya site of Labuhan Lombok Village, Pringgabaya sub-district, is regarded as a dryland and semi-arid agroclimatic zone. This is indicated by the total annual rainfall, which is less than 1500 mm per year, and by its inclusion in climate types D and E (Oldeman et al., 1980). The average annual rainfall for the last 17 years in the region is 640 mm. The average monthly rainfall for 17 years (2000-2016) in the region is presented in Figure 1. The wet months of the study area are only three, from December to February, while the rest are dry months. The topography of the Sandubaya region is relatively flat, dominated by Entisols with sandy to sandy loam texture. Besides relying on rainwater, Sandubaya also uses groundwater from pumping wells, known as P2AT, for irrigation.
In the Pringgabaya sub-district area, there are 72 P2AT wells with varying discharges of around 10-20 l s-1. P2AT wells are managed by a groundwater user farmer association (P3AT). Irrigation costs using P2AT wells vary from Rp. 30,000-35,000 per hour. One ha of land
Land characteristics
Land characteristics of the site are shown in Table 2. In general, the nutrient contents of phosphorus and potassium were high, while nitrogen (N) status was low. Soil pH was in the range of neutral to slightly alkaline. The ability of the soil to hold water was quite low due to the very low content of clay and organic matter. The percentage of clay at the site was about 10% in the top layer of 0-10 cm and decreased to 6% at a soil depth of 20-40 cm. Thus, the furrow irrigation method may be less suitable for this location. In reality, however, farmers in the field always apply furrow and flood irrigation for shallot cultivation.
Agronomic parameters and yield of shallot
Plant height during the growth of shallots under the various technology packages applied is shown in Figure 3. In general, shallot height at 20 days after sowing (DAS) was not significantly different between package A and package B, while package C was higher than the other packages. At 40 DAS, the shallot height in package A was higher than in the others, although the difference was not significant compared with package B but was significant compared with package C. Furthermore, at 60 DAS (harvest stage), plant height was not significantly different among the packages. It was observed that shallot height at 40 DAS was shorter than at 20 DAS. This was due to the manual cutting of leaves that were attacked by caterpillar pests. Yoo et al. (2019) reported that the bulb weight of short-day onion plants was either reduced or increased by cutting leaves. The effect of the technology packages on the number of leaves during the growth period of shallots is shown in Figure 3. In general, the number of leaves increased along with the growth period. The number of leaves in package A at 20 and 60 DAS was higher than in the other packages, although not significantly different from package B, indicating that technology package A had a positive influence on shallot growth compared with farmer practice. The number of leaves in package C was lower than in the other packages. The leaves of shallots in package C were very dry at the time of harvest compared to the other packages, while the leaves in packages A and B still looked green. One of the yield parameters that determines the productivity of shallots is the number of bulbs: the more bulbs, the higher the yield obtained. The variation in bulb number of shallots for all technology packages is shown in Figure 4. Generally, the number of bulbs per square meter for each technology package was not significantly different. This indicates uniformity of growth and population across all technology packages applied. However, there was a tendency for the number of shallot bulbs to increase in package A compared to packages B and C. The variation in shallot yield on a wet and dry weight basis is shown in Figure 5. In general, the yield of shallots varied considerably with the technology package implemented. The highest shallot yield was found with package A, followed by package B, and the lowest was package C (farmer practice). Package A achieved a shallot yield of 31.6 tons ha-1 (fresh weight), which was 14% and 45% higher compared to packages B and C, respectively. Thus, package A, the water-saving technology package for shallots, is quite feasible to be applied by farmers around the study site. The yield of shallots on a dry weight basis showed a similar trend to the wet weight. On a dry weight basis, package A yielded 10% more than package B and 45% more than package C.
The amount of irrigation water
The amount of irrigation water used during the shallot growth period (63 days) for each technology package is shown in Table 3. Package A used 348.8 mm, the lowest amount among the packages. The discharge of the pumping well used to irrigate the shallots was 7.9 l s-1, and the duration required for irrigation was 22.5 hours per hectare. Package A (sprinkler irrigation) was able to save water by up to 62.1% and 95.8% compared to package B (furrow irrigation) and package C, respectively. Undang (2004) stated that in coarse-textured soil, the efficiency of water use with the sprinkler method was twice as high as with surface irrigation.
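The relationship between pump discharge, run time, and the irrigation depths reported above can be made explicit with a simple unit conversion (1 ha = 10,000 m², so 1 mm of depth corresponds to 10 m³ per hectare). The sketch below is illustrative arithmetic only, not a calculation taken from the paper: a single run of 7.9 l s-1 for 22.5 hours corresponds to roughly 64 mm, so the 348.8 mm season total corresponds to roughly five to six such applications.

```python
# Illustrative arithmetic: converting pump discharge and run time into an
# equivalent irrigation depth over one hectare.
def irrigation_depth_mm(discharge_l_per_s, hours, area_m2=10_000):
    volume_m3 = discharge_l_per_s * hours * 3600 / 1000   # litres -> cubic metres
    return volume_m3 / area_m2 * 1000                     # depth in mm

depth_per_run = irrigation_depth_mm(7.9, 22.5)
print(round(depth_per_run, 1))              # ~64 mm per irrigation run
print(round(348.8 / depth_per_run, 1))      # ~5.5 runs over the 63-day season
```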
CONCLUSION
1. The water-saving technology package for shallots on dryland using sprinkler irrigation achieved a productivity of 31.6 t ha-1, which was 14% and 45% higher compared to package B (furrow irrigation) and package C (farmer practice), respectively.
2. The amount of water used during the growth of shallots for package A was 348.8 mm, which was 62.1% and 95.8% lower than for packages B and C, respectively. The irrigation cost of package A was also the lowest, being 15.8% more efficient than package B and 30.4% more efficient than package C.
3. The development of a water-saving technology package is very promising. Therefore, socialization and demonstration plots in several regions are needed to disseminate the technology package and promote its adoption by farmers. However, this technology may need financial analysis in order to establish its economic feasibility.
|
v3-fos-license
|
2022-10-11T17:38:42.867Z
|
2022-01-01T00:00:00.000
|
252796944
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09913422.pdf",
"pdf_hash": "9bd7527532f5c1cf88a1baeac94ebc5129cc3e89",
"pdf_src": "IEEE",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:621",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Environmental Science",
"Computer Science"
],
"sha1": "ed00a80e7762f7925fcb5cc220379e7d790a5c0e",
"year": 2022
}
|
pes2o/s2orc
|
A Survey on Multiuser SWIPT Communications for 5G+
The increasing number of devices connected to networks, together with the applications and demands of new-generation wireless communications, causes very high energy consumption and results in a large amount of carbon emission. Thus, energy harvesting solutions that also accomplish information transmission are required for energy efficient communications in new wireless communication generations such as 5G and 6G (i.e., 5G+). As such, due to the promise of energy efficient green communications, simultaneous wireless information and power transfer (SWIPT) techniques are expected to be an indispensable component of 5G+. In this survey, the literature on multiuser SWIPT communications is reviewed for 5G+. Different multiuser SWIPT scenarios, including multi-antenna communications, cooperative communications, network coding, communications security, and unmanned aerial vehicle (UAV)-enabled communication systems based on multiuser diversity and multiple access methods, are thoroughly reviewed. The experimental studies are also discussed.
The 5G new radio (NR) features a large number of distinct signal processing algorithms to respond to the numerous requirements of the technology [1]. The upcoming telecommunications generations are further conjectured to promote a continuously increasing number of different signal transforming capabilities. In addition, the innovations of new wireless communication systems (such as Internet of Things (IoT), wearable devices, ubiquitous computing, commodity sensors, smart televisions, smart buildings, smart cities, etc.) have given rise to an unprecedented scenario where the number of devices connected to the Internet is expected to rapidly exceed 50 billion with a connection density of 1 million per square kilometer [2], [3]. Moreover, the number of mobile subscribers is forecasted to exceed 7.5 billion globally within a few years. The new generation communication systems, with ever-increasing data rate, coverage area, connection and transmission reliability, mass connection, very low latency, and spectrum efficiency requirements, a growing number of networked devices, and a proliferation of new applications, entail an electrical energy load of about 910 TWh and cause approximately 235 million tons of carbon emissions [3]. Hence, energy scavenging solutions for energy efficient communication systems that reduce energy consumption have become imperative for 5G [4], [5] and beyond [6], [7] (in short, 5G+). Energy harvesting (EH) is the process of capturing ambient wasted or insignificant heat, sound, wind, and radio wave (radio frequency, RF) emissions and converting them into electrical energy for use in powering devices. In [8] and [9], the usability of natural energy resources for EH in wireless communication networks has been investigated, and it has been concluded that these approaches are not as effective as expected due to the irregularity and unpredictability of the environmental resources. Wireless power transfer (WPT) is an EH approach that can overcome the mentioned limitations and charge the batteries of the devices in the communication network with the help of electromagnetic radiation. Green energy can be collected by WPT methods using two types of sources: signals produced by resources already present in the environment, and signals transmitted by a designated and fully controllable power source such as a base station (BS). As the distance between the BS and the end terminals in a communication network is critical for both information and power transmission, far-field WPT techniques have been discussed in detail in the literature. In fact, the first pioneering experiment on WPT through RF signals was performed by Tesla in 1899. Since then, an important number of studies have been carried out on long-distance WPT. However, these initial works mainly concentrated on high-power applications, and the advancement of these technologies has been slow due to health concerns and inefficiency in the implementation. Recently, many trials have been performed on the realization of self-sustainable communication systems using WPT techniques for relatively shorter distances based on inductive coupling. These approaches aim to maintain preferable quality of service (QoS) levels while supporting many current concepts such as IoT in new generation wireless networks.
The need to integrate WPT approaches into wireless networks and to develop techniques that enable information and power transmission to end terminals in a concurrent fashion has led to the concept of simultaneous wireless information and power transfer (SWIPT) [10].
A. SIMULTANEOUS WIRELESS INFORMATION AND POWER TRANSFER
Among different energy scavenging approaches, SWIPT has gained popularity because natural energy sources such as sun and wind are not sustainable under all conditions, whereas RF signals have the capability to carry both information and energy [2], [3], [11], [12], [13]. By making ubiquitous wireless communications possible in a self-sustainable fashion, SWIPT is an indispensable solution for 5G+, providing the necessary energy for wireless charging of energy-constrained devices and for transmitting and receiving information. It is especially helpful in charging sensor nodes at locations which are very hard (if at all possible) and costly to access. In [14], a novel SWIPT scheme that transports energy by a powerful unmodulated signal and transmits information by a relatively weak modulated signal is proposed. It is experimentally demonstrated that the harvested power at a 4-meter distance is more than 0.5 mW, which is enough to charge many IoT devices. The SWIPT concept enables simultaneous information and power transmission, as in power-line communications, and has the potential to provide significant gains in terms of prolonged lifetime, spectrum efficiency, interference management, and transmission delays [15], [16], [17]. Considering the fundamental pillars of 5G+, SWIPT technology is anticipated to be an important enabler in the upcoming standards. On the other hand, SWIPT brings about a paradigm shift for wireless communication networks due to the new architectural challenges. In particular, the balance between the information transmission rate and the amount of energy harvested at the end terminals is an important factor in the evaluation of the system performance [18]. In this respect, an essential trade-off exists between the information transmission rate and the amount of the harvested energy. This is designated by the so-called rate-energy region formed by all the possible combinations of the attainable information transmission rate and harvested energy levels [10]. The optimal trade-off is given by the Pareto boundary of the rate-energy region. Initially, SWIPT was investigated for single-user, single-antenna, single-hop wireless communication systems [10], [18].
As the EH operation with RF signals disrupts the information content of the received signal, it is generally not possible to perform EH and information decoding (ID) operations on the same signal. Therefore, in order to perform SWIPT in a practical manner, it is necessary to split the received signal or to use separate antennas for the ID and EH processes. Further, due to the huge gap between the sensitivity levels of ID and EH operations (depending on the application, between -140 dBm and -85 dBm for ID circuitry and on the order of -30 dBm for EH circuitry, i.e., no power is harvested when the input power is below the sensitivity level [19]), appropriate receiver structures are needed to enable SWIPT to function effectively. To this end, time switching (TS), power splitting (PS), and antenna switching (AS) receiver designs have been developed to divide the received signal in the time, power, and space domains, respectively [13], [18], [20], [21], [22], [23]. These approaches are illustrated in Fig. 1.
In the TS structure, the receiving antenna is periodically switched between the ID and EH circuits in an alternating fashion [13]. On the other hand, the PS receiver splits the received signal into two parts with different power levels under a certain PS ratio. Subsequently, two streams are separately sent to the ID and EH units to allow simultaneous ID and EH operations [20], [21]. By modifying the PS ratio (denoted by α in Fig. 1) at the receiver, the information transmission rate and the level of harvested energy can be balanced and optimized according to system needs [21].
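An idealized, textbook-style sketch of the PS trade-off is given below (it is not the model of any specific cited reference). A fraction α of the received power is routed to the EH circuit and the remainder to the ID chain, so that the harvested power is roughly η α P_rx and the rate roughly log2(1 + (1-α) P_rx / σ²); conventions for α vary, noise contributions are lumped into a single term, and all numerical values are hypothetical.

```python
import numpy as np

# Idealized power-splitting trade-off: fraction alpha of the received power
# goes to EH, the rest to ID.
P_rx = 1e-4          # received RF power at the antenna, W (hypothetical, -10 dBm)
noise = 1e-9         # effective noise power at the ID chain, W (hypothetical)
eta = 0.5            # assumed RF-to-DC conversion efficiency

for alpha in np.linspace(0.0, 1.0, 6):
    harvested_w = eta * alpha * P_rx
    rate_bps_hz = np.log2(1.0 + (1.0 - alpha) * P_rx / noise)
    print(f"alpha={alpha:.1f}  E={harvested_w:.2e} W  R={rate_bps_hz:.2f} bit/s/Hz")
```

Sweeping α from 0 to 1 traces one boundary of the rate-energy region for this simplified receiver: more harvested energy always comes at the cost of a lower decoding rate.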
It has been shown in [22] that a so-called on-off PS scheme provides the best trade-off between the information rate and the amount of harvested energy for practical setups. In AS, while a subset of the receive antennas is allocated for ID, EH is performed through the remaining antennas at the receiver. Hence, AS allows a simpler implementation as compared to TS and PS [23], [24]. The EH receiver circuit, also known as a rectenna, typically comprises a bandpass filter, a rectifier, and a low-pass filter to convert RF energy into DC power, as shown in Fig. 2 [25]. In the literature, the relation between the RF power (P_RF) at the input and the amount of harvested power (P_DC) is commonly given by P_DC = ηP_RF, where the constant η ∈ [0, 1] captures the efficiency of the RF-to-DC conversion. A more practical non-linear model encompassing the effect of saturation during the process is also provided in [26].
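The contrast between the linear model P_DC = ηP_RF and a saturating model can be illustrated as follows. The non-linear function used here is a logistic-type saturation curve of the kind commonly used in the SWIPT literature; it is an assumption about the general shape rather than necessarily the exact model of [26], and the parameter values (maximum harvestable power M and the circuit constants a and b) are hypothetical.

```python
import math

def eh_linear(p_rf, eta=0.5):
    """Linear model: P_DC = eta * P_RF."""
    return eta * p_rf

def eh_nonlinear(p_rf, M=0.024, a=150.0, b=0.014):
    """Logistic-type saturation model (hypothetical circuit parameters).
    M: maximum harvestable power in W; a, b: circuit-dependent constants."""
    omega = 1.0 / (1.0 + math.exp(a * b))
    psi = M / (1.0 + math.exp(-a * (p_rf - b)))
    return (psi - M * omega) / (1.0 - omega)   # zero at p_rf = 0, saturates at M

for p_mw in (0.1, 1.0, 5.0, 20.0, 100.0):
    p = p_mw * 1e-3
    print(f"P_RF={p_mw:6.1f} mW  linear={eh_linear(p)*1e3:6.2f} mW  "
          f"non-linear={eh_nonlinear(p)*1e3:6.2f} mW")
```

At low input power the two models behave similarly, but the non-linear curve flattens out near M instead of growing without bound, which is the saturation effect the linear model misses.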
Multiple-input multiple-output (MIMO) techniques have been proven to increase the capacity and reliability of the system under distinct fading conditions by using multiple number of antennas at the source and/or destination [27]. In order to increase the energy and spectral efficiencies as well as the sustainability of the system, the cutting-edge smart antenna approaches such as MIMO have been combined with SWIPT under various network topologies with a single user and multiple users [17], [28], [29]. The interuser-interference plays a dominant role in the end-to-end performance of multiuser wireless networks. It is conventionally regarded as the primary hindrance in satisfying certain levels of QoS. When not harnessed suitably, the interuser-interference may considerably degrade the system performance in terms of reliability and information transmission rate. On the other hand, EH receivers can seriously benefit from the interuserinterference by converting it into useful energy. Hence, a reasonable trade-off is required to balance the harmful and beneficial effects of the interuser-interference respectively on the ID and EH processes.
In wireless networks, the source and destination pair may not be sufficiently close to each other, or a reliable direct link between them may not be possible due to environmental and/or channel conditions. In this case, the transmission can be performed cooperatively through relays judiciously placed between the source and destination to expand the coverage area. Even if there is a direct path between the source and the destination, the use of a relay can still improve performance by providing diversity [30]. In cooperative communication, the relay can operate based on the amplify-and-forward (AF) or decode-and-forward (DF) protocol using half-duplex (HD) or full-duplex (FD) transmission. The FD technique augments the spectral efficiency compared to the HD approach as it simultaneously transmits and receives data in the same frequency band and time slot. A mobile relay node with limited battery power needs some external charging mechanism to stay active within the system [31]. For this reason, SWIPT is very important in such networks as it enables energy replenishment alongside information transmission [32], [33], [34], [35], [36], [37], [38], [39], [40], [41]. In addition to the classical store-and-forward based cooperation techniques, the use of smart processing techniques at the relay nodes may help increase both the spectral efficiency and the energy efficiency. One such approach is the use of network coding in wireless transmission scenarios, where multiple packets of symbols can be combined at the relay and their superposition is transmitted [42]. The benefits of SWIPT and EH systems have hence been studied in network-coding enabled systems [43], [44], [45], [46], [47], [48], [49], [50], [51]. As one of the earliest examples, [43] presented a theoretical framework to evaluate the performance of EH in bidirectional network coded cooperative communications. A SWIPT enabled network-coded system is considered in [44] in the presence of PS and TS scenarios from an information theoretical perspective. In [45], the authors have considered an analog network coding based bidirectional relay system with multi-antenna source terminals. An optimization problem is formulated according to the TS ratio and the relay location to maximize the throughput. A SWIPT enabled multiuser multi-relay network is considered in [46] with information and energy level cooperation. Network coding is used to aid information cooperation. An optimization problem is formulated to maximize the energy efficiency under constraints of energy causality and a predefined outage probability threshold and is solved through an iterative approach. The authors in [47] have proposed an optimization framework for the joint problem of routing, network coding, and scheduling in SWIPT networks. An iterative solution is then proposed using the Lyapunov optimization algorithm. A buffer-aided two-way relay network is studied in [48], where a wireless-powered relay capable of employing opportunistic network coding serves as an intermediate node for data exchange. The stability region of such a network is derived for TS EH at the relay. The throughput-optimal and power-optimal policies have then been presented. The authors in [49] have considered multiuser multiple-input single-output (MISO) broadcast channels where SWIPT is used for interference management in the presence of physical layer network coding.
A performance improvement is demonstrated by canceling the interference at the receivers at the data level through physical layer network coding, without eliminating the interference as a beneficial energy source. SWIPT enabled network coded cooperation is also considered in drone applications. In [50], the authors have proposed a dynamic PS factor for harvesting energy at the nodes of a drone-assisted analog network coded system. The corresponding average rate and average outage probability expressions have been derived. Finally, [51] investigates the information freshness and EH aspects in the presence of random linear network coding. A theoretical analysis is presented to highlight the trade-off between the age of information and the EH performance considering a variation in the number of packets. Overall, the use of network coding in multiuser SWIPT networks is expected to be an effective tool to improve the overall performance, and the literature overview shows that a gap remains in combining the two techniques.
B. SURVEY STUDIES ON SWIPT
In the literature, there exist a number of survey papers each reviewing and outlining certain aspects of the EH and/or SWIPT technologies in contemporary communication systems. An overview of the recent research on both green 5G techniques and EH for communications is provided in [2]. In addition, certain technical challenges and potential research problems to accomplish sustainable green 5G networks are described. In [3], the authors present a survey on the combination of SWIPT with cooperative relaying systems, which provides both flexibility in spectrum usage and energy efficiency. Distinct practical application scenarios are investigated with a broad perspective. The novel architectures and empowering technologies for SWIPT are reviewed and the technical challenges behind implementing SWIPT are specified in [11]. Additionally, the authors present a novel SWIPT-supported power allocation procedure for device-to-device communications. In [12], the authors compose an inclusive survey of the state-of-the-art SWIPT methods and present open issues stimulated by SWIPT and WPT assisted methods. In particular, potential emerging technologies due to the application of SWIPT/WPT in 5G communication systems are investigated in a detailed fashion. A comprehensive literature review on the research progress for the design of wireless networks with RF EH capability is provided in [13]. The authors present an overview of the system architecture, different RF EH techniques with existing applications, and the RF EH circuit design as well as the state-of-the-art circuitry implementations. The distinct communication protocols specifically tailored for RF EH networks are also explored. In [16], the authors provide a survey of SWIPT systems with a special focus on the hardware implementation of rectenna circuits and practical techniques for realizing SWIPT in the domains of time, power, and space. The advantages of SWIPT technologies for modern communication networks in the context of resource allocation and cooperative cognitive radio (CR) networks are also discussed. A framework is introduced for implementing SWIPT in a broadband wireless communication system with a simple SWIPT mobile architecture and different power control algorithms for single-user/multiuser systems, variable/fixed coding rates, and uplink/downlink information transfer in [24]. It is demonstrated that power control plays a critical role in improving the efficiency of SWIPT systems. In [52], the authors emphasize the fundamental challenges of RF-based EH in cellular networks. The benefits of various EH and information transmission architectures are analyzed and compared quantitatively from the view of users in the signal-to-noise ratio (SNR) outage zone. A concise survey of the contemporary SWIPT approaches is provided in [53] by introducing miscellaneous practical transceiver architectures. In addition, the most important link-level and system-level design issues are investigated and potential solutions and research ideas are stated. In [54], the integrated wireless energy and information transfer networks are studied by providing a survey on the essential techniques from the bottom to the top layers. Practical implementation, resource allocation, and protocol design issues are inspected together with the information theoretical foundations. A review of the energy efficiency optimization for cloud radio access networks (also known as C-RANs) using SWIPT is provided in [55].
The authors present an inclusive taxonomy of the miscellaneous EH sources for wireless sensor networks (WSNs) in [56] and describe certain challenges in establishing cost-effective, efficient, and reliable EH systems. Input signals coming from a finite alphabet may cause a significant performance degradation for SWIPT systems designed under the assumption of Gaussian distributed input signals. To this end, the modulation and coding design problems for both single-user and multiuser SWIPT systems are investigated in [57] by introducing the theoretical fundamentals and by presenting certain design guidelines. Differing from the preceding related works, multiuser SWIPT techniques available in the literature are reviewed in this survey study. Distinct application scenarios of multiuser SWIPT such as multi-antenna communications, cooperative communications, communications security, and unmanned aerial vehicle (UAV)-enabled communication systems are separately investigated in a comprehensive fashion. In addition, the multiuser diversity methods and the experimental studies for multiuser SWIPT are thoroughly reviewed.
C. PAPER ORGANIZATION
The remainder of this survey study is structured as follows. Section II introduces the multiuser diversity and user scheduling concepts. The multi-antenna applications and the integration into SWIPT systems are reviewed in Section II.A and Section II.B, respectively. In Section III, the relevant studies on the application of SWIPT with various orthogonal and non-orthogonal multiple access (NOMA) methods are rigorously reviewed. Section IV lays the foundations of communications security in multiuser networks; the physical layer security (PLS) of multiuser schemes and of multiuser SWIPT schemes is subsequently discussed in Section IV.A and Section IV.B. The UAV-enabled SWIPT systems are introduced in Section V. In the subsequent three subsections, the related studies on cooperative, multicasting, and mmWave networks are respectively reviewed under the UAV and SWIPT integration. Section VI focuses on the experimental studies on SWIPT systems. Specifically, waveform design problems and multi-antenna test systems are treated in Sections VI.A and VI.B, respectively. Section VII presents challenges, opportunities, and future research directions. Finally, the conclusions are drawn in Section VIII.
II. MULTIUSER DIVERSITY AND USER SCHEDULING
Diversity methods have played pivotal roles in the design of contemporary wireless communications standards. The common feature of these methods is to ensure that the replicas of a message signal are exposed to possibly distinct channel realizations such that the receiver gets multiple copies on which to form its decision on the transmitted message. The traditional time, frequency, and antenna diversity techniques have been studied in the literature to a great extent. A relatively recent diversity scheme referred to as multiuser diversity (MUD) was first introduced in [58] to increase the system capacity of a wireless communication system with multiple users.
It has been shown that the system capacity can be augmented by an appropriate user scheduling plan when the system resources are opportunistically allocated to the user with the best channel condition. Unlike the conventional diversity methods where the aim is to mitigate the channel fading, the MUD schemes exploit the existence of the channel fading by tracking the peak points in the random variations of the users' channel fading states towards a common entity such as a BS.
One limiting feature of MUD approaches is that the transmitter is required to possess some kind of channel information corresponding to each user to carry out appropriate user scheduling. This, on the other hand, may put a heavy burden on the system especially when the number of users is relatively large. Therefore, user scheduling approaches which function without regard to the users' channel status have been conventionally and widely adopted, especially for delay-sensitive data services. The round-robin scheduling, the earliest deadline first scheduling, and the weighted fair queueing are some examples of such techniques. The round-robin scheduler gives the users service in a circular fashion. In the earliest deadline first scheduling, the user whose packet delay has the soonest expiration time is served with the highest priority. On the other hand, the scheduling is performed based on a set of weights corresponding to the users under the weighted fair queueing approach. These schemes are employed in the medium access control (MAC) layer without regard to the physical layer. Although all the preceding methods are advantageous in terms of computational load and complexity, they lack the MUD gain and thus offer a limited sum throughput. In contrast, the channel-aware scheduling techniques rely on the transmitter's partial or complete knowledge of the users' channel conditions. These methods are generally categorized under the name of opportunistic or greedy user scheduling and try to harness the MUD in a way to improve the overall system performance in terms of data rate, error probability, and QoS requirements.
The concept of opportunistic scheduling has been introduced in [58] under a scenario where a single user with the best channel condition is scheduled in every time slot. The authors have inspected the MUD gain in an uplink transmission scenario where each node is assumed to be equipped with a single antenna within a single-cell system. This method, which does not take fairness into consideration and thus is appropriate for delay-tolerant applications, is generally called maximum-rate scheduling in the literature. When the network consists of a set of homogeneous users, scheduling the user with the maximum instantaneous rate (or with the largest signal-to-interference plus noise ratio (SINR)) provides long-term fairness, with each user attaining a similar average throughput. If the users form a heterogeneous network, on the other hand, maximum-rate scheduling cannot ensure fairness among users on its own. The so-called proportional fair scheduling (PFS) tries to maintain fairness by scheduling the user whose ratio of instantaneous data rate to its own average data rate is the largest [59]. It has been shown that under the PFS scheme, the data rate attained by a scheduled user is very close to its peak data rate for a scenario with a large number of users under slow fading [60]. A multiuser scheduling scheme under short-term temporal fairness constraints is investigated in [61]. The authors propose a scheduling strategy which satisfies short-term fairness constraints for arbitrary feasible window lengths. A review of the scheduling algorithms proposed for the fourth-generation multiuser wireless networks is presented in [62] for both single-antenna and multi-antenna systems. It is concluded that a cross-layer design between the physical layer and the MAC layer, which is responsible for resource allocation and packet transmission scheduling, is of fundamental importance in order to construct the optimal scheduling scheme. In [63], a PFS method for the downlink of the Long Term Evolution (LTE) cellular communication systems is proposed. It is shown that a superior fairness performance relative to the other related works can be attained with a modest loss in throughput as long as the users' average SINRs are fairly uniform. The downlink LTE channel model is also adopted in [64] to formulate a resource allocation problem on maximizing the sum throughput. In order to simplify the non-linear combinatorial optimization problem, a linearized model is developed whose solution is shown to be equivalent to allocating resources to the users with the best channel conditions. The relevant schedulers are compared in terms of achieved throughput and fairness.
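As a concrete illustration of the PFS metric described above, the following minimal sketch schedules, in each slot, the user maximizing the ratio of its instantaneous rate to its exponentially averaged throughput; the Rayleigh fading model, the SNR spread, and the averaging window are illustrative assumptions rather than parameters from [59] or [63].

import numpy as np

rng = np.random.default_rng(0)
K, slots, t_c = 8, 10_000, 100                   # users, time slots, averaging window
mean_snr = 10 ** (rng.uniform(0, 15, K) / 10)    # heterogeneous average SNRs (0-15 dB)
avg_rate = np.full(K, 1e-3)                      # initial throughput estimates

served = np.zeros(K)
for _ in range(slots):
    inst_rate = np.log2(1 + mean_snr * rng.exponential(1.0, K))   # Rayleigh fading rates
    k = int(np.argmax(inst_rate / avg_rate))                      # PFS metric
    served[k] += 1
    # exponential moving average of each user's throughput
    avg_rate *= (1 - 1 / t_c)
    avg_rate[k] += inst_rate[k] / t_c

print("fraction of slots per user:", np.round(served / slots, 3))

Even though the users' average SNRs differ, the printed time shares end up roughly equal, which reflects the fairness behavior attributed to PFS above.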
One of the fundamental pillars of the 5G NR is its promise to guarantee unprecedentedly reduced latency times (around 1 ms depending on the application and deployment scenario). In order to enable the 5G NR system to satisfy the low-latency requirement, the third generation partnership project (3GPP) body has introduced the concept of grant-free scheduling [65]. Unlike grant-based scheduling in the previous generations, where a user equipment is dynamically scheduled by the BS after acquiring a dedicated resource block (grant), a user demanding permission is directly granted access without undergoing any handshake process. The grant-free scheduling approach is studied in [66] over a 5G network by exploring two related access schemes. The system parameters are set such that the latency requirements are fulfilled with low resource consumption. Scheduler designs seeking a solution to a dual-objective optimization problem with certain latency and data rate constraints are provided in [67] and [68] for 5G systems by adopting opportunistic scheduling mechanisms. A comprehensive survey on opportunistic scheduling methods is presented in [69] by categorizing distinct opportunistic scheduling schemes into four groups (capacity, QoS, fairness, and distributed scheduling) based on their objective functions.
A. MULTIUSER DIVERSITY IN MIMO SYSTEMS
The application of MUD methods in MIMO systems is inspected in this section. The relation between the spatial diversity and MUD gains is investigated to a great extent in the literature [60], [62], [70], [71], [72], [73], [74], [75], [76], [77]. As an example, a downlink cellular system with K statistically homogeneous and independent users is illustrated in the sequel. The BS is equipped with N antennas and each user has M antennas such that M ≤ N. Each user has perfect information about its own channel only and the BS is provided with only partial channel state information (CSI) of the users.
In the round-robin scheduling, the channel is allocated to every user periodically and in an alternating fashion, and each user informs the BS of the instantaneous throughput its current channel state can support. The BS accordingly designates the instantaneous transmission rate for the scheduled user. The average system throughput for the round-robin scheduling, C_rr, is given in [72]. In the opportunistic scheduling scenario, the BS uses the feedback information on the instantaneous capacity of each user to allocate the channel to the user with the largest instantaneous throughput. The average system throughput for the opportunistic scheduling, C_d, is given in [72]. C_rr and C_d are compared in Fig. 3 as a function of K for M = N = 2 and an average power of P = 15 dB. It is important to mention that as K asymptotically increases, the average system throughput for the opportunistic scheduling exhibits a double logarithmic growth in the number of users (a log log K scaling in K) [76]. The use of spatial diversity in MIMO systems has the effect of lessening the randomness of the channel fluctuations. This, on the other hand, has a negative impact on the amount of MUD gain, which mainly depends on the random nature of the users' channels. The application of opportunistic scheduling to MIMO systems is first studied in [70], where it is demonstrated that the utilization of spatial diversity has the consequence of decreasing the variations in the channel fading. A similar result is also concluded in [71], where the gain due to MUD is shown to decrease rapidly as the number of antennas increases. The authors term this effect channel hardening. An opportunistic beamforming scheme is proposed in [60] based on a random beamforming technique at the transmitter. The aim is to increase the dynamic range of the channel fluctuations for the scenarios with little scattering and/or slow fading such that the MUD gain is augmented. When there exist a sufficient number of users, this scheme is shown to achieve the coherent beamforming gains with only quite limited channel feedback. A downlink multiple antenna multiuser wireless system is investigated in [72]. The authors propose a novel scheduling scheme that exploits both spatial diversity and MUD gains available in the channel. It is shown that the decrease in the MUD gain due to the increase in the number of antennas can be circumvented partially by properly designing the user scheduling mechanism.
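The slow growth of the opportunistic-scheduling gain with the number of users can be reproduced qualitatively with the following Monte-Carlo sketch; it assumes i.i.d. Rayleigh MIMO channels, equal power per transmit antenna, and ideal rate feedback, and it only illustrates the trend (the double-logarithmic scaling in K), not the exact expressions of [72] or the curves of Fig. 3.

import numpy as np

rng = np.random.default_rng(1)
M = N = 2                      # receive / transmit antennas
P = 10 ** (15 / 10)            # total transmit SNR of 15 dB
trials = 2000

def user_capacity(H):
    # instantaneous MIMO capacity of one user with equal power per antenna
    return np.log2(np.linalg.det(np.eye(M) + (P / N) * H @ H.conj().T).real)

for K in [1, 2, 4, 8, 16, 32]:
    c_rr, c_opp = 0.0, 0.0
    for _ in range(trials):
        H = (rng.standard_normal((K, M, N)) + 1j * rng.standard_normal((K, M, N))) / np.sqrt(2)
        caps = np.array([user_capacity(H[k]) for k in range(K)])
        c_rr += caps.mean()    # round-robin: every user is served equally often
        c_opp += caps.max()    # opportunistic: the best user is served in each slot
    print(f"K={K:2d}  C_rr~{c_rr/trials:5.2f}  C_d~{c_opp/trials:5.2f} bit/s/Hz")

The round-robin throughput stays flat in K, while the opportunistic throughput grows, but only slowly, consistent with the log log K behavior mentioned above.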
A random beamforming technique that achieves MUD, spatial multiplexing, and array gains simultaneously is proposed in [73] for multiuser MIMO downlink systems. The transmitter is required to have only partial CSI (the effective received SINR values) of all the users. It is demonstrated that the throughput of the introduced scheme approaches that of the eigen-beamforming technique when there exist sufficiently many users within the network. In [74], a cross-layer analytical framework is presented to concurrently study the MUD gain, the spatial diversity gain, and the array gain over a generalized Nakagami-m fading channel. The interplay among these gains and the fading parameter is investigated for a multiuser MIMO downlink scenario.
Several multiuser scheduling schemes with limited feedback are investigated in [75] over MIMO broadcast channels. It is demonstrated that the introduced scheduling algorithms have the potential to attain large MUD gains with a greatly reduced feedback load. Note that a similar conclusion is also drawn in [60] for a broadcast channel with a multi-antenna BS and single-antenna users. In [76], a multiuser multi-antenna downlink system under partial channel knowledge at the transmitter is studied. It is shown that in order to fully benefit from spatial multiplexing and MUD gains simultaneously, feedback quantization at the users must be based on the users' SINR values rather than the channel magnitudes. The trade-off among the number of feedback bits, the number of users, and the operational SNR is determined. A unified error rate analysis of a downlink multiuser scheduling system is presented in [77] with imperfect CSI at the BS. Three distinct transceiving schemes are adopted by assuming that all the nodes are equipped with multiple antennas. It is shown that as a result of the imperfect CSI, the diversity level is reduced differently for the three transceiving techniques. Scheduling in relaying networks has also been studied in [78] and [79] for multi-hop and two-way cases, respectively.
B. MULTIUSER DIVERSITY AND SWIPT
In terms of SWIPT integration, multiuser communication systems have certain differences in comparison with single user (point-to-point) communication systems. An appropriate user scheduling procedure that satisfies certain QoS levels (in terms of data rate, amount of harvested energy, fairness, complexity, etc.) should be adopted under multiuser SWIPT. In addition, signals of different links are superimposed in multiuser SWIPT systems. This brings about the existence of interuser-interference as an additional dimension of the optimization problem, which does not arise in single-user SWIPT systems. A multiuser downlink SWIPT system is depicted in Fig. 4, where a certain set of scheduled users are chosen as the ID (wireless information transfer) users, the EH (wireless power transfer) users, and SWIPT users. The distinguishing feature of the multiuser scheduling problem in a SWIPT-based network (SWIPT-N) is that it extends the problem by an additional dimension represented by the harvested RF energy constraint. The conventional multiuser scheduling schemes aim at achieving the best usage of the available resources among multiple users such that certain QoS criteria in terms of throughput are satisfied under a definite fairness requirement. For instance, the user whose channel gain is at its peak is selected for information transmission in the opportunistic scheduling technique. However, such a scheduling model is not convenient for the SWIPT-Ns as it may fall short of achieving the harvested RF energy requirement. The rationale behind this is that the best channel states are tracked and designated for the ID users in a way that limits the amount of the RF energy harvested at the EH users. Some of the notable works on MUD SWIPT systems are listed in Table 1 with their distinctive features. In an early work, the downlink multiuser scheduling problem has been studied for a time-slotted system with SWIPT over fading channels [80]. Assuming that a single user is scheduled for ID and the remaining users opportunistically harvest the ambient RF energy in every time slot, new scheduling schemes have been devised to control the trade-off between the achieved throughput and the amount of the harvested energy. Multiuser scheduling is investigated for SWIPT-Ns in [52] and a concise survey on the existing solutions addressing certain inherent challenges is provided. In [81], the authors treat a downlink multiuser scheduling scenario with SWIPT. Optimal scheduling techniques maximizing the long-term average system throughput are proposed under an average harvested energy constraint and distinct fairness requirements. The spectral efficiency of an uplink RF-powered network is investigated in [82] by adopting the so-called harvest-then-transmit protocol, where the scheduled user transmits in the uplink channel based on the energy it harvests from its associated BS through the downlink channel. The conventional greedy and round-robin user scheduling schemes are modified such that the system performance is enhanced. In [83], a novel SWIPT scheme based on opportunistic communications is introduced for an interference alignment network where both user and antenna selection scenarios are studied. An EH cooperative network model with multiple source-destination pairs and one relay is considered in [84], where the relay schedules only a subset of user pairs for transmission. It is shown that when an EH relay is employed, the max-min criterion incurs a loss in diversity gain.
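The throughput/harvested-energy tension created by scheduling the best-channel user for ID can be illustrated with the minimal sketch below, written in the spirit of the opportunistic schemes discussed above (e.g., [80]) but not reproducing any specific algorithm; the channel statistics, transmit power, and conversion efficiency are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
K, slots = 5, 5000
P_tx, eta, noise = 1.0, 0.6, 1e-3            # transmit power [W], efficiency, noise [W]

throughput = 0.0
harvested = np.zeros(K)
for _ in range(slots):
    g = rng.exponential(1e-3, K)             # instantaneous channel power gains
    k = int(np.argmax(g))                    # best-channel user decodes information
    throughput += np.log2(1 + P_tx * g[k] / noise)
    idle = np.arange(K) != k
    harvested[idle] += eta * P_tx * g[idle]  # the remaining users harvest the RF signal

print(f"avg rate of the scheduled user: {throughput/slots:.2f} bit/s/Hz")
print("avg harvested power per slot [mW]:", np.round(harvested / slots * 1e3, 3))

Because the strongest channel is always reserved for ID, the EH users only ever harvest over their weaker channel states, which is the limitation the SWIPT-N scheduling problem has to balance.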
In [85], the authors inspect a SWIPT system with an HD hybrid access point (H-AP) regulating the information transmission to a scheduled downlink user and power transfer to an uplink user. The link scheduling and power allocation problems are jointly formulated in order to maximize the sum throughput under a power causality requirement. It is numerically demonstrated that the proposed approaches yield considerable performance improvements as compared to the conventional schemes. With the aim of maximizing the average total harvested power, a joint scheduling and power allocation scheme is suggested for a multiuser SWIPT scenario in [86] by embracing a realistic non-linear EH model. In [87], the outage probability performance of a cooperative relaying scheme in a two-user NOMA system is analyzed where SWIPT is adopted at the near terminals to power their relaying operations. A best-near best-far user scheduling algorithm is introduced. The user scheduling problem is considered in [88] for a multicell SWIPT system employing the static PS receiver. An α-adaptive scheduling scheme is introduced such that a desired trade-off between maximizing the achievable rate and maximizing the harvested energy of the scheduled user is achieved by adjusting the corresponding α factor. The performance of the proposed approach is compared with those of two conventional scheduling schemes, namely random scheduling and max-SNR scheduling. In [89], the downlink outage performance of opportunistic scheduling is analyzed in dual-hop cooperative networks comprised of one source, multiple RF EH relays, and multiple destinations. Two low-complexity relay-destination selection techniques are suggested, where the first one is based on opportunistic CSI whereas the second scheme relies on partial CSI. It is shown that the former approach attains a full diversity gain of (M + K) while the latter achieves a diversity order of (M + 1), where M and K respectively represent the number of destinations and the number of relays. The authors of [90] investigate an EH underlay CR system consisting of multiple cognitive users with EH capability, a common cognitive BS, and multiple passive eavesdroppers. A novel energy-aware multiuser scheduling technique is proposed to improve the PLS of the system. It is demonstrated that the introduced scheme outperforms the round-robin scheduling and the conventional multiuser scheduling approaches. In [91], two multiuser scheduling schemes are presented for an FD wireless-powered IoT system. The authors model the charging and discharging processes of the battery at each wireless-powered node as a finite-state Markov chain. The proposed methods are shown to outperform the round-robin scheduling and the random scheduling techniques in terms of throughput and fairness. An adaptive PFS algorithm is studied for SWIPT-enabled multi-cell downlink networks in [92]. The recommended scheduling algorithm is demonstrated to accomplish a better fairness measure in comparison with various other scheduling algorithms. A multiuser downlink communication system with SWIPT is considered in [93] and [94] by assuming that while the BS transmits information to a single user, the remaining ones opportunistically replenish energy from the received RF signals. Power allocation and user scheduling policies are proposed for the throughput maximization problem under the average power limitation of the grid and a constraint on the amount of the harvested energy.
In [95], the performance of a wireless powered communication network (WPCN) is studied by assuming that multiple batteryless devices harvest RF energy from a dedicated energy transmitter and a single device is selected for information transmission to a common information receiver. The authors inspect various selection approaches depending on distinct CSI requirements and implementation complexities. An intelligent reflecting surface (IRS)-aided FD WPCN scenario is investigated in [96] and [97] where an FD H-AP transmits RF energy to multiple devices in downlink and meanwhile receives information from the devices in uplink through an IRS. In [96], assuming multiple antennas at each node, the time allocation for downlink/uplink, precoding and transmit covariance matrices, and phase shifts have been jointly optimized to maximize the sum throughput of the IRS-aided MIMO FD-WPCN. Three types of IRS beamforming architectures for the IRS-aided FD-WPCN are proposed in [97] to obtain a balance between the system performance and signaling overhead as well as implementation complexity. The performance of multiuser SWIPT-enabled IoT relay networks in the presence of transceiver hardware impairments is inspected over Nakagami-m fading channels in [98]. Both TS and PS architectures of SWIPT are considered at the relay terminal. The dependence of the system performance on the number of IoT users, fading severity parameter, TS factor, PS factor, and energy conversion efficiency is demonstrated. In [99], an EH relay network is considered where multiple sources communicate with a destination through multiple EH PS-based DF relays. An optimal PS and joint source-relay selection method is proposed such that the main link capacity is maximized.
III. MULTIPLE ACCESS TECHNIQUES AND SWIPT
The number of devices to be connected in IoT networks is projected to skyrocket in a few years' time. As such, efficiency in energy and spectrum usage has become particularly important in the contemporary communication systems. In quest of a solution to this, SWIPT technology has recently been combined with distinct multiple access methods. The conventional orthogonal multiple access (OMA) approaches rely on orthogonal resources in either time, frequency, or code domain. Thus, they offer only a limited spectrum efficiency.
In NOMA, the data multiplexing is carried out in the power domain and successive interference cancellation (SIC) is performed at the receiver side, yielding superior spectral efficiency and user fairness by allowing multiple users to access the same spectrum simultaneously [5], [100]. Due to its advantages, a downlink NOMA technique called multiuser superposition transmission has been recommended for the 3GPP LTE advanced (3GPP-LTE-A) networks [101]. In addition, NOMA is envisioned to evolve into next generation multiple access in order to meet the stringent requirements of next generation wireless networks [102]. With their inherent advantages, SWIPT and NOMA approaches can be combined to present a promising solution for 5G+ and IoT communication systems. In this section, the relevant studies on the application of SWIPT with various orthogonal and non-orthogonal multiple access methods are reviewed. Some of the prominent studies on multiple access SWIPT systems are tabulated in Table 2 with their distinctive features.
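A minimal two-user example of power-domain downlink NOMA with SIC is sketched below (without any SWIPT component); the power allocation coefficients and channel gains are illustrative assumptions.

import numpy as np

P, noise = 1.0, 1e-2            # total transmit power and noise power (assumed)
g_near, g_far = 1.0, 0.1        # channel power gains (the near user is stronger)
a_far, a_near = 0.8, 0.2        # power allocation coefficients, a_far + a_near = 1

# Far user decodes its own signal treating the near user's signal as noise.
r_far = np.log2(1 + a_far * P * g_far / (a_near * P * g_far + noise))
# Near user first decodes (and removes) the far user's signal via SIC ...
r_far_at_near = np.log2(1 + a_far * P * g_near / (a_near * P * g_near + noise))
# ... then decodes its own signal free of intra-cluster interference.
r_near = np.log2(1 + a_near * P * g_near / noise)

print(f"far-user rate:  {r_far:.2f} bit/s/Hz (SIC feasible if {r_far_at_near:.2f} >= {r_far:.2f})")
print(f"near-user rate: {r_near:.2f} bit/s/Hz")

Giving the larger power share to the weaker (far) user is what makes its signal decodable at both receivers, which is the usual design rule behind the SIC decoding order.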
The work in [103] focuses on the transfer of information and energy in multiple access and multi-hop channels. Using an information-theoretic approach, it is shown that the constraints on the energy flow and information transfer have central effects on the design of wireless networks with multiple nodes. In [104], a WPCN scenario where an H-AP with constant power supply regulates the wireless energy and information transmissions to/from a group of dispersed users with no other energy sources is considered. Each user initially harvests wireless energy over the downlink channel and then sends its information to the H-AP over the uplink channel by means of time-division multiple access (TDMA). As the far users can replenish less energy than the near users, the sum throughput is maximized by allocating considerably more time to the near users than to the far users, leading to a significantly unfair scenario in terms of users' data rates [104]. In order to tackle this doubly near-far phenomenon, the authors in [104] introduce a new objective function with an additional constraint such that all the users have identical data rates. Due to its spectral efficiency and its capacity in tackling frequency-selective fading, orthogonal frequency-division multiple access (OFDMA) has become very popular as an air interface technique and has been adopted in a number of modern transmission technologies such as WiMAX, WiFi, LTE-A, and 5G NR. Accordingly, OFDMA systems with SWIPT capability have been studied extensively in the literature. In a pioneering work, a novel framework is presented to realize SWIPT in single-user and multiuser broadband systems [24]. The authors of [21] study the resource allocation problem for OFDMA systems with SWIPT. PS-based receivers, which divide the received signals into two power streams for simultaneous ID and EH, are adopted with the aim of maximizing the energy efficiency of data transmission in terms of bits/Joule delivered to the receiving nodes. The resource allocation optimization for a multiuser downlink SWIPT system is investigated in [105] by using two transmission techniques given by TDMA-based information transmission with TS implemented at each receiver and OFDMA-based information transmission with PS implemented at each receiver. The obtained results reveal that the TDMA-TS scheme can attain the same data rate as the traditional TDMA systems while every user is still able to harvest a fair amount of energy, and the TDMA-TS system outperforms the OFDMA-PS system under certain scenarios. On the other hand, if the constraint on the harvested energy level at the users is relatively high, the OFDMA-PS approach performs better than the TDMA-TS method. In [106], the resource allocation problem for a WPCN is inspected. An H-AP transfers wireless energy to a group of users in the downlink and simultaneously receives independent information from the users based on TDMA in the uplink. A novel protocol is introduced to satisfy the system constraints. A combination of SWIPT and NOMA techniques is studied in [107] by proposing a new cooperative SWIPT NOMA protocol where the near users function as EH relays to assist the far users. It is analytically demonstrated that the diversity gain of the system is the same as the conventional NOMA approach without SWIPT. In [108], a wireless powered uplink communication system comprised of one BS and numerous EH users is considered with NOMA. The system is optimized in terms of user data rates and fairness among users.
The numerical results reveal that in terms of throughput and fairness, the introduced approach outperforms a baseline scheme where EH nodes employ TDMA. NOMA is studied for the uplink of WPCNs in [109]. In order to deal with the doubly near-far effect, a decoding order that is inversely proportional to the users' distances to the transmitter is adopted, and it is shown that in comparison with the TDMA-based schemes, the introduced approach results in considerable rate improvement for the cell-edge EH users, thus improving system fairness. The outage performance of a two-user cooperative NOMA system is investigated in [87] where SWIPT is adopted at the near users to provide the necessary energy for their relaying operations. The authors recommend a best-near best-far user selection algorithm. In [110], a cooperative NOMA network with SWIPT is considered where a transmitter communicates with two users by means of an EH relay. The impact of the power allocation on the system performance is inspected. It is demonstrated that the introduced NOMA schemes can noticeably decrease the outage probability as compared to the traditional cooperative SWIPT networks with OMA. The work in [111] inspects a wireless powered communication IoT network scenario where multiple energy-limited nodes first harvest energy over the downlink channel and subsequently transmit information through the uplink channel. The time allocation is optimized by respectively adopting TDMA and NOMA approaches such that the spectral efficiency is maximized in both cases. By taking into account the circuit energy consumption of the IoT devices, it is shown that the NOMA-based scheme is neither spectrally efficient nor energy efficient in comparison with the TDMA-based scheme. However, this result is obtained without regard to user fairness. The communication security is studied for a MISO NOMA CR network with SWIPT in [112]. Adopting a practical non-linear EH model, an artificial-noise-aided cooperative jamming method is introduced. The results reveal that the NOMA-based technique outperforms its counterpart with OMA. In [113], with the aim of obtaining a trade-off between spectrum efficiency and energy efficiency, the authors propose to use SWIPT for hybrid precoding based mmWave massive MIMO-NOMA systems. The numerical results demonstrate that the proposed scheme can attain higher spectrum and energy efficiency as compared to a similar method based on OMA. NOMA with SWIPT is studied in [114] where an AP concurrently conveys information and power to two users. It is proven that in terms of the information-energy trade-off, NOMA outperforms OMA when the decoding energy consumption is negligible. In addition, if the decoding energy consumption is not negligible and the channel power gains of the two users are not distinct enough, OMA may perform better than NOMA. In [115], the energy efficiency optimization problem for a SWIPT NOMA system is addressed. Compared to the traditional OMA, it is shown that a significant energy efficiency gain can be attained by appropriately combining SWIPT with NOMA. The resource optimization for NOMA heterogeneous small cell networks with SWIPT is studied in [116]. A low-complexity subchannel matching algorithm is proposed based on the merits of channel conditions. In [117], the authors consider PLS for a downlink orthogonal frequency-division multiplexing (OFDM) SWIPT IoT system with multiple legitimate IoT nodes.
In order to manage multiuser communications, OFDMA and TDMA are separately adopted by using PS-SWIPT and TS-SWIPT architectures at the IoT objects, respectively. It is shown that in terms of secrecy rate, the OFDMA system with PS-SWIPT always outperforms the TDMA system with TS-SWIPT. A new resource allocation and relay selection method is proposed for cooperative multiuser multi-relay OFDMA networks with SWIPT in [118]. The authors demonstrate that the introduced approaches yield important performance gains in comparison with a semi-random resource allocation and relay selection technique. Also, the achieved performance converges to the optimal solution when the number of OFDMA subcarriers is large enough. In [119], the outage probability of a SWIPT-based cooperative NOMA system with a transmitter, a near user, and a far user is analyzed by assuming that the near user relying on PS for EH and ID processes serves as a relay to assist the far user. It is revealed that the introduced technique considerably improves the near user's performance and yields a larger system throughput. The authors of [120] inspect EH capability at the users for a hybrid combination of NOMA, TDMA, and SWIPT approaches. The numerical results show that the introduced scheme outperforms the conventional TDMA technique in terms of transmit power consumption. Energy efficiency resource allocation optimization for cooperative CR networks with SWIPT is studied in [121] by adopting TDMA for information transmission. The authors provide a number of comparisons with existing resource allocation algorithms and show that the introduced technique attains higher energy efficiency.
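The doubly near-far effect discussed above for harvest-then-transmit WPCNs can be illustrated with the following minimal sketch, in which each user's channel gain enters the uplink SNR twice (once through harvesting and once through uplink transmission); the path-loss exponent, distances, and efficiency are illustrative assumptions, and equal uplink time slots are used for simplicity.

import numpy as np

K = 4
d = np.array([5.0, 10.0, 20.0, 40.0])      # user distances [m] (assumed)
g = d ** -3.0                               # simple path-loss channel power gains
P_ap, eta, noise = 1.0, 0.6, 1e-9           # AP power [W], efficiency, noise [W]

tau0 = 0.5                                  # fraction of the frame used for downlink energy transfer
tau = (1 - tau0) / K * np.ones(K)           # equal uplink time slots

E = eta * tau0 * P_ap * g                   # energy harvested by each user (per unit frame time)
rate = tau * np.log2(1 + (E / tau) * g / noise)   # uplink throughput per user

for k in range(K):
    print(f"user {k}: d={d[k]:4.0f} m  harvested={E[k]*1e6:8.2f} uJ  rate={rate[k]:.3f} bit/s/Hz")

Because the channel gain appears in both the harvested energy and the uplink link budget, the far users end up with disproportionately low rates unless the time allocation is rebalanced, which is the motivation for the common-throughput formulation in [104].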
Another potential application of SWIPT lies in the field of heterogeneous networks (HetNets), where multiple frequency bands are used in an overlapped manner. Especially as the number of users increases and the HetNet becomes ultra-dense (as anticipated in the literature, e.g., with the joint use of microwave and millimeter wave bands [122]), the potential of SWIPT to benefit from both the power efficiency and the frequency diversity aspects will become more apparent. In line with this vision, there are some works in the literature that consider the use of SWIPT in HetNets. In [123], the authors consider a downlink resource allocation problem for a two-tier HetNet, where both TS and PS approaches are considered for SWIPT. It is shown that the co-tier interference signals can provide significant harvesting gains, and the trade-off in achievable throughput and EH rate depends on the selected harvesting approach. [124] has considered the use of power beacons in a HetNet with the goal of increasing energy efficiency. The weighted sum of harvested energy and information rate is maximized in a multiuser MISO configuration. The cross-tier and co-tier co-channel interference has been exploited to improve energy efficiency in [125] in the absence of perfect CSI. The power allocation problem is modeled as a non-cooperative game, and an iterative approach has been proposed for power optimization and subchannel allocation in a SWIPT-enabled transmission scenario. The authors in [126] have considered a heterogeneous cloud radio access network (H-CRAN) where EH is used to alleviate the power consumption of the grid. The authors have proposed a mixed integer non-linear programming problem to increase energy efficiency. The impact of the cell load on a SWIPT system in a HetNet is studied in [127]. A number of different user association and network deployment scenarios have been considered, and the SWIPT performances have been quantified numerically. A robust energy efficiency maximization problem is formulated in [128], and a practical min-max approach is proposed for iterative power allocation. A two-tier heterogeneous SWIPT network has been studied in [129] in the presence of Nakagami-m fading channels. Outage probability expressions are derived by making use of the stochastic geometry theory in the presence of a non-linear EH model. A non-linear EH model is also considered in [130] for HetNets in the presence of multi-carrier transmission. In [131], the authors have considered an IoT HetNet in a densely deployed transmission scenario where uplink and downlink transmissions are decoupled to increase the target utility. Finally, the authors in [132] consider a SWIPT enabled multi-tier network with a cooperative NOMA transmission scenario where the BS distribution follows a Poisson point process model. Expressions for outage probability and throughput are derived, and the benefits of the interference components are highlighted from an energy efficiency perspective. Overall, it is clear that as the number of users increases and more tiers are layered, the complex interference nature of SWIPT-enabled wireless networks is expected to be beneficial from an energy efficiency perspective.
IV. COMMUNICATIONS SECURITY OF MULTIUSER SWIPT SCHEMES
Due to the drastically increased number of connected wireless devices within the same or neighboring service areas of state-of-the-art and forthcoming communications applications/standards, communications secrecy has become a major and challenging issue from the implementation and scheduling perspectives. The open transmission medium of wireless communications applications leaves room for eavesdroppers to gain unauthorized access to the content of a legitimate communications link. Former approaches to this communications security problem have mainly concentrated on employing cryptographic protocols in the network layer [133]. However, detailed research on multiuser communications secrecy has revealed that security measures at higher communication layers can be costly and extremely fragile to attacks [134], [135].
On the other hand, despite being widely used in military applications, the direct jamming technique (i.e., jamming performed in order to degrade the signal quality at an intended receiver) has not been of considerable importance as a PLS approach. However, several studies on the practical usage of interleaved jamming signals have proposed and investigated the artificial-noise (AN) injection technique as another efficient means to combat wiretapping in fading environments [136], [137], [138]. AN-based approaches have been shown to achieve enhanced communications security against eavesdroppers in the presence of Gaussian noisy channels, whereas their efficiency vanishes in the case of multiuser interference channels [139]. Additionally, these approaches typically aim to insert a fractional AN component into the null space of the channel matrix of the legitimate receiver, which deteriorates only the eavesdropping capability and not the reliability at the legitimate receiver. Hence, they lead to increased signal processing complexity at the transmitter terminals, which might complicate their physical implementation [139].
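A minimal numerical sketch of the null-space AN idea described above is given below; the channel realizations are random illustrations, and the maximum ratio transmission beam used for the information signal is an assumption of the sketch, not of the cited works.

import numpy as np

rng = np.random.default_rng(3)
Nt = 4                                                           # transmit antennas
h_b = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)     # legitimate channel
h_e = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)     # eavesdropper channel

# Information beam: maximum ratio transmission towards the legitimate user.
w = h_b.conj() / np.linalg.norm(h_b)

# AN basis: orthonormal basis of the null space of h_b (right singular vectors beyond rank 1).
_, _, Vh = np.linalg.svd(h_b[np.newaxis, :])
V_null = Vh.conj().T[:, 1:]                                      # Nt x (Nt-1) null-space basis
an = V_null @ (rng.standard_normal(Nt - 1) + 1j * rng.standard_normal(Nt - 1))

print("info beam gain at the legitimate user:", abs(h_b @ w) ** 2)
print("AN power leaking to the legitimate user:", abs(h_b @ an) ** 2)   # ~0 by construction
print("AN power hitting the eavesdropper:     ", abs(h_e @ an) ** 2)    # generally > 0

The extra singular value decomposition and the (Nt - 1)-dimensional AN signal are exactly the additional transmitter-side processing that the paragraph above points to as an implementation burden.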
Alternatively, PLS has emerged as a novel and efficient tool in place of the traditional cryptographic key-based and AN-based schemes, which fail to fulfill the power-efficiency and latency requirements of forthcoming (5G+) wireless systems. By taking advantage of the stochastic characteristics of the communications channels, PLS techniques have been shown to enhance the data security of the legitimate users while successfully preventing eavesdroppers (i.e., unauthorized terminals) from intercepting the others' data over the open wireless medium [140].
A. PHYSICAL LAYER SECURITY OF MULTIUSER SCHEMES
The literature on the communications security problem of multiuser wireless networks consists of detailed investigations of miscellaneous techniques for different antenna configurations (single-antenna or multi-antenna cases), cooperation types (e.g., single-hop, cooperative, or multi-hop schemes), multiple access types (e.g., code-, frequency-, or time-division-based, orthogonal/non-orthogonal), and channel characteristics (e.g., Rayleigh, Rician, and Nakagami-m fading). In order to enhance the network security against the attacks of multiple eavesdroppers, different relay and user pair selection strategies have been proposed in [141] for multiuser cooperative relay networks with multiple AF relays in Rayleigh fading channels. The authors of [142] have investigated an AN-aided secure beamforming (BF) scheme for the multiuser MISO broadcast channel with confidential messages.
The PLS examination of two-way untrusted relay schemes within Sparse Code Multiple Access networks has been provided in [143], where the relationship between the secrecy capacity and the number of users, the power coefficients of the users, and the power allocation of cooperative interference in the two-way network is also investigated. The authors of [144] have focused on the PLS of a dual-hop uplink CR network over Nakagami-m fading channels.
By jointly considering interference power constraints and the total transmit power constraint, an extensive study on the optimization of the secrecy rate of the multiuser MIMO wiretap channel has been carried out in [145]. This study provides rigorous information and useful outcomes from the optimization perspective.
On the other hand, several studies have focused on multiuser communications schemes with non-orthogonal access. Exact expressions for the average secrecy outage probability (SOP) of a NOMA AF-relayed network, where the transmitter employs a transmit antenna selection (TAS) / maximal ratio combining (MRC) scheme through cooperation with untrusted EH relays, are provided in [146]. The investigation in [147] has focused on a NOMA-based massive machine-type communications (mMTC) uplink scenario where each terminal conveys its confidential signal to the BS while a passive eavesdropper operates to intercept the information. Opportunistic scheduling schemes employed within multiuser MIMO-NOMA systems have been examined in [148], where the multi-antenna BS communicates with multiple single-antenna cell-center and cell-edge users while multiple single-antenna eavesdroppers strive to access the users' information. Another NOMA-based investigation has been carried out in [149] on the PLS of CR networks with multiple primary and secondary users under Nakagami-m fading conditions. The study provided by [149] has focused on a multiuser massive MIMO scheme that promises improved PLS performance due to its ability to steer the transmission beams towards the direction of the intended users. Additionally, a long short-term memory (LSTM)-based channel prediction approach has been proposed with the aim of mitigating the degrading effects of imperfect CSI due to high mobility conditions [150]. Noting that NOMA networks would face substantial security risks due to the distributed nature of the employed SIC mechanism, the investigation in [151] has examined the secrecy performance of a multi-antenna cooperative NOMA scheme for both DF and AF relaying cases. Here, in addition to the conventional metric, i.e., the SOP, the authors have also focused on a different performance metric called the strictly positive secrecy capacity (SPSC). The detailed investigation in [152] has focused on the effects of external and internal eavesdropping scenarios on the PLS performance of a unified NOMA framework for the case of multiple stochastically located user terminals.
The authors of [153] have provided a PLS examination for the data of the weak user, which is considered to be intercepted by the strong user in a NOMA network. By employing the directional modulation technique [154], [155], the proposed NOMA scheme allows access to the intended symbols at the weak user while giving the strong user access to a different but valid lower-order symbol alphabet for performing SIC. Due to its distributed nature at the decoding steps, the SIC technique brings along serious security risks for the far-end users' signals in the presence of near-end users. The security designs for 5G NOMA systems are examined in [156], first for the two-user case and then extended to the multiuser case. After adjusting the order of SIC and employing a cooperative jammer, an optimal power allocation problem is formulated subject to the users' data rate and total power constraints.
B. PHYSICAL LAYER SECURITY OF MULTIUSER SWIPT SCHEMES
In systems employing wireless information and power transfer (WIPT), both information and power signals are carried by the same RF wave. Information packets are decoded by ID receivers, while electromagnetic energy is collected by power receivers and converted into electrical energy [158], [159]. However, due to the open (unguided) nature of wireless communication channels, as illustrated in Fig. 5, information signals may also reach power harvesters and unintended receivers, thus creating the potential for information leakage.
In particular, in WIPT-based systems, a power beacon assigned to wireless power transmission can be exploited to improve communication secrecy by jamming eavesdropper terminals. In addition, an information signal can also be used as a power source to increase the energy collected at the power receivers. Therefore, PLS techniques offer natural solutions to improve the security of communication systems using the WIPT approach. In classical systems that only transfer information, the design aims to increase the channel capacity offered to the main users while reducing to minimum levels the capacity of the wiretap channels towards users performing malicious / unauthorized listening; with the inclusion of WIPT techniques, the aim of maximizing the wireless power transmission efficiency also comes onto the agenda.
Accordingly, the combined optimization of the secrecy and power efficiency performances of communication systems using SWIPT techniques constitutes a dual-objective optimization problem in which communication secrecy and power transfer efficiency, which trade off against each other, are the two objectives. For this reason, it is necessary to establish a good balance between information transmission security and power transmission efficiency.
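The following minimal scalar-channel sketch illustrates this dual-objective tension: a fraction rho of the transmit power carries information to the legitimate user and the remainder is sent as an energy/AN signal that also feeds an EH user; all channel gains, the noise power, and the conversion efficiency are illustrative assumptions, and the energy/AN component is assumed to be known (and hence cancelable) at the legitimate receiver but not at the eavesdropper.

import numpy as np

P, noise, eta = 1.0, 1e-3, 0.6
g_bob, g_eve, g_eh = 1.0, 0.3, 0.5          # channel power gains (assumed)

for rho in np.linspace(0.1, 0.9, 5):
    snr_bob = rho * P * g_bob / noise
    # the energy/AN part is treated as interference only at the eavesdropper
    snr_eve = rho * P * g_eve / ((1 - rho) * P * g_eve + noise)
    secrecy = max(np.log2(1 + snr_bob) - np.log2(1 + snr_eve), 0.0)
    # only the dedicated energy/AN beam is assumed to reach the EH user
    harvested = eta * (1 - rho) * P * g_eh
    print(f"rho={rho:.1f}  secrecy={secrecy:5.2f} bit/s/Hz  harvested={harvested*1e3:6.1f} mW")

Sweeping rho exposes the trade-off directly: allocating more power to the energy/AN signal raises the harvested power (and degrades the eavesdropper) but eventually starves the legitimate link, so the two objectives must be balanced rather than optimized independently.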
A brief categorization of the research studies on the communications security of SWIPT-based schemes is provided in Table 3 with respect to their network structure, transmission & reception strategy, EH mechanisms, channel model & system imperfections, and the performance metrics evaluated. In [160], the authors project the trade-off between information transmission security and power efficiency onto two distinct problems: the first problem focuses on maximizing the secrecy rate subject to a minimum harvested energy level for each of the users, while the other problem aims to maximize the energy level collected by the users under a specified secrecy constraint. In this study, both problems are solved jointly while designing spatial BF structures for the information and power signals at a multi-antenna BS. The secrecy performance of the MISO-SWIPT scheme has been examined in [161] for the case where the CSI of the wireless channels between the BS and the information and power receivers is incorrect / uncertain, and a robust BF scheme has been designed accordingly.
Recently, thanks to the identification of new attributes of interference effects, research interest has focused on the potential for the beneficial use of interference in wireless systems rather than on avoiding or suppressing it. Accordingly, it has been recognized that traditional suppression techniques are no longer optimum, and more innovative ways of exploiting interference have come to the fore. Thus, it has been revealed that the reliability, secrecy, and achievable data rates of wireless communication systems can be improved when interference is utilized. In the modern communication concept, interference is of great importance for both information and power transmission in wireless communication systems [162].
In the literature, SWIPT techniques that exploit interference effects in multiuser systems have been studied extensively [163], [164], [165], [166]. The interference effects, although seen as detrimental to the ID process, can be beneficial for the EH process. On the other hand, a trade-off between the ID and EH processes exists for an effective implementation of the SWIPT approach in multiuser systems. BF optimization achieves a remarkable gain by collecting signals in constructive / supportive interference scenarios. The references [167] and [168] suggest a multiuser symbol-level precoding approach that exploits the joint knowledge of CSI and data to handle multiuser interference. In some specific cases, interference effects between data packets can be transformed into useful signals that have the potential to improve the SINR of the downlink signal. Using a similar setup, with the help of the knowledge of CSI and data packets at the transmitter, the effects of constructive / supportive interference have been studied in [169]. Recently, a hybrid analog / digital precoding structure and a number of BF strategies have been proposed to take advantage of the mutual coupling between transmitting antennas with the help of adjustable antenna loads [168].
In study [170], it has been shown that interference does not limit the capacity of the broadcast channel. Nevertheless, in many studies the SIC approach has been proposed and used. The use of the SIC technique in communication networks adopting the SWIPT approach has been examined in [162]. In that study, it was shown at what level each receiver in the network could perform WPT using the SIC technique without affecting the decoding process of the information packets. In addition, the results revealed that the SIC technique is considerably useful for communication systems using SWIPT. A SWIPT scheme based on opportunistic communications (OC) has been proposed in [83] for networks conducting interference alignment (IA). In this study, an OC-based antenna and user selection approach has been used in order to apply the SWIPT technique. The reference [171] has analyzed the improvements in the performance of both the EH and ID processes with the aid of constructive interference in a MISO downlink transmission scenario. In addition, it proposes a new precoder design for the SWIPT scheme that can significantly reduce the transmit power for cases with fewer transmit antennas than users. Besides, in study [172], the effects of residual self-interference (RSI) in MIMO channels have been discussed. In [157], implications of the use of SWIPT techniques in broadband wireless systems are presented, where OFDM and beam shaping techniques are combined to create a set of parallel sub-channels that simplify the resource allocation mechanism. Here, the authors propose power control mechanisms for the SWIPT approach in a multiuser multi-antenna OFDM infrastructure, including electronic circuit power constraints, and a fixed number of the subcarriers allocated to a particular user is assumed to be used for EH purposes. Moreover, the study in [21] brings to the literature the performance of the SWIPT approach in a multiuser OFDM-based system consisting of single-antenna users using a PS technique. Here, it has been shown that the system energy efficiency can be improved by using RF-based EH techniques in the interference-limited regime. The use of multi-antenna receivers has been stated to be beneficial in improving system capacity rather than system energy efficiency. The performance of SWIPT techniques in multiuser OFDM-based wireless systems has been examined in [105] for the scenario of transmission from a fixed access point to distributed user terminals. Within this study, two multi-access techniques, TDMA and OFDMA, are discussed. When using the TDMA technique, the TS approach is employed in such a layout that the information receiver of a particular user is active during the time slot scheduled for that user, while in all other time slots the receiver serving the EH process (the energy receiver) is active. However, when using the OFDMA technique, the PS approach is operated assuming that all sub-carriers use the same PS ratio in each receiver. In the SWIPT approach, in order to facilitate the EH process by utilizing the RF signal, the transmitter unit must be able to emit a strongly reinforced signal. This, in turn, can lead to high vulnerability to eavesdropping due to the high potential for data leakage if a receiver unit is malicious. In this context, a new QoS-oriented paradigm has been introduced for communication systems using the SWIPT approach.
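For readers unfamiliar with the two receiver architectures used throughout these works, the following minimal single-user sketch contrasts a time-switching (TS) and a power-splitting (PS) receiver in terms of achievable rate and average harvested power. The received power, noise power, bandwidth, and EH efficiency are illustrative assumptions, and the simplified PS model ignores the splitting of antenna noise.

```python
import numpy as np

# Assumed link parameters for a single user served by a SWIPT access point.
P_rx = 1e-3       # received RF power at the user [W]
noise = 1e-9      # baseband noise power [W]
eta = 0.5         # RF-to-DC conversion efficiency (linear EH model)
bandwidth = 1e6   # [Hz]

def time_switching(alpha):
    """alpha: fraction of the block used for information decoding."""
    rate = alpha * bandwidth * np.log2(1 + P_rx / noise)
    energy = eta * (1 - alpha) * P_rx        # average harvested power [W]
    return rate, energy

def power_splitting(rho):
    """rho: fraction of the received power routed to the information decoder."""
    rate = bandwidth * np.log2(1 + rho * P_rx / noise)
    energy = eta * (1 - rho) * P_rx
    return rate, energy

for x in (0.2, 0.5, 0.8):
    r_ts, e_ts = time_switching(x)
    r_ps, e_ps = power_splitting(x)
    print(f"split={x:.1f}  TS: {r_ts/1e6:5.1f} Mbit/s, {e_ts*1e6:3.0f} uW   "
          f"PS: {r_ps/1e6:5.1f} Mbit/s, {e_ps*1e6:3.0f} uW")
```

Because the rate is logarithmic in SNR, PS typically sacrifices less rate than TS for the same harvested power, which is why many of the multiuser OFDMA designs above adopt PS while TDMA schedules naturally pair with TS.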
In particular, privacy and authorization have increasingly been the main fields of study for wireless communication technologies. Besides, the concept of physical layer privacy has been introduced as a new layer of defense in order to provide a considerable level of communication secrecy without cryptography.
Cooperative relay systems that provide secure transmission through physical-layer privacy/security enhancements have attracted great attention [173], [174]. In studies [175], [176], the secrecy rate has been maximized for AF-type relays using SWIPT. By using a dedicated receiver mode (e.g., a scheme where one of the receivers conducts a highly confidential ID process while the others participate in the EH process), SWIPT-assisted PLS has been investigated in [177] and [178]. In study [160], AN has been used both to confuse the eavesdropping system, thereby increasing information confidentiality, and as the primary power source for the (non-ID-oriented) receivers that perform EH. This method is considered unsuitable for co-located antennas where users simultaneously receive information packets and conduct EH. In the references [179] and [180], the total power received by all users was optimized so as to meet the privacy requirements of each user scheduled with the OFDMA access method. The secrecy performance of a SWIPT-based single-input multiple-output (SIMO) system using MRC has been analyzed via the SOP and secrecy capacity criteria in [181]. A similar MISO-based system configuration using TAS has been examined in [182] for the case of imperfect CSI.
The secrecy probability performance of a SWIPT scheme supported by a large number of multiple-antenna relays has been examined in [183] for the case in which imperfect CSI is available at the relays and non-linear EH equipment is employed. The authors provide a detailed investigation of selecting the relay that maximizes the instantaneous secrecy capacity when the relays operate in the DF data transmission mode. For the case of non-ideal (erroneous) channel estimation, an energy-efficient resource allocation mechanism for multiuser massive MIMO systems based on the SWIPT approach is proposed in [184]. The PLS performance of a multiuser downlink OFDM-IoT system operating in the presence of a large number of eavesdropping devices while communicating with a large number of legitimate IoT devices has been studied in [117]. For the scenario in which a large number of users are scheduled using the OFDMA and TDMA access techniques, and assuming that the IoT devices can jam the wiretapper systems using power signals based on the SWIPT technique, the secrecy performance has been analyzed through extensive simulation studies.
The communications security of a multiuser MISO downlink scheme employing SWIPT has been investigated in [20] by considering passive and potential eavesdroppers. Here, the receivers employ the PS scheme in order to decode information and harvest energy simultaneously. A resource allocation algorithm that aims to minimize the total transmit power for EH receivers by combining the usage of AN and energy signals has been proposed. The study has shown that the proposed algorithm can provide secure communications and efficient energy transfer. The application of the SWIPT scheme within a MIMO downlink scenario has been examined in [185] via Monte Carlo simulations by considering the case in which the information aimed at the legitimate receiver might be wiretapped by the intruding EH receivers (i.e., the potential eavesdroppers). For the case in which the CSI of the EH receivers is not perfectly known at the transmitter, the worst-case communications secrecy is optimized by jointly designing the precoding matrix, the AN covariance matrix, and the PS ratio. A transmit BF power minimization scheme has been proposed in [186] for a multiuser MIMO SWIPT scheme where the receivers employ PS. The authors have extended their investigation to robust secrecy transmission designs by incorporating deterministic channel uncertainties. For both perfect and imperfect CSI cases, the study has introduced a sequential parametric convex approximation (SPCA)-based iterative algorithm and illustrated the average transmit power of the information signal for different target secrecy rates. The authors of [187] have introduced a SWIPT IoT network that employs the selection-combining (SC) technique at two legitimate two-antenna user terminals in Rayleigh fading channels. By assuming that the network suffers from non-collaborating passive single-antenna eavesdropping nodes, the zero-forcing (ZF) BF scheme has been employed at the jamming node with the aim of confusing the eavesdroppers. After deriving closed-form expressions for the SOP and secrecy throughput, the investigation has indicated that increasing the EH conversion efficiency has a positive impact on the system's secrecy performance. The secrecy rate region of wiretap interference channels with multi-antenna passive eavesdroppers has been studied in [188] under the EH constraints of PS receivers. Lower bounds on the secure communication rate have been derived without imposing any limitation on the eavesdropper processing. The authors have discovered that the Pareto boundary of the secrecy rate region might be achieved via a judicious adjustment of the transmit power and PS coefficients. An optimization procedure for maximizing the minimum downlink secrecy throughput of a multiuser OFDMA network with SWIPT, composed of a BS and several receiving users, has been carried out in [189] by considering the subcarrier allocation and the PS ratios as inputs of the objective function. The secrecy rate of the weakest user is maximized for the case in which every user can wiretap all other users' subcarriers. In order to jointly allocate the subcarriers and determine the PS ratios of all users, particle swarm optimization has been performed. The results of the investigation on the average total instantaneous secrecy rate of the weakest user indicate that the total transmit power should be high enough to overcome the minimum operating energy.
For total transmit power values beyond a certain threshold, however, the average total instantaneous secrecy rate has been shown to saturate.
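The SOP and secrecy throughput metrics used throughout these studies can be estimated for a generic single-antenna wiretap link with a short Monte Carlo sketch, as shown below. The average SNRs and target secrecy rate are assumed values; the sketch does not reproduce the closed-form expressions of [187] or [189].

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000

# Assumed average SNRs of the legitimate and eavesdropper links, and a target rate.
snr_legit_db, snr_eve_db = 15.0, 5.0
target_secrecy_rate = 1.0   # [bit/s/Hz]

# Rayleigh fading -> exponentially distributed instantaneous SNRs.
snr_legit = 10 ** (snr_legit_db / 10) * rng.exponential(size=n_samples)
snr_eve = 10 ** (snr_eve_db / 10) * rng.exponential(size=n_samples)

# Instantaneous secrecy capacity of the Gaussian wiretap channel.
c_sec = np.maximum(np.log2(1 + snr_legit) - np.log2(1 + snr_eve), 0.0)

sop = np.mean(c_sec < target_secrecy_rate)            # secrecy outage probability
secrecy_throughput = target_secrecy_rate * (1 - sop)  # [bit/s/Hz]
print(f"SOP ~ {sop:.3f},  secrecy throughput ~ {secrecy_throughput:.2f} bit/s/Hz")
```

Sweeping the legitimate-link SNR in such a sketch reproduces the qualitative behaviour reported above: the secrecy throughput first grows with transmit power and then flattens once the eavesdropper link, rather than the power budget, becomes the limiting factor.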
Recently, HetNets have been considered as novel network structures that promise enhancements in spatial resource reuse and in the users' QoS by providing coverage via smaller cell deployments [190]. The wiretapping scenario of a single eavesdropper in a heterogeneous macro-pico network has been studied in [191], where the sum rate of the macrocell users is optimized from both the subcarrier and power allocation perspectives. A secure HetNet scheme has been proposed in [192] by considering an energy-efficiency-based resource allocation problem. The investigation provided in [193] focuses on the secure relay beamforming problem for SWIPT in an AF two-way relay network that employs a multi-antenna relay node in order to serve single-antenna end users. By considering both the cases in which the CSI of the eavesdropper channels is available and unavailable, the achievable secrecy sum rate is optimized under transmit power and energy harvesting constraints. The authors of [194] have provided an investigation of the communications security of two-way untrusted AF relay networks with SWIPT. With the objective of maximizing the secrecy rate and the SEE, PS- and TS-based SWIPT relaying strategies have been examined for single-antenna user and relay nodes under Rayleigh fading channel conditions. In [195], an AF-based two-way MIMO relay-assisted CR NOMA network that employs the SWIPT technology in order to enhance network energy efficiency has been analyzed. In this study, the authors have focused on a scenario in which a pair of primary users and two pairs of secondary users (SUs) exchange information with the help of a MIMO two-way relay. Here, the edge SU of each pair is considered an untrusted node that tries to wiretap the central SU's information. By jointly optimizing the power allocation at all users, the PS factor and the relay beamforming under QoS, EH and transmit power constraints, the sum achievable secrecy rate (ACSR) metric is maximized for communications security purposes. The authors of [196] have focused on the robust secrecy energy efficiency (SEE) optimization in a wirelessly-powered HetNet by considering ellipsoid-bounded CSI uncertainties. Here, the network is considered to consist of a macrocell BS that covers multiple femtocell BSs. The macrocell BS serves multiple users in the presence of a wiretapping multiple-antenna user, while each femtocell BS serves a pair of multi-antenna ID and EH receivers, where the EH receiver attempts to wiretap the information of the ID receiver in the same femtocell. In order to enhance the secrecy performance, the macrocell and femtocell BSs inject AN into the downlink signal, and the problem of maximizing the SEE is formulated as a cross-tier multi-cell AN-aided transmit BF design. The reference [197] has studied the secure downlink transmission of a massive MIMO SWIPT system. A BS with a massive number of antennas conveys both the energy and the information signals to the legitimate users in the presence of an active eavesdropper. The legitimate and malicious users all perform a PS approach with the aim of simultaneously harvesting energy and decoding information. With the effective and precise beam-steering advantage of massive MIMO, the BS is expected to focus energy on the intended users while preventing leakage toward the wiretappers.
The study has focused on the ACSR and provided a closed-form lower bound. In order to ensure secure multiuser communications, the authors have carried out an optimization process to maximize the ACSR under constraints on the minimum energy harvested by the user and the maximum energy harvested by the wiretapper. The outcomes of the study have exhibited the effectiveness of massive MIMO in providing PLS in SWIPT systems. The system model examined in [197] has also been the focus of reference [198], where the authors have investigated the robust joint design of the hybrid analog-to-digital (A/D) BF matrices and the artificial redundant signal (ARS) covariance matrix at the BS, while the main purpose is shifted to maximizing the worst-case sum secrecy rate of the ID users under a transmit power constraint, a non-linear EH constraint and a unit-modulus constraint on the entries of the analog BF matrix. By using the penalty-concave convex procedure within the optimization problem, the study has presented simulation results of the proposed design and has shown that the robust joint hybrid BF design achieves considerable improvements when compared to the conventional hybrid BF benchmark.
Studies investigating SWIPT schemes with NOMA at mmWave frequencies have been provided in [199], [200], and [201]. Motivated by the potential of millimeter wave (mmWave) systems in terms of abundant available bandwidth and line-of-sight probability, the analysis provided in [199] has developed a framework to examine the security, reliability and energy coverage performance of downlink mmWave SWIPT systems consisting of UAV networks employing both NOMA and OMA. Under the consideration that a UAV serves two types of authorized IoT devices in the presence of multiple passive eavesdroppers, the directional modulation scheme has been employed in order to enhance the PLS performance. After deriving the analytical expressions for the COP, SOP, and effective secrecy throughput (EST) of the users with a high-rate security requirement (HRSR) under the NOMA and OMA schemes using stochastic geometry, the active antenna selection of directional modulation is optimized using an adaptive genetic simulated annealing (AGSA) algorithm to further improve the secrecy performance based on the obtained analytical results. The authors have also obtained closed-form expressions for the COP and the energy-information coverage probability (EICP) of the energy-constrained users with a low-rate requirement (ECLR) under the NOMA and OMA schemes. Useful insights into the effect of various parameters on the trade-off between reliability and security for the HRSR user, and on the trade-off between reliability and energy coverage for the ECLR user, have been provided. Moreover, the analysis has shown that the EST of the NOMA scheme outperforms OMA at low transmit power and high codeword transmission rate.
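The NOMA-versus-OMA comparisons that recur in these works rest on a simple rate model: in downlink NOMA the weak user treats the strong user's signal as noise while the strong user removes the weak user's signal via SIC, whereas an OMA baseline orthogonalizes the users in time. The sketch below illustrates this with assumed channel gains and power-allocation coefficients; it is not tied to any particular reference.

```python
import numpy as np

# Assumed downlink parameters: one BS, a "near" (strong) and a "far" (weak) user.
P = 1.0                   # total transmit power [W]
g_near, g_far = 1.0, 0.1  # channel power gains
noise = 1e-2
a_far, a_near = 0.8, 0.2  # NOMA power-allocation coefficients (more power to the weak user)

# NOMA: the far user treats the near user's signal as noise;
# the near user removes the far user's signal via SIC before decoding its own.
r_far_noma = np.log2(1 + a_far * P * g_far / (a_near * P * g_far + noise))
r_near_noma = np.log2(1 + a_near * P * g_near / noise)

# OMA (TDMA baseline): each user gets half of the time with full power.
r_far_oma = 0.5 * np.log2(1 + P * g_far / noise)
r_near_oma = 0.5 * np.log2(1 + P * g_near / noise)

print(f"NOMA: near={r_near_noma:.2f}, far={r_far_noma:.2f}, sum={r_near_noma + r_far_noma:.2f} bit/s/Hz")
print(f"OMA : near={r_near_oma:.2f}, far={r_far_oma:.2f}, sum={r_near_oma + r_far_oma:.2f} bit/s/Hz")
```

With these illustrative numbers the NOMA sum rate exceeds the OMA baseline, which is the capacity-side motivation for pairing NOMA with SWIPT in the secrecy-oriented designs above.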
The EICP metric of IoT systems employing relay selection has been analyzed in [200] in the presence of multiple eavesdroppers. The authors of [201] have derived the average SOP and the probability of strictly positive secrecy capacity of a UAV-assisted satellite-terrestrial SWIPT scheme by considering shadowed Rician fading and Nakagami-m fading in the satellite-to-UAV and UAV-to-user links, respectively. In [202], the secrecy rate optimization is performed for IoT systems employing PS-based SWIPT schemes with the help of AN-assisted transmission for both perfect and imperfect CSI conditions. The average SOP performance of a SWIPT scheme has been examined in [203] for the cooperative NOMA scheme with min-max user selection. In [204], the SEE metric of a relayed communications scenario constructed by SWIPT-based IoT nodes in the presence of cooperative eavesdroppers has been investigated. The research in [205] focuses on the coexistence of primary satellite networks in the mmWave bands and secondary multiuser terrestrial networks with SWIPT. The authors have investigated robust secure BF and PS for the case of a BS equipped with a uniform planar antenna array and a secondary user employing a PS-type receiver. By relying on an angle-based CSI error model, the minimum of the worst-case secrecy rates among all secondary users is maximized while the QoS constraints of each secondary user, the interference limit of a primary satellite earth station, and the power consumption limit of the BS are satisfied. The numerical results of this extensive study have provided comparisons with existing approaches and useful insights into different robust designs. Another study on the robust BF design for multiuser MIMO secrecy IoT networks with SWIPT has been presented in [206]. Here, the user terminals harvest energy via a non-linear EH model with the help of a cooperative jammer. In order to facilitate efficient wireless energy transfer and secure transmission, the BS employs AN. An optimization process has been performed in order to maximize the minimum harvested energy among the users subject to a secrecy rate constraint and a total transmit power constraint in the presence of channel estimation errors. The authors of [207] have focused on the secure SWIPT scenario in cell-free massive MIMO systems where a massive number of randomly located access points (APs) are considered. Here, the Poisson-distributed APs serve multiple ID receivers and an information-untrusted active EH receiver. The EH and wiretapping processes are assumed to be performed through spatially separate antennas; hence, the dual-antenna active EH receiver dedicates one antenna to EH and the other to eavesdropping on the information. The analysis has provided closed-form expressions for the average harvested energy (AHE) and a tight lower bound on the ergodic secrecy rate (ESR). The lower bound on the ESR metric takes into account the information users' knowledge attained by downlink effective precoded-channel training. The comparison provided by the study has shown that cell-free massive MIMO outperforms the co-located massive MIMO case over the interval in which the AHE constraint is low. Besides, cell-free massive MIMO has been found to be more immune to an increase in the active eavesdropping power than the co-located case.
The extensive study in [208] has stated that the employment of the SWIPT approach promises to prolong the battery life of mobile terminals and to enhance the overall system energy efficiency, especially in NOMA settings, which enable the EH receivers to exploit the inter-user interference. The achievable data rate of the downlink multi-carrier NOMA network supported by PS-based SWIPT has been optimized by means of deep-learning-based approaches while satisfying the limitations on the available power budget and the EH requirement. The simulation results of the examination have demonstrated the superiority of the combination of PS-based SWIPT with multi-carrier NOMA over SWIPT-aided single-carrier NOMA and SWIPT-aided OMA.
By incorporating the concept of IRSs, also termed reconfigurable intelligent surfaces (RIS) and one of the emerging wireless communication technologies, into SWIPT-based schemes, several studies have concentrated on the communications security performance [209], [210], [211], [212]. The secrecy analysis of a SWIPT scheme that incorporates the assistance of IRSs in the presence of an eavesdropper has been provided in [209]. The computational complexity burden is reduced with the help of a deep-learning (DL)-based optimization approach. The authors of [210] have considered a joint transmit/reflect beamforming and power-splitting SWIPT scheme where the transmit sum energy of the IRS-aided secondary transmitter is minimized subject to the QoS requirements of the secondary receivers and the interference constraints of the primary receivers. In [211], a two-phase communication scheme under passive eavesdropping in Rician fading channels has been examined, with the IRS also considered as a tool to convey the information signals to the legitimate user. An optimization procedure focusing on the secrecy rate of a beamforming scheme enhanced by IRS employment has been carried out in [212].
The literature related to multiuser SWIPT schemes also includes several studies that focus on DL-based optimization approaches, which promise to efficiently reduce the computational complexity burden and the processing time [209], [213]. The secrecy rate of PS-based SWIPT schemes has been investigated in [209] by utilizing the feasible point pursuit (FPP) and successive convex approximation (SCA) methods. Additionally, a DL-based solution has been introduced to solve the optimization problem. The authors of [209] have extended their DL-based analyses through the assistance of IRSs and provided the outcomes in [213].
V. UAV-ENABLED SWIPT SYSTEMS
UAVs, owing to their prominent attributes such as maneuverability, adaptive altitude, low cost, and deployment flexibility, play a paramount role in establishing and/or enhancing ubiquitous and seamless connectivity of communication devices as well as in improving the capacity of future wireless networks. Furthermore, UAV channel characteristics differ significantly from terrestrial ones, particularly by offering a higher chance of line-of-sight (LoS) connectivity to ground users. The reader may refer to [214], [215], [216] for more details about air-to-ground channel models.
UAV-enabled SWIPT systems are vulnerable to malicious attacks because of the higher chance of LoS transmission links and the broadcast nature of the wireless channel, and hence need secure designs. This issue becomes even more pressing as smart wireless devices become cheaper and more accessible to malicious users or attackers.
A. UAV-ENABLED COOPERATIVE SWIPT NETWORKS
Cooperative communication can basically be described as the collaboration of the devices in a communication network during information transmission in order to ensure the efficient use of the available network resources. UAV-assisted cooperative communication, with its inherent attributes that enable a higher chance of LoS links to ground users, reveals great potential, especially for the cases where the channel condition of the direct link between the source and the destination is poor and unable to support transmission with acceptable performance. A conceptual UAV-enabled cooperative SWIPT communication system is illustrated in Fig. 6. The current literature has reported several UAV-aided cooperative SWIPT studies [217], [218], [219], [220], [221], [222], [223], [224], [225], [226], [227], [228], [229], which are summarized in Table 4. In [217] and [218], a UAV-assisted cooperative communication network with PS SWIPT has been investigated, in which the UAV serves as a mobile relay for both the AF and DF protocols, and its transmission capability is powered by the radio signal from the source. The authors of these studies examined the cooperative throughput maximization problem by jointly optimizing the UAV's decision profile, power profile and trajectory. These works have been revised for the time-sharing mechanism in [219] and [220]. The articles [221], [222] examined an FD-enabled UAV-assisted cooperative communication system, where the UAV is used as an aerial relay with the DF protocol. Here, the transmission capability of the UAV is powered exclusively by a dedicated WPT link and a radio signal transmitted from the source via the TS SWIPT policy. Self-interference at the UAV is also utilized as an EH source, since the UAV in [221] and [222] is equipped with two antennas to ensure simultaneous reception and transmission in FD mode. The authors revised these studies by considering Nakagami-m air-to-ground channels in [223]. The paper [226] presented a cache-assisted UAV-enabled cooperative network with a dynamic TS SWIPT mechanism, in which the UAV, as an intermediate mobile relay node, ensures backscatter communication through a backscatter circuit. Apart from these studies [217], [218], [219], [220], [221], [222], [223], [226], which consider a source-destination pair and a UAV relay with SWIPT technology, the literature provides a few studies that envisage multiuser [227] and multi-relay [228] cooperative communication systems. The authors of [227] introduced a UAV-enabled cooperative network that includes a multi-antenna ground BS, an HD-enabled UAV with the AF relay protocol, and multiple single-antenna ground users with PS SWIPT. This work strives to maximize the sum harvested energy of the users. In [228], a UAV-aided multi-relaying system has been investigated, where IRSs deployed on two UAVs and a building act as relay nodes, and a ground (IoT) user conducts the PS SWIPT policy. Here, the maximization of the average achievable rate is examined.
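A stripped-down version of the wireless-powered relay idea underlying these works is sketched below: a DF UAV relay splits its received signal between decoding and harvesting, and spends the harvested energy on the second hop. The channel gains, noise power, and EH efficiency are assumed values, and the sketch deliberately ignores the trajectory and scheduling optimization carried out in [217], [218], [219], [220].

```python
import numpy as np

# Assumed two-hop link: source -> UAV relay (PS-SWIPT, DF) -> destination.
P_s = 1.0                # source transmit power [W]
g_sr, g_rd = 0.05, 0.08  # source->relay and relay->destination channel power gains
noise = 1e-6
eta = 0.6                # RF-to-DC conversion efficiency at the UAV

def df_relay_rate(rho):
    """rho: fraction of the received power the UAV routes to information decoding."""
    # First hop: the relay decodes using the rho-branch of the received signal.
    r_hop1 = 0.5 * np.log2(1 + rho * P_s * g_sr / noise)
    # The (1-rho)-branch is harvested and fully spent on the second hop.
    p_relay = eta * (1 - rho) * P_s * g_sr
    r_hop2 = 0.5 * np.log2(1 + p_relay * g_rd / noise)
    return min(r_hop1, r_hop2)   # DF end-to-end rate is limited by the weaker hop

best = max(np.linspace(0.05, 0.95, 91), key=df_relay_rate)
print(f"best PS ratio ~ {best:.2f}, end-to-end rate ~ {df_relay_rate(best):.2f} bit/s/Hz")
```

The optimal split balances the two hops; in the UAV setting this balance also shifts with the UAV's position, which is why trajectory, power profile and PS/TS decisions are optimized jointly in the cited studies.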
Motivated by the importance of security in UAV-assisted communication systems, the article [224] envisaged a cooperative secure and energy-efficient transmission scheme in which a source delivers confidential information to a destination through an AF-based UAV relay in the presence of a ground eavesdropper. This work considers the PS SWIPT mechanism and the Rician fading channel model. In a similar work [225], a mobile-relay UAV-assisted secure SWIPT network has been investigated in the presence of a legitimate source-destination pair and multiple eavesdroppers whose locations are imperfectly known. The UAV utilizes both PS and TS schemes. Here, the devices are equipped with a single antenna, apart from the FD destination, which has two antennas, one for signal transmission and the other for signal reception. The article [229] presented a cooperative secure communication scheme in a UAV-aided NOMA network that embodies a source, an artificial jammer, an AF-enabled mobile UAV relay with PS SWIPT, and two destinations.
A point worth mentioning here is that the devices, i.e., the source (except in [227], [228]), the destination (except in [228]), and the UAV (except as pointed out above), have a single antenna, and thus these studies can be viewed as single-antenna single-user (except [227], [228]) communication systems.
In [230], trajectory design and resource allocation have been studied for a UAV-assisted multicasting SWIPT system by maximizing the minimum average achievable rate, where a single UAV acting as an aerial BS simultaneously sends common information and wireless power to multiple ground users (with a PS receiver architecture). A similar work in [239] has been conducted with a dynamic PS policy. Along similar lines, the article [231] differs by grouping the ground users into a single information receiver and multiple energy receivers, while [240] groups them into information-only receivers and joint information-and-power receivers. The papers [232], [241] investigated (joint) information and/or energy coverage performances by considering a UAV-enabled SWIPT system that includes multiple UAVs as aerial BSs and multiple ground users adopting either the PS or the TS scheme. The authors revised these works by deploying a single directional antenna on each UAV in [242]. These studies consider both linear and non-linear EH models.
Security is the main challenge in all UAV-assisted systems, as mentioned earlier, and the challenge grows further when multicasting is considered. In [233], a UAV-enabled secure SWIPT NOMA network has been studied in which a single multi-antenna UAV acting as an aerial BS sends both wireless energy and information to multiple passive single-antenna ground users. These passive users adopt a PS receiver architecture. This work proposed a joint precoding optimization scheme applying a non-linear EH model that guarantees security via artificial jamming, and verified the effectiveness of the corresponding scheme via simulations.
There are several UAV-enabled multicasting SWIPT studies that customize the ground users as IoT devices [234], [235], [236], [237], [238], [243], [244], [245]. (We remind the reader that even if IoT is not specifically emphasized in these studies, they can be adapted to IoT in terms of their network structure.) The authors of [234] handled the trajectory design and resource allocation problems for a UAV-aided SWIPT IoT network with PS. This work is extended to multiple UAVs in [235]. The article [236] examined a single-UAV-enabled multiuser SWIPT IoT network, in which the UAV delivers power and information covering both common and private streams. In [237], SWIPT in a UAV-assisted cellular IoT network has been studied. Apart from these works, the available literature presents a few SWIPT studies in IoT networks with a focus on device-to-device (D2D) communication [243], [244], [245]. In [243] and [244], the energy efficiency optimization issue has been tackled for a UAV-enabled SWIPT system in an industrial IoT network composed of an aerial UAV BS with an antenna array and multiple single-antenna D2D transmitter-receiver pairs. Here, the UAV transmits information and power to the D2D transmitters utilizing NOMA, and the transmitters send information to the D2D receivers by using the energy harvested via PS SWIPT. The authors of [245] envisaged a UAV-aided hybrid communication scenario that consists of a cellular user and an IoT network including a low-power IoT hub and a sensor node. Here, the UAV BS exploits NOMA to serve the cellular user and the IoT hub, while the hub exploits SWIPT to serve the sensor node.
We note that in the aforementioned works the UAV(s) are considered as aerial BS(s) and only the ground users exploit the SWIPT technology, and that all devices, except the UAV in [233], [243], and [244], are equipped with a single antenna.
Unlike the aforementioned studies, by considering UAVs as users, the authors of [246], [247] introduced a federated-learning-based joint scheduling and resource allocation solution for SWIPT-enabled micro-UAV swarm networks.
C. UAV-ENABLED mmWAVE SWIPT NETWORKS
Millimeter-wave bands are a key enabler for revolutionary services thanks to the wide swaths of unused and unexplored spectrum. Accordingly, the available literature has presented a few mmWave UAV-enabled SWIPT studies that can be basically grouped into single-user communication [248], [249], [250], [253] and multiuser communication [199], [251], [252]. These studies are summarized in Table 6. The related work [253] has studied an FD-enabled, AF-based, UAV-aided cooperative SWIPT communication system over the fluctuating two-ray fading model, which characterizes wireless channels well over a wide range of frequencies, including millimeter waves. The authors of [250] investigated a downlink secure mmWave UAV-aided SWIPT system, where the UAV acting as an aerial BS exploits a directional modulation scheme based on a random frequency diverse array.
The article [199] considers a mmWave SWIPT network with an aerial UAV BS serving two authorized users with different communication requirements (i.e., one requiring high-rate private information, the other being energy-constrained with a low-rate requirement) in the presence of multiple ground eavesdroppers. Herein, the security, reliability, and energy coverage performance have been analyzed for the NOMA and OMA schemes. We notice that all devices in these works, apart only from the UAV of [199], [250], are equipped with a single antenna. The authors of [251] studied a downlink mmWave NOMA SWIPT network in which a UAV aerial BS simultaneously delivers wireless information and energy to multiple single-antenna ID devices and EH devices. These devices are clustered via two different unsupervised learning approaches. In [252], joint downlink SWIPT and uplink information transmission in a UAV-aided mmWave cellular network has been studied. The users' locations in this work are modeled by a Poisson cluster process, and the results show that as the cluster size becomes smaller, the network performance is enhanced.
Although a number of UAV-enabled SWIPT studies have been reported in the current literature, as mentioned above, there are still many issues to be researched and addressed. These include multi-antenna systems, the consideration of distinct (or even hybrid) power sources, and the utilization of more realistic channel characteristics.
VI. EXPERIMENTAL STUDIES ON SWIPT
Experimental activities on EH started with the extensive experiments of Nikola Tesla in the 1890s [256]. Ever since then, the remote energy transfer topic has been investigated, and currently there is an extensive literature on EH and the associated test-bed-based studies. The survey paper [257] provides a detailed description of the components of EH circuits. The energy efficiencies of harvesting modules have also been listed recently in [258]. Although closely related, the present survey aims to highlight the experimental activities on EH combined with information transfer. As one of the most comprehensive studies on SWIPT systems, Section 5 of [259] briefly discusses the associated testing activities. Below we provide a more detailed list of the research works. There are three different implementation approaches for SWIPT systems: TS, PS [53], and the recently proposed frequency splitting (FS) [260]. Extensive experimental analyses of the TS and PS architectures are given in [261] along with the signal designs. Experimentation activities started with the TS approach. [262] is a leading study that demonstrated the operation of a battery-free wireless sensor network in the microwave band. An output DC power of 50 mW was observed with an efficiency of 49.7%, highlighting the potential of SWIPT systems. The transmission of power is considered from one node to another, and communication is enabled in the reverse direction. The communication performance of the SWIPT system is investigated through experimentation in [257]. The received signal strength, the packet loss rate, and the harvested power have been quantified.
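The RF-to-DC conversion stage measured in these test-beds is usually modeled either linearly (a constant efficiency, as in the 49.7% figure above) or with a saturating non-linear curve that better matches practical rectifiers. The sketch below contrasts the two; the saturation power and the logistic-curve parameters are illustrative assumptions, not fitted to any of the cited prototypes.

```python
import numpy as np

def harvested_linear(p_rf, eta=0.497):
    """Linear EH model: constant RF-to-DC conversion efficiency."""
    return eta * p_rf

def harvested_nonlinear(p_rf, p_sat=0.024, a=150.0, b=0.014):
    """Logistic-type saturation model often used for practical rectifiers.
    p_sat, a, b are illustrative curve parameters (powers in Watts)."""
    logistic = 1.0 / (1.0 + np.exp(-a * (p_rf - b)))
    omega = 1.0 / (1.0 + np.exp(a * b))          # removes the zero-input offset
    return p_sat * (logistic - omega) / (1.0 - omega)

for p_in_mw in (1, 10, 50, 100, 300):
    p_in = p_in_mw / 1e3
    print(f"P_rf={p_in_mw:4d} mW  linear={harvested_linear(p_in)*1e3:6.1f} mW  "
          f"non-linear={harvested_nonlinear(p_in)*1e3:6.1f} mW")
```

The non-linear model tracks the linear one at low input power but saturates at high power, which is why several of the schemes surveyed earlier explicitly adopt non-linear EH constraints in their optimizations.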
From a different perspective, a digital receiver is proposed for SWIPT in [263], where the nodes transmit data by counting the number of activations of the energy harvesters' control signals; a data rate of 400 bps has been achieved. The use of inductive coupling has also been considered for short-range SWIPT systems in [264] for FSK-modulated signals. The unwanted harmonics are exploited in [265], where a diplexer-based six-port receiver is designed and tested using the proposed multiport power recycling method. [14] proposes a new approach that transmits wireless power using an unmodulated high-power continuous wave and information using a small modulated signal. On the receiver side, the signal is rectified and then split between the power and information branches. The FS approach presented in [260] makes use of a circulator and filters for the receiver design. Power and data symbols are split in the frequency domain, and a higher performance is observed in terms of the harvested power and SNR.
The majority of SWIPT test-beds rely on physical layer prototyping. As the only counterexample, [266] considers a full protocol stack over Zigbee that is used with directional antennas. This is a promising study considering the future products with SWIPT features.
A. WAVEFORM DESIGN FOR SWIPT SYSTEMS
The impact of a high peak-to-average power ratio on the EH efficiency was first investigated in [267], where the authors consider OFDM, white noise and chaotic waveforms. A high peak-to-average power ratio is shown to provide a higher RF-DC efficiency. The authors of [268] considered the use of QAM- and PSK-modulated multisine signals in the SWIPT system while also accounting for the power and data transfer efficiency and the transmitter distortion. It has been shown that PSK-modulated multisine signals are more resistant to distortion than QAM-modulated signals. [269] introduces FSK-based modulation schemes to improve the wireless power transfer efficiency; the transmission of an 18 Mbps data rate has been experimentally verified. [270] considers the design of an integrated rectifier receiver with the goal of reducing the associated power consumption. Magnitude ratio modulation, ratio phase modulation, and ratio amplitude phase modulation techniques are proposed and tested. Given the impact of the used modulation on the performance, the authors of [271] implemented a system that adaptively selects the communication mode between single-tone and multi-tone signals according to the duty ratio of EH and ID.
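The peak-to-average power ratio (PAPR) advantage of multisine waveforms can be verified with a few lines of code, as sketched below. The tone count, spacing and sampling rate are arbitrary illustrative choices; the point is only that N in-phase tones raise the PAPR by roughly 10 log10(N) dB over a single tone.

```python
import numpy as np

fs = 1_000_000                 # sampling rate [Hz]
t = np.arange(0, 1e-3, 1 / fs)

def papr_db(x):
    """Peak-to-average power ratio of a real waveform, in dB."""
    p = x ** 2
    return 10 * np.log10(p.max() / p.mean())

# Single tone versus an N-tone multisine normalised to the same average power.
single_tone = np.cos(2 * np.pi * 100e3 * t)
n_tones = 8
tones = [np.cos(2 * np.pi * (100e3 + k * 5e3) * t) for k in range(n_tones)]
multisine = sum(tones) / np.sqrt(n_tones)

print(f"single tone PAPR        ~ {papr_db(single_tone):.1f} dB")   # ~3 dB
print(f"{n_tones}-tone multisine PAPR ~ {papr_db(multisine):.1f} dB")  # ~3 + 10*log10(N) dB
```

The large periodic peaks of the multisine drive the rectifying diode into its efficient operating region more often, which is the intuition behind the higher RF-DC efficiency reported in [267].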
A class-F rectifier-based scheme that supports low input power levels and a wide bandwidth is designed for the 2.4 GHz band in [272], where the authors reached a maximal RF-DC conversion efficiency of 54.3%. Software-defined-radio-based measurements are presented in [273], considering the use of rectangular pulses and cyclic-prefix OFDM. A biased amplitude-modulated OFDM waveform is proposed and tested for low-power SWIPT receivers in [274]. An extension to wearable antennas is given in [275], where the authors present a textile antenna for dual-band SWIPT with a 2.4 GHz off-body communications antenna and a sub-1 GHz (785-875 MHz) broad-beam rectenna. This work highlights the potential of SWIPT systems for a wide variety of sensor networks.
B. MULTI-ANTENNA TEST SYSTEMS
A multi-antenna system has been designed and implemented in [276]. Power transmission and data communication have been carried out independently, similar to a time-switching scenario. The impact of the sleep cycles of the sensor nodes has also been investigated. Through measurements, it has been observed that a distance of 2.5 m can be supported. [277] makes use of a multi-antenna test-bed and measures the performance of the joint beam-splitting and energy-neutral control method proposed by the authors. The coupling between the duty cycle and the performance of the SWIPT system has also been highlighted. [278] describes the prototype of a wirelessly powered sensor network with an antenna array of 64 elements that operates up to 50 m, highlighting the practical relevance of the investigated systems.
Overall, although there are only a limited number of studies considering the prototyping of SWIPT systems, the promise of battery-free sensor networks is appealing both to researchers and to industry. Given this high potential, it is expected that in the future the practical aspects will be examined more thoroughly and that advanced antenna techniques, such as massive MIMO, will be investigated further in system deployments.
VII. CHALLENGES, OPPORTUNITIES, AND FUTURE RESEARCH DIRECTIONS
Since 5G+ communication systems and standards promise to serve a vastly increased number of connected terminals, a complicated computational burden related to the optimization of power allocation, subcarrier allocation, and communications security metrics will be encountered, especially when multiuser SWIPT structures are incorporated into real-time downlink and uplink operations. Hence, in order to exploit their benefits, state-of-the-art artificial-intelligence-oriented approaches might be better adapted to the multiuser SWIPT communications scenarios that target communications secrecy.
In multi-tier multiuser wireless networks, it would be possible to improve the privacy and reliability performance by managing the beamforming and beam-tracking tools at the transmitter much more effectively in cellular scenarios, thanks to the multi-dimensional analysis and determination of spectrum resources from the CR perspective and the angle-of-arrival estimation of the eavesdropping users. Hence, the cooperation of simultaneous threat estimation and beamforming mechanisms would be of great importance in future applications.
Preliminary studies on the usage of IRSs in both indoor and outdoor communications scenarios have shown the potential of these passive but useful surfaces, which provide additional gains in the received signal quality by continuously manipulating their scattering characteristics. Hence, employing IRSs within multiuser SWIPT schemes also promises considerable QoS enhancements despite the programming and optimization complexity required for the equipped tunable elements (e.g., PIN diodes or varactors). Further research on the optimization of the states of the tunable elements and the locations of the IRSs, through both traditional signal-processing-based and artificial-intelligence-based approaches, would be of great importance to pave the way for IRS deployments in next-generation mobile communication systems.
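The basic mechanism behind these gains is phase alignment of the reflected paths with the direct path. The minimal sketch below uses randomly drawn narrowband channels and ideal continuous phase shifts, both of which are idealizing assumptions; practical IRSs offer only quantized phases and require channel estimation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_elements = 64

# Assumed narrowband channels: weak direct link, BS->IRS and IRS->user (per element).
h_direct = (rng.normal() + 1j * rng.normal()) / np.sqrt(2) * 0.05
h_bs_irs = (rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)) / np.sqrt(2) * 0.1
h_irs_user = (rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)) / np.sqrt(2) * 0.1

cascaded = h_bs_irs * h_irs_user   # per-element cascaded (BS->IRS->user) channel

# Ideal (continuous) phase per element: align every reflected path with the direct path.
theta = np.angle(h_direct) - np.angle(cascaded)
h_eff_aligned = h_direct + np.sum(cascaded * np.exp(1j * theta))
h_eff_random = h_direct + np.sum(cascaded * np.exp(1j * rng.uniform(0, 2 * np.pi, n_elements)))

print(f"received power gain, random phases : {np.abs(h_eff_random)**2 / np.abs(h_direct)**2:.1f}x")
print(f"received power gain, aligned phases: {np.abs(h_eff_aligned)**2 / np.abs(h_direct)**2:.1f}x")
```

The aligned configuration scales the received power roughly with the square of the number of elements, which benefits both the ID and EH branches of a SWIPT receiver and is precisely what the optimization problems in [209]-[213] exploit.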
Future UAV-enabled SWIPT studies can include channel modeling and estimation, which is one of the leading issues since wireless air-to-ground channel characteristics depend on many external parameters such as the positions and altitudes of the users and UAVs. Another notable issue in UAV-enabled SWIPT networks is multi-antenna systems, which can provide superior data rates and reliability. As a related research direction for improving the data rate, IRSs reveal great potential for such networks since they can reflect or refract the signals in the desired manner. As energy consumption is a vital factor for the overall performance of UAV-aided networks, one of the open challenges is EH based on distinct (or even hybrid) power sources.
Due to the pragmatic nature of the research process, the literature mostly exhibits the advantages and drawbacks of the proposed schemes, techniques and algorithms under several stringent assumptions. Hence, for the time being, the research studies tend not to report the practical achievements of the proposed schemes and systems under realistic operating conditions. Nevertheless, from the perspective of emerging communications standards and systems, the feasibility of the investigated schemes under miscellaneous hardware and computation imperfections is of critical importance. Hence, the literature related to multiuser SWIPT schemes needs rigorous and extensive examinations addressing the variations of QoS metrics under practical system impairments (e.g., IQ imbalance, phase noise, amplifier non-linearity, channel estimation errors, outdated feedback, pilot contamination) specific to the next-generation communications use cases.
Another conventional but critical issue that deserves careful treatment is the channel coding process, especially for wirelessly-powered low-budget transceiver terminals. Any additional signal processing burden at these terminals results in additional energy consumption that threatens the knife-edge equilibrium between energy harvesting and consumption. Hence, the novel channel coding techniques proposed by further research should be strictly tailored to the stringent energy restrictions of wirelessly-powered terminals (e.g., constrained run-length limited codes, unary coding). Thanks to low-complexity novel channel coding approaches, the communications reliability of emerging critical applications such as e-Health IoT could be enhanced. Future wireless communications systems are strongly envisioned to include energy-eager technologies such as orthogonal time frequency space modulation, IRS, NOMA, massive MIMO, and satellite or high-altitude-platform-based communications. To this end, the integration of SWIPT within the upcoming standards renders an essential challenge and opens up new research directions.
VIII. CONCLUSION
In this survey paper, SWIPT communication systems, an effective way of realizing energy-efficient green communications that is expected to be an inevitable part of 5G+, are reviewed in detail for the multiuser case. Using multiuser diversity and multiple access techniques, diverse multiuser SWIPT communication scenarios comprising multi-antenna communications, cooperative communications, network coding, communications security, and UAV-based communications are comprehensively reviewed, indicating the state-of-the-art studies and the related open issues.

O. KUCUR received the B.S. degree in electronics and telecommunication engineering from Istanbul Technical University, Istanbul, Turkey, in 1988, and the M.S. and Ph.D. degrees in electrical and computer engineering from the Illinois Institute of Technology (IIT), Chicago, IL, USA, in 1992 and 1998, respectively. He was an Assistant Professor with the Department of Electrical Engineering, South Dakota State University, from 1998 to 1999. Since 1999, he has been with the Department of Electronics Engineering, Gebze Technical University, Turkey, where he is currently a Professor. His research interests include fading channels and diversity techniques, multiuser communications, MIMO, and cooperative communications.
|
v3-fos-license
|
2017-06-09T08:17:12.000Z
|
2017-05-01T00:00:00.000
|
119523446
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-4954-y.pdf",
"pdf_hash": "1c94f09c0f7467ee21a04873b8cd3026412f5618",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:623",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "1c94f09c0f7467ee21a04873b8cd3026412f5618",
"year": 2017
}
|
pes2o/s2orc
|
Augmented Superfield Approach to Gauge-invariant Massive 2-Form Theory
We discuss the complete sets of the off-shell nilpotent (i.e. s^2_{(a)b} = 0) and absolutely anticommuting (i.e. s_b s_{ab} + s_{ab} s_b = 0) Becchi-Rouet-Stora-Tyutin (BRST) (s_b) and anti-BRST (s_{ab}) symmetries for the (3+1)-dimensional (4D) gauge-invariant massive 2-form theory within the framework of augmented superfield approach to BRST formalism. In this formalism, we obtain the coupled (but equivalent) Lagrangian densities which respect both BRST and anti-BRST symmetries on the constrained hypersurface defined by the Curci-Ferrari type conditions. The absolute anticommutativity property of the (anti-)BRST transformations (and corresponding generators) is ensured by the existence of the Curci-Ferrari type conditions which emerge very naturally in this formalism. Furthermore, the gauge-invariant restriction plays a decisive role in deriving the proper (anti-)BRST transformations for the St{\"u}ckelberg-like vector field.
Introduction
The antisymmetric 2-form B (2) = 1/2! (dx µ ∧ dx ν ) B µν gauge field B µν (= −B νµ ) [1,2] has attracted a great deal of interest from theoretical physicists during the past few decades because of its relevance in the realm of (super-)string theories [3,4], (super-)gravity theories [5], the dual description of a massless scalar field [6,7] and modern developments in noncommutative geometry [8]. It has also been quite popular in the mass generation of the 1-form A (1) = dx µ A µ gauge field A µ , without any help from the Higgs mechanism, where the 2-form and 1-form gauge fields are merged together in a particular fashion through the well-known topological (B ∧ F ) term [9,10,11,12,13,14].
The Becchi-Rouet-Stora-Tyutin (BRST) formalism is one of the most elegant and intuitively appealing theoretical approaches to covariantly quantizing gauge theories [15,16,17,18]. The gauge symmetry is always generated by the first-class constraints present in a given theory, in Dirac's terminology [19,20]. In the BRST formalism, the classical local gauge symmetry of a given physical theory is traded with two global BRST and anti-BRST symmetries at the quantum level [21,22]. These symmetries obey two key properties: (i) nilpotency of order two, and (ii) absolute anticommutativity. The first property implies that these symmetries are fermionic in nature whereas second property shows that they are linearly independent of each other. In the literature, it has been shown that only the BRST symmetry is not sufficient to yield the ghost decoupling from the physical subspace of the total quantum Hilbert space of states. The addition of nilpotent anti-BRST symmetry plays an important role in removing the unphysical ghost degeneracy [23]. Thus, the anti-BRST symmetry is not just a decorative part; rather, it is an integral part of this formalism and plays a fundamental role in providing us with the consistent BRST quantization.
The superfield approach to the BRST formalism is a theoretical approach that provides the geometrical origin of, as well as a deep understanding of, the (anti-)BRST symmetry transformations [24,25,26]. The Curci-Ferrari condition [21], which is a hallmark of the non-Abelian 1-form gauge theory, emerges very naturally as an offshoot of the superfield formalism. This condition plays a central role in providing the absolute anticommutativity property of the (anti-)BRST transformations and is also responsible for the derivation of the coupled (but equivalent) Lagrangian densities. In recent years, the "augmented" superfield formalism, an extended version of the Bonora-Tonin superfield formalism, has been applied to interacting gauge systems such as (non-)Abelian 1-form gauge theories interacting with Dirac fields [27,28,29,30,31] and complex scalar fields [32,33], the gauge-invariant version of the self-dual chiral boson [34], the 4D Freedman-Townsend model [35], the 3D Jackiw-Pi model [36], the vector Schwinger model in 2D [37] and a modified version of the 2D Proca theory [38]. In this approach, the celebrated horizontality condition and gauge-invariant restrictions are blended together in a physically meaningful manner to derive the proper off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetry transformations.
As far as the quantization of the 4D (non-)Abelian 2-form gauge theories is concerned, the canonical and BRST quantizations have been carried out [39,40,41,42,43,44]. The 2-form gauge theory is a reducible theory and, thus, requires ghost for ghost in the latter quantization scheme. In the non-Abelian case, a compensating auxiliary vector field is required for the consistent quantization as well as in order to avoid the well-known no-go theorem [45]. In fact, this auxiliary field is needed to close the symmetry algebra and, thus, the theory respects the vector gauge symmetry present in the theory. Furthermore, within the framework of BRST formalism, the free Abelian 2-form gauge theory in (3 + 1)dimensions of spacetime provides a field-theoretic model for the Hodge theory where all the de Rham cohomological operators (d, δ, ∆) and Hodge duality ( * ) operation of differential geometry find their physical realizations in the language of the continuous symmetries and discrete symmetry, respectively [46,47]. In addition, it has also been shown that the free Abelian 2-form gauge theory, within the framework of BRST formalism, provides a new kind of quasi-topological field theory (q-TFT) which captures some features of Witten type TFT and a few aspects of Schwarz type TFT [48].
We have also studied the 4D topologically massive (non)-Abelian 2-form theories where 1-form gauge bosons acquire mass through a topological (B ∧ F ) term without spoiling the gauge invariance of the theory. With the help of superfield formalism, we have derived the off-shell nilpotent as well as absolutely anticommuting (anti-)BRST transformations and also shown that the topological (B ∧ F ) term remains unaffected by the presence of the Grassmannian variables when we generalize it on the (4, 2)-dimensional supermanifold [49,50]. In the non-Abelian case, we have found some novel observations. For the sake of brevity, the conserved and nilpotent (anti-)BRST charges do not generate the proper (anti-)BRST transformations for the compensating auxiliary vector field [51,52]. Moreover, in contrast to the Nakanishi-Lautrup fields, the nilpotency and absolute anticommutativity properties of the (anti-)BRST transformations also fail to produce the correct (anti-)BRST symmetry transformations for the compensating auxiliary field.
The contents of our present investigation are organized as follows. In Sect. 2, we briefly discuss about the 4D massive 2-form theory and its constraints structure. Section 3 is devoted to the coupled (but equivalent) Lagrangian densities that respect the off-shell nilpotent (anti-)BRST symmetries. We discuss the salient features of the Curci-Ferrari type conditions in this section, too. In Sect. 4, we discuss the conserved charges as the generator of the off-shell nilpotent (anti-)BRST transformations. The global continuous ghost-scale symmetry and BRST algebra among the symmetry transformations (and corresponding generators) are shown in Sect. 5. Section 6 deals with the derivation of the proper (anti-)BRST symmetry transformations with the help of augmented superfield formalism. We capture the (anti-)BRST invariance of the coupled Lagrangian densities in terms of the superfields and Grassmannian translational generators in Sect. 7. Finally, in Sect. 8, we provide the concluding remarks.
In Appendix A, we show an explicit proof of the anticommutativity of the conserved (anti-)BRST charges.
It is evident that, due to the existence of the mass term, the Lagrangian density does not respect the following gauge symmetry: where Λ µ (x) is an infinitesimal local vector gauge parameter. In fact, the above Lagrangian density transforms as δL = −m 2 B µν (∂ µ Λ ν ). The basic reason behind this observation is that the above Lagrangian density is endowed with second-class constraints, in the language of Dirac's prescription for the classification of constraints [19,20], namely; where Π 0i and Π ij are the canonical conjugate momenta corresponding to the dynamical fields B 0i and B ij , respectively. Here, the symbol '≈' defines weak equality in the sense of Dirac. Due to the existence of the mass term in the Lagrangian density, both constraints belong to the category of second-class constraints, as one can check that the primary (χ i ) and secondary (ξ j ) constraints lead to a non-vanishing Poisson bracket: Thus, the mass term in the Lagrangian density spoils the gauge invariance. On the one hand, the gauge invariance can be restored by setting the mass parameter equal to zero (i.e. m = 0), but this leads to the massless 2-form gauge theory. On the other hand, we can restore the gauge invariance by exploiting the power and strength of the well-known Stückelberg technique (see, e.g. [53,54] for details). Thus, we re-define the field B µν as where Φ µν = (∂ µ φ ν − ∂ ν φ µ ) and φ µ is the Stückelberg-like vector field. As a consequence, we obtain the following gauge-invariant Stückelberg-like Lagrangian density for the massive 2-form theory [55,56]: (5)

* We adopt the conventions and notations such that the 4D flat Minkowski metric is endowed with mostly negative signatures: η µν = η µν = diag (+1, −1, −1, −1). Here, the Greek indices µ, ν, κ, ... = 0, 1, 2, 3 correspond to the spacetime directions, whereas the Latin indices i, j, k, ... = 1, 2, 3 stand for the space directions only. We also follow the convention:

Here Φ µν defines the curvature for the Stückelberg-like vector field φ µ . In the language of differential forms, we can write Φ (2) = dφ (1) = 1/2! (dx µ ∧ dx ν ) Φ µν . We, interestingly, point out that the above Lagrangian density and the Lagrangian density for the 4D topologically massive (B ∧ F ) theory have been shown to be equivalent by Buscher's duality procedure [55,56]. Furthermore, due to the introduction of the Stückelberg-like vector field, the second-class constraints get converted into first-class constraints [19,20]. These first-class constraints are listed as follows: where Π 0 and Π i are the canonical conjugate momenta corresponding to the fields φ 0 and φ i , respectively. It is elementary to check that the Poisson brackets among all the first-class constraints turn out to be zero. Further, the first-class constraints Σ and Σ i are not linearly independent. They are related as ∂ i Σ i + m Σ = 0, which implies that the Lagrangian (5) describes a reducible gauge theory [56]. These first-class constraints are the generators of two independent local and continuous gauge symmetry transformations, namely; where the Lorentz scalar Ω(x) and the Lorentz vector Λ µ (x) are the local gauge parameters. It is straightforward to check that, under the above gauge transformations, the Lagrangian density remains invariant (i.e. δ 1 L s = 0 and δ 2 L s = 0). As a consequence, the combined gauge symmetry transformations δ = (δ 1 + δ 2 ) also leave the Lagrangian density (L s ) invariant.
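Since the displayed equations of this section have not survived the text extraction, a compact reconstruction of the standard expressions consistent with the definitions quoted above is sketched below. The normalizations and signs follow common conventions and may differ from the paper's own numbered equations (1)-(7); they are offered only as an orientation aid, not as the authors' exact formulas.

```latex
% Conventional forms consistent with the surrounding text (signs/normalizations may differ).
\begin{align}
  \mathcal{L} &= \tfrac{1}{12}\,H_{\mu\nu\kappa}H^{\mu\nu\kappa}
               + \tfrac{m^{2}}{4}\,B_{\mu\nu}B^{\mu\nu},
  \qquad
  H_{\mu\nu\kappa}=\partial_{\mu}B_{\nu\kappa}
                  +\partial_{\nu}B_{\kappa\mu}
                  +\partial_{\kappa}B_{\mu\nu},\\
  \delta B_{\mu\nu} &= \partial_{\mu}\Lambda_{\nu}-\partial_{\nu}\Lambda_{\mu},
  \qquad
  B_{\mu\nu}\;\longrightarrow\;B_{\mu\nu}-\tfrac{1}{m}\,\Phi_{\mu\nu},
  \qquad
  \Phi_{\mu\nu}=\partial_{\mu}\phi_{\nu}-\partial_{\nu}\phi_{\mu},\\
  \mathcal{L}_{s} &= \tfrac{1}{12}\,H_{\mu\nu\kappa}H^{\mu\nu\kappa}
    + \tfrac{m^{2}}{4}\Big(B_{\mu\nu}-\tfrac{1}{m}\Phi_{\mu\nu}\Big)
                      \Big(B^{\mu\nu}-\tfrac{1}{m}\Phi^{\mu\nu}\Big),\\
  \delta_{1}\phi_{\mu} &= \partial_{\mu}\Omega,\quad \delta_{1}B_{\mu\nu}=0,
  \qquad
  \delta_{2}B_{\mu\nu}=\partial_{\mu}\Lambda_{\nu}-\partial_{\nu}\Lambda_{\mu},
  \quad \delta_{2}\phi_{\mu}=m\,\Lambda_{\mu}.
\end{align}
```

With these conventional forms, one checks directly that the combination B_{µν} − (1/m)Φ_{µν} and the curvature H_{µνκ} are invariant under δ₁ and δ₂, consistent with the gauge invariance of L_s stated above.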
Coupled Lagrangian densities: off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetries
The coupled (but equivalent) Lagrangian densities for the 4D Stückelberg-like massive Abelian 2-form theory incorporate the gauge-fixing and Faddeev-Popov ghost terms within the framework of BRST formalism. In full blaze of glory, these Lagrangian densities (in the Feynman gauge) are given as follows: where the vector fields (B µ )B µ and scalar fields (B)B are the Nakanishi-Lautrup type auxiliary fields, the vector fields (C µ )C µ and scalar fields (C)C are the fermionic (anti-)ghost fields, (β)β are the bosonic ghost-for-ghost fields, and (ρ)λ are the fermionic auxiliary (anti-)ghost fields. The fermionic (anti-)ghost fields (C µ )C µ , (C)C and (ρ)λ carry ghost number equal to (−1) + 1, whereas the bosonic (anti-)ghost fields (β)β have ghost number equal to (−2) + 2. The remaining fields carry zero ghost number. The commuting (anti-)ghost fields (β)β and the scalar field ϕ are required for the stage-one reducibility in the theory (see, e.g. [42] for details).
The above Lagrangian densities respect the following off-shell nilpotent (i.e. s 2 (a)b = 0) and absolutely anticommuting (i.e. s b s ab + s ab s b = 0) (anti-)BRST symmetry transformations (s (a)b ): It is straightforward to check that the Lagrangian densities L B and LB, under the off-shell nilpotent BRST and anti-BRST symmetry transformations, transform into total spacetime derivatives, respectively, as As a consequence, the action integrals remain invariant (i.e. s b ∫ d 4 x L B = 0, s ab ∫ d 4 x LB = 0) under the nilpotent (anti-)BRST transformations (10) and (11). At this juncture, the following remarks are in order: (i) The above Lagrangian densities are coupled because the Nakanishi-Lautrup type auxiliary fields B,B and B µ ,B µ are related to each other through the celebrated Curci-Ferrari (CF) type of conditions (written out explicitly below): (ii) It is to be noted that L B and LB transform under the continuous anti-BRST and BRST transformations, respectively, as follows: As a consequence, the coupled Lagrangian densities respect both BRST and anti-BRST symmetries on the 4D constrained hypersurface defined by the CF conditions (13). This reflects the fact that the coupled Lagrangian densities are equivalent on the constrained hypersurface defined by the CF type restrictions.
(iii) The CF conditions are (anti-)BRST invariant, as one can check that s (a)b [B µ +B µ + ∂ µ ϕ] = 0 and s (a)b [B +B + m ϕ] = 0. Thus, these conditions are physical conditions.
(iv) Further, the absolute anticommutativity property of the (anti-)BRST symmetry transformations is satisfied due to the existence of the CF conditions. For the sake of brevity, we note the explicit action of the anticommutator {s b , s ab } on B µν and φ µ . It is clear from this computation that {s b , s ab }B µν = 0 and {s b , s ab }φ µ = 0 if and only if the CF conditions are satisfied. For the remaining fields, the anticommutativity property is trivially satisfied.
To sum up the above results, we again emphasize the fact that the CF conditions play a decisive role in ensuring the absolute anticommutativity of the (anti-)BRST transformations. They are also responsible for the existence of the coupled (but equivalent) Lagrangian densities. Furthermore, the CF type conditions are physical restrictions (on the theory) in the sense that they are (anti-)BRST invariant conditions. We shall see later on that these CF conditions emerge very naturally within the framework of the superfield approach to BRST formalism (cf. Sect. 6, below).
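For the reader's convenience, the explicit form of these CF type conditions, as quoted in the superfield discussion of Sect. 6, is:

\begin{equation}
B_{\mu} + \bar{B}_{\mu} + \partial_{\mu}\varphi = 0,
\qquad
B + \bar{B} + m\,\varphi = 0 .
\end{equation}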
Conserved (anti-)BRST Charges
According to Noether's theorem, the invariance of the actions (corresponding to the coupled Lagrangian densities) under the continuous (anti-)BRST symmetries yields the conserved (anti-)BRST currents J µ (a)b : The conservation (∂ µ J µ b = 0) of the BRST current J µ b can be proven by using the following Euler-Lagrange equations of motion: These equations of motion have been derived from L B . Similarly, for the conservation (∂ µ J µ ab = 0) of the anti-BRST current J µ ab , we have used the equations of motion derived from LB. We point out that most of the equations of motion are the same for L B and LB. The Euler-Lagrange equations of motion that differ from (18) and are derived from LB are listed as follows: It is interesting to mention that the appropriate equations of motion derived from L B and LB [cf. (18) and (19)] produce the CF conditions (13). The temporal components of the conserved currents (i.e. Q (a)b = ∫ d 3 x J 0 (a)b ) lead to the following charges Q (a)b : It turns out that these conserved charges are the generators of the corresponding symmetry transformations. For instance, one can check the relation s (a)b Ψ = ±i [Ψ, Q (a)b ] ± , where the (±) signs, as the subscript on the square bracket, correspond to the (anti)commutator depending on the generic field Ψ being (fermionic)bosonic in nature. We further point out that the conserved (anti-)BRST charges do not produce the proper (anti-)BRST symmetry transformations for the Nakanishi-Lautrup type auxiliary fields B,B, B µ ,B µ and the auxiliary (anti-)ghost fields (ρ)λ. The transformations of these auxiliary fields can be derived from the requirements of the nilpotency and absolute anticommutativity properties of the (anti-)BRST symmetry transformations. The (anti-)BRST charges are nilpotent and anticommuting in nature. These properties can be shown in a straightforward manner by exploiting the definition of a generator. For the nilpotency of the (anti-)BRST charges, the following relations are true: In a similar fashion, one can also show the anticommutativity of the (anti-)BRST charges as The above computations are algebraically more involved. For the sake of completeness, in our Appendix A, we shall provide a complete proof of the first relation that appears in (23) in a simpler way. Before we wrap up this section, we dwell a bit on the constraint structure of the gauge-invariant Lagrangian density (5) within the framework of BRST formalism. We define a physical state (|phys ) in the quantum Hilbert space of states which respects the (anti-)BRST symmetries. The physicality criteria Q (a)b |phys = 0 state that the physical state |phys must be annihilated by the conserved and nilpotent (anti-)BRST charges Q (a)b . In other words, we can say that the Faddeev-Popov ghosts are decoupled from the physical states of the theory. Thus, the physicality criterion Q b |phys = 0 produces the following constraint conditions: which, finally, imply the familiar constraint conditions (e.g. Π 0 |phys = 0) on the physical state, where Π 0 , Π i , Π 0i , Π ij are the canonical conjugate momenta with respect to the dynamical fields φ 0 , φ i , B 0i , B ij , respectively. These momenta have been derived from the Lagrangian density (8). Very similar constraint conditions also emerge when we exploit the physicality criterion Q ab |phys = 0. These constraint conditions are consistent with the gauge-invariant Lagrangian (5). As a consequence, the BRST quantization is consistent with the requirements of the Dirac quantization scheme for constrained systems.
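Schematically, and with sign conventions chosen to match the relation quoted in Appendix A (these conventions are our assumption), the nilpotency and anticommutativity statements above take the form:

\begin{align}
s_{b}\, Q_{b} &= -i\, \{Q_{b},\, Q_{b}\} = 0 \;\Longleftrightarrow\; Q_{b}^{2} = 0, \\
s_{ab}\, Q_{ab} &= -i\, \{Q_{ab},\, Q_{ab}\} = 0 \;\Longleftrightarrow\; Q_{ab}^{2} = 0, \\
s_{b}\, Q_{ab} &= s_{ab}\, Q_{b} = -i\, \{Q_{b},\, Q_{ab}\} = 0 .
\end{align}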
Ghost-scale symmetry and BRST algebra
The coupled Lagrangian densities, in addition to the (anti-)BRST symmetries, also respect the following continuous ghost-scale symmetry transformations: where ϑ is a (spacetime-independent) global scale parameter. The numerical factors in the exponentials (i.e. 0, ±1, ±2) define the ghost numbers of the various fields present in the theory. The infinitesimal version of the above ghost-scale symmetry (with ϑ = 1) leads to the following symmetry transformations (s g ): under which the (coupled) Lagrangian densities remain invariant (i.e. s g L B = s g LB = 0). According to Noether's theorem, the continuous ghost-scale symmetry yields the conserved current J µ g and the corresponding charge Q g , namely; It is evident that the above charge is the generator of the corresponding ghost-scale symmetry transformations, as one can check that s g Ψ = ±i [Ψ, Q g ], where Ψ is the generic field of the theory. The (±) signs in front of the commutator are used for the generic field Ψ being (fermionic) bosonic in nature. At this moment, the following remarks are in order: (i) The conserved ghost charge Q g does not produce the proper transformations for the auxiliary fields ρ and λ. These transformations can be obtained from other considerations (see (29) below).
(ii) The continuous symmetry transformations (in their operator form) obey the following algebra: (iii) By exploiting the last two relations of the above equation, we can obtain the proper transformations for ρ and λ. For instance, one can check that Similarly, the transformation for the auxiliary field λ can be derived, too.
(iv) The operator form of the conserved (anti-)BRST charges, together with the ghost charge, obeys the following graded algebra, which is also known as the standard BRST algebra:
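Up to sign conventions (which are our assumption), and consistent with the ghost-number statements made below, this algebra has the standard form:

\begin{align}
& Q_{b}^{2} = 0, \qquad Q_{ab}^{2} = 0, \qquad \{Q_{b},\, Q_{ab}\} = 0, \\
& i\,[\,Q_{g},\, Q_{b}\,] = +\,Q_{b}, \qquad i\,[\,Q_{g},\, Q_{ab}\,] = -\,Q_{ab}.
\end{align}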
As a consequence of the above algebra, we define an eigenstate |ζ n (in the quantum Hilbert space of states) with respect to the operator iQ g such that iQ g |ζ n = n|ζ n .
Here n defines the ghost number as the eigenvalue of the operator iQ g . Using the above algebra among the conserved charges, it is straightforward to check that the following relationships are true: which imply that the eigenstates Q b |ζ n and Q ab |ζ n have the eigenvalues (n + 1) and (n − 1), respectively. In other words, the conserved (anti-)BRST charges Q (a)b (decrease)increase the ghost number of the eigenstate |ζ n by one unit. Also, we can say that the (anti-)BRST charges Q (a)b carry ghost number equal to (−1) + 1 while the ghost charge Q g does not carry any ghost number. These observations are also reflected in the expressions of the conserved charges if we look carefully at the ghost numbers of the various fields that appear in the charges.
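A short sketch of these relationships, using the algebra of the previous remarks (with our assumed sign conventions), reads:

\begin{align}
iQ_{g}\,\big(Q_{b}\,|\zeta\rangle_{n}\big)
&= \big( i\,[\,Q_{g},\, Q_{b}\,] + Q_{b}\, iQ_{g} \big)\,|\zeta\rangle_{n}
= (1 + n)\, Q_{b}\,|\zeta\rangle_{n}, \\
iQ_{g}\,\big(Q_{ab}\,|\zeta\rangle_{n}\big)
&= \big( i\,[\,Q_{g},\, Q_{ab}\,] + Q_{ab}\, iQ_{g} \big)\,|\zeta\rangle_{n}
= (n - 1)\, Q_{ab}\,|\zeta\rangle_{n}.
\end{align}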
It is to be noted that the following horizontality condition (HC) determines the values of all the secondary fields in terms of the basic and auxiliary fields of the theory. This HC implies that the l.h.s. is independent of the Grassmannian variables θ and θ̄ when we generalize it on the (4, 2)D supermanifold. The r.h.s. of (37), in its full blaze of glory, can be written as The above HC implies the following interesting relationships amongst the superfields: Exploiting the above expressions for the superfields given in (36), we obtain the values of the secondary fields. Substituting these values into the expressions of the superfields (36), we have the following super-expansions: The superscript (h) on the superfields denotes the expansion of the superfields obtained after the application of the HC. In the above super-expansions, we have chosen f 3 µ = B µ , where B µ and B̄ µ play the role of Nakanishi-Lautrup type auxiliary fields. These auxiliary fields are required for the linearization of the gauge-fixing terms as well as for the off-shell nilpotency of the (anti-)BRST symmetry transformations.
It is clear from the above super-expansions of the superfields that the coefficients of θ̄ are the BRST transformations whereas the coefficients of θ are the anti-BRST transformations.
To be more precise, the BRST transformation (s b ) for any generic field Ψ(x) is equivalent to the translation of the corresponding superfield Ψ̃ (h) (x, θ, θ̄) along the θ̄-direction while keeping the θ-direction fixed. Similarly, the anti-BRST transformation (s ab ) can be obtained by taking the translation of the superfield along the θ-direction while the θ̄-direction remains intact. Mathematically, these statements can be corroborated in the following fashion: It is worthwhile to mention that the (anti-)BRST transformations of the fermionic auxiliary fields ρ, λ and the Nakanishi-Lautrup type fields B µ ,B µ have been derived from the requirements of the nilpotency and absolute anticommutativity properties of the (anti-)BRST symmetry transformations. So far, we have obtained the off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetries for the 2-form field B µν and the corresponding (anti-)ghost fields. But the (anti-)BRST symmetry transformations of the Stückelberg vector field φ µ and the corresponding (anti-)ghost fields are still unknown. For this purpose, it is to be noted that the following quantity remains invariant under the gauge transformations (δ = δ 1 + δ 2 ): This is a physical quantity in the sense that it is gauge-invariant. Thus, it remains independent of the Grassmannian variables when we generalize it on the (4, 2)D supermanifold. This gauge-invariant quantity will serve our purpose in deriving the proper (anti-)BRST transformations for the Stückelberg-like vector field φ µ and the corresponding (anti-)ghost fields (C)C. In terms of the differential forms, we generalize this gauge-invariant restriction on the (4, 2)D supermanifold as where the super 1-form is defined as The multiplets of the super 1-form can be expanded along the Grassmannian directions as where R µ , R̄ µ , s, s̄ and S µ , B 1 , B̄ 1 , B 2 , B̄ 2 are fermionic and bosonic secondary fields, respectively.
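In the standard superfield notation, the correspondence between the (anti-)BRST transformations and the Grassmannian translations described earlier in this section reads as follows (the restriction to vanishing Grassmannian variables is our assumption about the precise form used in the source):

\begin{equation}
s_{b}\,\Psi(x) \;=\; \frac{\partial}{\partial\bar{\theta}}\,
\tilde{\Psi}^{(h)}(x,\theta,\bar{\theta})\,\Big|_{\theta=0},
\qquad
s_{ab}\,\Psi(x) \;=\; \frac{\partial}{\partial\theta}\,
\tilde{\Psi}^{(h)}(x,\theta,\bar{\theta})\,\Big|_{\bar{\theta}=0}.
\end{equation}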
The explicit expression of the r.h.s. of (44) can be written in the following fashion: Using (44) and setting all the coefficients of the Grassmannian differentials to zero, we obtain the following interesting relationships: Exploiting the above equations together with (41) for the super-expansions given in (46), we obtain the precise values of the secondary fields in terms of the basic and auxiliary fields, namely; Putting the above relationships into the super-expansions of the superfields (46), we obtain the following explicit expressions for the superfields (46), in terms of the basic and auxiliary fields: Here the superscript (h, g) on the superfields denotes the super-expansions obtained after the application of the gauge-invariant restriction (44). In the above, we have made the choices B 2 = B and B̄ 1 = B̄ for the additional Nakanishi-Lautrup type fields B and B̄. These fields are also required for the off-shell nilpotency of the (anti-)BRST symmetry transformations and for the linearization of the gauge-fixing term for the Stückelberg vector field φ µ . Again, the (anti-)BRST transformations for the auxiliary fields B and B̄ have been derived from the requirements of the nilpotency and absolute anticommutativity of the (anti-)BRST transformations. Thus, one can easily read off all the (anti-)BRST transformations for the vector field φ µ and the corresponding (anti-)ghost fields (C)C [cf. (10) and (11)]. Before we wrap up this section, we point out that the CF conditions (13), which play a crucial role (cf. Sect. 3), emerge very naturally in this formalism. The first CF condition B µ +B µ + ∂ µ ϕ = 0 arises from the HC (28). In particular, the relation ∂ µ Φ̃ + ∂ θ F̄ µ + ∂ θ̄ F µ = 0, which is a coefficient of the wedge product dx µ ∧ dθ ∧ dθ̄, leads to the first CF condition. Similarly, the second CF condition B +B + m ϕ = 0 emerges from (48) when we set the coefficient of the wedge product (dθ ∧ dθ̄) equal to zero. In fact, the last relation of Eq. (48) produces the second CF condition. Furthermore, it is interesting to note that the equation (44) imposes its own integrability condition [42]. Thus, if we operate the super exterior derivative d̃ = d + dθ ∂ θ + dθ̄ ∂ θ̄ on (44) from the left, we obtain the corresponding integrability relation. In the above, B (2) and φ (1) are independent of the Grassmannian variables (θ, θ̄) and d 2 = d̃ 2 = 0. As a result, the above equation turns into the horizontality condition (37).
(Anti-)BRST invariance of the coupled Lagrangian densities: superfield approach
In this section, we shall capture the (anti-)BRST invariance of the coupled Lagrangian densities in the context of superfield formalism. To accomplish this goal, we note that the coupled Lagrangian densities, in terms of the off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetries, can be written as For our present 4D model, all terms in square brackets are chosen in such a way that each term carries mass dimension equal to [M] 2 in natural units (ℏ = c = 1). In fact, the dynamical fields B µν , φ µ , C µ ,C µ , β,β, ϕ, C,C have mass dimension equal to [M]. The operation of s b and s ab on any generic field increases the mass dimension by one unit. In other words, s b and s ab carry mass dimension one. Furthermore, s b increases the ghost number by one unit when it operates on any generic field, whereas s ab decreases the ghost number by one unit when it acts on any field. As a consequence, the above coupled Lagrangian densities are consistent with mass dimension and ghost number considerations. The constant numerals in front of each term are chosen for our algebraic convenience. In full blaze of glory, the above Lagrangian densities (52) and (53) lead to (8) and (9), respectively, modulo total spacetime derivatives. In our earlier section (cf. Sect. 2), we have already mentioned that L s is gauge-invariant and, thus, it remains invariant under the (anti-)BRST symmetries. As a consequence, both L B and LB given in (52) and (53) can be generalized onto the (4, 2)D supermanifold, where the super Lagrangian density L̃ s is the generalization of the gauge-invariant Lagrangian density L s on the (4, 2)D superspace. The former Lagrangian density is given as follows: A straightforward computation shows that L̃ s is independent of the Grassmannian variables θ and θ̄ (i.e. L̃ s = L s ), which shows that L s is a gauge-invariant as well as (anti-)BRST invariant Lagrangian density. Mathematically, the latter statement can be expressed in terms of the translational generators as It is clear from (54) and (55), together with (57), that the following relations are true, namely; which clearly show the (anti-)BRST invariance of the coupled Lagrangian densities within the framework of superfield formalism. The above equations are true due to the validity of the nilpotency (i.e. ∂ 2 θ = 0, ∂ 2 θ̄ = 0) of the translational generators ∂ θ and ∂ θ̄.
Conclusions
In our present investigation, we have studied the 4D gauge-invariant massive Abelian 2-form theory within the framework of BRST formalism, where the local gauge symmetries given in (7) are traded for two linearly independent global BRST and anti-BRST symmetries [cf. (10) and (11)]. In this formalism, we have obtained the coupled (but equivalent) Lagrangian densities [cf. (8) and (9)], which respect the off-shell nilpotent and absolutely anticommuting BRST and anti-BRST symmetry transformations on the constrained hypersurface defined by the CF type conditions (13). These CF conditions are (anti-)BRST invariant, and they also play a pivotal role in the proof of the absolute anticommutativity of the (anti-)BRST transformations and in the derivation of the coupled Lagrangian densities. The anticommutativity property for the dynamical fields B µν and φ µ is satisfied due to the existence of the Curci-Ferrari type conditions [cf. (16)].
The continuous and off-shell nilpotent (anti-)BRST symmetries lead to the derivation of the corresponding conserved (anti-)BRST charges. In addition to these symmetries, the coupled Lagrangian densities also respect the global ghost-scale symmetry, which leads to the conserved ghost charge. The operator form of the continuous symmetry transformations and the corresponding generators obeys the standard graded BRST algebra [cf. (29) and (30)]. We lay emphasis on the fact that the physicality criteria Q (a)b |phys = 0 produce the first-class constraints, as the physical conditions (24) on the theory, which are present in the gauge-invariant Lagrangian density (5). Thus, the BRST quantization is consistent with the Dirac quantization of a system having first-class constraints.
It is worthwhile to point out that the (anti-)BRST charges, which are the generators of the corresponding symmetry transformations, are unable to produce the proper (anti-)BRST symmetry transformations for the Nakanishi-Lautrup fields B,B, B µ ,B µ and the other fermionic auxiliary fields ρ, λ. The transformations of these fields have been derived from the requirements of the nilpotency and absolute anticommutativity properties of the (anti-)BRST transformations. Similarly, the ghost charge is also incapable of generating the proper transformations for the auxiliary ghost fields ρ and λ. We have derived these symmetries from other considerations [cf. (30)], where we have used the appropriate relations that appear in the algebra (29).
Furthermore, we have exploited the augmented version of the superfield approach to BRST formalism to derive the off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetries for the 4D Stückelberg-like massive Abelian 2-form gauge theory. In this approach, besides the horizontality condition (37), we have used the gauge-invariant restriction (44) for the derivation of the complete sets of BRST and anti-BRST symmetry transformations. The gauge-invariant restriction is required for the derivation of the proper (anti-)BRST transformations for the Stückelberg-like vector field φ µ . One of the spectacular observations is that the horizontality condition, which produces the (anti-)BRST transformations for the 2-form field and the corresponding (anti-)ghost fields, can also be obtained from the integrability of (44) [42]. The CF conditions, which are required for the absolute anticommutativity of the (anti-)BRST symmetries, emerge very naturally in the superfield formalism. These (anti-)BRST invariant CF conditions are conserved quantities. Thus, it would be an interesting piece of work to show that these CF conditions commute with the Hamiltonian within the framework of BRST formalism (see, e.g. [57,58]).
Using the basic tenets of BRST formalism, we have written the coupled (but equivalent) Lagrangian densities in terms of the (anti-)BRST symmetries, where the mass dimension and ghost number of the dynamical fields have been taken into account. Within the framework of the superfield formalism, we have provided the geometrical origin of the (anti-)BRST symmetries in terms of the Grassmannian translational generators. Also, one can capture the basic properties of the (anti-)BRST transformations in the language of the translational generators. Thus, we have been able to write the coupled Lagrangian densities in terms of the superfields and Grassmannian derivatives. As a result, the proof of the (anti-)BRST invariance of the super Lagrangian densities becomes quite simple and straightforward due to the nilpotency property of the Grassmannian derivatives.

Appendix A: Anticommutativity of the conserved (anti-)BRST charges

Now, eliminating B̄ i , B̄ 0 and B̄ by using the CF conditions, the above expression further simplifies as Exploiting the equation of motion ∂ µ B µ + m B = 0 and an off-shoot (✷ + m 2 ) B µ = 0 of the Euler-Lagrange equations of motion (18), we obtain Similarly, operating the BRST transformation s b on (A.2) and exploiting the equation of motion for the anti-ghost field C̄ 0 [cf. (18)], we finally obtain the r.h.s. of (A.3). In fact, we obtain the analogous result. As a consequence of the above equations, the relation s b Q ab = s ab Q b = −i {Q b , Q ab } = 0 implies the anticommutativity (i.e. Q b Q ab + Q ab Q b = 0) of the (anti-)BRST charges Q (a)b . Here, we again lay emphasis on the fact that the CF conditions play a crucial role in the anticommutativity of the (anti-)BRST charges (and the corresponding symmetry transformations).
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2015-05-01T00:00:00.000
|
7102032
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0124641&type=printable",
"pdf_hash": "5133638019261ee21c62265967cc3d0b5f791b9c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:625",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "57e404c3942cb5aa95777811508a8413c0015934",
"year": 2015
}
|
pes2o/s2orc
|
Current Management of Surgical Oncologic Emergencies
Objectives For some oncologic emergencies, surgical interventions are necessary for dissolution or temporary relief. In the absence of guidelines, the most optimal method for decision making would be in a multidisciplinary cancer conference (MCC). In an acute setting, the opportunity for multidisciplinary discussion is often not available. In this study, the management and short-term outcome of patients after surgical oncologic emergency consultation was analyzed. Method A prospective registration and follow up of adult patients with surgical oncologic emergencies between 01-11-2013 and 30-04-2014. The follow up period was 30 days. Results In total, 207 patients with surgical oncologic emergencies were included. Postoperative wound infections, malignant obstruction, and clinical deterioration due to progressive disease were the most frequent conditions for surgical oncologic emergency consultation. During the follow up period, 40% of patients underwent surgery. The median number of involved medical specialties was two. Only 30% of all patients were discussed in a MCC within 30 days after emergency consultation, and only 41% of the patients who underwent surgery were discussed in a MCC. For 79% of these patients, the surgical procedure was performed before the MCC. Mortality within 30 days was 13%. Conclusion In most cases, surgery occurred without discussing the patient in a MCC, regardless of the fact that multiple medical specialties were involved in the treatment process. There is a need for prognostic aids and acute oncology pathways with structural multidisciplinary management. These will provide faster institution of the most appropriate personalized cancer care, and prevent unnecessary investigations or invasive therapy.
Introduction
An oncologic emergency is an acute, potentially life-threatening condition that has developed directly or indirectly as a result of cancer or cancer treatment [1,2]. Non-elective consultation for symptoms caused by malignant disease is an important marker of poor prognosis [3][4][5][6][7][8]. For some oncologic emergencies, surgical interventions may be necessary for dissolution or temporary relief [9]. Not all cancer patients will benefit from surgery. A surgical intervention is irreversible, and can result in severe complications. For patients with poor performance and/or advanced disease, invasive treatment could have a detrimental impact on life expectancy and quality of life.
It is hardly possible to draft guidelines for the management of surgical oncologic emergencies. The great inter-patient variability and an even greater variety of influencing factors require that every patient be evaluated individually [9]. In the absence of these guidelines, the most optimal method for objective evaluation and decision making would be discussion in a multidisciplinary cancer conference (MCC) [10]. It is essential to define the prognosis of both the emergency and the cancer stage, and to take into account the patient's performance score when deciding on the treatment [9,11]. The most appropriate therapy is the treatment that has clinical benefit, and does not reduce the quality of life. Decisions regarding treatment in emergency situations are often not easy to make, and a multidisciplinary approach can provide more solid arguments regarding the invasiveness of treatment. In an acute setting, time is scarce and the opportunity for multidisciplinary discussion is often not available. Decisions have to be made in a timely manner for prompt management of the emergency, and thus are often made by a single specialist. Acute oncology teams and units have been introduced for the care of patients with oncologic emergencies. These teams could prevent unnecessary investigations or therapy, and can provide quick referral to palliative care when necessary [12][13][14][15][16][17]. However, specialized acute oncology care is not widely implemented in common medical practice.
In order to provide arguments for the future development of structural acute oncology pathways for faster institution of optimal care, it is important to be aware of (1) the occurrence of (surgical) oncologic emergencies, (2) the decisional process and the amount of patients being discussed in multidisciplinary cancer conferences, and (3) the clinical outcome of current management. In this study, the management and short term outcome of patients after surgical oncologic emergency consultation was analyzed.
Materials and Methods
A prospective registration and follow up was performed for adult cancer patients (age > 18) in the University Medical Center Groningen (UMCG), who required consultation for surgical oncologic emergencies, between 01-11-2013 and 30-04-2014. The protocol was consistent with the Declaration of Helsinki of 1975, as revised in 1983, and approval for the study was obtained from the institutional Medical Ethics Committee of the University Medical Center Groningen. Written informed consent was obtained from participants, and all data were analyzed anonymously.
Criteria for inclusion were: surgical oncologic emergency consultation for symptoms caused by any type of malignant disease (including primary presentation), or for symptoms caused by current or previous cancer treatment (surgery, radiation therapy, chemotherapy, drug targeted therapy). When a surgical oncologist and/or surgical resident was involved in the diagnostic and decisional process, and the possibility of surgical treatment had been evaluated, the consultation was regarded as being a surgical oncologic emergency consultation. Patients who required emergency consultation for symptoms that could not be related to malignant disease or cancer treatment were excluded for analysis. This means that the entire hospital population was studied, including patients who were initially admitted on other than surgical wards (e.g. gynecology, internal medicine) and required surgical oncologic consultation.
Patients could meet the inclusion criteria through four possible pathways for surgical emergency consultation: (1) presentation at the Emergency Room (ER), (2) non-elective admission through the (surgical) outpatient clinic, (3) transfer from other hospitals, and (4) in-hospital request of surgical consultation for patients admitted for other specialties.
General patient characteristics were documented upon inclusion: gender, age, oncological history, previous cancer treatment, disease status before the emergency consultation (not being diagnosed with cancer, Alive With Disease-AWD, No Evidence of Disease-NED-after cancer treatment), intention of the current cancer treatment (diagnostic, curative, palliative). The following variables were documented during the follow up: type of surgical oncologic emergency, type of treatment (i.e. surgical procedures or other interventions), number of involved medical specialties during hospital admission, and whether the patient was discussed in a Multidisciplinary Cancer Conference (MCC). In the UMCG, multiple regularly scheduled MCCs for different cancer types are integrated into common cancer care. In general, they include the disciplines that are involved in the diagnostic process and treatment according to the prevailing guidelines. For this study, a patient was regarded as being discussed in a MCC when a report of the MCC was documented in the patient's chart.
The follow up period was 30 days. At final follow up, the patients' charts were analyzed for disease status (AWD, NED), intention of cancer treatment (curative, palliative) and mortality. All data were processed through IBM SPSS Statistics 22 for statistical analysis.
Results
During the study period, 3737 patients had visited the ER for surgical consultation, and 402 of these patients (11%) had a previous history of cancer, or active malignant disease. After visiting the ER, 147 patients (4% of all 3737 patients, and 37% of the 402 cancer patients) were identified to have surgical oncologic emergencies and were included for analysis. The remaining patients visited the ER for non-oncologic issues.
Further, 19 cancer patients were non-electively admitted through the surgical outpatient clinic for surgical oncologic emergencies, another 35 cancer patients required in-hospital surgical oncologic emergency consultation during admission for other medical specialties, and 6 patients were transferred from other hospitals.
In total, 207 patients with surgical oncologic emergencies were included for analysis through all pathways. There were 101 (49%) males and 106 (51%) females, and median age was 64 (range 19-92) years. Of all patients, 21 patients had a primary presentation of malignant disease, 132 patients were alive with disease (AWD) that was previously diagnosed, and 54 patients had No Evidence of Disease (NED) after being treated for cancer in the past, of whom 9 patients presented with recurrent disease. Of the patients who had been diagnosed with cancer in the past, the most prominent type of cancer was colorectal carcinoma (26%). Table 1 provides an extensive overview of the baseline characteristics for all 207 cancer patients with surgical oncologic emergencies.
Baseline characteristics of cancer patients who were consulted for surgical oncologic emergencies

Obstruction (e.g. colorectal, biliary, small intestine) and infection were the most frequent conditions for surgical oncologic emergency consultation (42% and 32%, respectively) (Table 2). After surgical oncologic emergency consultation at the ER, 109 of the 147 patients (74%) were directly hospitalized. Four of the remaining 38 patients (11%) had an emergency admission within 30 days after the first consultation at the ER. Together with the patients who were already hospitalized before the surgical oncologic emergency consultation (the patients who required in-hospital consultation or transfer from other hospitals), 173 of all patients with surgical oncologic emergencies (84%) had been hospitalized during the study period.
During hospitalization, the median number of radiologic, endoscopic, and surgical interventions was 1 (range 0-9). Eighty-three of all patients (40%) underwent surgery during the follow up period. The median duration between inclusion and surgery was 38 hours (range 0-720 hours/30 days). Of these patients, 70 patients (84%) underwent surgery in an emergency setting after a median period of 25.5 (range 0-720) hours, and 13 patients (16%) underwent elective surgical procedures after a median period of 16 (range 7-30) days. The median number of involved medical specialties during admission was 2 (range 1-8). Within 30 days after surgical oncologic emergency consultation, 61 patients (30%) were discussed in a MCC, after a median duration of 12 (range 1-30) days. For only 25 of these patients (15% of all hospitalized patients, and 41% of all patients who were discussed), the MCC took place while they were hospitalized, after a median period of 8 (range 1-35) days after emergency consultation. The remaining 36 patients were discussed in a MCC after discharge from the ER or hospital ward. Of the 62 patients with symptoms caused by malignant obstruction, 42% were discussed in a MCC (Table 2), and 61% of these patients underwent surgical treatment during the follow up period. Gastrointestinal perforation in the presence of tumor mass (14%), benign obstruction (17%), and postoperative wound infections (20%) were the diagnoses with the lowest rates of multidisciplinary discussion.
Only 34 (41%) of the 83 patients who underwent surgery were discussed in a MCC during the follow up period. For 27 of these 34 patients (79%), the surgical procedure was performed before the MCC, and only 7 patients (21%) were discussed in a MCC prior to surgery. Regarding the moment of surgery in relation to the moment of the MCC, the median period was 9 days prior to (range 26 days prior to-21 days after) the MCC (Fig 1).
Before surgical oncologic emergency consultation, 32 patients (16%) were in a diagnostic and/or staging process, 49 patients (24%) received cancer treatment with curative intent, 57 patients (28%) had NED after being treated for cancer in the past, and 48 patients (23%) were diagnosed to have incurable malignant disease and were in a palliative stage of treatment. Another 21 patients (10%) had no cancer diagnosis before surgical oncologic emergency consultation, and had a primary presentation of malignant disease. At final follow up, 70 patients (34%) received adjuvant treatment with curative intent or were scheduled for supplementary curative surgical procedures, 42 patients (20%) were NED, and 69 patients (33%) were in a palliative stage, and 26 patients (13%) were deceased.
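As an illustrative aside, not part of the original SPSS analysis, the outcome percentages quoted above follow directly from the raw counts. A minimal Python sketch of this arithmetic (the category labels and rounding convention are ours) is:

# Re-computing the 30-day outcome percentages from the counts reported in the text.
outcomes = {
    "adjuvant/curative treatment ongoing or planned": 70,
    "no evidence of disease (NED)": 42,
    "palliative stage": 69,
    "deceased": 26,
}
total = sum(outcomes.values())  # 207 patients in the cohort
for label, count in outcomes.items():
    # Print each category as a fraction of the whole cohort, rounded to whole percent.
    print(f"{label}: {count}/{total} = {count / total:.0%}")
# This reproduces the 34%, 20%, 33% and 13% reported above.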
Many of the patients who were in a palliative stage at final follow up had undergone surgery after inclusion (52%), as had 35% of all the patients who were deceased. Most patients died of progressive disease (77%), and 23% died of clinical sepsis or multiple organ failure. Of the deceased patients, 12 (46%) died at home after the institution of palliative care, 10 (39%) died during hospital admission, and 4 patients (15%) were transferred to a nursing home or hospice.
Discussion
To our knowledge, this is the first extensive analysis of surgical oncologic emergencies and their management in clinical practice. For 37% of the cancer patients who had visited the ER, the surgical consultation at the ER was related to a surgical oncologic emergency. Surgeons will not only be confronted with oncologic emergencies through the ER, but also through the outpatient clinic and through in- or inter-hospital consultation. Almost a third of the patients in this cohort were consulted through pathways other than the ER. In the past decades, MCCs have become common practice, especially in elective oncology care [18]. Cancer patients represent a complex population and often require treatment from multiple medical specialties. In this study, only 30% of the patients who had been consulted for surgical oncologic emergencies had been discussed in a MCC within 30 days after emergency consultation. This is strikingly low, since the national and institutional guidelines require that every cancer patient is discussed in a MCC to establish general agreement before the start of cancer treatment. For all 33 patients the MCC took place according to a regular weekly schedule, and acute multidisciplinary discussion upon admission was not available. This means that for the majority (79%) of the patients who were discussed, emergency treatment was instigated before the MCC; for the 34 patients who underwent surgery and who had been discussed, there was a median period of -9 days in relation to the MCC. The rate of patients being discussed in a MCC was independent of the number of medical specialties that were involved during admission (a median of 2 different specialties per patient).
These results confirm the outcome of other studies, namely that for most cancer patients who are non-electively treated for surgical oncologic emergencies, emergency (surgical) management, or the decision to refrain from surgery, is performed without discussing the patient in a MCC [10]. Physicians of different medical specialties, who are involved in the treatment process of one patient, can have one-to-one exchanges regarding field-specific issues of attention. Nevertheless, without discussing these issues in an organized group setting, no overall objective view will be obtained in order to connect all issues and align them in the same direction of treatment. For patients who require emergency treatment, non-scheduled multidisciplinary evaluation by acute oncology experts should be available.
Obstruction is the most frequent oncologic emergency seen in surgical practice [9]. In this study, of all patients with surgical oncologic emergencies, 42% had symptoms of obstruction of either malignant or benign origin. Surgery often seems to be the best solution for relief of the obstruction, but could also have an adverse influence on survival and quality of life. Cancer stage and the performance status of the patient are the most important predictors of survival, and the main factors influencing the success of invasive therapies [11,[19][20][21]. Patients with obstruction of the gastrointestinal tract often require emergency surgery, and the time frame until the next scheduled MCC will be too long. For all oncologic emergencies, evaluation of all treatment options is essential. Even if the consequences of the emergency are fatal, the quality of life remains the highest priority at the end of life. Only 42% of the 62 patients with symptoms caused by malignant obstruction were discussed in a MCC. However, 61% of all these patients underwent surgical treatment. Gastrointestinal perforation in the presence of tumor mass, benign obstruction, and postoperative wound infections were the diagnoses of patients with the lowest rate of multidisciplinary discussion. The severity of the diagnosis (wound infection) and time constraints (gastrointestinal perforation) are possibly factors that have influenced the different rates of multidisciplinary management.
The number of patients with poor outcome after surgical oncologic emergency consultation was high. Within 30 days, 33% of patients had ended in a palliative stage and 13% were deceased. Taken together, 46% of all patients had a poor outcome in the very short term. This was twice as many as the 23% of patients who were already in a palliative (and thus poor) stage before inclusion. Other studies have reported 30-day mortality rates of 10% and 30% after emergency abdominal surgery in cancer patients [11,22]. The cohort of patients in this study represents a more heterogeneous category; however, the 30-day mortality rate remains high. Regardless of the outcome, many patients had undergone surgery. Of the patients who ended in a palliative stage, 52% had undergone surgery during the study period, as had 35% of all the patients who were deceased.
Physicians have the tendency to overestimate the life expectancy of terminally ill cancer patients, and it is against the nature of many to spare someone from treatment [23][24][25]. An earlier study by Ramchandran et al. tried to create a prediction model to identify hospitalized cancer patients at risk for 30-day mortality, based on information only from the electronic medical record [26]. Patients' performance scores were not included in the model, because it requires clinical assessment of the patient. However, the performance score has been reported to be one of the most important predictors of outcome [19][20][21]27]. Further research to identify influencing factors, and the development of prognostic tools, is necessary for more accurate prediction of outcome in the acute setting. Prognostic aids for decision making in a multidisciplinary setting will contribute to argumentation for (refraining from) invasive therapies. Further, when the expected outcome of therapies, or a near death, is communicated to the patient and family, it can prevent disappointment after non-successful invasive treatment, and preserve the quality of a patient's life during the last stage [28,29].
The heterogeneity of the common cancer patient population, and the variety of surgical oncologic emergencies is evident in this study. The interpatient variety (patient performance, cancer stage) is the cause of variable clinical outcome and impedes guidelines for management of these emergencies. This heterogeneity is the core of the difficulties and dilemmas in clinical (surgical) practice, and supports the need for the development of decision aids and acute oncology pathways with structural multidisciplinary management.
Since this is an observational study, it is not possible to evaluate whether the treatment of patients with surgical oncologic emergencies would have been different if the decisional process had involved a MCC. The reasons why some patients were discussed in a MCC and others were not were not recorded in this study. For patients who were discussed and underwent surgical procedures, the median time period of 9 days between surgery and a MCC implies that, at this point, MCCs are used for decision making after a pathology result is present, and not for acute treatment decisions including surgery. Furthermore, the fact that, also for many patients who were not discussed in a MCC, multiple medical specialties were involved in the treatment process could reflect the complexity of the pathology.
This study was performed in one tertiary university hospital, and comparison to other hospitals will be difficult. However, since the patient population represents an entire hospital population, the authors believe that the results of the current study reflect common medical practice. In most hospitals, patients with oncologic emergencies will present through the ER, and specialized acute oncology care has not been implemented in standard emergency care.
The implementation of acute oncology pathways, providing systematic multidisciplinary management of all patients, would be the most optimal way for decision making and treatment of patients with oncologic emergencies [12][13][14][15][16][17]. Acute oncology care should include structural availability of a specialized team of (at least) an emergency care specialist, a surgical oncologist, a radiation oncologist, a medical oncologist, a palliative care specialist, and an oncology nurse. This team will be trained in acute oncology care, and should be available throughout the day and evening (in exceptional cases during the night). The members of this acute oncology team need to be involved in the evaluation and treatment process directly after emergency presentation. In this way, non-scheduled multidisciplinary decision making will be possible and personalized treatment can be instituted in the shortest term, preventing delay of required therapies or overtreatment.
Close involvement of the patient's general practitioner is required during hospital admission. In this way, when invasive treatment is not expected to be favorable for the patient, palliative care can be instituted more efficiently and on shorter term. At the end of life, the length of hospitalization should be limited to only what is needed for care with clinical benefit.
Further prospective research is necessary to investigate the influence of acute oncology pathways and structural multidisciplinary management on the clinical outcome and quality of life.
Conclusions
Obstruction (e.g. colorectal, biliary, small intestine) and infection were the most frequent conditions for surgical oncologic emergency consultation. Many patients ended in a palliative stage, and the overall mortality within 30 days was 13%. In most cases, emergency treatment, including invasive therapies such as surgery, occurred without discussing the patient in multidisciplinary cancer conferences, regardless of the fact that multiple medical specialties were involved in the treatment process. There is a need for the development and evaluation of prognostic aids and acute oncology pathways providing structural multidisciplinary management. This will result in the institution of the most appropriate personalized cancer care in the shortest term, preventing delay of required therapies or overtreatment.
|
v3-fos-license
|
2022-11-04T06:18:04.022Z
|
2022-11-03T00:00:00.000
|
253267973
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "http://hv.diva-portal.org/smash/get/diva2:1721697/FULLTEXT01",
"pdf_hash": "03100c043961a066c83f4985589d4e14a2713cfe",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:626",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3e74f89116b39eb3f22950fa83843296ddd9ccb9",
"year": 2022
}
|
pes2o/s2orc
|
Nurses' experiences of person‐centred care planning using video‐conferencing
Abstract Aim The aim was to illuminate how nurses experience person‐centred care planning using video conferencing upon hospital discharge of frail older persons. Design Care planning via video conferencing requires collaboration, communication and information transfer between involved parties, both with regard to preparing and conducting meetings. Participation of involved parties is required to achieve a collaborative effort, but the responsibilities and roles of the involved professions are unclear, despite the existence of regulations. Method A qualitative content analysis was conducted based on 11 individual semi‐structured interviews with nurses from hospitals, municipalities and primary care in Sweden. Results This study provides valuable insights into challenges associated with care planning via video conferencing. The meeting format, that is video conferencing, is perceived as a barrier that makes the interaction challenging. Shortcomings in video technology make a person‐centred approach difficult. The person‐centred approach is also difficult for nurses to maintain when the older person or relatives are not involved in the planning.
and unite coordinated efforts at home in order for the frail older person to leave the hospital in a safe manner. Thus, a coordinated care-planning meeting, initiated by the nurses at the primary healthcare centre, takes place prior to discharge.
Coordinated care planning can provide opportunities to capture the frail older person's need for care, and support coordinated care planning efforts. The care-planning meeting involves the patient, relatives and professionals from different healthcare providers, and aims to plan and transfer information regarding care. However, the nurses are the main professional group in the care-planning meeting. The care planning should be done in such a way that communication is efficient and safe (Chichirez & Purcărea, 2018), and the older person receives care of high quality, continuity and security (Svensson et al., 2016). However, there appears to be a lack of trust between the different parties involved (Larsen et al., 2017;Larsson et al., 2019), similar to what has been observed in person-centred care (Forsman & Svensson, 2019), since deficits in collaboration and communication are common (Nordmark et al., 2016). Thus, coordination between hospitals, primary care and municipal care is an important issue in the healthcare of frail older persons with complex care needs (Kneck et al., 2019).
As an approach to conduct care-planning meetings, video conferencing is now used increasingly often. A growing number of functions in the healthcare system are being performed digitally (Westerlund & Marklund, 2020) as video technology is employed in telemedicine technology (Baker & Stanley, 2018;Ignatowicz et al., 2019), teleconsultations (Diedrich & Dockweiler, 2021) and virtual care delivery (Li et al., 2021) such as pain management programs (Walumbe et al., 2021). Video technology can improve the efficiency of care and increase the access to specialist expertise (Nilsson et al., 2010).
The Covid-19 pandemic has demonstrated the possibilities and importance of video technology in healthcare. Almost all types of meetings can now be conducted using video conferencing, partly as a result of visitor limitations during the pandemic. The scope of this study is limited to care-planning meetings using video conferencing in which multiple healthcare professionals meet the patient and his/her family together. While the use of video conferencing in care-planning meetings has not been investigated to a great extent, research shows that healthcare professionals are generally positive to using the technology in this context (Newbould et al., 2017;Shubber et al., 2018). Pols (2012) argues that technology can offer adequate alternatives to personal communication since physical contact should not be necessary in meetings, instead, the health and healthcare context should be in focus. Hence, video conferencing creates new opportunities, but also new aspects to consider, since there are challenges in digitalization of healthcare (Allwood, 2017).
In order to avoid dehumanization in an increasingly digitized care setting, it is important to promote relationships and consider the older persons' autonomy and dignity (Jacobs et al., 2017). A person-centred approach is not always easy to maintain, as healthcare meetings via video conferencing seem to imply some sort of barrier between people (Hedqvist & Svensson, 2019). Understanding of how technology and video conferencing can support and improve healthcare for older people is lacking (Beirao et al., 2016; Ignatowicz et al., 2019). Moreover, extant literature recounts an extensive need for increased involvement of patients in the dialogue about health conditions, and calls for preparation for the care after discharge (Forsman & Svensson, 2019; Lindblom et al., 2020). Furthermore, aspects related to the nurses' experiences and the person-centred care presented to patients need to be considered. It is also important to understand the potential benefits, limitations and professional implications that affect the adoption and use of video conferencing (Penny et al., 2018). Therefore, further studies are required to understand nurses' experiences of how older persons' care needs can be coordinated by using video conferencing, while at the same time maintaining a person-centred approach at a physical distance.
The aim of this paper is to illuminate how nurses experience person-centred care planning using video conferencing upon hospital discharge of frail older persons.
| THEORETICAL BACKGROUND
Hospital discharge, that is, the transition of frail older persons from hospital to their homes, requires coordination of care planning between different healthcare providers. Care planning using video conferencing is increasing, while care planning during in-person meetings is decreasing. At the same time, video conferencing implies challenges for person-centred care.
| Coordination of care planning
To provide good healthcare to frail older persons with multiple illnesses, interprofessional collaboration is necessary (Ekdahl et al., 2010;Ivanoff et al., 2018). This implies an active and ongoing partnership between people from distinct professional cultures, as well as from different healthcare providers. Collaboration requires that professionals, and nurses in particular, work together to solve problems and provide common healthcare efforts (Donnelly et al., 2021;Pullon et al., 2016). Coordination between different healthcare providers such as hospitals, primary care and municipal care, is thus an important prerequisite for providing functioning healthcare to frail older persons with complex care needs. Timely information sharing among healthcare providers is a necessary part of the coordination process. However, the healthcare that older persons receive from multiple providers is often unorganized and confusing (Elliott et al., 2018). The coordination can be adversely affected by lack of trust between different healthcare providers (Larsson et al., 2019). Interaction and interpersonal relationships emerge as central aspects in well-functioning coordination and care-planning meetings that focus on older persons' healthcare and discharge from hospital (Larsson et al., 2017;Petersen et al., 2019).
Thus, high demands are placed on nurses in the collaboration and coordination processes, as well as in care-planning meetings, to promote shared decision-making regarding the older persons' care among the healthcare professionals as well as the patient and relatives (Hansson et al., 2018). Thus, patient engagement processes are also important in order to enhance appropriate coordination for the best interests of the older person.
Personnel from relevant healthcare professions, with skills needed for meeting the older persons' need for healthcare and social care efforts after discharge, participate in the care-planning meetings. Today, video conferencing is a widely used alternative to in-person meetings for care planning, partly due to the Covid-19 pandemic (Liu et al., 2021). Video conferencing is the technology that is considered most similar to in-person meetings (Park et al., 2014).
| Care planning via video conferencing
Video conferencing makes it possible for professionals, such as primary healthcare nurses, municipal nurses, municipal assistance officer, rehabilitation professionals from the primary healthcare centre and from the municipality, and others, to attend a meeting at the same time together with the patient (Larsson et al., 2019). This is considered positive for the frail older person as it provides an opportunity for people with different professions to meet, and it enables personalized care planning before discharge from the hospital (Hofflander, 2015). Video conferencing also enables relatives to participate in the care planning even if they live far away or cannot go to the hospital. Studies show that video-conference meetings generally take less time than in-person meetings and tend to be more well-structured. In addition to the meetings being shorter and more efficient, travel times also decrease dramatically (Nordmark et al., 2016;Vollenbroek-Hutten et al., 2017). A study on nurses' attitudes towards care planning via video conferencing shows that their overall attitude towards using this technology for meetings is positive, especially as it increases efficiency (Shubber et al., 2018). There are, however, some concerns in the literature among nurses that frail older persons may be overridden in video-conference meetings. One of the most difficult aspects of having a video-conference meeting is to make sure that all participants are aware of, and involved in, what is happening (Marlow et al., 2016). Nurses should include the older person and relatives in the video-conference meeting (Graves & Doucet, 2016). However, older persons and relatives often feel excluded in the meetings (Hedqvist et al., 2020). Human contact, touch and non-verbal behaviour are presented as very important parts of the care, parts which are jeopardized by the use of video conferencing (Miller, 2010).
Video conferencing enables meetings to take place without the requirement of being physically present in the same room (Ignatowicz et al., 2019). The performance of the video-conference system, which depends on Internet connectivity and on image and sound quality, strongly influences the meeting experience; limited image quality as well as small screen images hinder communication (Allen et al., 2008). Delays in sound transmission adversely affect the conversation and communication, as it becomes difficult to maintain a natural flow and take turns in the conversation. Studies show that it can be more difficult to establish and maintain mutual trust in a video-conference meeting than face-to-face. Thus, there are several challenges associated with video-conference meetings, and video conferencing does not always live up to expectations.
In fact, the implementation and use of video conferencing in care planning have proven to be more complex and time consuming than initially anticipated (Shubber et al., 2018). In video-conference meetings, challenges can increase further with the number of participants. Communication through video conferencing is perceived to be more difficult than it would be face-to-face, and impairs the ability to build trust with the patient (Graves et al., 2018). Factors that determine the quality of a video-conference meeting are related to the possibilities of establishing human interaction, where eye contact is an important aspect.
| Person-centred care via video conferencing
Over the past two decades, person-centred care has gained more attention, especially in relation to research and policies linked to high-quality health care (Mead & Bower, 2000). Currently, there is no uniform definition of the concept of person-centred care, but a recurring theme concerns the ethical issue of treating patients as persons (Epstein & Street, 2011). A prerequisite for performing person-centred care is the healthcare provider's ability to communicate and interact with the patient in a person-centred manner.
In providing health care, professionals are required to be open and responsive to each patient, perceive the patient as an expert with regard to his/her own health, and treat the patient as a partner and an equal person.
Previous literature has found that person-centred care has a positive impact on healthcare outcomes (Olsson et al., 2013).
Person-centred care, which involves shared decision-making, aided decision-making and meaningful encounters, gives frail older persons the possibility to confirm and retain a position in the context, and increases their well-being and independence (Forsman & Svensson, 2019). Video conferencing specifically challenges person-centred care, as the professionals encounter a work phenomenon that entwines task execution and relationship building within a virtual setting. Some people can also feel uncomfortable with the lack of physical contact. Moreover, older persons with dementia or hearing disabilities may lose some of the communication and participation in care-planning meetings via video conferencing (Graves et al., 2018). Such disabilities are quite common in frail older persons with complex needs for care, which have to be coordinated and planned after hospital discharge.
| METHOD
The study used an inductive qualitative descriptive method (Graneheim & Lundman, 2004). The data collection was conducted through individual interviews with nurses with experience of care planning via video conferencing.
| Participants
A strategic selection of experienced nurses from hospitals, the municipalities and primary care was invited to participate in the interviews. Including nurses from different organizations made it possible to get a variety of responses and to study the phenomenon from several perspectives. Eleven nurses (ten female and one male) accepted the invitation to participate. Their professional experience varied between 5 and 23 years, and all participants had experience in care planning via video conferencing. The participants of the study are described in Table 1.
| Collection of data
The data were collected between April 2019 and March 2020; the study thus started before the Covid-19 pandemic. Towards the end of the data collection period, the Covid-19 virus emerged, and the use of video conferencing therefore increased during the final part of this study, at the beginning of the pandemic.
The data collection method was semi-structured individual interviews with open-ended questions. An interview guide was used as support so that the same questions were asked at each interview (Kvale, 1996). Follow-up questions were used to confirm, reflect on and get a deeper understanding of the participants' stories (Polit & Beck, 2016). Each interview lasted between 40 and 90 min; they were conducted at places chosen by the participants, and were recorded and transcribed verbatim.
| Data analysis
The interviews were analysed using an inductive method for content analysis, elucidating both manifest and latent content (Graneheim & Lundman, 2004). The inductive analysis involved a back-and-forth process between the text and the authors' experiences, and between parts of the text and the whole, which eventually created a new understanding (Elo & Kyngäs, 2008;Hsieh & Shannon, 2005).
Manifest content refers to what is directly expressed in the text, and latent content refers to the interpreted meaning of the text.
The analysis began with the text being repeatedly read through naïve reading to get a sense of the whole, based on the purpose of the study. Subsequently, the text was divided into meaning units that were condensed and coded. Codes with similar content were grouped into nine sub-categories that formed three categories, with respect to manifest content. Finally, the overall theme emerged, highlighting the latent content and the underlying meaning of the text (Graneheim et al., 2017;Graneheim & Lundman, 2004). An example of the analysis process is described in Table 2.
| RE SULTS
An overall theme, three categories and nine sub-categories illuminate how nurses experience person-centred care planning using video conferencing upon hospital discharge of frail older persons, as presented in Table 3.
| Preparations
The informants described the importance of carefully preparing the different forms of work connected with care planning via video, based on a shared responsibility among caregivers to assess the patient's status and the efforts provided to improve the situation.
| Different forms of work
The informants pointed out that there was no common conceptual apparatus, which meant that several different terms were used to describe the care-planning meeting: pre-meeting, reconciliation meeting, planning meeting, care-planning meeting and meeting about CIP.
| Assess status and efforts
The informants described the importance of being able to follow the status of the patient during the hospital stay in order to correctly assess and plan the interventions. It was also considered important that information about what efforts had been provided to the patient before the hospitalization was documented in the common IT system.
Another important aspect in assessing status and efforts was that the participating parties knew what their own and other organizations could offer the patient and relatives in terms of care. One of the most difficult parts of assessing status and efforts, as highlighted by the informants, was assessing whether and when a CIP would be implemented.
| Conduct the meeting
The informants emphasized the importance of creating a structure, agreeing on decisions and documenting, to effectively conduct video-conference care-planning meetings.
| Create structure
The informants pointed out the importance of having a chairman for the meeting, who allows everyone to speak. The chairman was also expected to be the rapporteur.
| Agree on decisions
The informants emphasized the importance of agreeing on decisions about which efforts would be relevant to apply after discharge. The informants pointed to the ongoing discussion among care providers about patients' need for rehabilitation and home care. The informants also highlighted other problems in connection with discharge, such as disagreement with the doctors' decisions about whether the patient was really ready to be discharged or when the discharge should be planned.
How can you think the patient is ready for discharge … can't participate in the care planning even once because they are so bad… but you can't keep on arguing with a patient with home care and rehab (11).
| Document jointly
The informants expressed disappointment that the patients' documentation was not regularly updated by all care providers. Lack of time, and of knowledge about what to document and where, were considered to be the reasons. To remedy the problem, they urged each other to document relevant information in the common IT system. It emerged that those who did not attend a meeting could communicate their assessments and efforts in the IT system either before or after the meeting.
You have to be in the system all the time as well… so it takes quite a lot of time (3).
| Person centring
The informants said that person-centred care planning via video presupposes that the care-providing parties listen to the person's story and adapt to the situation in order to meet the person in a person-centred manner through a monitor.
| Listen to the story
Listening to the patient's story is one of the important, basic principles of person-centred care, the informants said. It was important to be able to balance between allowing the patient to speak about his/her life and deciding when the story became too long. During the meeting, it was important to respectfully direct the conversation towards the planning, and at the same time maintain the interaction with the patient.
When you can, direct the conversation to this planning that has to be done, but the interaction with the patient is the most important (5).
| Adapt to the situation
The informants pointed out that video conferencing as a meeting format has become a first choice over physical meetings in care planning, without giving the patient much choice. They considered that since the care-planning meeting put the patient and relatives in a vulnerable situation, it was important to provide information, show the patient respect and make the patient truly involved in the decision-making. They said that when there was an established relationship with the patient, it was easier to adapt the situation and make the patient more involved in the video conference.
It was also easier to know how the patient functioned and what she or he wanted help with. The informants described that the patients would be so happy to recognize someone on the other side of the screen that it seemed as if they forgot that they were not sitting in the same room.
It is a vulnerable position as a patient to sit there… oh, then they appreciate that they recognize me… hey, [xxx] now I recognize you (10).
| Meet through the screen
According to the informants, it was more difficult to meet the patient as a person via the screen, and they experienced a form of distance when proximity was lost in the video meeting. Some aspects were absent, such as eye contact, head movements, gaze, handshakes and the person's physical appearance. Vision, hearing and cognitive impairments made the video meeting more difficult, but the informants believed that the same difficulties could also exist in physical meetings. They considered that the screen did not directly affect the outcome of the meeting.
Don't feel that what is lost compared to a physical meeting have any effect on the result (1).
| DISCUSSION
Healthcare planning via video conferencing is experienced differently by nurses from different healthcare organizations. Healthcare planning via video conferencing is also different from in-person meetings, from a person-centred care perspective. We have identified nurses' experiences of healthcare transition from hospital to the person's home, coordination and collaboration among healthcare professionals, as well as a person-centred work practice in healthcare planning.
The study shows that video conferencing as a meeting format has become a first choice over physical meetings when planning care for frail older persons who need coordinated care interventions at home after discharge from hospital, even before the Covid-19 pandemic. When a frail older person leaves the hospital, it is important to assess care needs, and to plan and coordinate care interventions together with the care providers, the older person and relatives. Well-functioning information transfer in an online journal is a necessary tool based on current legislation. But for this to work, coordination of the care providers' professional resources and interprofessional collaboration are required (Larsson et al., 2019; Pullon et al., 2016). The results show that there is a lack of requirements from the management regarding the nurses' competence and education.
Nurses currently do not need any form of formal education at the advanced level to work with care planning via video conferencing.
Knowledge of assessment and collaboration to achieve coordinated care was also requested. Research shows that nursing staff trained at the advanced level excel in comparison with undergraduates, thanks to their deeper knowledge about independently assessing and planning care measures to avoid "unnecessary" hospitalizations (Glassman, 2016;Jobe, 2020).
In this study, it emerged that when care planning is carried out via video conferencing, a barrier is created that requires the nurses to strengthen the older person by creating an interaction that is maintained throughout the meeting (Graves et al., 2018). It also emerged that the nurses sometimes lack the ability to listen to the older persons' stories. Hedqvist and Svensson (2019) point out that lack of information about the structure and content of the video conference makes it difficult for older persons and relatives to prepare for the meeting. They experience insecurity and feel neglected in a situation unknown to them. Our study also shows that older persons have limited opportunities to really influence care design: the scope for action is too small and they experience that they are not given any choices during the video conference itself. Facchinetti et al. (2021) believe that this is because the care staff need to inform, involve and prepare the older persons for the discharge, so that they can handle their condition at home without feeling abandoned. Sinclair et al. (2020) found that functioning coordinated planning of care interventions before discharge has become more important in connection with the outbreak of the Covid-19 pandemic in 2020. The pandemic has thus accelerated the development of the digitalization of collaborative working methods such as coordinated planning of care interventions via video conferencing. Silsand et al. (2021) show that the use of video conferencing offers opportunities to use healthcare professionals' time more efficiently, reduces travel times and improves the exchange of information across healthcare providers' professional boundaries.
| CONCLUSIONS
In summary, this study concludes that it is complex to create a sustainable way of working for nurses with regard to planning and coordinating care interventions. It is difficult to maintain person-centred care unless the older person or relatives are involved in the care and in the care planning throughout the entire process. Although a cornerstone of person-centred care is meeting face-to-face, video conferencing can be seen as a complement for coordination and follow-up of care and treatment. More research is needed to study different methods using digital tools for improved coordination of care for frail older persons with complex care needs.
DATA AVAILABILITY STATEMENT
Data available on request due to privacy/ethical restrictions.
ETHICAL STATEMENT
The study has been approved by the Swedish Ethical Review Authority [Approval number: 932-18].
ARG-walker: inference of individual specific strengths of meiotic recombination hotspots by population genomics analysis
Background: Meiotic recombination hotspots play important roles in various aspects of genomics, but the underlying mechanisms regulating the locations and strengths of recombination hotspots are not yet fully revealed. Most existing algorithms for estimating recombination rates from sequence polymorphism data can only output average recombination rates of a population, although there is evidence for heterogeneity in recombination rates among individuals. For genome-wide association studies (GWAS) of recombination hotspots, an efficient algorithm that estimates the individualized strengths of recombination hotspots is highly desirable.

Results: In this work, we propose a novel graph mining algorithm named ARG-walker, based on random walks on ancestral recombination graphs (ARG), to estimate individual-specific recombination hotspot strengths. Extensive simulations demonstrate that ARG-walker is able to distinguish the hot allele of a recombination hotspot from the cold allele. Using the output of ARG-walker, we performed GWAS on the phased haplotype data of the 22 autosomes of the HapMap Asian population samples of Chinese and Japanese (JPT+CHB). Significant cis-regulatory signals were detected, corroborated by the enrichment of the well-known 13-mer motif CCNCCNTNNCCNC of the PRDM9 protein. Moreover, two new DNA motifs were identified in the flanking regions of the significantly associated SNPs (single nucleotide polymorphisms), which are likely to be new cis-regulatory elements of meiotic recombination hotspots of the human genome.

Conclusions: Our results on both simulated and real data suggest that ARG-walker is a promising new method for estimating individual recombination variations. In the future, it could be used to uncover the mechanisms of recombination regulation and human diseases related to recombination hotspots.
Background
Meiotic recombination is a crucial step in the reproduction of many species. The reciprocal exchange of genetic material between homologous chromosomes during meiosis (i.e. meiotic recombination) is an important evolutionary force for increasing genetic diversity and is also essential for proper chromosome segregation. Knowledge about recombination is important for understanding the linkage disequilibrium (LD) structure of the genome [1], phenotypic diversity and evolution in a population [2], and a variety of genetic diseases [3]. Recombination events do not occur randomly along the chromosomal DNA, but rather cluster in short chromosomal intervals, typically 1-2 kb long, named recombination hotspots [4]. Recent developments in the construction of genome-scale, high-resolution recombination maps and new biological techniques for analysing this cellular process have provided a comprehensive view of the distribution of recombination hotspots as well as insights into the regulatory systems that control the recombination landscape. In particular, the PRDM9 protein was found to be an important trans-regulator controlling the activity of recombination hotspots through binding to a 13-mer DNA motif (CCNCCNTNNCCNC) in the human genome [5][6][7][8]. However, the binding motif of PRDM9 does not cover all the human hotspots [7], and there still exists a substantial gap in our understanding of the regulatory system. Moreover, open questions and challenges remain, such as the recombination variation in association with gender and age, epigenetic regulators of meiotic recombination, the roles of recombination hotspots in human diseases, etc. Thus, finding other trans- and cis-regulators that can also regulate recombination hotspots is highly desired.
Most existing methods for the statistical analysis of recombination mainly rely on the estimation of the average recombination rate of a population, such as coalescent-based analysis of linkage disequilibrium (LD) [4,9]. Software tools have been developed to help researchers identify recombination hotspots by LD analysis, such as LDsplit [10][11][12][13], which has been applied to disease study [14]. Yet the power of individual recombination events tends to be overlooked by most approaches, despite the accumulating evidence that recombination frequencies differ significantly between ethnic groups, genders, and also among individuals [15][16][17]. Although some methods, including pedigree analysis [17,18] and sperm typing [19], can handle sex-specific and individual recombination analysis, they are limited by technical factors such as high cost, short covered regions, or low resolution. Therefore, algorithms that can infer individual-specific recombination rates both on a large scale and with high resolution would be very useful for genomic studies of recombination hotspots.
An ancestral recombination graph (ARG) is a topological structure that captures the genealogical history of individuals, including historical mutations, recombination events and merging, back to a common ancestor. Thus the ARG is indispensable for the study of recombination events in various evolutionary scenarios. Despite the computational complexity of ARG inference, several methods have emerged in recent years that have largely solved this problem. For example, the IRiS algorithm [20] can detect past recombination events based on a graph reconstruction algorithm [21], followed by integrating these recombination events into a subARG; ACG [22] estimates the full likelihood of the ARG using a Bayesian Markov chain Monte Carlo (MCMC) procedure; ARGweaver [23] can infer ARGs from genome-wide data based on hidden Markov models (HMM).
In this paper, we propose a graph mining method, namely ARG-walker, to infer the different strengths of a recombination hotspot among individuals in a sample.
Given a set of extant haplotypes, we first adapt the IRiS algorithm to detect the recombination events and integrate them to construct the ARG; then a random walk method is applied on the ARG to estimate the individual-specific recombination propensities; as such, ARG-walker can translate SNP sequences into a recombination profile, i.e. a vector of floating-point numbers, each representing the corresponding individual strength of a recombination hotspot. This method can be used to exploit the power of the variation in individual recombination frequency, to shed light on the regulatory system of recombination hotspots. Our extensive simulation tests demonstrated the statistical power of ARG-walker in detecting phenotypic variations of recombination rate. Then, applying ARG-walker on the HapMap phased SNP data for GWAS of recombination hotspots, we detected strong association signals of cis-regulation, corroborated by the enrichment of the aforementioned 13-mer PRDM9 binding motif CCNCCNTNNCCNC in regions proximal to the SNPs identified by our GWAS. Through further analysis of the flanking DNA sequences of associated SNPs, we found two new motifs, AAAATANA and CNGCCTCC, which could be potential cis-regulators of meiotic recombination hotspots in the human genome. Moreover, by screening the GWAS results on the MHC (major histocompatibility complex) region in human chromosome 6, we detected two significantly associated SNPs, rs576205 and rs2061915. The two SNPs are located in the coding regions of the KSR2 and ZNF708 genes, respectively, both of which have been reported to have regulatory roles in T-cell activation. This demonstrates the potential of ARG-walker for detecting trans-factors of meiotic recombination hotspots, and for studying recombination hotspots important in the human immune system.
Experimental data
The input data of ARG-walker consist of SNP data generated by simulations or collected by the HapMap project. Our simulation data were generated by a Python script, developed with Python 2.6 and based on simuPOP (version 1.0.3) [24], an open-source framework for forward simulation of population genetics. To evaluate the performance of ARG-walker under different situations, we set multiple groups of simuPOP parameters according to different scenarios. Each simulation data set consists of haplotypes of 90 individuals; each individual has two homologous haplotypes of about 100 SNPs (the number of SNPs was randomly generated to be near 100) spanning 200 kb, and each SNP has two alleles, denoted 0 and 1 (Additional File 1). For testing ARG-walker on real data, we used the SNPs in the 22 human autosomes of the HapMap Phase 3 data and the recombination hotspots therein detected by LDhat [25]. We first extracted the haplotypes of the 22 autosomes of the JPT+CHB (Japanese and Chinese) population of the HapMap Phase 3 dataset. The length of each haplotype sequence was set to 100 SNPs and overlapping haplotypes were filtered out; in addition, the pre-processing required for running the IRiS software was applied to filter out SNPs with minor allele frequency (MAF) less than 0.01, and non-tag SNPs were removed from the sequences. In this way we obtained 8,721 haplotype sequences with hotspots in the middle. Then, the haplotypes were fed to ARG-walker to estimate the individual-specific strengths of the recombination hotspot located inside each 100-SNP window. For each hotspot, a profile of the estimated recombination strengths is output. For the genotype data, we applied the FastTagger [26] tool to select all tag SNPs with MAF higher than 0.3 from the HapMap Phase 3 data of the JPT+CHB population. The minimal r² was set to 0.9. As such we obtained 179,671 tag SNPs. The Python script for generating the simulation data and the Perl scripts for extracting and pre-processing these SNP data are available in Additional File 1.
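The original pre-processing scripts are provided in Additional File 1 and are not reproduced here; the following is only a minimal numpy sketch of the MAF filter described above, with an assumed array layout (haplotypes as rows, SNPs as columns, alleles coded 0/1) and an illustrative function name.

```python
# Minimal sketch (not the authors' scripts) of the MAF-based SNP filter:
# keep only SNPs whose minor allele frequency is at least `min_maf`.
import numpy as np

def filter_by_maf(haplotypes: np.ndarray, min_maf: float = 0.01) -> np.ndarray:
    """Return the column indices of SNPs whose minor allele frequency >= min_maf."""
    freq_1 = haplotypes.mean(axis=0)          # frequency of allele "1" per SNP
    maf = np.minimum(freq_1, 1.0 - freq_1)    # minor allele frequency
    return np.where(maf >= min_maf)[0]

# Toy example: 6 haplotypes x 5 SNPs; SNP 0 is monomorphic and is removed
haps = np.array([[0, 1, 0, 1, 1],
                 [0, 1, 0, 0, 1],
                 [0, 0, 0, 1, 0],
                 [0, 1, 0, 1, 1],
                 [0, 1, 1, 0, 1],
                 [0, 1, 0, 1, 1]])
print(filter_by_maf(haps, min_maf=0.01))  # -> [1 2 3 4]
```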
Description of ARG-walker
The main idea of ARG-walker is to estimate, through analysis of the ARG topology, the frequencies of historical recombinations, which are then used to approximate the recombination propensities of extant individual chromosomes. First, to reconstruct the ARG, we use the IRiS algorithm, which is able to infer historical recombination events from a set of extant haplotypes and integrate these events into an ARG [27]. Then, we apply a random walk algorithm to mine the ARG in order to estimate the recombination strengths of the individuals corresponding to the input haplotypes. With random walks on the ARG, we assign positive weights indicating signals of recombination to the root nodes, and then, like raindrop collection, the information flow runs from the top to the bottom of the ARG. If an extant haplotype has more ancestral recombination events in its history, then more information flow, travelling downward along the paths of the ARG, will be gathered at the corresponding downstream leaf node. As such, individuals with different recombination histories can be distinguished by the amount of information they collect from the random walks, which represents their strength of the recombination hotspot. The algorithm of ARG-walker, which consists of two stages, is illustrated in Figure 1 and described as follows.
Stage 1: ARG construction and node classification
The input of our method consists of a sample of extant human haplotypes. The software IRiS is employed to reconstruct the ARG from the input haplotype sample. IRiS first cuts the haplotypes into segments, from which phylogenetic forests can be inferred to explain the segmentation. From the segments and forests, compatible sub-networks are constructed using an algorithm called DSR (dominant, subdominant or recombinant), which is a greedy algorithm that attempts to minimize the number of recombinations needed to explain the given data. These sub-networks are merged to construct the ARG, represented as a directed graph G(V, E). Readers interested in details of the algorithm behind IRiS are referred to [20,21]. For each node ν ∈ V, three types of degrees are calculated, i.e. the indegree deg⁻(ν), the outdegree deg⁺(ν) and the total degree deg(ν) = deg⁻(ν) + deg⁺(ν). With this degree information, the nodes are classified into three types: (1) the root nodes, with deg⁻(ν) = 0 and deg⁺(ν) > 0; (2) the recombination nodes, in set Recom_ν, with deg⁻(ν) ≥ 2 and deg⁺(ν) > 0; and (3) the leaf nodes, with deg⁻(ν) > 0 and deg⁺(ν) = 0. Here each node is represented by an integer, which is an index into the vertex set V. Note that the ARG constructed by IRiS could contain more than one root node, because the reconstructed graph is only a partial ARG (or subARG), which may contain only a subset of the nodes and edges of the true ARG. For conciseness, however, we will hereafter still refer to the output of IRiS as an ARG, rather than a subARG.
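As an illustration of the degree-based classification just described, the sketch below classifies the nodes of a toy directed ARG given as an edge list. The edge-list layout (parent, child) is an assumption made for illustration only; it is not the actual IRiS output format.

```python
# Hedged sketch of the Stage-1 node classification using in/out-degrees.
from collections import defaultdict

def classify_nodes(edges):
    """Split ARG nodes into root, recombination and leaf sets from their degrees."""
    indeg, outdeg, nodes = defaultdict(int), defaultdict(int), set()
    for parent, child in edges:
        outdeg[parent] += 1
        indeg[child] += 1
        nodes.update((parent, child))
    roots = {v for v in nodes if indeg[v] == 0 and outdeg[v] > 0}
    recombs = {v for v in nodes if indeg[v] >= 2 and outdeg[v] > 0}
    leaves = {v for v in nodes if indeg[v] > 0 and outdeg[v] == 0}
    return roots, recombs, leaves

# Toy ARG: node 0 is a root, node 3 is a recombination node (two parents),
# nodes 4 and 5 are leaves.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (2, 5)]
print(classify_nodes(edges))  # ({0}, {3}, {4, 5})
```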
Stage 2: ARG mining using backward-forward random walks
After the nodes are classified, a backward random walk from the leaf nodes to the root nodes is conducted to assign weights to the edges, which are stored in an edge-weight matrix W_{n×n}, where n is the number of nodes in the ARG. Initially, each entry of the matrix W is set to 0. Then, the matrix entries are iteratively updated according to the topological structure of the ARG, following the rule that W_{ij} = 1 + Σ_{k≠i, k∈V} W_{jk} if there is a directed edge from node j to node i in the ARG. In other words, the weight of the edge from node j to node i is set to be the number of all descendants of node j plus one. If there is no edge from node j to node i, the corresponding entry in the matrix W will be 0. Due to its Y-shape structure in the ARG, a recombination node will be double-counted as two descendants, and as a result, edges with more recombination nodes as descendants can collect larger weights through the backward walk. Later, the edge weights will be passed downward all the way to the leaf nodes to reflect the inherited propensity of recombination. To apply the forward random walk on the ARG, we transform the edge-weight matrix W_{n×n} into the transition probability matrix T_{n×n} by normalization, i.e. by dividing each edge weight by the sum of the weights of all outgoing edges of the same parent node, so that the transition probabilities over the outgoing edges of every node sum to one.
Through the forward random walk, the signals of recombination will be passed from the root nodes layer by layer towards the leaf nodes. For a coalescent node, the signal will be split between its two children in proportion to the transition probabilities, whereas for a recombination node in the next generation, the signals from its two parental nodes will be combined. The procedure is executed by iteratively updating two vectors, each consisting of n entries corresponding to the ARG nodes. The first vector, denoted by I, contains the amounts of signal that have flowed to the ARG nodes at the end of a step, which will be passed to their children in the next step. The second vector, denoted by V_L, records the amounts of signal that the leaf nodes will gather at the end of the random walk. At the beginning, vector I is initialized by setting each entry corresponding to a root node equal to the sum of the weights of its outgoing edges, and setting all other entries to 0, for i = 1, 2, ..., n.
Vector V_L consists of all 0s initially. For each iteration of the forward random walk, the following two operations are carried out to update the two vectors: (1) I = I × T, and (2) V_L = V_L + I. Note that an entry of the transition probability matrix, say T_uv, contains a positive value if there is a directed edge from node u to node v, and is equal to 0 otherwise. Thus, by the first operation I = I × T, the signal in a node will be passed entirely to its child nodes in proportion to the transition probabilities. This iterative procedure is repeated until every entry of vector I is equal to 0, which means every node has no more information to pass on. After the repeated updates, the signals are collected in vector V_L by the second operation above. We then set to 0 all the entries in V_L that do not correspond to leaf nodes, so that V_L only contains signals of recombination of the extant haplotypes. Finally, the recombination probability of each individual leaf node is estimated by normalising its collected signal over all leaf nodes, i.e. P_i = V_L(i) / Σ_j V_L(j), where the sum runs over the leaf nodes.

Figure 1. The pipeline diagram of the ARG-walker algorithm. ARG-walker consists of two stages. In Stage 1, from an input sample of haplotypes, the IRiS program is used to reconstruct the ancestral recombination graph (ARG), and the nodes are classified into three types as illustrated with different shapes and colours. In Stage 2, information flows indicating signals of recombination are first gathered to the root nodes through the backward random walk, then propagated downwards through the forward random walk and in the end gathered by the leaf nodes. The red arrows in Step (iii) and Step (iv) illustrate the start of the random walks.
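To make the backward-forward walk concrete, here is a simplified Python sketch in the spirit of the procedure described above. It is not a reimplementation of ARG-walker: the exact edge-weighting and normalisation conventions of the paper are not fully reproduced. In this sketch the weight of an edge is taken as one plus the number of nodes reachable from its child end (double-counting nodes reachable through several paths, echoing the remark about recombination nodes).

```python
# Simplified illustration of a backward (edge-weighting) and forward
# (signal-propagation) random walk on a toy ARG, with signals accumulated
# at the leaves and normalised at the end.
import numpy as np
from functools import lru_cache

def arg_walk(n_nodes, edges, leaves, roots):
    children = {v: [] for v in range(n_nodes)}
    for u, v in edges:
        children[u].append(v)

    @lru_cache(maxsize=None)
    def n_desc(v):
        # nodes reachable downstream of v, counted once per path
        return sum(1 + n_desc(c) for c in children[v])

    # backward step: edge weights, then row-normalised transition matrix T
    W = np.zeros((n_nodes, n_nodes))
    for u, v in edges:
        W[u, v] = 1.0 + n_desc(v)
    row_sums = W.sum(axis=1, keepdims=True)
    T = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

    # forward step: start signals at the roots and propagate layer by layer
    I = np.zeros(n_nodes)
    for r in roots:
        I[r] = W[r].sum()
    V = np.zeros(n_nodes)
    while I.sum() > 1e-12:
        I = I @ T
        V += I
    V[[v for v in range(n_nodes) if v not in leaves]] = 0.0
    return V / V.sum()   # normalised per-leaf recombination signal

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (2, 5)]
print(arg_walk(6, edges, leaves={4, 5}, roots={0}))
```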
Filtering hotspots without variation in recombination strength
In the simulation, we adapted Hartigan's dip test [28] to test the unimodality of the distribution of estimated recombination strengths. This is for the case when there are only small variations of recombination strength in a population, e.g. when the majority of haplotypes have cold (or hot) alleles, or when it is hard to divide the population into two phenotypic groups. In our method, we assumed that when there are both hot and cold alleles of a recombination hotspot, the distribution of recombination strengths given by ARG-walker should be more likely to be bimodal. In the dip test for these two situations, we found a significant difference in p-values between the two groups. With a threshold of 2.473 on the −log10(p) value of the dip test, we can filter out 88% of the samples with non-variant recombination strengths, while keeping 87.8% of the samples with two-allele recombination strengths. Applying this strategy in our GWAS analysis, we selected 5,200 recombination hotspots (each with a profile of individual strengths) as phenotypes.
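A hedged sketch of this unimodality filter is shown below. The paper only cites Hartigan's dip test [28] without naming an implementation; the third-party Python package `diptest` is used here as an assumed stand-in, and the synthetic data are purely illustrative.

```python
# Sketch of the dip-test filter: keep a hotspot only if the distribution of
# its per-individual strengths departs clearly from unimodality.
import numpy as np
import diptest  # third-party package assumed to provide Hartigan's dip test

def keep_hotspot(strengths, threshold=2.473):
    _, pval = diptest.diptest(np.asarray(strengths, dtype=float))
    pval = max(pval, 1e-300)              # guard against log of zero
    return -np.log10(pval) > threshold

# Synthetic example: a two-allele (bimodal) profile versus a flat (unimodal) one
rng = np.random.default_rng(0)
bimodal = np.concatenate([rng.normal(0.2, 0.02, 90), rng.normal(0.8, 0.02, 90)])
unimodal = rng.normal(0.5, 0.05, 180)
print(keep_hotspot(bimodal), keep_hotspot(unimodal))  # expected: True, False
```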
Combination of recombination phenotype by chromosome
Besides the individual hotspot phenotype analysis, we also merged the recombination strengths of hotspots on the same chromosome into one collective chromosome-wise phenotype. First, we standardized each hotspot using the formula (x − mean)/SD, and then we combined the standardized hotspot phenotypes by chromosome, i.e. the mean value of all the standardized hotspot phenotypes on the same chromosome was calculated to represent the chromosomal recombination strength of each individual. Finally, we obtained 22 combined chromosomal recombination phenotypes. Each of these newly generated phenotypes was then mapped to the genotypes for GWAS analysis.
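For concreteness, a minimal numpy sketch of this standardize-then-average combination is given below; the array shapes and the function name are illustrative assumptions rather than the paper's actual code.

```python
# Chromosome-wise phenotype: standardize each hotspot profile, then average
# the standardized profiles of all hotspots on the chromosome per individual.
import numpy as np

def chromosome_phenotype(hotspot_profiles):
    """hotspot_profiles: (n_hotspots, n_individuals) array for one chromosome."""
    x = np.asarray(hotspot_profiles, dtype=float)
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return z.mean(axis=0)   # one combined strength per individual

# Toy example: 3 hotspots measured on 5 individuals
profiles = np.array([[0.1, 0.9, 0.2, 0.8, 0.1],
                     [0.3, 0.7, 0.2, 0.9, 0.2],
                     [0.2, 0.8, 0.1, 0.7, 0.3]])
print(chromosome_phenotype(profiles))
```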
Results and discussion
Simulation study

The evolution of meiotic recombination hotspots is notoriously dynamic and complicated, partly due to processes such as biased gene conversion (BGC). To slightly simplify our simulation (without making the synthetic data unrealistic), we focused on the common scenario of two-allele variation of recombination, i.e. hot versus cold alleles in a population. Large amounts of evidence from both sperm typing and genetic analyses suggest that the general patterns of recombination in the human genome are highly unstable throughout the genome [4,9,29]. Hence, we tested different sets of parameters to simulate different recombination scenarios; the key parameters include the recombination rate, the position of the labelled SNP, the MAF of the labelled SNP and the BGC rate. For each set of parameters, we generated 50 samples and fed each sample to ARG-walker. Then, the accuracy was calculated and presented with a boxplot. We compared the performance of ARG-walker under different situations using the boxplots of accuracy, with the median accuracies connected by a red line in Figure 2. First, we tested the effect of the recombination rates, namely the crossover rates of the hot and the cold individuals. The ratio of these two crossover rates was set to 1 (equal rates), 5, 10, 15 and 20. The result shows that our method performs better as the hot/cold ratio increases from 1 to 5, while the accuracy slightly decreases and seems to stabilize when the ratio is higher than 15. This suggests that some recombination events are not detected by ARG-walker, probably due to the loss of traces in the patterns of linkage disequilibrium when recombination is very frequent. Then we analysed the effect of the position of the SNP controlling the crossover rate, from the centre to the far ends of the window. The accuracy of our method does not fluctuate much, but with a tendency to be higher when the causal SNP is closer to the centre. Another important factor we tested is the MAF of the causal SNP. When the MAF is low, say between 0.1 and 0.2, the performance of ARG-walker is poor, with accuracy less than 0.5. A low MAF means a lack of variation of the phenotype in the sample, and such cases should be filtered out before using ARG-walker. When the MAF is larger than 0.3, however, ARG-walker is able to achieve a higher accuracy. Another key parameter is the BGC rate associated with a crossover. BGC has been suggested to favour hotspot-disrupting alleles and thus has a crucial influence on recombination hotspots and their evolution [30]. Our result shows that ARG-walker can be affected to some degree by BGC, but it still achieved good performance when the BGC rate is lower than 0.3 (Figure 2, bottom-right boxplot). In addition to these parameter tests, we also investigated two special cases, i.e. all-hot and all-cold. In these two cases, there is no difference in recombination strengths among individuals. Hartigan's dip test was used to test whether ARG-walker can identify these two special cases (see details in the Methods section). In contrast to the two-allele cases, here the test shows a significant difference in dip p-values, as shown in the boxplot in Figure 3. Moreover, the ROC curve in Figure 4 shows a very high AUC of 0.956, indicating that a threshold of 2.473 can clearly differentiate these two scenarios with a specificity of 0.88 and a sensitivity of 0.878.
In summary, with relatively high differences in recombination rates, an MAF of 0.3 or higher at the causal SNP, and a BGC rate less than 0.3, ARG-walker has reasonably good prediction performance, with an average accuracy of about 0.64 (Table 1). In addition to the hot-cold variation, ARG-walker can also identify the all-hot and all-cold cases, which is important for the GWAS of recombination phenotypes later.
Genome-wide association study of recombination hotspots
A major motivation for estimating the individual-specific strengths of recombination hotspots is to perform a genome-wide association study (GWAS) to identify trans- and cis-regulators of meiotic recombination hotspots. Encouraged by the results of our simulation study, we performed a GWAS analysis on the real HapMap data. For each pair of phenotype (i.e. a recombination hotspot) and genotype (i.e. a SNP), we used an unpaired t-test to get the p-value of association between the recombination strengths and the SNP. To visualize the associations, we log-transformed the p-values and plotted the result in Figure 5(a), where the gradient colour intensities and sizes of dots represent the strengths of association. From Figure 5(a), we can see that there are strong and prevalent signals of cis-regulation, shown as the diagonal red line on the plot, compared with trans-regulation. It could be that SNPs located inside or proximal to a recombination hotspot may change the DNA sequence, thereby affecting the binding affinities of trans-factors. With a threshold of 1e-7.3 for the p-value, we identified 15,920 significantly associated SNPs. Then we extracted the flanking DNA sequences of these significant SNPs, each with a length of 1 kb. Using the FIMO software [31], we matched the aforementioned 13-mer motif CCNCCNTNNCCNC (the binding motif of PRDM9) to these sequences. 45,293 motif occurrences were found in these 15,920 sequences. The strong cis-regulatory signal provides new evidence to confirm the previously reported relation of this 13-mer motif with recombination hotspots [6,9]. However, given the sophisticated regulation of meiotic recombination, it is unlikely that there is only one motif controlling all the recombination hotspots. In fact, this 13-mer motif can only explain a fraction of human hotspots. To search for other potential motifs, we fed these DNA sequences to the DREME software [32] for discriminative DNA motif discovery. From the output motifs, we selected the three top-ranking motifs shown in Figure 5(b). The first motif has 9,882 positive occurrences in the 15,920 sequences with an E-value of 1.8e-591. The second motif has an E-value of 2.9e-565 with 3,537 positive occurrences in the sequences. The third motif, which is quite similar to the second one but ends with GG rather than CC, has 3,140 positive occurrences in the sequences with an E-value of 1.3e-456.

Figure 2. The accuracy of prediction on simulation data with four main parameters. The changes in accuracy of ARG-walker were tested with regard to four main parameters in our simulation study, i.e. the average recombination rate, the position of the causal SNP, the MAF of the causal SNP and the biased gene conversion (BGC) rate. 50 sample files were generated for each simulation. For each test, the distribution of accuracies of ARG-walker was plotted as a boxplot, where the red line connects the median values of accuracy.
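The per-pair association test can be sketched as follows with scipy's unpaired t-test; the grouping of individuals by the allele they carry, and the synthetic data, are illustrative assumptions rather than the paper's exact pipeline.

```python
# Hedged sketch of one hotspot-SNP association test: compare the recombination
# strengths of the two allele groups with an unpaired t-test.
import numpy as np
from scipy.stats import ttest_ind

def association_pvalue(strengths, alleles):
    """strengths: per-individual hotspot strength; alleles: 0/1 allele carried."""
    strengths = np.asarray(strengths, dtype=float)
    alleles = np.asarray(alleles)
    return ttest_ind(strengths[alleles == 0], strengths[alleles == 1]).pvalue

# Synthetic example: carriers of allele 1 have a hotter hotspot on average
rng = np.random.default_rng(1)
alleles = rng.integers(0, 2, size=180)
strengths = np.where(alleles == 0, 0.45, 0.55) + rng.normal(0.0, 0.1, 180)
print(association_pvalue(strengths, alleles))  # small p-value for this toy data
```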
From the GWAS, we have not found any single SNP that is significantly associated with hotspots from different chromosomes. The number of hotspots affected by each single SNP was quite small compared with the sample size of 5,200 hotspots. For the three SNPs rs1874165, rs16874441 and rs3805547, located inside the Prdm9 gene, we only found one significant association between the three SNPs and one hotspot on chromosome 5, as shown in Figure 5(c). For a trans-regulator like PRDM9, one would expect associations with many hotspots, but this seems not to be the case according to our analysis. A reason might be that our GWAS method does not have sufficient statistical power to capture the trans-regulatory signals, e.g. the association might have been eliminated due to low MAF values in our pre-processing. It might also be the case that the assumption of two-allele recombination strengths (i.e. hot vs. cold) does not always apply for PRDM9. In addition, as demonstrated in [33], hotspot activity can be influenced by multiple loci including both cis- and trans-effects.
To see whether the hotspot-SNP association is robust across different resolutions of recombination rate, we combined the recombination strengths of hotspots on the same chromosomes into 22 chromosome-wise recombination phenotypes. With the genotypes of all tag SNPs, we performed a GWAS analysis for these 22 phenotypes. The Manhattan plot in Figure 6 shows the top 120 SNPs with significant associations (p-value less than 1e-7.3). Interestingly, these SNPs are mostly located near the ends of the chromosomes, suggesting that there might be an enrichment of regulators of genome stability at the telomere regions of chromosomes. From the flanking DNA sequences of these 120 significant SNPs, two motifs were found using DREME, and they turned out to be the same as motif 1 and motif 2 found earlier by the GWAS of individual hotspots, as shown in Figure 5(b).
MHC screening
Recombination in the MHC (major histocompatibility complex) region on human chromosome 6 is of particular interest due to the roles of recombination in the generation of genetic diversity at HLA (Human Leukocyte Antigen) loci and the functions of MHC genes in the human immune system. Many studies have shown associations between susceptibility to autoimmune diseases and particular alleles of MHC genes [34]. However, the functional relationship between recombination and the polymorphism of genes in the MHC is not yet clear. Checking the GWAS results of hotspots in the MHC region, we found 10 hotspots significantly associated with 105 SNPs (p-value < 5e-8) (Additional File 3). All of these significantly associated SNPs are located inside the MHC region, except for two SNPs: rs576205 on chromosome 12 and rs2061915 on chromosome 19. Interestingly, the rs2061915 SNP is located inside the ZNF708 gene, which is one of the zinc finger genes reported to be expressed in human T cells [35,36], and rs576205 is located in the KSR2 gene, which is a marker of immortalization and was predicted to have a role in cell proliferation [37]. KSR2 is also associated with IL-2 expression through miR-31, thereby affecting T cell differentiation [38]. These findings suggest that both the ZNF708 and KSR2 proteins may participate in the regulation of MHC activation or expression through the regulation of recombination in that region, which however needs to be verified through wet-lab experiments. Overall, our GWAS on the MHC region suggests that ARG-walker is a promising tool to help search for trans-regulators through GWAS.
Phase-space structure of protohalos: Vlasov versus Particle-Mesh
The phase-space structure of primordial dark matter halos is revisited using cosmological simulations with three sine waves and Cold Dark Matter (CDM) initial conditions. The simulations are performed with the tessellation based Vlasov solver ColDICE and a Particle-Mesh (PM) $N$-body code. The analyses include projected density, phase-space diagrams, radial density and pseudo phase-space density. Particular attention is paid to force and mass resolution. Because the phase-space sheet complexity, estimated in terms of total volume and simplices count, increases very quickly, ColDICE can follow only the early violent relaxation phase of halo formation. During the latter, agreement between ColDICE and PM simulations having one particle per cell or more is excellent and halos have a power-law density profile, $\rho(r) \propto r^{-\alpha}$, $\alpha \in [1.5,1.8]$. This slope, measured prior to any merger, is slightly larger than in the literature. The phase-space diagrams evidence complex but coherent patterns with clear signatures of self-similarity in the sine wave simulations, while the CDM halos are somewhat scribbly. After additional mass resolution tests, the PM simulations are used to follow the next stages of evolution. The power-law progressively breaks down with a convergence of the density profile to the well known "NFW"-like universal attractor, irrespective of initial conditions, that is, even in the three-sine-wave simulations. This demonstrates again that mergers do not represent a necessary condition for convergence to the dynamical attractor. Not surprisingly, the measured pseudo phase-space density is a power-law, $Q(r) \propto r^{-\alpha_{\rm Q}}$, with $\alpha_{\rm Q}$ close to the prediction of the secondary spherical infall model, $\alpha_{\rm Q} \simeq 1.875$. However, this property is also verified during the early relaxation phase, which is non-trivial.
Introduction
In our current understanding of large-scale structure formation, the main matter component of the Universe is Cold Dark Matter (CDM, e.g., Peebles 1982, 1984; Blumenthal et al. 1984), which can be modelled as a self-gravitating collisionless fluid obeying the Vlasov-Poisson equations,
$\partial f/\partial t + u \cdot \nabla_r f - \nabla_r \phi \cdot \nabla_u f = 0$,
$\Delta_r \phi = 4\pi G \int f(r, u, t)\, {\rm d}^3 u$,
where f(r, u, t) is the phase-space density at physical position r and velocity u, φ the gravitational potential and G the gravitational constant. In the concordance model, dark matter is composed of very massive particles with very small initial velocity dispersion. This means that dark matter is, in a very good approximation, concentrated on a three-dimensional phase-space sheet folding in six-dimensional phase-space. At early times, the phase-space density has the following form,
$f(r, u, t_{\rm i}) = \rho_{\rm i}(r)\, \delta_{\rm D}[u - u_{\rm i}(r)]$,
where δ_D is the Dirac distribution, and the initial density ρ_i and velocity u_i are smooth functions of position r. The scale length λ below which initial density and velocity fluctuations are smooth depends on the dark matter particle mass. In this work, following the footsteps of previous numerical investigations (e.g., Diemand, Moore & Stadel 2005; Angulo et al. 2017), we shall consider the standard CDM model with neutralinos of mass 100 GeV, which implies λ of the order of a pc.
The phase-space sheet evolves under self-gravity and shell crossing occurs in various places of configuration space. In these regions, sheets, filaments and dark-matter halos form as the result of complex processes of multi-streaming dynamics. While perturbation theory can be used to predict the early stages of clustering (e.g., Bernardeau et al. 2002), numerical simulations are required to follow accurately the dynamics of the phase-space sheet beyond shell crossing. Traditionally, dark matter is simulated with the N-body technique, in which dark matter elements are followed with a large ensemble of macro particles interacting with each other through a softened gravitational force. There exist many different N-body simulation codes, which differ mainly from each other in the way the Poisson equation is solved (e.g., Hockney & Eastwood 1988; Bertschinger 1998; Colombi 2001; Dolag et al. 2008; Dehnen & Read 2011; Vogelsberger et al. 2020, for reviews).
Even though a consensus on the robustness of the results of N-body simulations is getting close, their numerical convergence is not yet fully proven in several situations. Indeed, N-body simulations are not free from biases, due to the discrete nature of the representation of the phase-space fluid with very massive macro-particles compared to the actual dark matter candidates. Discrete noise and close N-body encounters can drive the system away from the mean-field limit embodied by the Vlasov equation (e.g., Aarseth, Lin, & Papaloizou 1988; Goodman, Heggie, & Hut 1993; Knebe et al. 2000; Binney 2004; Diemand et al. 2004; Joyce, Marcos, & Sylos Labini 2009; Colombi et al. 2015; Beraldo e Silva et al. 2017; Benhaiem et al. 2018, but this list of references is far from comprehensive), and this is particularly critical in the cold case (Melott et al. 1997; Splinter et al. 1998; Melott 2007; Wang & White 2007). This is why it has been proposed recently to adopt a direct approach where the phase-space sheet is described in a smooth fashion with a fine tetrahedral tessellation (Hahn, Abel, & Kaehler 2013; Sousbie & Colombi 2016; Hahn & Angulo 2016), an idea stemming directly from the waterbag approach (e.g., DePackh 1962; Janin 1971; Cuperman, Harten, & Lecar 1971a,b; Colombi & Touma 2008, 2014). In this article, we shall compare in detail numerical results obtained from state-of-the-art Vlasov simulations realised with the public code ColDICE (Sousbie & Colombi 2016) to N-body simulations realised with a standard Particle-Mesh (PM) code (e.g., Hockney & Eastwood 1988). Particular attention will be paid to effects of force resolution and mass resolution. We shall confirm again that with the proper choice of parameters, typically more than one particle per softening length of the force, the N-body approach remains perfectly reliable at the coarse level.
As the host of galaxies and clusters of galaxies, dark matter halos represent the main bricks of large-scale structure formation models and will be the main focus of this numerical investigation. Following the path of many previous works, this article will explore again their dynamical history, which, according to perturbation theory and cosmological simulations results, can be decomposed into the four following phases: (i) The pre-collapse phase. At the beginning, the phase-space sheet does not self-intersect in configuration space and its evolution can be followed analytically with a perturbative approach (e.g., Bernardeau et al. 2002). For instance, linear Lagrangian perturbation theory, the Zel'dovich approximation (Zel'dovich 1970), provides a pretty good description of the evolution of the phase-space sheet, at least from the qualitative point of view. In the Zel'dovich solution, there are locally three orthogonal directions of motion: the first nonlinear structures to form are two-dimensional sheets orthogonal to one dimensional singularities building up along the main direction of motion. Higher-order Lagrangian perturbation theory (e.g., Bouchet et al. 1992;Buchert 1993;Buchert & Ehlers 1993;Bouchet et al. 1995;Rampf 2012;Zheligovsky & Frisch 2014;Matsubara 2015) is needed for an accurate description of the pre-collapse motion (e.g., Moutarde et al. 1991). At sufficiently high order, it has been shown, using the Vlasov simulations presented in the present work, to perform very well until shell-crossing (Saga, Taruya, & Colombi 2018).
(ii) The early violent relaxation phase. After shell-crossing, the system enters into a complex multi-stream violent relaxation phase. If collapse happens along another direction of motion, a filament forms. Then, if another shell crossing takes place along the third direction of motion, this creates the seed of a dark matter halo. The early stages of the dynamics thus build singular structures of various kinds according to the local properties of the displacement field (Arnold, Shandarin, & Zel'dovich 1982; Hidding, Shandarin, & van de Weygaert 2014; Feldbrugge et al. 2018). They correspond to foldings of the dark matter sheet in phase-space and quickly produce a very intricate spiral structure that we will study in detail through phase-space slices, and of which the complexity will be quantified in ColDICE in terms of total volume and simplices count. This early relaxation process is of monolithic nature, that is, it happens in the absence of mergers. It takes place over short time scales and is very similar to the picture described in Lynden-Bell (1967), which explains the term "violent relaxation". During this phase, the dark matter "protohalos" build up a power-law density profile ρ(r) ∝ r^{−α}, of which the logarithmic slope α ranges in the interval [1.3, 1.7] according to various results in the literature (Moutarde et al. 1991; Diemand, Moore & Stadel 2005; Ishiyama, Makino & Ebisuzaki 2010; Anderhalden & Diemand 2013; Ishiyama 2014; Angulo et al. 2017; Delos et al. 2018a; Delos et al. 2018b). The exact value of the slope changes slightly according to the authors, although a consensus seems to emerge with α ≃ 1.5. We shall examine this in detail again, by making sure that the measurements are performed during the monolithic phase of the evolution of halos extracted from CDM simulations with very small box size. Additionally, following the footsteps of Nakamura (1985), Moutarde et al. (1991) and Moutarde et al. (1995), we shall study highly symmetric configurations with three sine wave initial conditions of various amplitudes. These investigations will be conducted with ColDICE and the PM code, with thorough comparisons between both solvers.
(iii) The convergence to a universal profile. The initial power-law profile does not last for long because dark matter halos are the object of perturbations, in particular successive mergers with other halos. These mergers modify the profile, which converges rapidly to a dynamical attractor, the so-called NFW profile (Navarro, Frenk & White 1996), where the radial density profile of dark matter halos has a form close to
$\rho(r) \propto \frac{1}{(r/r_{\rm s})\,(1 + r/r_{\rm s})^2}$,
with r_s a scale radius. The shape of dark matter halos and the form given by equation (4) have been the object of numerous discussions and convergence analyses (e.g., Moore et al. 1998; Jing & Suto 2000; Power et al. 2003; Mansfield & Avestruz 2020). Recent investigations suggest for instance that the Einasto profile that will be used in the present work (equation 21 below, Einasto 1965) provides better fits of dark matter halo profiles (e.g., Navarro et al. 2004), but other forms have been suggested (e.g., Stadel et al. 2009). Even though it is still debated, the nearly universal nature of the dynamical attractor seems now unquestionable, and the fact that mergers contribute to its establishment has been highlighted in several works (e.g., Syer & White 1998; Ishiyama 2014; Ogiya, Nagai & Ishiyama 2016; Angulo et al. 2017). However, other numerical investigations suggest that it is possible to reach the dynamical attractor even without mergers, as the consequence of radial instabilities or other sources of noise (e.g., Huss, Jain & Steinmetz 1999; MacMillan, Widrow & Henriksen 2006; Ogiya & Hahn 2018). We shall revisit these questions in the presence and in the absence of mergers, by pushing further in time the simulations described in point (ii). However, due to its adaptive nature, the computational cost of ColDICE prevents it from reaching sufficiently advanced stages of this dynamical phase, which will be approached solely with the PM code, after careful mass resolution tests.
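For reference, the two profiles discussed above can be written down as simple Python functions. The NFW form follows the proportionality quoted in the text; the Einasto profile is given in its common parameterisation, since the paper's own equation (21) is not reproduced in this excerpt, so the normalisation conventions here are assumptions.

```python
# Illustrative sketch of the NFW and (standard-parameterisation) Einasto profiles.
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """NFW profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_einasto(r, rho_minus2, r_minus2, alpha=0.18):
    """Einasto profile with logarithmic slope -2 at r_minus2 (standard form)."""
    x = r / r_minus2
    return rho_minus2 * np.exp(-2.0 / alpha * (x ** alpha - 1.0))

r = np.logspace(-2, 1, 5)   # radii in units of the scale radius
print(rho_nfw(r, rho_s=1.0, r_s=1.0))
print(rho_einasto(r, rho_minus2=1.0, r_minus2=1.0))
```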
Another interesting property of the dynamical attractor is that the measured pseudo phase-space density, Q(r) = ρ(r)/σ_v^3(r), where σ_v(r) is the local velocity dispersion, follows a pure power-law behaviour Q(r) ∝ r^-α_Q over a very large dynamical range, with a logarithmic slope very close to the prediction of the secondary spherical infall model of Bertschinger (1985), α_Q = 1.875 (Taylor & Navarro 2001; Navarro et al. 2010; Ludlow et al. 2010). We shall see that this property holds as well during the monolithic phase (ii), for which very few studies of the pseudo phase-space density exist (see however Ishiyama, Makino & Ebisuzaki 2010).
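To fix notation for these two diagnostics, the following minimal Python sketch evaluates the NFW and Einasto density profiles and estimates the pseudo phase-space density Q(r) = ρ(r)/σ_v^3(r) from binned particle data; the function names, the Einasto parametrisation in terms of (ρ_-2, r_-2, α_E) and the binning choices are illustrative assumptions for this sketch, not the exact conventions of equations (4) and (21).

```python
import numpy as np

def nfw_density(r, rho_s, r_s):
    """NFW profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def einasto_density(r, rho_m2, r_m2, alpha_e):
    """Einasto profile, with d ln rho / d ln r = -2 at r = r_m2:
    rho(r) = rho_m2 * exp(-(2/alpha_e) * [(r/r_m2)^alpha_e - 1])."""
    x = r / r_m2
    return rho_m2 * np.exp(-2.0 / alpha_e * (x ** alpha_e - 1.0))

def pseudo_phase_space_density(pos, vel, mass, center, rbins):
    """Estimate rho(r), sigma_v(r) and Q = rho / sigma_v^3 in spherical shells.
    pos, vel: (N, 3) arrays; mass: particle mass; rbins: radial bin edges."""
    r = np.linalg.norm(pos - center, axis=1)
    idx = np.digitize(r, rbins) - 1
    nbins = len(rbins) - 1
    shell_vol = 4.0 / 3.0 * np.pi * (rbins[1:] ** 3 - rbins[:-1] ** 3)
    rho = np.zeros(nbins)
    sigma = np.zeros(nbins)
    for i in range(nbins):
        sel = idx == i
        if sel.sum() < 2:
            continue
        rho[i] = mass * sel.sum() / shell_vol[i]
        # 3D velocity dispersion of the shell
        dv = vel[sel] - vel[sel].mean(axis=0)
        sigma[i] = np.sqrt(np.mean(np.sum(dv ** 2, axis=1)))
    with np.errstate(divide="ignore", invalid="ignore"):
        q = np.where(sigma > 0, rho / sigma ** 3, 0.0)
    return rho, sigma, q
```

The logarithmic slopes of such profiles can then be compared directly to the early power law ρ(r) ∝ r^-α and to the α_Q = 1.875 prediction discussed above.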
(iv) The quasi-static evolution. Once the dynamical attractor is attained, the evolution of the halo becomes mainly monolithic again and its profile changes very little with time.
Note that, if on the one hand the pre-collapse phase (i) of the evolution of the phase-space sheet is well described by perturbation theory, on the other hand the early violent relaxation phase (ii) and the convergence to the universal dynamical attractor (iii) remain much harder to describe analytically and still rely mainly on numerical experiments. It is not the goal of the present work to investigate analytical models in detail, but some links between our measurements and predictions from self-similarity and solutions of Jeans' equations will be discussed. This article is organized as follows. Details about ColDICE and the PM code are provided in section 2, as well as the simulation suite realised for this project and other ongoing works. Then, sections 3 and 4 are respectively dedicated to visual inspection of the three-dimensional projected density and of phase-space diagrams. Section 5 measures complexity in the Vlasov simulations through plots of simplices counts and phase-space sheet volume. This is followed in section 6 by a detailed examination of radial density profiles as well as of the pseudo phase-space density. Finally, section 7 summarises the main results and discusses a few prospects.
The simulations
This section is divided into three parts. The important features of the Vlasov solver ColDICE are first summarised in § 2.1. Full technical detail on the implementation of this massively parallel algorithm can be found in Sousbie & Colombi (2016). Then, the PM code written for this project is described in § 2.2. Finally, the simulation suite used in this work and in other investigations (Saga, Taruya, & Colombi 2018; Colombi, Neyrinck & Sobolevski 2020) is described in § 2.3. The parameters of these numerical experiments are listed in Table 1.
Brief description of ColDICE
In ColDICE, the phase-space sheet is tessellated with an ensemble of tetrahedra (also called simplices), whose vertices (the 4 corners of the tetrahedra) are initially placed on a regular mesh of size n_s, corresponding to 6 n_s^3 simplices. Initial comoving positions x_{i,j,k}, with i, j, k ∈ [1, ..., n_s], and peculiar velocities u_{i,j,k} of the vertices are given by x_{i,j,k} = (i-1, j-1, k-1) L/n_s and u_{i,j,k} = 0 (equations 6 and 7), with L the size of the simulation volume, which is a periodic cube. The unperturbed positions define the Lagrangian coordinates q_{i,j,k} of each vertex. These phase-space coordinates are slightly perturbed according to linear Lagrangian perturbation theory (Zel'dovich 1970), x = q + D_+ P(q), together with the corresponding linear peculiar velocity (equations 8 and 9), where we have dropped the subscripts (i, j, k), a and D_+ are respectively the expansion factor and the linear growing mode normalised to unity at present time, and P is the linear displacement field. Then, the tessellation evolves dynamically under self-gravity by solving the standard Lagrangian equations of motion for its vertices, similarly as for particles in an N-body simulation, with a simple second-order predictor-corrector scheme and a slowly varying time step.
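As a minimal illustration of this initial set-up, the following Python sketch builds the unperturbed vertex lattice and applies a Zel'dovich-type displacement; the velocity normalisation (here the standard a dD_+/dt factor) and the function names are assumptions of this sketch, not the exact ColDICE conventions.

```python
import numpy as np

def unperturbed_lattice(n_s, box_size):
    """Regular lattice of n_s^3 vertices in a periodic cube, with zero velocities."""
    i = np.arange(n_s)
    q = np.stack(np.meshgrid(i, i, i, indexing="ij"), axis=-1).reshape(-1, 3)
    q = q * box_size / n_s
    u = np.zeros_like(q)
    return q, u

def zeldovich_perturb(q, displacement, d_plus, d_plus_dot, a):
    """Apply linear Lagrangian (Zel'dovich) perturbations.
    displacement(q) returns the linear displacement field P(q), shape (N, 3)."""
    p = displacement(q)
    x = q + d_plus * p            # perturbed comoving positions
    u = a * d_plus_dot * p        # assumed peculiar-velocity normalisation
    return x, u
```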
To compute the gravitational force, the tessellation is interpolated on a regular mesh of spatial resolution n_g. This interpolation is performed by calculating the exact intersection between each simplex of the tessellation and each voxel (the three-dimensional analogue of a pixel) of the mesh. It is performed at linear order, which means that it accounts for the gradient of the volume density of the phase-space sheet inside each simplex. Once the three-dimensional projected density field is obtained, the Poisson equation is solved in Fourier space. The force field is then computed with a standard four-point finite-difference stencil applied to the gravitational potential. Finally, the force is interpolated to each vertex of the tessellation with second-order Triangular Shaped Cloud (TSC) interpolation (Hockney & Eastwood 1988).
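A hedged sketch of this force pipeline on a periodic grid is given below; the 1/k^2 Green function (units omitted) and the generic four-point stencil are textbook choices and may differ in detail from the actual ColDICE implementation.

```python
import numpy as np

def gravitational_force(density, box_size):
    """Periodic FFT Poisson solver followed by a four-point finite-difference gradient.
    density: (n, n, n) array of the projected density contrast."""
    n = density.shape[0]
    dx = box_size / n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                      # avoid division by zero for the mean mode
    phi_k = -np.fft.fftn(density) / k2     # Green function -1/k^2 (constants omitted)
    phi_k[0, 0, 0] = 0.0
    phi = np.real(np.fft.ifftn(phi_k))
    force = []
    for axis in range(3):
        # four-point stencil for d(phi)/dx along 'axis', periodic boundaries
        dphi = (-np.roll(phi, -2, axis) + 8 * np.roll(phi, -1, axis)
                - 8 * np.roll(phi, 1, axis) + np.roll(phi, 2, axis)) / (12.0 * dx)
        force.append(-dphi)
    return np.stack(force, axis=-1)
```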
As a time variable, ColDICE uses the "superconformal" time τ, defined by dτ = H_0 dt/a^2, with H_0 the Hubble constant (e.g., Doroshkevich, Ryaben'kii, & Shandarin 1973; Martel & Shapiro 1998). Time step constraints combine the classical Courant-Friedrichs-Lewy (CFL) condition, with C_CFL = 0.25, and an optional additional dynamical condition (equation 12),
with C_dyn = 0.01, where Ω_m is the matter density parameter of the Universe and ρ_max the maximum of the projected density (normalised to unity) computed on the mesh used to solve the Poisson equation. Furthermore, to avoid too large variations of the expansion factor at early times and to have correct behaviour of the system in the (quasi-)linear regime, the additional condition Δτ ≤ 0.1 a (da/dτ)^-1 is enforced. For most simulations, the dynamical condition (12) was ignored, except for the Vlasov runs with n_g = 512 and n_s = 512 in Table 1, as well as VLA-ANI2-UHR, VLA-ANI2-MRa and VLA-SIM-MRa. It was indeed found in practice that condition (12) did not bring significant improvements on the results during the period of time covered by the Vlasov runs.

Table 1. Details on the ensemble of simulations performed for this work. The first column corresponds to the designation of the run. The second column gives the type of initial conditions, namely the relative amplitudes ε_i of the initial sine waves or the size L of the box for the CDM simulations. The third column indicates the spatial resolution n_g of the grid used to solve the Poisson equation. The fourth one mentions the spatial resolution n_s of the mesh of vertices used to construct the initial tessellation for the Vlasov runs, or the number of particles n_p^3 for the PM runs. Finally, the fifth column specifies which kind of code was used, as well as the value of the parameter I used to bound the violation of conservation of Poincaré invariants in the case of the Vlasov runs (equations 13 and 14). It also mentions, when relevant, whether a small shift was applied to vertex positions in the initial conditions.
During evolution, the phase-space sheet gets more and more intricate with time. In order to follow all the details of its complexity, local refinement with the bisection method is implemented in ColDICE using a local quadratic interpolation of the tessellation mesh. To preserve the accuracy of refinement, a quadratic representation of the phase-space sheet inside each simplex is used, with the help of additional tracers corresponding to the mid-points of the edges of each tetrahedron in Lagrangian space (that is, in the space of initial, unperturbed positions). These tracers are used as the candidate refinement vertices, and new tracers are created each time refinement is performed, by exploiting the local quadratic representation of the sheet. This refinement procedure preserves the conforming nature of the tessellation, by matching up vertices, edges and faces at the intersection between two tetrahedra, without hanging nodes, that is without an isolated vertex of one tetrahedron lying on an edge, a face or in the bulk of another tetrahedron.

This anisotropic refinement attempts to preserve as well as possible the Hamiltonian nature of the motion, by bounding the local Poincaré invariants, I_p = ∮ u · dx (equation 13), measured over the faces of candidate tetrahedra obtained from refinement. Because the initial unperturbed tessellation (eqs. 6 and 7) has strictly zero velocity, we should have at all times I_p = 0 for any closed contour inside the phase-space sheet. Refinement is locally performed so that I_p remains very small, bounded in absolute value by the parameter I (equation 14). The various values of I used in the ColDICE simulations are listed in the last column of Table 1 and range in the ensemble {10^-5, 10^-6, 10^-7}. Note that the choice of I should somewhat relate to the spatial resolution n_g, since the latter controls the softening of the force field, which sources the curvature of the phase-space sheet.
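As an illustration of the quantity being bounded, the sketch below approximates the circulation ∮ u · dx around the (triangular) face of a tetrahedron from its vertex positions and velocities; the trapezoidal edge rule and the function name are choices made for this sketch, not the ColDICE implementation.

```python
import numpy as np

def face_poincare_invariant(x, u):
    """Discrete circulation of the velocity field around a triangular face.
    x, u: (3, 3) arrays of vertex positions and velocities (one row per vertex).
    Approximates I_p as a sum over edges of <u>_edge . delta_x (trapezoidal rule)."""
    i_p = 0.0
    for a in range(3):
        b = (a + 1) % 3
        dx = x[b] - x[a]
        u_mean = 0.5 * (u[a] + u[b])
        i_p += np.dot(u_mean, dx)
    return i_p

# For the initial unperturbed tessellation (zero velocities), I_p vanishes identically;
# refinement is meant to keep |I_p| small at later times, controlled by the parameter I.
```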
Brief description of the PM code
The particle-mesh code (PM, e.g., Hockney & Eastwood 1988) written for this project is standard. Using shared-memory parallelisation with OpenMP, it follows a set of n_p^3 particles in a mesh of resolution n_g used to solve the Poisson equation. The initial conditions are the same for the PM particles as for the vertices in ColDICE: particles are set on a regular network according to equations (6) and (7), with a slight perturbation of initial positions and velocities according to equations (8) and (9). The implementation of the equations of motion of the particles is also exactly analogous to what is done for ColDICE vertices, with the same constraints on the time step, except that condition (12) was systematically enforced using C_dyn = 0.1.

The main difference between the PM code and ColDICE lies in the way the Poisson equation is solved: first, the three-dimensional density is estimated on the computational mesh by projecting the particles using TSC interpolation. Second, at variance with ColDICE, an apodization of the Green function G(k), where k is the wavenumber, is performed with a Hanning filter (equation 15) in order to reduce small-scale anisotropies. Otherwise, all the other steps of the force field calculation are exactly the same as in ColDICE, including TSC interpolation of the force on the particles, exactly as is done in ColDICE for the vertices. The other difference is that we do not perform local Lagrangian refinement, that is, the number of particles does not change with time. The PM code is therefore equivalent to evolving the initial vertices of ColDICE and letting them carry the mass instead of the simplices.
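The following hedged sketch illustrates the two PM-specific ingredients just described, TSC density assignment and a Hanning-type apodization of the Green function; the exact filter used in the code is the one of equation (15), not reproduced here, so the cosine-squared cut-off below is only a generic stand-in, and the loop-based deposit is written for clarity rather than speed.

```python
import numpy as np

def tsc_deposit(positions, n_g, box_size):
    """Triangular Shaped Cloud assignment of unit-mass particles on a periodic grid."""
    density = np.zeros((n_g, n_g, n_g))
    cell = box_size / n_g
    s = positions / cell                       # positions in grid units, centres at m + 0.5
    for p in range(positions.shape[0]):
        m0 = np.floor(s[p]).astype(int)        # nearest cell along each axis
        d = s[p] - (m0 + 0.5)                  # offset from that cell centre, in [-0.5, 0.5)
        w = np.array([0.5 * (0.5 - d) ** 2,    # weights for cells m0-1, m0, m0+1
                      0.75 - d ** 2,
                      0.5 * (0.5 + d) ** 2])   # shape (3 offsets, 3 axes)
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    idx = (m0 + np.array([i - 1, j - 1, k - 1])) % n_g
                    density[idx[0], idx[1], idx[2]] += w[i, 0] * w[j, 1] * w[k, 2]
    return density

def hanning_apodization(k, k_nyquist):
    """Generic cosine-squared (Hanning-like) cut-off multiplying the Green function."""
    w = np.cos(0.5 * np.pi * k / k_nyquist) ** 2
    return np.where(np.abs(k) <= k_nyquist, w, 0.0)
```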
The TSC interpolation used to compute the density, along with the Hanning filtering of equation (15), helps to reduce the effects of the discrete noise of the particles, in particular local anisotropies, but adds significant softening of the force field in the PM code compared to ColDICE. Calculation of the density on the computational mesh in ColDICE indeed corresponds in practice, in terms of convolution, to Nearest Grid Point (NGP) interpolation (e.g., Hockney & Eastwood 1988), so one can expect a loss of effective force resolution of the PM code compared to ColDICE when using the same value of n_g, which will turn out in practice to be about a factor of 1.7 in the subsequent analyses. This approximate factor is obtained by comparing the force field generated by a particle interpolated on the grid with the NGP scheme to the one obtained with TSC interpolation (which brings a factor 1.1 of effective resolution loss) combined with Hanning filtering (which brings a factor 1.5 of effective resolution loss). Note that a more complete comparison between the N-body approach and ColDICE could include PM simulations without Hanning filtering and/or with Cloud-in-Cell interpolation (see, e.g., Hockney & Eastwood 1988) instead of TSC interpolation. We leave this for future work, as we believe that such additional analyses are not necessary to prove the main points of this article.
The simulation suite
We consider two kinds of initial conditions, CDM and three sine waves, as detailed in the next two sections.
The "CDM" simulations
For the CDM simulations, which start from fluctuations originating from a smooth random Gaussian field, initial vertex/particle positions and velocities are computed with the public software MUSIC (Hahn & Abel 2011) used at linear order. It is important to notice that, because a quadratic representation of the phase-space sheet is used, special tracer vertices need to be defined in the initial conditions. Hence, a mesh of n_p^3 PM particles corresponds to n_s^3 vertices of the actual tessellation, with n_s = n_p/2, while the remaining n_p^3 - n_s^3 vertices are used as tracers. The assumed cosmological parameters of the CDM runs are the following (Planck Collaboration et al. 2018): total matter density Ω_m = 0.315, cosmological constant density parameter Ω_Λ = 0.685, baryon density parameter Ω_b = 0.0493, Hubble constant H_0 ≡ 100 h = 67.4 km/s/Mpc, rms density fluctuation in a sphere of radius 8 Mpc/h linearly extrapolated to present time σ_8 = 0.811, and power-law index of the density perturbation spectrum after inflation n_spec = 0.965. We use the extension of MUSIC by Angulo et al. (2017) to obtain a transfer function consistent with a neutralino of mass 100 GeV and a decoupling temperature of T = 30 MeV.
Two simulation box sizes are considered, L = 12.5 and 25 pc/h, such that the initial density fluctuations are smooth over several spatial resolution scales L/n_g and L/n_s of the computational mesh and of the initial tessellation. These box sizes are obviously unrealistically small, since we are not using any resimulation technique to account for tidal forces coming from scales larger than L. This is why "CDM" is put in quotes in the figures: the halos extracted from these simulations most probably have a non-representative merger history. However, for the purpose of the present work, the random nature of the field is enough to give a qualitative idea of how the phase-space pattern changes compared to the highly symmetric set-up represented by the three sine waves case described below.
To make sure that transients related to the fact that only linear Lagrangian theory was used to set up initial conditions do not contaminate the measurements, the simulations are started at very high redshift, a i = 10 −5 for the simulations with n g = 512 and a i = 10 −4 for those with lower spatial resolution. Figure 1 shows the projected density over the whole simulation volume in the highest resolution Vlasov runs, with n g = 512 and n s = 256, and in the PM runs, with n g = 512 and n p = 512. At this level of detail, the differences between PM and ColDICE are nearly invisible, but one can guess a faint signature of the particle pattern in underdense regions on left panels.
Due to the smallness of the simulation volumes, only a few dark matter halos form. Five of them were selected for detailed analyses, as indicated on the right panels of Fig. 1. These halos were extracted from the simulations by simply identifying connected regions of density ρ(x) larger than 400 in the computational volume sampled with a 512^3 mesh, with ρ(x) estimated as explained in Appendix A.1. The centre of each halo was identified with the centre of mass of the corresponding region. Projected density slices of three of these halos at the most evolved stages attained by the highest resolution ColDICE runs are shown in the right column of Fig. 2.
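A hedged sketch of such an extraction step is shown below, using connected-component labelling of the overdense voxels; the density threshold of 400 matches the text, while the use of scipy.ndimage and the neglect of periodic boundaries are choices made for this illustration only.

```python
import numpy as np
from scipy import ndimage

def extract_halos(density, threshold=400.0, cell_size=1.0):
    """Identify connected regions with density > threshold and return their
    centres of mass (in the same units as cell_size) and total masses."""
    mask = density > threshold
    labels, n_regions = ndimage.label(mask)   # default 6-connectivity in 3D
    index = range(1, n_regions + 1)
    centres = ndimage.center_of_mass(density, labels, index=index)
    masses = ndimage.sum(density, labels, index=index) * cell_size ** 3
    return np.array(centres) * cell_size, np.array(masses)
```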
The three sine waves simulations
In the three sine waves case, the displacement field in equations (8) and (9) is given, for q_{x,y,z} ∈ [0, L[, by three crossed sine waves, P(q) ∝ (A_x sin(2π q_x/L), A_y sin(2π q_y/L), A_z sin(2π q_z/L)), where the vector A = (A_x, A_y, A_z), with A_x ≥ A_y ≥ A_z ≥ 0, quantifies the linear amplitudes of the waves in each direction. This rather symmetrical set-up is restrictive but remains quite generic. Near the centre of the system, it indeed coincides up to quadratic order with the peak of a smooth random Gaussian field (see, e.g., Bardeen et al. 1986). Following the evolution of these three sine wave configurations is thus expected to provide many insights into the dynamics, in particular during the early stages of the formation of dark matter halos, especially those corresponding to high peaks of the initial field. The high level of symmetry also facilitates the calculation of analytical predictions from perturbation theory (Saga, Taruya, & Colombi 2018). Of course, the three sine waves initial conditions remain unrealistic, since the only contribution to the external tidal field is given by the replicas of the halo due to the periodic nature of the simulated box. Additionally, the system does not experience any merger in this set-up.
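This displacement field can be coded in a few lines, as in the sketch below, and combined with a Zel'dovich-type initialisation such as the one sketched in § 2.1; the overall sign and normalisation of the waves (here chosen so that collapse occurs at the centre of the box) are assumptions of this illustration, not the exact convention of the paper.

```python
import numpy as np

def sine_wave_displacement(amplitudes, box_size):
    """Displacement field P(q) made of three crossed sine waves.
    amplitudes: (A_x, A_y, A_z) with A_x >= A_y >= A_z >= 0."""
    a = np.asarray(amplitudes, dtype=float)

    def displacement(q):
        # q: (N, 3) Lagrangian coordinates in [0, box_size)
        return a * np.sin(2.0 * np.pi * q / box_size)

    return displacement

# Example: quasi one-dimensional set-up with relative amplitudes eps = (1/6, 1/8)
disp_q1d = sine_wave_displacement((1.0, 1.0 / 6.0, 1.0 / 8.0), box_size=1.0)
```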
Again, the three sine waves simulations are all started at very high redshift, with a_i = a(t = t_i) = 0.0005, to make the contamination by transients negligible, as actually proved by the very accurate comparisons with higher order perturbation theory predictions of Saga, Taruya, & Colombi (2018). Hence, the important quantity is the relative amplitude of the waves, traced by the vector ε = (ε_y, ε_z) ≡ (A_y/A_x, A_z/A_x). Five different values of ε are considered, as listed in Table 1, which define three kinds of initial conditions: quasi one-dimensional (Q1D) with ε = (1/6, 1/8), where one amplitude dominates over the two other ones; anisotropic (ANI1, ANI2, ANI3), where the amplitude of each wave is different but remains of the same order; and finally, axisymmetric (SYM) with ε = (1, 1). Information about the nomenclature used in the subsequent analyses is provided in Table 2, where three different regimes and the corresponding values of the expansion factor are introduced. Early time corresponds to the early violent relaxation phase (ii) described in the Introduction. Mid time corresponds to the intermediate step during which the system is progressively relaxing to the NFW dynamical attractor [point (iii) in the Introduction], attained at what we call late time.
Left panels of Fig. 2 display three-dimensional views of the projected density in the central part of the computational volume for the most evolved stages of the highest force resolution Vlasov runs with ε = (1, 1) (SYM), (3/4, 1/2) (ANI2) and (1/6, 1/8) (Q1D). They evidence the complex caustic pattern building up during the early violent relaxation phase, to be compared to the seemingly more intricate case of the CDM protohalos shown on the right panels.
Full detail on all the three sine waves simulations is given in Table 1. A large number of simulations was performed for extensive force resolution analyses, by considering different values of n_g, and mass resolution analyses, by considering different values of n_s and n_p. In addition, as discussed further in sections 3 and 4 below, an asymmetry can appear during runtime in ColDICE because of very small but cumulative rounding errors arising when projecting the tessellation on the computational mesh, due to exact superposition between the tessellation and the mesh. To try to remedy this, a shift by half a voxel size is applied to the three sine waves initial conditions for some of the Vlasov runs (indicated as "Shift" in the right column of Table 1), so that vertices of the tessellation do not coincide exactly with edges of the mesh at the centre of the system.
Visual inspection of the projected density
This section focuses on visual inspection of the projected density, ρ(x). Its main objectives are three-fold, which sets up the way it is structured. First, through force resolution analyses performed in § 3.1, we show that the caustic pattern of the protohalos remains robust sufficiently far from the centre of the system. Second, we want to confirm that the N-body approach still provides a good dynamical description of the system when the particle density is large enough, despite the artificial pattern that might appear due to the discrete representation. This will be supported by detailed comparisons of the PM simulations to the ColDICE simulations, along with a mass resolution analysis performed in § 3.2, which includes, among other things, an analysis of the long-term effect of changing the number of particles in the N-body simulations. Third, in order to be able to interpret the analyses performed in the next sections, § 3.3 focuses on the evolution with time of the projected density for the three sine waves runs as well as CDM halos number 1 to 3.
Fig. 1 caption (beginning truncated): [...] Table 1. The expansion factor indicated on each panel corresponds to the last snapshot of the Vlasov runs. Additionally, on the right panels, circles highlight the halos selected for detailed analyses.

Visual inspection: force resolution

Fig. 2. Three-dimensional spatial density ρ(x) in typical configurations of protohalos studied in this work, for the final snapshots of our highest force resolution Vlasov runs. The left panels correspond to initial conditions given by three crossed sine waves: from top to bottom, VLA-SYM-HR with ε = (1, 1), VLA-ANI2-HR with ε = (3/4, 1/2), and VLA-Q1D-HR with ε = (1/6, 1/8), all with L_sub = 0.1L and a = 0.040, 0.045 and 0.110, respectively. The right panels display protohalos extracted from our "CDM" runs: the top two panels correspond respectively to halo 1 (L_sub = 1.5 pc, a = 0.110) and halo 2 (L_sub = 1.25 pc, a = 0.110), extracted from VLA-CDM12.5-HR; the bottom panel corresponds to halo 3 (L_sub = 4 pc, a = 0.067), extracted from VLA-CDM25-HR. The size of the subvolume on display as well as the expansion factor value are indicated on each panel. Note that the spatial resolution scale is ε_r = L/512 ≈ 0.002L on the left panels, which represents about 1/51 of the subcube size; on the two upper right panels, ε_r = L/512 ≈ 0.025 pc/h, which corresponds to about 1/61 and 1/51 of the subcube size on the top right and middle right panels, respectively; on the bottom right panel, ε_r = L/512 ≈ 0.05 pc/h, which corresponds to about 1/82 of the subcube size.

Figure 3 shows, at early time, the three-dimensional density ρ(x) for the three sine waves simulations with ε = (3/4, 1/2). A subcube of size L/10 is considered, where L is the simulation box size, and is sampled on 512^3 voxels. Projection of the tessellation is performed in each voxel at linear order, but instead of the exact but complex ray-tracing procedure used in ColDICE, we replace each tetrahedron by a dense regular network of particles, which are then assigned to the voxels using Cloud-in-Cell interpolation (e.g., Hockney & Eastwood 1988), as detailed in Appendix A.1. For the PM runs, a simpler NGP interpolation is performed on the voxels. Figure 4 is analogous to Fig. 3, but a zoom on halo 1 is considered. In this case, the subcube considered is of size 0.12 L = 1.5 pc/h. The color table is almost the same for both figures, spanning the density logarithmically from blue to red in the interval log_10 ρ ∈ [-0.5, 5] and [-1, 4.5] for the three sine waves and the CDM runs, respectively. The major differences visible in the color pattern between the Vlasov and the PM runs are obviously mainly due to discreteness effects: only in high density regions, where there is a sufficient number of particles per sampling voxel, do the PM colors become comparable to the Vlasov colors. When examining Figs. 3 and 4, the first important thing to notice is that PM simulations give very similar results to Vlasov runs if one puts aside obvious discreteness effects related to representing the phase-space distribution function with particles. In other words, if we were to reconstruct a ColDICE-like tessellation from the regular pattern used to build the Lagrangian (initial) distribution of PM particles (as first proposed by Shandarin, Habib, & Heitmann 2012; Abel, Hahn, & Kaehler 2012), we would certainly obtain results very close to the Vlasov runs, with small differences due to effective force resolution. As predicted in § 2.2, because of the Hanning filtering and TSC interpolation performed in the PM code, the results obtained for the N-body simulations with a given value of n_g ≡ n_g^PM actually lie between Vlasov runs with n_g ≡ n_g^VL = n_g^PM/2 and those with n_g^VL = n_g^PM, with the best match obtained for n_g^VL = n_g^PM/2. This comparison shows that, for n_p ≳ n_g, gravitational dynamics is not significantly influenced by discrete sampling of the phase-space distribution function in the PM runs, at least during the early violent relaxation phase.
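To make the projection step used for these visualisations explicit, a hedged sketch is given below: each tetrahedron is replaced by a regular set of sample points built from barycentric coordinates, which are then deposited on the grid with Cloud-in-Cell weights. The subdivision order, the absence of periodic wrapping inside the tetrahedron sampling and the function names are simplifications of this illustration, not the procedure of Appendix A.1.

```python
import numpy as np
from itertools import product

def tetra_sample_points(vertices, order=4):
    """Regular barycentric sampling of a tetrahedron. vertices: (4, 3) array."""
    pts = []
    for i, j, k in product(range(order + 1), repeat=3):
        l = order - i - j - k
        if l >= 0:
            lam = np.array([i, j, k, l]) / order
            pts.append(lam @ vertices)
    return np.array(pts)

def cic_deposit(points, weights, n_g, box_size):
    """Cloud-in-Cell assignment of weighted points on a periodic grid."""
    grid = np.zeros((n_g, n_g, n_g))
    s = points / (box_size / n_g) - 0.5       # cell centres at (m + 0.5) * dx
    i0 = np.floor(s).astype(int)
    f = s - i0                                 # fractional offsets in [0, 1)
    for dx_, dy_, dz_ in product((0, 1), repeat=3):
        w = (np.abs(1 - dx_ - f[:, 0]) * np.abs(1 - dy_ - f[:, 1])
             * np.abs(1 - dz_ - f[:, 2])) * weights
        np.add.at(grid, ((i0[:, 0] + dx_) % n_g,
                         (i0[:, 1] + dy_) % n_g,
                         (i0[:, 2] + dz_) % n_g), w)
    return grid

# Usage sketch: spread the mass of one simplex over its sample points before deposit
# pts = tetra_sample_points(simplex_vertices); grid = cic_deposit(pts, simplex_mass / len(pts), 512, L)
```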
Another important point concerns the caustic structure and the effects of force resolution on its pattern. From examination of Figs. 3 and 4, we see that the caustic pattern loses complexity when force resolution is degraded, as expected, but is preserved at the coarse level. In particular, outer caustics keep their shape and position when sufficiently far from the centre of the system, except for the lowest force resolution runs with n_g = 128.
In this case, the caustic structures seem slightly less extended, which we can associate with a less evolved dynamical state resulting from "excessive" force softening.
The fact that the caustic structure is mainly influenced by resolution effects near the centre of the system is natural. Indeed, persistent caustics are mainly kinematic objects, and local self-gravity only weakly affects their dynamical evolution. The latter is mainly constrained by the global shape of the potential well in which the caustics evolve, sourced in large part by the singularity building up in the centre of the system. Note that what we call the central part does not necessarily reduce to a point: it can also be a line or a surface, when considering the evolution of a structure such as a filament or a pancake.
Visual inspection: mass resolution
Examining mass resolution in the Vlasov runs consists in analysing the effects of changing the resolution n_s of the mesh of vertices used to build the initial tessellation, combined with an exploration of the space of values of the parameter I controlling deviations from local Poincaré invariant conservation (equations 13 and 14). Clearly, I is the most important parameter, since it controls local mass resolution during runtime by triggering refinement when necessary. However, the choice of n_s can influence the dynamics in a subtle way. The scale length of fluctuations in the initial conditions needs to be captured by the tessellation, which requires a sufficiently large value of n_s. Indeed, at early times, when the phase-space sheet is nearly flat, tessellation resolution cannot be solely controlled by the refinement parameter I. In practice, it is wise to combine the choices of I and n_s in a consistent way, depending on the level of accuracy required during runtime. In particular, increasing n_s should be associated with a decrease of the value of I. Similarly, taking a value of n_s very different from the parameter n_g controlling force resolution is possible but does not seem wise, for obvious reasons.
Comparing the first and second columns of the group of six panels on the left of Fig. 3 gives an indication of the effects of changing the mass resolution of the tessellation. We notice, by comparing panels (c) [or (a)] to (d), as well as (f) to (g), that the differences induced by changes in the control parameter n_s by a factor of two and/or I by a factor of 10 are small. This is due to the fact that all the simulations considered in this paper already follow the practical accuracy constraints derived in Sousbie & Colombi (2016). Effects of mass resolution can however be distinguished in the two bottom left panels, (i) and (j), for which the variations in the choice of n_s and I are the largest: the caustics are slightly shifted away from the centre on panel (j), which has a value of n_s four times larger and a value of I a hundred times smaller compared to panel (i). This is an effect related to curvature: as a consequence of local convexity, the contours of the phase-space sheet generally get closer to the centre of the system when larger linear tetrahedra are used to sample it in order to compute the projected density and solve the Poisson equation. This effect can cumulate progressively with time and can have consequences on the dynamical properties of the system. Note that if we were to use quadratic simplices to perform the projection during runtime, the difference between panels (i) and (j) would probably become undetectable at the visual inspection level.
Effects of mass resolution on the PM runs are studied in detail in Fig. 5, again for the three sine waves initial conditions with ε = (3/4, 1/2). The first column of panels corresponds to early time, where data from the Vlasov runs are available, as shown in the top left panel. The second and third columns of panels stand for more evolved times, designated by mid time and late time in Table 2. Mass resolution decreases from top to bottom. Note that only the central part of the simulation is shown, with a zoom in the interval (x, y, z) ∈ [-0.05, 0.05] for the left column of panels and (x, y, z) ∈ [-0.2, 0.2] for the two right columns of panels. At variance with previous figures, what is displayed here is the density cumulated over the z line of sight, extracted from the interval under consideration.
During early time, all the simulations seem to match each other closely, except for aliasing and discreteness effects due to the representation of the matter distribution with particles. Note the striking agreement between the highest resolution PM run, with n_g = n_p = 1024, and ColDICE, which has n_g = 512. This agreement deteriorates slightly when decreasing the spatial resolution of the PM code to n_g = 512. Again, this is a consequence of force softening due to Hanning filtering and TSC interpolation, as extensively discussed in § 2.2 and 3.1. Except for this, there does not appear to be any artifact in the dynamics related to N-body relaxation, at least when n_p ≥ 256, while it is difficult to make any definitive conclusion in the case n_p = 128 (bottom left panel).
Similarly, mid time stages of the evolution do not seem to be significantly affected dynamically by discreteness, at least inside the halo. In the filaments, however, we notice the appearance of clumps for n p ≤ 256 that could be signatures of Jeans instability triggered by the discrete representation, although they could also be a mere signature of the pattern expected from a set of particles following regular orbits derived from the true potential: this has not been checked thoroughly and may require further investigation. Yet it seems obvious that these artificial structures will grow under gravitational instability.
The last column, corresponding to late time, highlights even more visual effects due to the discrete representation. It seems at this point unquestionable that dynamics is affected by N-body relaxation for n_p ≤ 256. This is also visible in the halo when carefully examining the highly concentrated cross building up during the course of the dynamics: on the lower right panels, it presents small fluctuations, absent from the top right panels, whose highly contrasted nature suggests they are sourced by some dynamical instability triggered by shot noise.

Fig. 3. Thorough force resolution analysis of the three-dimensional projected density ρ(x) for a three sine waves initial set-up with ε = (3/4, 1/2) (figure title: projected density, three sine waves resolution analysis, a = 0.040, subcube of size 0.1L). A subcube of size L/10 is extracted from the snapshot corresponding to expansion factor a = 0.04 and sampled with 512^3 voxels. The nature of the run (Vlasov or PM) and its parameters are detailed on each panel, in particular the spatial resolution n_g of the grid used to solve the Poisson equation. In addition, the resolution n_s of the mesh of vertices employed to construct the initial tessellation is indicated for the Vlasov runs, and its analogue n_p for the network of n_p^3 particles in the PM simulations, along with the value of the refinement control parameter I for Vlasov. The spatial resolution scale ε_r = L/n_g represents about 1/102 of the subcube size on the upper right panel, 1/51 in the middle top panel and the second line of panels, and 1/26, then 1/13, in the next two lines of panels. For completeness, from top to bottom and left to right, the runs considered are designated respectively by VLA-ANI2-HR, VLA-ANI2-MR, VLA-ANI2-LR, VLA-ANI2-HRS, VLA-ANI2-FHR, VLA-ANI2-MRa, VLA-ANI2-LRa, PM-ANI2-UHR, PM-ANI2-HR, PM-ANI2-MR and PM-ANI2-LR in Table 1.
Fig. 4. Example of the effects of force resolution on one of the CDM halos. The three-dimensional projected density ρ(x) is shown for halo 1 at expansion factor a = 0.11. The top line of panels displays the results obtained from our high resolution Vlasov run VLA-CDM12.5-HR (left) and the PM simulation PM-CDM12.5-HR (right), both with n_g = 512, that is a spatial resolution scale ε_r = L/n_g of about 1/61 of the displayed slice size. The bottom left and bottom right panels correspond to Vlasov runs with n_g = 256 (VLA-CDM12.5-MR) and n_g = 128 (VLA-CDM12.5-LR), respectively, hence a spatial resolution scale of about 1/31 and 1/15 of the displayed slice size.
Hence, Figure 5 suggests that, at the coarse level, the overall structure of the halo remains the same whatever the value of n_p considered. This will be confirmed by measurements of radial density profiles in § 6.2. The appearance of small artificial clumps due to N-body relaxation seems however unquestionable and is nothing fundamentally new (see for instance nice illustrations of this effect in Wang & White 2007; Hahn, Abel, & Kaehler 2013). Obviously, N-body relaxation is unavoidable, but it is delayed when increasing n_p. The practical condition n_p ≳ n_g suggested in previous works (e.g., Melott et al. 1997; Splinter et al. 1998) globally agrees well, at the qualitative level, with the visual inspection of Fig. 5. Figure 6 shows the evolution of the highest force resolution simulations with three initial sine waves of various relative initial amplitudes. When comparing the first and second lines of panels, we observe again the excellent agreement between the PM runs with n_g = 1024 and the Vlasov runs with twice smaller resolution. Note however the small asymmetry visible on the upper left panel, which is due, as discussed at the end of § 2.3.2, to very small computer rounding errors cumulating with time in ColDICE. This effect can be significantly reduced by introducing a small shift in the initial conditions, as done in the top right panel.
Visual inspection: time evolution
Fig. 5. Effect of mass resolution in the PM runs: total projected density on the (x, y) plane of the simulations of three crossed sine waves with ε = (3/4, 1/2). Except for the top panel, which stands for the last snapshot of the highest resolution Vlasov run, each line of panels corresponds, for various expansion factors, to a different number of particles, n_p^3, with n_p = 1024, 512, 256 and 128 when moving down. The spatial resolution is n_g = 512 for all the simulations, except for the first line of panels, which has n_g = 1024. To help in understanding the figures, recall that the logarithmic color table goes from dark blue to white, then to dark red. The images are computed from the projection on the (x, y) plane of the density calculated on a 512^3 mesh spanning a cube of size L_sub = L/10 on the left panels and L_sub = L/2.5 otherwise, by using nearest grid point interpolation. The mass, hence the contribution of each particle, augments with dilution, explaining the change of colour towards dark red of individual particles in underdense regions when n_p decreases. From top to bottom, the simulations considered are designated by VLA-ANI2-FHR, PM-ANI2-UHR, PM-ANI2-HR, PM-ANI2-HR-D8 and PM-ANI2-HR-D64 in Table 1.

Fig. 6 caption (beginning truncated): [...] Table 1), to be compared to the second line of panels, which corresponds to the highest resolution PM runs with n_g = n_p = 1024 (PM-Q1D-UHR, PM-ANI2-UHR and PM-SYM-UHR). The third line of panels is the same as the second line, but for a larger subcube. The last two lines of panels give the results obtained from the PM runs at mid and late time. Note that the region displayed is only a fraction of the full simulation size, namely L_sub = L/10 for the 4 top panels and L_sub = L/2.5 for the 6 bottom panels. Also, the density contributing to the projection comes only from the cubical sub-volume of size L_sub.
Further evolution in time can be examined for the PM simulations in the three last lines of panels of Fig. 6. While the early stages of the evolution of the system clearly reflect the nature of the initial conditions (see for instance the middle left panel, which illustrates the quasi one-dimensional nature of the dynamics at large scales), in the centre of the system all the simulations build up a roughly circular halo around a three-dimensional cross, whose arms are more or less contrasted according to the strength of the initial waves. This cross is a particular feature related to the high level of symmetry of the initial conditions and is obviously not present in the CDM halos discussed below.
In the axisymmetric case, ε = (1, 1), the cross is perturbed, then destroyed at late time, most likely by radial orbit instabilities, but further detailed diagnostics of the dynamical properties of the flow will be needed to fully confirm this hypothesis. The high level of symmetry of the initial conditions indeed makes the dynamics very radial and potentially prone to related instabilities, especially in the axisymmetric configuration, which is analogous to the spherical case, known to be radially unstable when initial conditions are cold, as many works in the literature show (e.g., Halle, Colombi & Peirani 2019, and references therein). This radial instability relates to force softening and particle number: for instance, in the ε = (1, 1) PM simulation with n_p = n_g = 512, which is not shown here, this instability takes place later than for n_g = n_p = 1024, with no visible symmetry breaking at mid time, unlike the right panel of the fourth line in Fig. 6.
Examining now Figures 7, 8 and 9, we turn to the CDM simulations and inspect the evolution of halos number 1, 2 and 3. Comparison of the first to the second column of panels of these figures confirms the very good, if not spectacular, agreement between the PM and Vlasov codes. Again, a careful examination of the figures shows that the best visual match is obtained between PM runs with n_g = 512 and ColDICE runs with n_g = 256.
The three halos considered in these figures are typical of what can be expected in the CDM scenario and are reminiscent of what was observed in other works (e.g., Ishiyama, Makino & Ebisuzaki 2010; Anderhalden & Diemand 2013; Angulo et al. 2017). They form at the intersection of filaments of the cosmic web. Their early evolution is monolithic and similar to the three sine waves case. At later times, they are subject to successive mergers. This is illustrated by the right columns of Figs. 7 and 8, and especially by Fig. 9, which follows halo number 3 until present time and is particularly rich in events. At the end of the simulation, halo number 3 has "eaten" almost all the matter available in the computing box and absorbed all the structures that formed at earlier times, in particular halos number 4 and 5 (not examined in detail here). Again, recall that these "CDM" simulations are totally unrealistic because of their very small box size, so their merger history is not representative. We shall discuss this in more detail when analysing radial density profiles in § 6.3. One obvious consequence of the smallness of the simulation volume is that modes aligned with the sides of the box dominate the large scale dynamics, hence the typical cross structure clearly visible on the bottom right panel of Fig. 9.
Phase-space sections
This section corresponds to one of the truly innovative contributions of this article. For the first time, thanks to the finesse allowed by the tessellation technique, a detailed analysis of phase-space slices is performed in the ColDICE simulations. Because the Vlasov code cannot follow the evolution of the system during many dynamical times, we consider only the early violent relaxation phase, but this will still provide us with significant insights into the dynamics.
The objectives are once more three-fold, which sets up the way this section is organized. First, we aim in § 4.1 to test the robustness of the phase-space structure pattern with respect to force resolution. Second, in § 4.3, through comparison of the results obtained with ColDICE to PM simulations with a large number of particles, we want to validate again the N-body approach when the particle density is high enough. Third, in order to highlight specific patterns, e.g. related to self-similarity or to random perturbations, § 4.4 examines how the phase-space structure evolves with time and changes according to initial conditions.
To achieve these goals, we rely on detailed visual inspection of Figs. 10 to 13. To be more specific, Figures 10-12 display phase-space slices extracted from the three sine waves simulations with ε = (3/4, 1/2). These three figures allow us to study thoroughly, at three different times, the effects of force resolution, which decreases from top to bottom, and to compare in detail ColDICE (left panels) to the PM code (right panels). To complete the analyses, CDM halos 1, 2 and 3, already shown in Figs. 7 to 9, are considered in Fig. 13, while Fig. 14 illustrates the time evolution of the phase-space structure in the three sine waves runs.
Phase-space sections: technical details
The phase-space slices displayed in each figure correspond, in the Vlasov code, to the intersection of the phase-space sheet with the hyperplane y = z = 0 (with the origin of the coordinate system centred on the halos). Recall that the intersection of a hypersurface of dimension D = 3 with a hyperplane of dimension D' = 4 in six-dimensional phase-space is expected to be, in the non-trivial and non-degenerate case, of dimension D + D' - 6 = 1, that is, it corresponds to a set of curves. Additionally, since the phase-space sheet is a connected periodic smooth hypersurface with no hole, this set of curves should also be fully connected, which means that in all the left panels of Figs. 10, 11 and 12, in the top nine panels of Fig. 13 and in the top twelve panels of Fig. 14, there should be no loose end in the curve pattern (except the two ends on each side of each panel), which is indeed the case after a detailed visual inspection.
Note that the intersection of the tessellation representing the phase-space sheet with the hyperplane y = z = 0 is calculated at linear order, which means that the curves are actually sets of segments corresponding to the intersection of each simplex with the hyperplane. One can clearly guess the segmentation pattern in the bottom left panel of Figs. 10, 11 and 12. This pattern can present some small oscillations that would disappear if a second-order representation of the phase-space sheet (quadratic simplices) were used, so these features are not artifacts related to dynamical instabilities. Another remark is that the figures only account for the intersections, whose count can cumulate, but no weight is given to the volume density of the phase-space sheet; this has to be taken into account when comparing the Vlasov phase-space slices to the PM ones, which are mass weighted.
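For illustration, the sketch below computes, at linear order, the segment traced by one tetrahedron of the phase-space sheet in the (x, v_x) plane y = z = 0; the barycentric parametrisation and the handling of degenerate cases are choices of this sketch rather than the actual ColDICE routine.

```python
import numpy as np

def simplex_slice(pos, vel, tol=1e-12):
    """Intersection of a linear tetrahedron of the phase-space sheet with y = z = 0.
    pos, vel: (4, 3) vertex positions and velocities. Returns the two endpoints of the
    segment as (x, v_x) pairs, or None if the tetrahedron misses the hyperplane."""
    # Barycentric weights lam (4 numbers >= 0, summing to 1) must satisfy:
    #   sum(lam) = 1,  sum(lam * y_i) = 0,  sum(lam * z_i) = 0
    a = np.vstack([np.ones(4), pos[:, 1], pos[:, 2]])   # (3, 4) linear system
    b = np.array([1.0, 0.0, 0.0])
    _, s, vt = np.linalg.svd(a)
    if np.min(s) < tol:            # degenerate configuration: skip for simplicity
        return None
    lam0 = np.linalg.lstsq(a, b, rcond=None)[0]          # particular solution
    direction = vt[-1]                                   # null-space direction of a
    # Keep lam0 + t * direction >= 0 component-wise: a segment in t
    t_min, t_max = -np.inf, np.inf
    for l0, d in zip(lam0, direction):
        if abs(d) < tol:
            if l0 < -tol:
                return None
        elif d > 0:
            t_min = max(t_min, -l0 / d)
        else:
            t_max = min(t_max, -l0 / d)
    if t_min > t_max:
        return None
    endpoints = []
    for t in (t_min, t_max):
        lam = lam0 + t * direction
        endpoints.append((lam @ pos[:, 0], lam @ vel[:, 0]))   # interpolate x and v_x
    return endpoints
```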
As a final remark, the top two left panels of Figs. 10, 11 and 12 consider Vlasov simulations with exactly the same runtime parameters, except that a small shift of half a voxel size was imprinted in the initial conditions of the simulation considered in the top left panel. As discussed at the end of § 2.3.2, this procedure was introduced at some point to try to reduce the asymmetry that develops during time due to cumulative rounding errors in the Vlasov code. The effects of this asymmetry are indeed stronger without the shift. They are clearly visible in the second left panel of Figs. 11 and 12, but do not change significantly the phase-space pattern, except in the centre of the system.

Figure 10 considers a moment at which the halo has experienced only a few dynamical times, so that the phase-space structure of the system is not yet significantly intricate. As expected, this structure broadly follows a spiral pattern reminiscent of what is already well known in one dimension or in spherical symmetry (e.g., Fillmore & Goldreich 1984; Alard 2013), but is of course more complex. One can clearly guess on the top left panel the multiple crossings the system experienced along each coordinate axis. These crossings relate to duplications of portions of the phase-space spiral. Here we find that the system experienced three crossings along the x-axis, two along the y-axis and one along the z-axis. More specifically, the duplication of the external arm of the spiral pattern reflects shell crossing along the y-coordinate, while the scission in three of the central part of the spiral corresponds to one additional shell crossing along the y-axis and one along the z-axis. Obviously, these claims cannot be derived from the examination of the upper left panel of Fig. 10 alone: they result from a combined analysis of the caustic pattern in Eulerian and Lagrangian spaces, which is not shown here to avoid multiplying the figures. More discussion about the link between the Lagrangian pattern of the phase-space sheet and the internal dynamics of halos is deferred to a dedicated separate work (Colombi, Neyrinck & Sobolevski 2020). Interestingly enough, from a dynamical point of view, the fact that only one crossing happened along the z-axis suggests that the protohalo has just formed as a gravitationally bound object.

Fig. 7. Evolution of the total projected density on the (x, y) plane for halo 1 of the "CDM" simulations with L = 12.5 Mpc/h. The first column of panels corresponds, from top to bottom, to Vlasov runs with expansion factor values a = 0.084, 0.11, 0.12 and 0.16. The two top panels were generated using the highest resolution run with n_g = 512, VLA-CDM12.5-HR in Table 1, while the two bottom ones used respectively VLA-CDM12.5-MR with n_g = 256 and VLA-CDM12.5-LR with n_g = 128. The first column can be directly compared to the second one, which is analogous but for the PM simulation, PM-CDM12.5-HR, with n_g = 512. The third column of panels corresponds to more advanced times in the PM simulation and highlights a multiple merger (halo 1 can be seen at the right of the top panel of this column). Note that, similarly as in Fig. 6, the mass contributing to the projection comes only from the cubical subvolume displayed on each panel.

Fig. 9. Evolution of the total projected density on the (x, y) plane for halo 3 of the "CDM" simulation with L = 25 Mpc/h. This figure is analogous to Fig. 7, except that the times considered here are slightly different and that there is one extra panel on the third column corresponding to present time, a = 1. Note also that on the third column of panels, the projected volume is larger, to give a better view of the various structures at play.
Phase-space sections: force resolution
Fig. 10. Phase-space slice: force resolution analysis and comparison between Vlasov and PM for the three sine waves simulations with ε = (3/4, 1/2) and a = 0.035. To have a better view of the fine structures of the system, the x coordinate is represented on a logarithmic scale, sgn(x) log_10(1 + 512 |x|). Some values of x are indicated in blue inside each panel, while the two red vertical segments mark the force resolution scale, L/n_g, which increases from top to bottom. Vlasov runs are considered in the left panels. In this case, the intersection of the phase-space sheet with the hyperplane y = z = 0 is calculated directly at linear order and represented in (x, v_x) coordinates. The two top left panels consider Vlasov runs with n_g = 512 and n_s = 256. The only difference between both simulations is the initial shift of half a voxel size imprinted in the initial conditions of the simulation considered in the top left panel, VLA-ANI2-HRS in Table 1, compared to the simulation in the panel just below, VLA-ANI2-HR. Note that our highest resolution run, VLA-ANI2-FHR (with n_s = 512 instead of 256), gives nearly identical results to VLA-ANI2-HR and is not shown here. The two bottom left panels consider lower force resolution Vlasov runs with n_g = 256 (VLA-ANI2-MR) and then n_g = 128 (VLA-ANI2-LR). PM runs are examined in the right panels. In this case, we consider a very thin slice of particles with (y, z) ∈ [-5 × 10^-4, 5 × 10^-4] as tracers of the phase-space sheet. The top right panel shows the result obtained from our highest resolution PM run, PM-ANI2-UHR, with n_g = 1024 and n_p = 1024. The three next panels all have the same number of particles, n_p^3 = 512^3, but decreasing spatial resolution, n_g = 512, 256 and 128 for PM-ANI2-HR, PM-ANI2-MR and PM-ANI2-LR, respectively.
Fig. 11. Phase-space slice, continued: force resolution analysis and comparison between Vlasov and PM for the three sine waves simulations with ε = (3/4, 1/2) and a = 0.04. This figure is exactly the same as Fig. 10, but for a later time, a = 0.04, which is the same expansion factor as in Fig. 3.
When examining the two bottom left panels of Fig. 10, which correspond to the lower force resolution simulations with n_g = 256 and n_g = 128, one can see only one duplication of each arm of the spiral, which means that collapse happened only along the x and y directions, hence that the halo is not fully formed yet. Lowering force resolution indeed delays collapse time and halo formation time. To obtain accurate estimates of these times, it is required to resolve the initial fluctuations sufficiently. Softening of the force also obviously simplifies the structure of the phase-space sheet, which undergoes fewer foldings in the centre of the system. Yet the outer part of the phase-space pattern remains approximately the same at the coarse level, even at later times (left panels of Figs. 11 and 12), which confirms the conclusions from the visual inspection of the projected density in § 3.1 (see for instance Fig. 3, which corresponds to the same time as Fig. 11). Naturally, these conclusions also stand when examining Fig. 13, which considers the CDM halos. In this case, the phase-space structure appears much less coherent than for the three sine waves simulations.

Fig. 12. Phase-space slice, continued: force resolution analysis and comparison between Vlasov and PM for the three sine waves simulations with ε = (3/4, 1/2) and a = 0.045. This figure is exactly the same as Fig. 10, but for the most evolved time available to the high force resolution Vlasov runs.
When decreasing force resolution, this yarn ball pattern simplifies, especially in the centre of the system, but the outer parts remain mostly preserved.
Besides sparseness effects due to the discrete nature of the particle distribution, the match between PM and ColDICE is excellent, except that, as already extensively discussed in § 2.2 and 3.1, the additional softening of the force due to TSC interpolation and Hanning filtering in the PM code makes its effective force resolution nearly a factor of two worse than in ColDICE. This is very nicely illustrated by the phase-space diagrams, which allow an accurate comparison of the morphology of the phase-space sheet obtained in both codes, not only at the early stages of the evolution shown in Fig. 10, but also at later times, as illustrated by Figs. 11 and 12, and independently of initial conditions (Fig. 14), even in the less coherent case of the CDM halos (Fig. 13).
Let us indeed repeat that if we were to tessellate properly the particle distribution in Lagrangian space (Shandarin, Habib, & Heitmann 2012; Abel, Hahn, & Kaehler 2012) and compute the intersection of this tessellation with the hyperplane y = z = 0, the corresponding network of curves obtained from the PM particles would be very similar to that of the Vlasov code. This means that the discrete nature of the representation of the phase-space density in the PM code does not significantly affect the gravitational force field, in agreement with what we concluded from the visual inspection of the density in § 3.1. Again, this result is not surprising since, as advocated by earlier works (e.g., Melott et al. 1997; Splinter et al. 1998), we consider, for phase-space diagram analyses, only N-body simulations with at least one particle per mesh element, n_p ≥ n_g.
Note that the most evolved stages shown in Figs. 12, 13 and 14 still correspond to rather early phases of the evolution of the halos. As discussed in § 3.1, some instabilities due to particle shot noise in the PM code can develop at later times. Here, contrary to § 3.1, more advanced times for the PM runs are not considered, because the phase-space pattern becomes really intricate and indecipherable when using a particle representation directly. A proper analysis would require the special tessellation technique on the Lagrangian particle distribution mentioned just above, which has not been used here because it was deemed unnecessary to prove the important points raised in this article. Another reason is that N-body particles are unable to trace at advanced times all the complexity of the phase-space sheet, hence the reconstruction of the latter with the tessellation technique is expected to become inaccurate when there are too many foldings (e.g., Hahn, Abel, & Kaehler 2013). Figure 14 illustrates how phase-space diagrams evolve with time in the high resolution sine waves simulations. Due to the highly symmetric nature of these systems, the phase-space structure remains coherent over time, taking the form of an intricate spiral. We however only see a section of it, so calling this complex pattern a spiral is actually an abuse of language. This "spiral" structure indeed experiences successive folds along the three coordinate axes, with orbital times related to the amplitude of the initial sine wave in each direction. As discussed in § 4.2 for ε = (3/4, 1/2), shell crossings along the y and z axes translate into splits of the spiral arms. For instance, one can see, for the quasi one-dimensional case considered in the left part of Fig. 14, a split of the central part of the spiral in the second panel, which corresponds to shell-crossing along the y-axis. On the right panels, since the system is axisymmetric, each shell crossing in one direction is associated with simultaneous crossings in the two orthogonal directions, hence, for each fold, each arm of the spiral is split in three parts.
Phase-space sections: pattern analysis in various cases
Another feature of Fig. 14 is the apparent self-similar pattern of the spiral structure as it builds up complexity. Naturally, this is true only outside the central region delimited by the two red vertical segments, which mark the spatial resolution of the computational mesh used to estimate the force field. Additionally, one has to take into account, in the quasi one-dimensional case (left panels), the asymmetry induced by very small but cumulative rounding errors in ColDICE. Indeed, we did not apply in this case any shift to the initial conditions to remedy this defect, as discussed at the end of § 2.3.2. Note furthermore that signatures of self-similarity are less easy to decipher for this value of ε, due to the large difference in dynamical times associated with each axis. Self-similarity will be discussed further in § 6.
Turning to more realistic configurations without imposed symmetries, examination of Fig. 13 suggests that the phase-space structure is much less coherent in the CDM halos than in the sine waves simulations, which is not very surprising. Yet, this lack of coherence is not as strong as it seems, and this is probably partly due to the choice of representation of the phase-space slices in the Vlasov simulations, since the phase-space sheet was not weighted according to its local volume density. When examining the PM results, which are mass weighted, we can clearly guess a clean and rather symmetric regular spiral pattern at the coarse level on the two bottom left panels of Fig. 13. This is because the two halos corresponding to these plots are still in the monolithic phase, which is very analogous to the three sine waves case. On the contrary, in the bottom right panel, which treats a composite halo, i.e. one which already experienced some merger, the structure of the spiral is more intricate. Obviously, mergers contribute significantly to disorder in the phase-space structure.
Complexity
This section corresponds to another innovative contribution of this article. We study, for the ColDICE runs, what shall be referred to as the complexity of the phase-space sheet, through the analysis of the simplices count and of the phase-space sheet volume as functions of time. This will allow us to estimate the degree of winding of the phase-space sheet, which we shall try to relate to self-similarity and, if relevant, to chaotic instabilities, depending on whether it grows as a power-law of time or exponentially. Confirming earlier analyses by Sousbie & Colombi (2016), we shall see that the phase-space sheet intricacy always increases very quickly, which unfortunately makes the adaptive tessellation method impracticable in the long run, whatever the level of optimisation.

Figures 15 and 16 examine respectively the three sine waves simulations and the CDM runs. The top panels of these figures show the raw simplices counts as functions of expansion factor. As expected, during the early phases of the dynamics, the displacement field remains linear and the number of simplices N_s stays stable. At some point, N_s increases very quickly until it reaches a few billion at the end of the simulations (about 10 billion in the highest resolution runs). The way this happens is related to the formation and relaxation of dark-matter halos, which makes the phase-space sheet more and more intricate with time, as illustrated very well by the diagrams of the previous section. The phase-space sheet volume shown in the last line of panels of Figs. 15 and 16 provides a precise quantitative estimate of the actual level of complexity of the sheet. It suddenly starts to increase after formation of the first halos. This obviously triggers intense refinement of the tessellation.

Fig. 15. Complexity of the phase-space sheet: simplices count and sheet volume in the three sine waves simulations, for the Vlasov runs listed in Table 1 except VLA-ANI2-LRa. In the second line, the simplices counts are rescaled by the factor (I/10^−7) × (512/n_g)^α, with α = 1.6 in the two left panels and α = 2.25 in the right panel, as discussed in the main text. A fit with an ellipse portion in linear-logarithm space is also shown with red symbols (equation 19). The next line of panels considers the logarithmic slope of the simplices count, to be compared again to the red squares, which correspond to the logarithmic slope derived from the ellipse portion. The last line shows the phase-space sheet volume as a function of expansion factor. Note that all the curves should superpose to each other: the differences relate to spatial resolution n_g. The red diamonds provide the prediction from the Zel'dovich approximation.
Fig. 16. Complexity of the phase-space sheet: simplices count and sheet volume in the "CDM" simulations. This figure is exactly analogous to Fig. 15, but left and right panels correspond to all the Vlasov CDM runs with box size L = 12.5 pc/h and 25 pc/h, respectively, as listed in Table 1. Details on the curves are given in the second left panel. Here, the comparison with the Zel'dovich approximation in the bottom panels is lacking, but it would provide qualitative results similar to what can be seen in the bottom panels of Fig. 15.

The magnitude of the refinement depends on three elements: the dynamical state of the halos, force resolution traced by n_g, and the Poincaré constraint parameter I. The second line of panels of Figs. 15 and 16 combines these three elements by considering the rescaled count N_s,rescaled defined as follows,

N_s,rescaled ≡ N_s (I/10^−7) (512/n_g)^α,    (18)

where the parameter α, which spans the interval [1.6, 2.25], corresponds roughly to the logarithmic slope of the radial density ρ(r) in the early relaxation phase of the halos, as measured in § 6. The factor proportional to I stems from assuming that refinement is performed, in practice, in a locally isotropic fashion (Sousbie & Colombi 2016). This is not imposed by the algorithm, which allows for anisotropic refinement, but it results from the dynamics.
The factor proportional to n_g^−α in equation (18) does not come from an analytic prediction. It is simply an educated guess relating to a supposed self-similar evolution of the phase-space sheet, of which we had clear hints in the previous section for the three sine waves simulations and which will be discussed further in § 6.
After rescaling (18), as expected, the curves on the second line of panels of Fig. 15 superpose to each other when refinement starts to dominate the simplices count compared to the initial value, N_s ≃ 6 n_s^3. Note however that the value of α was adjusted so that the superposition is visually optimal, but it remains fixed in each panel of the second line of Figs. 15 and 16, as indicated in the ordinate labels, so this result still demonstrates to a large extent the numerical consistency of ColDICE with respect to refinement in a self-similar framework.
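For concreteness, the rescaling of equation (18) reduces to a one-line helper. The sketch below is only illustrative: it assumes the rescaling factor quoted in the caption of Fig. 15, with the reference values I = 10^−7 and n_g = 512, and is not part of the ColDICE code itself.

```python
import numpy as np

def rescaled_simplices_count(n_s_count, poincare_I, n_g, alpha):
    """Rescale a raw simplices count N_s by (I / 1e-7) * (512 / n_g)**alpha,
    so that runs with different force resolution n_g and refinement
    parameter I can be superposed, as in the second line of Figs. 15 and 16."""
    return np.asarray(n_s_count, dtype=float) * (poincare_I / 1e-7) * (512.0 / n_g) ** alpha
```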
Turning now to the way N_s increases with time, we confirm the early findings of Sousbie & Colombi (2016) that this increase is very dramatic, which forces us to stop the runs after only a limited number of dynamical times. This highlights again the main weakness of the adaptive tessellation approach. Yet, in most cases, the increase is not exponential. This is best illustrated by the third line of panels of Figs. 15 and 16, which display the logarithmic slope of N_s as a function of expansion factor. For the three sine waves case, this slope suddenly grows up to a peak around 10−30, then slowly decreases with time but still with very high values, of the order of 7−9 at the latest times considered in the figures. Note that the decrease is not obvious in the high resolution simulations of the axisymmetric case, but this is inconclusive due to the limited time range available.
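The logarithmic slope plotted in the third line of panels can be estimated directly from the measured counts. The snippet below is a minimal sketch using centred finite differences; any smoothing actually applied to the curves in the figures is not reproduced.

```python
import numpy as np

def logarithmic_slope(a, n_s):
    """Estimate d ln N_s / d ln a from discrete samples of the expansion
    factor a and the simplices count N_s (centred differences; non-uniform
    spacing is handled by numpy.gradient)."""
    ln_a = np.log(np.asarray(a, dtype=float))
    ln_ns = np.log(np.asarray(n_s, dtype=float))
    return np.gradient(ln_ns, ln_a)
```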
To model the behaviour of N_s with expansion factor after the peak, an ellipse is adjusted to the black curves of the second line of panels of the figures (red squares). These curves correspond to the lowest force resolution simulations, with n_g = 128, for which the available time range is the largest. The portion of ellipse corresponds to equation (19), a four-parameter curve in linear-logarithm space whose parameter vector w = (w_0, w_1, w_2, w_3) is determined with a simplex fitting algorithm (e.g., Press et al. 1992) in the interval of values of a covered by the red squares on the figures. For completeness, the values of w are listed in Table 3. A non-exponential behaviour of the simplices count reflects a quiescent behaviour of the dynamics, or, in other words, the absence of chaos. This is what we find for the early, monolithic, violent relaxation phase of halos growing from three sine waves initial conditions, except possibly in the axisymmetric case, but this is a very degenerate configuration. These results have to be interpreted with caution, because the measurements cover a very limited time range in the highest resolution simulations. Convergence with respect to spatial resolution n_g is not achieved. This is also clearly illustrated by the sheet volume measurements discussed further below. This lack of convergence is even more pronounced in the CDM simulations. In fact, the blue curve on the right panel of the third line of Fig. 16 suggests an exponential behaviour of N_s at advanced times in the CDM case. This behaviour is seen only in the highest force resolution simulation and only for a short time. This result is inconclusive but indicates that mergers, which actually take place during this small interval of time (see Fig. 9), play an important role in introducing some chaotic signature.
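A possible implementation of such a fit is sketched below. The functional form of the ellipse portion is an assumption made here for illustration (equation 19 itself is not reproduced in this excerpt), and the function names are hypothetical; only the use of a Nelder-Mead simplex optimiser follows the text.

```python
import numpy as np
from scipy.optimize import minimize

def ellipse_portion(a, w):
    # Assumed parameterization of an ellipse portion in linear-log space:
    # log10 N_s = w0 + w1 * sqrt(1 - ((a - w2) / w3)**2).
    # The actual equation (19) of the paper may differ in detail.
    w0, w1, w2, w3 = w
    arg = np.clip(1.0 - ((a - w2) / w3) ** 2, 0.0, None)
    return w0 + w1 * np.sqrt(arg)

def fit_ellipse_portion(a, n_s, w_init):
    """Least-squares adjustment of the four-parameter curve with a simplex
    (Nelder-Mead) algorithm, in the spirit of the fit shown as red squares."""
    a = np.asarray(a, dtype=float)
    y = np.log10(np.asarray(n_s, dtype=float))
    cost = lambda w: np.sum((ellipse_portion(a, w) - y) ** 2)
    return minimize(cost, w_init, method="Nelder-Mead").x
```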
Bottom panels of Figs. 15 and 16 show the evolution of the phase-space sheet volume as a function of expansion factor. At linear order in the geometric representation, this volume is given by the sum of the elementary volumes of each tetrahedron it is composed of. To compute the three-dimensional volume of a tetrahedron with six-dimensional coordinates, one can for instance first find the 3D submanifold in which this tetrahedron lies, and then compute the volume of the tetrahedron in this submanifold. The phase-space sheet volume is therefore a quantity difficult to interpret because it mixes configuration and velocity spaces: it depends on the way velocities are scaled with respect to positions, that is, on the choice of metric. Here, the figures assume unit box size and Hubble constant.
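The tetrahedron volume computation described above boils down to a Gram determinant: the three edge vectors span the 3D submanifold, and the 3-volume follows from their pairwise scalar products. A minimal sketch:

```python
import numpy as np

def tetrahedron_volume_6d(vertices):
    """3-volume of a tetrahedron whose four vertices have 6D phase-space
    coordinates.  The Gram matrix of the three edge vectors gives the squared
    volume of the spanned parallelepiped; dividing by 6 yields the tetrahedron."""
    v = np.asarray(vertices, dtype=float)   # shape (4, 6)
    edges = v[1:] - v[0]                    # three edge vectors, shape (3, 6)
    gram = edges @ edges.T                  # 3 x 3 matrix of scalar products
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / 6.0
```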
Even if it is difficult to interpret in detail, the volume of the phase-space sheet remains a physical quantity, so it should not depend on any simulation parameter. This is not at all the case as soon as the first halos form. Indeed, except in the quasi-linear regime, where predictions of linear Lagrangian perturbation theory successfully reproduce simulation measurements nearly until collapse time (red diamonds on the bottom panels of Fig. 15), the results depend strongly on spatial resolution n_g (but not significantly on other control parameters, as expected).
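Summing the elementary volumes over the whole tessellation, with an explicit choice of velocity scaling, gives the sheet volume plotted in the bottom panels. The sketch below assumes a simple global rescaling of velocities to encode the metric choice; the bookkeeping inside ColDICE is of course more involved.

```python
import numpy as np

def sheet_volume(vertices, simplices, velocity_scale=1.0):
    """Total phase-space sheet volume at linear order: sum of tetrahedron
    3-volumes.  `vertices` is an (N, 6) array of (x, y, z, vx, vy, vz),
    `simplices` lists quadruplets of vertex indices, and `velocity_scale`
    encodes the metric choice (unit box size and Hubble constant -> 1)."""
    v = np.array(vertices, dtype=float)
    v[:, 3:] *= velocity_scale
    total = 0.0
    for tet in simplices:
        idx = np.asarray(tet)
        edges = v[idx[1:]] - v[idx[0]]      # (3, 6) edge vectors
        gram = edges @ edges.T
        total += np.sqrt(max(np.linalg.det(gram), 0.0)) / 6.0
    return total
```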
Halo formation marks the transition between the quasi-linear regime and fast growth of the phase-space volume. When the value of n_g is reduced, this growth is slightly delayed and becomes very significantly less prominent. This reflects the fact that during the violent relaxation phase, the phase-space sheet is subject to many foldings in the centre of the system, where a power-law singularity builds up, as discussed in detail in the next section. The number of these foldings and the corresponding increase of the volume strongly depend on force resolution. The visual inspections performed in §§ 3.1 and 4.2 indeed show that the structure of the halos is pretty insensitive to force resolution in the outer parts of the system, but it becomes more and more intricate in the centre when increasing n_g. While the phase-space sheet volume does not seem, for this reason, to be a very useful quantity to study, it provides a robust tool for testing rigorous convergence with respect to force resolution. Note finally that the exponential behaviour seen for the simplices count in the high resolution CDM simulation with L = 25 pc/h is not obvious on the bottom right panel of Fig. 16, but large fluctuations introduced by mergers make the results difficult to interpret.
Profiles
In this section, we perform classic measurements of the radial density profile, ρ(r), as well as the pseudo phase-space density (equation 5). Details on how the measurements are performed are provided in Appendix A.2.
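As a rough illustration of what such measurements involve (the actual procedure of Appendix A.2 may differ, e.g., in the binning and centring), one can spherically bin equal-mass particles around the halo centre and form Q(r) = ρ(r)/σ_v(r)^3:

```python
import numpy as np

def radial_profiles(pos, vel, particle_mass, centre, r_bins):
    """Spherically averaged density rho(r) and pseudo phase-space density
    Q(r) = rho / sigma_v**3 from equal-mass particles.  `r_bins` are the
    radial bin edges; sigma_v is the 3D velocity dispersion in each shell."""
    r = np.linalg.norm(pos - centre, axis=1)
    idx = np.digitize(r, r_bins) - 1
    shell_vol = 4.0 / 3.0 * np.pi * (r_bins[1:] ** 3 - r_bins[:-1] ** 3)
    nbins = len(r_bins) - 1
    rho = np.zeros(nbins)
    q = np.zeros(nbins)
    for i in range(nbins):
        sel = idx == i
        n_in_shell = np.count_nonzero(sel)
        if n_in_shell < 2:
            continue
        rho[i] = particle_mass * n_in_shell / shell_vol[i]
        sigma_v = np.sqrt(np.sum(np.var(vel[sel], axis=0)))  # 3D dispersion
        if sigma_v > 0.0:
            q[i] = rho[i] / sigma_v ** 3
    return rho, q
```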
The objective is to revisit phases (ii) and (iii) of the history of dark matter halos depicted in the Introduction. After careful tests of force and mass resolution, we examine the early violent relaxation phase of dark matter protohalos and the subsequent convergence to a universal NFW-like profile, with detailed studies of the power-law slopes of the projected and pseudo phase-space densities. One important question is whether mergers represent a sine qua non condition for the convergence to NFW. This issue will be addressed by comparing the history of CDM halos to that of idealised halos obtained from three sine wave initial conditions, which experience a purely monolithic evolution.
This section is thus organized as follows. Exploring force resolution in § 6.1 will allow us to build a simple and approximate recipe to select the interval of scales supposedly not affected by softening of the force. Turning to mass resolution in § 6.2, we shall see that particle number in the PM simulations does not affect the results significantly at the coarse level for all the simulations we performed, except maybe at late time, depending on the desired accuracy. Then, § 6.3 examines the different phases of the history of density profiles for all the sine wave initial conditions as well as the five halos extracted from the two ensembles of CDM simulations. Likewise, § 6.4 deals with the pseudo phase-space density. The measurements will be put in perspective with respect to numerous and well known results in the literature. Figure 17 examines in detail the effects of changing force resolution for the three sine waves simulations with ϵ = (3/4, 1/2). On the left panels, the quantity displayed is r^α ρ(r), with different values of α to better emphasize the various phases of the dynamics, as discussed in detail in § 6.2. To highlight the differences between the various curves, the right panels display ratios between the measured density and the one obtained from the highest available resolution run. At collapse time, considered in the top panels, the structure of the local pancake singularity implies ρ(r) ∝ r^(−α) with α = 2/3 in the centre of the halo (e.g., Arnold, Shandarin, & Zel'dovich 1982). Middle panels examine early time, during which violent relaxation takes place. In this case, the radial density is close to a power-law of slope α ≃ 1.6, consistent with the literature. Bottom panels show mid and late time, where the system slowly relaxes to an NFW-like profile, with a plateau consistent with the secondary spherical infall prediction, α = 2.25 (Bertschinger 1985). One can also observe at mid time a regime with α = 1.2 at small radii.
Force resolution
As roughly demonstrated by the two top lines of panels, the mesh cell size r_min = ε ≡ L/n_g seems a good estimate of the lower bound of the trustable dynamical range at collapse time and during the early relaxation phase. This means that the effective resolution of the codes is close to optimal in this regime, as long as the quantity of interest is the radial density at the coarse level. This includes the PM simulations, despite the additional force softening introduced by Hanning filtering and TSC interpolation.
Softening of the force has however some non-trivial consequences at later epochs. It indeed cumulates with time by contaminating increasingly large scales. While it seems difficult to predict this effect analytically, it can be modelled in a phenomenological way by examining PM runs of various force resolutions in order to isolate the region contaminated by softening, which is represented in orange on the bottom panels of Fig. 17. This region is determined with the following phenomenological formula,

r_min = (L/n_g) exp[ log(a/a_early)/log(a_late/a_early) × log 2.4 ],    (20)

where a_early = 0.045 and a_late = 0.185 correspond to early and late time, respectively, for ϵ = (3/4, 1/2). This equation is such that r_min = L/n_g for a = a_early and r_min = 2.4 L/n_g for a = a_late, while r_min presents a power-law behaviour as a function of expansion factor between these two values. The same formula will be employed for measurements presented in Figs. 19 and 20, but with different values of a_early and a_late for the CDM halos, namely a_late = 0.5 for all halos and a_early = 0.084, 0.111, 0.047, 0.067 and 0.067 for halos 1 to 5, respectively. When looking more closely at the density profile, one can observe some fluctuations. On the top panels, the density profile should be perfectly smooth, which is almost the case for the Vlasov runs. The fluctuations in the PM simulations are simply related to discreteness effects. Their amplitude is controlled by the combination of bin width and number of particles n_p^3. Note that these fluctuations are much larger than the one-sigma level prediction from Poisson noise shown as very thin lines on the top panels. Indeed, at collapse time, the system can still be seen as a distorted regular network of particles, which can introduce significant aliasing effects on the binned radial density.
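The trustable-range boundary of equation (20) is easy to evaluate; a minimal sketch, with the 2.4 boost factor kept as an explicit argument, is given below.

```python
import numpy as np

def r_min(a, L, n_g, a_early, a_late, boost=2.4):
    """Phenomenological lower bound of the trustable radial range
    (equation 20): equals L/n_g at a = a_early, boost * L/n_g at a = a_late,
    and behaves as a power law of the expansion factor in between."""
    expo = np.log(np.asarray(a, dtype=float) / a_early) \
        / np.log(a_late / a_early) * np.log(boost)
    return (L / n_g) * np.exp(expo)
```

For ϵ = (3/4, 1/2), for instance, r_min(0.185, 1.0, 512, 0.045, 0.185) returns 2.4/512, matching the late-time value quoted above.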
At later times, fine variations in the density profile are mainly related to properties of the caustic pattern. Obviously, force resolution can affect this pattern, especially in the PM code, for which additional softening of the force cannot be ignored anymore. Keeping in mind the logarithmic scale used in Fig. 17, density fluctuations sufficiently far from the centre of the system still remain pretty insensitive to force resolution, consistently with the visual inspections performed in previous sections, except at late time, shown in the top curves of the bottom panels. In the latter case, force resolution seems to influence details of the radial density nearly up to the size of the system, which reflects again the cumulative nature of force softening. Hence, note finally that using fine instead of coarse details of the density profile to determine the available trustable range would impose much more restrictive constraints than equation (20), and this even more so for the PM code. Figure 18 is analogous to Fig. 17, but studies mass resolution for the PM code. The main result of this figure is that particle count does not significantly affect the density profile at the coarse level for all the values of n_p considered.
Mass resolution
Fig. 17. Radial density profile: force resolution analysis of the three sine waves simulations with ϵ = (3/4, 1/2). Three regimes are considered, as detailed in Table 2, namely collapse time in the top panels, early time in the middle panels (with a multiplication by a factor 2 of the curves corresponding to a = 0.045 for clarity) and mid to late time in the bottom panels (with a multiplication by a factor 3 and 2 of the curves corresponding to a = 0.185 on the left and right panel, respectively). As discussed in the main text, to have a better view of each regime, the quantity represented in the left column is the logarithm of r^α ρ(r), with α = 2/3, 1.6 and 2.25, respectively in top, middle and bottom panels. Various simulations are considered, both in the Vlasov and PM cases, as indicated on each panel through the values of n_g, n_s and n_p also shown in Table 1. Just note that for the long dashed black curve corresponding to a ColDICE run with n_g = 512 and n_s = 256, the simulation used is VLA-ANI2-HRS, but VLA-ANI2-HR would provide nearly identical results. On the bottom panels, the left part of the curves corresponding to regions supposedly influenced by small scale force softening is displayed in orange, as discussed in the main text. To better highlight the differences between the various curves, the right column displays density ratios: on the top panel, the quantity displayed is ρ/ρ_Vlasov,1024, where ρ_Vlasov,1024 is the density measured in our highest resolution ColDICE run (corresponding to the dashed green curve); on the two bottom panels, the quantity displayed is ρ/ρ_PM,1024, where ρ_PM,1024 is the density measured in our highest resolution PM run (corresponding to the solid green curves).
Fig. 18. Radial density profile: mass resolution analysis of the three sine waves PM simulations with ϵ = (3/4, 1/2). This figure is analogous to Fig. 17 except that it focuses on the effect of changing the number of particles n_p^3 in the N-body runs. All the simulations have the same spatial resolution, n_g = 512, but different values of n_p, namely n_p = 512 (black, PM-ANI2-HR), 256 (red, PM-ANI2-MR) and 128 (blue, PM-ANI2-LR). On the right panels, the ratio considered is ρ/ρ_PM,512, where ρ_PM,512 is the density measured in the PM run with n_g = n_p = 512 corresponding to the black curves.

Turning to finer details of the profile, we already mentioned in the previous section the aliasing effects on the measured radial density at collapse time due to the memory of the initial set-up of the particles on a regular pattern. Even with the numerous complex processes already taking place during the early part of the violent relaxation phase, this memory can persist in the multistream region, as long as mixing is not sufficiently rich to locally (pseudo-)randomize the particle distribution. Indeed, the intrinsic softening nature of the Green function used to solve the Poisson equation can temper, at least for some time, the effects of the discrete nature of the representation of the system with particles. Hence, on the second line of panels of Fig. 18, we can suppose that most of the additional fluctuations appearing at various radii when diluting the system are of the same nature as on the top panels, as already discussed in § 3.2. At some point, however, part of these fluctuations are not transient anymore. Along with some more subtle collective effects related to shot noise (e.g., Colombi et al. 2015), they can grow through gravitational instability and introduce significant deviations from the prediction of the mean field limit. This clearly shows up at late time as noticeable differences between the top curves of the bottom panels of Fig. 18.
Time evolution: density
Figures 19 and 20 show, for all the halos studied in this paper, the radial density profile ρ(r) and the pseudo phase-space density Q(r) (equation 5).
First, we focus on the left panels, which display r^α ρ(r) as a function of r, where α ranges in the interval [1.5, 1.8]. This value of the logarithmic slope of the density profile reflects the behaviour seen systematically in the early monolithic phase of the evolution of the protohalos, as found in many previous works, both for the three sine waves case (e.g., Nakamura 1985; Moutarde et al. 1995) and CDM protohalos (Diemand, Moore & Stadel 2005; Ishiyama, Makino & Ebisuzaki 2010; Anderhalden & Diemand 2013; Ishiyama 2014; Angulo et al. 2017; Delos et al. 2018a; Delos et al. 2018b). What is mainly new here is that we provide more accurate investigations of the three sine waves initial conditions with a wide variety of values of ϵ.
We find for the three sine waves that, after collapse along the three axes, the early phase of violent relaxation always builds the same kind of power-law profile, with α ≃ 1.6 ± 0.1 (the error is estimated by visual inspection), whatever ϵ = (ϵ_a, ϵ_b) with ϵ_a > 0 and ϵ_b > 0 (having one or two of the coordinates of the vector ϵ null reduces the dimensionality of the problem and obviously leads to different slopes). Note a trend of the slope to increase from α ≃ 1.5 to α ≃ 1.7−1.8 when going from the quasi-1D to the axisymmetric configuration, as indicated by the group of three thin lines on the top left panel of Fig. 19.
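Slopes such as the α ≃ 1.6 ± 0.1 quoted here are, in practice, read off from the compensated profiles. A simple way to extract them, a least-squares fit in log-log space over a chosen radial range, is sketched below; it is only a crude stand-in for the visual inspection used in the text.

```python
import numpy as np

def density_slope(r, rho, r_lo, r_hi):
    """Logarithmic slope alpha of rho(r) ~ r**(-alpha), estimated by a linear
    least-squares fit of log rho versus log r restricted to [r_lo, r_hi]."""
    r = np.asarray(r, dtype=float)
    rho = np.asarray(rho, dtype=float)
    sel = (r >= r_lo) & (r <= r_hi) & (rho > 0)
    slope, _ = np.polyfit(np.log(r[sel]), np.log(rho[sel]), 1)
    return -slope
```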
While the axisymmetric configuration is locally equivalent, at leading order, to a spherical Gaussian overdensity, ρ(r) ∝ 1 − (2πr/L)^2/2 for r ≪ L, the evolution of the system does not lead to the expected slope α = 2.25 predicted by the secondary spherical infall model (Gott 1975; Gunn 1977; Fillmore & Goldreich 1984; Bertschinger 1985) and measured approximately in three-dimensional N-body simulations of spherical Gaussian overdensities (e.g., Gosenca et al. 2017). One has thus to keep in mind that the axisymmetric three sine waves configuration remains, from the dynamical point of view, strongly anisotropic compared to the purely spherical case.
During the monolithic early violent relaxation phase, the "CDM" experiments behave similarly to the three sine waves, with a slope ranging approximately in the interval [1.5, 1.8], in agreement with the early measurements of Diemand, Moore & Stadel (2005), but slightly larger than the value obtained from more recent investigations, which rather suggest α ∈ [1.3, 1.5] with a preference for α ≃ 1.5 (Ishiyama, Makino & Ebisuzaki 2010; Anderhalden & Diemand 2013; Ishiyama 2014; Angulo et al. 2017; Delos et al. 2018a; Delos et al. 2018b). These slight inconsistencies can be explained at least partly, but in order to understand them, we first need to examine the evolution with time of the various systems under consideration, which we do now.
Whatever the nature of initial conditions, we notice that the power-law behaviour seen at early time disappears at some point and all the systems relax to an "NFW"-like profile (Navarro, Frenk & White 1996), or more precisely, in the analyses performed here, an Einasto-like profile (Einasto 1965), with parameters given in Table 4. The value of γ decreases with time down to γ = 0.15 ± 0.03, in agreement with numerous previous works (e.g., Navarro et al. 2004; Merritt et al. 2006; Duffy et al. 2008; Gao et al. 2008; Stadel et al. 2009; Navarro et al. 2010; Dutton & Macciò 2014; Klypin et al. 2016). This somewhat inevitable evolution towards NFW is already well known in the literature. In the CDM case, the change of the nature of the profile is generally interpreted as the result of multiple mergers (e.g., Syer & White 1998; Ishiyama 2014; Ogiya, Nagai & Ishiyama 2016; Angulo et al. 2017). However, the relaxation to an NFW-like profile is also observed in the three sine waves simulations, that is, even in the absence of mergers. This result is robust against particle shot noise, as illustrated by the bottom panels of Fig. 18. The fact that even in the monolithic case the density profile and its central slope can change and also relax to NFW is not new (see for instance Huss, Jain & Steinmetz 1999; MacMillan, Widrow & Henriksen 2006; Ogiya & Hahn 2018). Our numerical experiments confirm again the attractor nature of the NFW profile, with a spectacular convergence of all the sine wave simulations at late time whatever the value of ϵ we considered, as illustrated by the top curves of the upper left panel of Fig. 19.
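For reference, a commonly used form of the Einasto profile is sketched below; the exact convention behind the parameters of Table 4 is not reproduced in this excerpt, so the parameterization here is an assumption.

```python
import numpy as np

def einasto_profile(r, rho_m2, r_m2, gamma):
    """Einasto profile rho(r) = rho_-2 * exp(-(2/gamma) * ((r/r_-2)**gamma - 1)),
    whose logarithmic slope d ln rho / d ln r = -2 (r/r_-2)**gamma equals -2
    at r = r_-2; gamma plays the role of the index quoted in the text."""
    x = np.asarray(r, dtype=float) / r_m2
    return rho_m2 * np.exp(-2.0 / gamma * (x ** gamma - 1.0))
```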
Fig. 19. Time evolution of the radial density profile (left) and of the pseudo phase-space density (right). The top two panels correspond to the results obtained for the three sine waves simulations for different values of ϵ. Three regimes are considered: early time, mid time (multiplied by a factor 2 for clarity on the left panel) and late time (multiplied by a factor 7 on the left panel), as detailed in Table 2. The continuous curves of various colors correspond to the highest resolution PM runs, namely PM-Q1D-HR, PM-ANI1-HR, PM-ANI2-UHR, PM-ANI3-HR and PM-SYM-HR in Table 1. Only the parts of the curves that are not supposed to be influenced by force softening are shown. In addition, the dashed curves of the same colour provide the measurements obtained at early time in the high resolution Vlasov simulations, namely VLA-Q1D-HR, VLA-ANI1-HRS, VLA-ANI2-HR, VLA-ANI3-HRS and VLA-SYM-HR. To emphasize the very clear power-law behaviour present at early time, the quantity actually displayed on the left panel is r^α ρ(r), with α = 1.6. In addition, thin lines indicate different slopes, in particular ρ(r) ∝ r^(−2.25) and Q(r) ∝ r^(−1.875) as predicted by the secondary spherical infall model; dashed curves show Einasto profiles with parameters given in Table 4; finally, three close very thin lines on the top left panel also indicate small variations in the logarithmic slope, α = 1.5, 1.6 and 1.7. The next two lines of panels are analogous to the two top ones, but correspond respectively to halos 1 and 2 extracted from the "CDM" runs with L = 12.5 pc/h. Several values of the expansion factor are considered to show various stages of the evolution. Again, the continuous curves of various colors correspond to the PM run PM-CDM12.5-HR. They become thinner at small scales, where force softening is thought to influence the results. The dashed curves of the same colour correspond to Vlasov simulations of the highest possible resolution available, namely VLA-CDM12.5-HR for a = 0.084 and 0.11, VLA-CDM12.5-MR for a = 0.12 and VLA-CDM12.5-LR for a = 0.16. Mergers, which induce a temporary flattening of the density profile, are emphasized by thin lines with α = 0.
Fig. 20. Time evolution of the radial density profile and the radial pseudo phase-space density: continued. Same as in Fig. 19 but for "CDM" halos 3, 4 and 5, extracted from the "CDM" runs with L = 25 pc/h. Note that halos 4 and 5 merge with halo 3. One of these mergers is clearly captured by the orange curve corresponding to a = 0.3 on the top left panel.
Note a potentially interesting behaviour that can be observed at mid time and at small radii for ϵ = (3/4, 1/2) and (1, 1), which seems compatible with the power-law ρ(r) ∝ r^(−1.2). When examining close successive snapshots, it can be noticed that this plateau seems to be the result of a progressive change of the slope of the power-law plateau seen at early time, along with a reduction of the extension of this region. However, the simulations do not have sufficient force resolution to fully confirm this intermediary regime, especially since this α ∼ 1.2 slope can also be observed at small scales in the orange part of some of the curves on the bottom left panel of Fig. 17. In this case, it results from softening of the force at small scales. Still, keeping this α ≃ 1.2 regime in mind, one may then isolate another scaling range with a slope compatible with the prediction α = 2.25 from secondary spherical infall, evidenced even better for ϵ = (3/4, 1/2) in the bottom left panel of Fig. 17. Finally, another regime at larger scales compatible with the slope α = 3 can also be guessed before the cut-off limit corresponding to the halo extension. Turning to the CDM halos, the same argument may apply as long as their evolution is monolithic. They are not as well resolved as in the three sine waves simulations, so the examination of the left panels of Figs. 19 and 20 is inconclusive in this respect, although one might be tempted to say that halo 1 and halo 2 follow the trend just discussed above for a = 0.16. We now come back to the mild discrepancy observed between the logarithmic slope of our CDM microhalos during the early relaxation phase and recent investigations in the literature. To suggest explanations of the differences, one can first notice that the simulations realised in the present work lack dynamical range. The periodic box size L is very small, which makes the interpretation of the results problematic, because the scale corresponding to L should, in reality, become highly nonlinear very quickly, which means that the tidal and merger history of the halos under consideration is unrealistic. In the real CDM scenario, one expects frequent mergers; therefore, a large fraction of protohalos could be composite and pass through the monolithic relaxation phase only shortly, if at all, which implies a smaller value of α. Indeed, the picture in which first microhalos form from a single well defined singularity might be oversimplified. It still needs to be refined, both from the theoretical point of view (following the footsteps of e.g., Arnold, Shandarin, & Zel'dovich 1982; Hidding, Shandarin, & van de Weygaert 2014; Feldbrugge et al. 2018) and from the numerical point of view, by studying in detail the topology of the dynamical history of halos in Lagrangian and Eulerian spaces. In the CDM simulations studied in the present work, all the selected halos pass through a monolithic stage, sufficiently long for the establishment of a "clean" violent relaxation phase. Therefore, we might expect our CDM halos to have an initial power-law profile with α close to that of the three sine waves case, hence α ≳ 1.5, while in simulations with larger box sizes, such as in Ishiyama (2014) and Angulo et al. (2017), the composite yet possibly more realistic nature of the halos can lead to α ≲ 1.5.
Another limit of the simulations in the present work is their force resolution, which implies that only a restricted scale range is available to measure the slope, and this might lead to an overestimate of α. For instance, the simulation used in Ishiyama, Makino & Ebisuzaki (2010) has a box size (30 pc) comparable to our runs, but much higher force resolution (note, though, that there is much less than one particle per softening length in that work, which might be an issue), which widens considerably the dynamical range for measuring radial density profiles. For the CDM halos analysed, they clearly find α ≃ 1.5. On the other hand, we have seen in the three sine waves simulations that the early power-law plateau seems to become less steep and less extended with time, with α ≃ 1.2 at mid time, as a result of the natural evolution of the halo. In other words, the slope is time dependent, and measuring it exactly just after early relaxation is non-trivial. While resolution issues are probably real for the CDM halos analysed in the present work, this might also explain why the values of α estimated here are slightly larger than in recent analyses in the literature.
In conclusion, for halos going through a truly monolithic violent relaxation phase during their formation, it seems reasonable to think that the logarithmic slope of the power-law density profile building up during this phase can be slightly larger than 1.5. In the measurements performed for this article, α was indeed found to range in the interval [1.5, 1.8]. Then, various dynamical processes, such as evolution under slow infall as well as successive mergers, decrease the effective value of α and progressively drive the system to the dynamical attractor embodied by the Einasto profile. However, bear in mind again that it has not been proven here that all dark matter halos experience a pure monolithic phase during the early stages of their evolution, nor that a definitive answer to this question exists yet.

Time evolution: pseudo phase-space density

The right columns of Figs. 19 and 20 plot the pseudo phase-space density Q(r) as a function of radius r. In agreement with previous works (e.g., Taylor & Navarro 2001; Navarro et al. 2010; Ludlow et al. 2010), the function Q(r) presents a power-law behaviour compatible with the prediction from the secondary spherical infall model, Q(r) ∝ r^(−α_Q) with α_Q = 1.875. However, the fact that this result stands irrespective of the dynamical state of the halos, even during the early violent relaxation phase and mid time (except of course when a merger temporarily perturbs the profile), is non-trivial and somewhat new. To our knowledge, there are indeed only a few measurements in the literature of Q(r) for CDM halos during all the stages of the evolution, in particular the early violent relaxation phase. One interesting exception is the recent work of Ishiyama, Makino & Ebisuzaki (2010), which suggests deviations from the prediction of secondary spherical infall, with α_Q ≃ 2.25 at small r. Our measurements are not sufficiently accurate to confirm this.
A power-law behaviour of the function Q(r) is a clear signature of self-similarity. In the pure self-similar framework and assuming spherical symmetry, the projected density ρ(r) and the pseudo phase-space density are both power-laws with related logarithmic slopes. The measurement of Ishiyama, Makino & Ebisuzaki (2010), whose dark matter halos have α = 1.5, is strikingly consistent with the self-similar prediction.
Recall however that we are far from spherical symmetry. In the triaxial case, the nature of the self-similar solutions obviously changes, and even if the expected density profile slope at small radius is the same as the one predicted in spherical symmetry with non-zero angular momentum, it may be reached only at very small radii (e.g., Lithwick & Dalal 2011). Additionally, the smooth nature of the initial overdensity deviates from the actual assumptions intrinsic to self-similar solutions, which makes the interpretation of the early monolithic phase of the evolution difficult in this framework. Recall however that in the purely spherical case, as already mentioned in the previous section, a smooth overdensity evolves to a state compatible with the secondary infall solution (Gosenca et al. 2017). Subsequent mergers add a tidal torque contribution, that is, the generation of angular momentum from accretion, which further complicates the interpretation of the results (although this can also be approached in spherical symmetry in a self-similar fashion, e.g., Zukin & Bertschinger 2010a,b; Lapi & Cavaliere 2011).
Even if the evolution of the phase-space density is self-similar, the finite extension of the system can imply that quantities such as the projected density ρ(r) or the velocity dispersion σ_v(r) are not pure power-laws, due to the effects of the cut-offs. But ratios such as Q(r) might just compensate for this finite size effect and better evidence the self-similar nature of the dynamics (Alard 2013). Hence the fact that Q(r) is a pure power-law is not incompatible with ρ(r) not being so, as Figs. 19 and 20 show. This is also consistent with solutions of Jeans' equation (Dehnen & McLaughlin 2005).
It is worth noticing that the function Q(r) tends to be more "universal" than the density profile. For instance, in the quantitative examples discussed above, its logarithmic slope covers a range of values twice smaller than that of the density. Similarly, according to Dehnen & McLaughlin (2005), the resolution of Jeans' equation assuming a pure power-law for Q_r(r) ≡ ρ(r)/σ_v,r(r)^3, with σ_v,r(r)^2 the radial velocity dispersion, provides, in practice, consistent solutions only if α_Q,r ≡ −d ln Q_r/d ln r = α_crit ≡ 35/18 − 2β(r = 0)/9, where β(r) ≡ 1 − σ_v,⊥(r)^2/σ_v,r(r)^2 is the velocity anisotropy parameter, assumed to be linearly related to the local logarithmic density slope, with σ_v,⊥(r)^2 the transverse velocity dispersion. It is known that Q_r(r) and Q(r) present very similar power-law behaviours, with α_Q,r very slightly larger than α_Q (see, e.g., Ludlow et al. 2010). In other words, α_Q is in practice never expected to be very different from the generic value of the secondary spherical infall model, α_Q = 1.875.
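The critical slope of Dehnen & McLaughlin (2005) quoted above is a simple function of the central anisotropy; for an isotropic centre it equals 35/18 ≈ 1.94, already close to the secondary-infall value 1.875. A minimal sketch:

```python
def alpha_crit(beta_0):
    """Critical logarithmic slope of Q_r(r) from the Jeans-equation argument:
    alpha_crit = 35/18 - 2 * beta(0) / 9, with beta(0) the central velocity
    anisotropy (beta = 0 for an isotropic velocity distribution)."""
    return 35.0 / 18.0 - 2.0 * beta_0 / 9.0
```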
In conclusion, even if it seems striking that the pseudo phase-space density is roughly compatible with the power-law predicted by the standard secondary spherical infall model, α_Q = 1.875, it is not surprising. It merely reflects the self-similar nature of the dynamics in phase-space (Alard 2013). Such self-similarity is also clearly suggested by direct measurements of the history of orbits of particles in dark matter halos (Sugiura et al. 2020). While the logarithmic slope of Q(r) changes little with time, we have seen in the previous section that the density profile presents some striking transformations during the course of the dynamics. This is obviously related to changes in the velocity distribution, in particular to the evolution of the anisotropy parameter β(r). Indeed, it is well known that radial instabilities play an important role in the internal dynamics of halos (e.g., Huss, Jain & Steinmetz 1999; MacMillan, Widrow & Henriksen 2006).
Conclusion
In this article, the formation and evolution of dark matter halos have been studied in detail with the Vlasov code ColDICE and a traditional N-body Particle-Mesh (PM) code. Two kinds of initial conditions were considered: on one hand, the highly symmetrical set up with three sine waves, on the other hand, neutralino Cold Dark Matter (CDM) fluctuations in very small periodic boxes of size 12.5 and 25 pc/h. In these analyses, which include projected density and phase-space diagrams, radial density profiles and pseudo phase-space density, we paid particular attention to numerical convergence with respect to force resolution, traced by the resolution n_g of the mesh used to solve the Poisson equation, and with respect to mass resolution, traced by the initial number n_s^3 of vertices in ColDICE and the number n_p^3 of particles in the PM simulations. The main results of this paper can be summarised as follows:
- The N-body method is robust when there is typically more than one particle per softening length of the force, that is n_p ≳ n_g. This result is well known (e.g., Melott et al. 1997) but was never tested with comparisons of the N-body approach to a pure Vlasov code. Of course, because of its high computational cost, ColDICE can be used to follow the evolution of halos only in the early relaxation phase. During this phase, discrete noise of particles has little effect on the dynamical evolution of the system and the agreement between the PM code and ColDICE is excellent. Pushing the PM simulations further, halo profiles are still not affected by N-body relaxation at the coarse level; however, some instabilities clearly develop at small scales when n_p < n_g.
- Early violent relaxation phase of protohalos (also called microhalos). During this phase, the halos are found to display a power-law density profile, ρ(r) ∝ r^(−α), with α ∈ [1.5, 1.8], which agrees well with the literature but with slightly larger values of α. Indeed, most previous investigations of CDM microhalo profiles found α ∈ [1.3, 1.5] with a preferred value α ≃ 1.5. One obvious explanation of this difference is that the CDM simulations performed in this work lack dynamical range and that the three sine waves simulations are not sufficiently representative of true dark matter halos, given their very high level of symmetry. Another possible source of the difference might be related to the time at which the slope is measured. Indeed, halo profiles evolve significantly, even in the monolithic stage, which can affect measurements of the logarithmic slope. Finally, the picture in which the halos always first form from a well defined singularity might be oversimplified, even in the context of smooth initial conditions produced by a massive neutralino.
- Complexity. During the early relaxation phase, it is possible to estimate the level of complexity of the ColDICE phase-space sheet by measuring its total volume V_s or the total number of simplices N_s it is composed of. While both V_s and N_s increase very quickly with time after collapse, with a growth rate ranging approximately between a^7 and a^30 for N_s, the increase is not exponential in most cases, which suggests the absence of chaos. Only the highest resolution CDM simulation with box size 25 pc/h, which is the object of several mergers in the period covered by ColDICE, presents a clear signature of exponential growth of N_s. However, these results are inconclusive in the sense that convergence with force resolution is not demonstrated, especially when examining the phase-space sheet volume.
- Phase-space structure. During the early relaxation stage, it is possible to examine in detail the structure of the phase-space sheet using phase-space diagrams. The three sine waves halos display an intricate yet coherent spiral structure which is subject to multiple foldings in phase-space, which can be related to successive collapses along each axis of the dynamics, and which also shows clear signatures of self-similarity. The random nature of CDM initial conditions makes the phase-space structure somewhat fuzzy but still coherent at the coarse level. This structure is of course strongly perturbed by the presence of mergers. Note that the predictions of the Vlasov code seem fairly robust with respect to force resolution when far enough from the centre of the system in terms of softening length of the force field. This is demonstrated as well by the examination of the caustic pattern in the projected density.
- Convergence to the NFW-like dynamical attractor. After careful tests of convergence, the PM code was used to follow further the evolution of the halos. As already well known from many investigations in the literature, the initial power-law behaviour breaks down and the density profile converges to the well known dynamical "NFW-like" universal attractor, irrespective of initial conditions, even in the three sine waves simulations. This clearly shows again that mergers do not represent a necessary condition for convergence to NFW and that radial instabilities, which change the properties of the velocity distribution, can also play a major role.
- Pseudo phase-space density. The pseudo phase-space density Q(r) = ρ(r)/σ_v(r)^3 measured in all the halos is compatible with the power-law Q(r) ∝ r^(−1.875) predicted by the secondary infall model at all times, even during the early relaxation phase. This result is of course well known for relaxed halos, but it is non-trivial when considering the early and intermediary phases of their evolution, where they display very different forms of the density profile. It represents a clear signature of self-similarity of the dynamics in phase-space.
The analyses performed in this work clearly demonstrate that it is possible to perform N-body simulations in a robust way. While the tessellation approach is free of particle shot noise, it is very costly. The extremely quick growth of the phase-space sheet complexity makes this method unaffordable beyond a limited number of dynamical times, whatever its level of optimisation. To solve this problem, it has been proposed that a hybrid implementation be employed, relying on the tessellation in regions where relaxation is incomplete and where the N-body technique can really introduce artificial instabilities, and using particles in dense, dynamically relaxed locations, where the warmness of the system makes the N-body approach much more reliable (Stücker et al. 2020). However, this hybrid approach, which allows one to use the tessellation method when its cost remains affordable while, at the same time, correcting for the main defects of the N-body approach, might seem unnecessarily complex. Instead, following a rather simple but often ignored old numerical strategy (e.g., Melott et al. 1997; Splinter et al. 1998), a better control of the traditional N-body approach could be achieved by making sure that there is everywhere in the computational volume at least one particle per local softening length, the main difficulty being to preserve as much as possible the Hamiltonian nature of the numerical system. This can be achieved with straightforward modifications of current N-body codes based on adaptive mesh refinement (Kravtsov, Klypin, & Khokhlov 1997; Teyssier 2002; Bryan et al. 2014), by improving refinement criteria using constraints based on estimates of local entropy production. Another interesting result of the investigations of this article is the relaxation to a universal profile irrespective of initial conditions, even in the absence of mergers. While this result is not fundamentally new, the detailed analysis of the three sine waves case in various configurations was never performed at the level of accuracy achieved in this work. However, the simulations, even with n_g = 1024, still lack spatial resolution. It would clearly be worth reinvestigating the halos studied in the present work with high spatial resolution N-body simulations, performed in a controlled way just as advocated above, to analyse in detail the evolution of the velocity field structure of the halos. In particular, it would be worth studying how the velocity anisotropy parameter β(r) (equation 23) changes with time, to understand the effects of radial instabilities on the evolution of the density profile and compare them to the effects of mergers.
|
v3-fos-license
|
2018-04-03T05:30:12.599Z
|
2010-01-01T00:00:00.000
|
5071827
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.4103/0019-5545.69243",
"pdf_hash": "869ab62336a0d10171ad33b5c9fc2b5b92ed0db1",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:629",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "5fcfa6e07199a295ea00cdf3778e53ee00f91e29",
"year": 2010
}
|
pes2o/s2orc
|
Sexuality research in India: An update
This review provides the available evidence on sexual dysfunctions in India. Most of the studies have concentrated on male sexual dysfunction and hardly a few have voiced the sexual problems in females. Erectile dysfunction (ED), premature ejaculation (PME) and combinations of ED and PME appear to be main dysfunctions reported in males. Dhat syndrome remains an important diagnosis reported in studies from North India. There is a paucity of literature on management issues with an emergent need to conduct systematic studies in this neglected area so that the concerns of these patients can be properly dealt with.
INTRODUCTION
Human sexuality is inherently related to some of the social and public health problems in India. These problems may involve contraceptive use, child abuse, sex education, legal issues of homosexuality and AIDS. These health problems have a significant impact on existing health infrastructure and budget. These problems also need to be looked at within the context of poverty, stressful living situations, diverse cultural belief systems, quackery, ignorance and inadequate health services. However, there is little recognition of how these health problems are related to human sexuality and its dysfunctions. There is a need to understand how sexual attitudes, beliefs, and values act and influence these problems. Our cultural perspective can also shape the experience and understanding of these disorders. [1] There is a need to research sexual experiences and dysfunctions, which further influence adult behavior patterns in India.
In this review, our aim is to present sexual dysfunction from the Indian perspective. The available data are based on clinical studies of the problem of sexual dysfunction. The authors also concluded that PME is a state of hyper-sexual arousal.
Using the same cohort, Nakra and his colleagues (1978) [5] found that nearly 75% of the patients had practiced masturbation before developing potency disorders and nearly 43% had guilt associated with masturbation. The authors also found nocturnal emission and adolescent homosexual contacts in 95% and 16% of the subjects respectively, and of these, 69% and 39% respectively had associated guilt feelings. 64% of the subjects considered loss of semen harmful to health. Kar and Verma (1978) [6] studied the sexual lives of 72 married psychiatric patients and compared them with 80 married relatives or friends from the same socio-cultural background. With regard to marriage, 63% of subjects with schizophrenia and 24% of manic-depressives were married after the onset of the illness; 48.5% of the patients failed to perform sexually on suhag raat (first honeymoon night after marriage), compared with 18.7% of the controls who faced the same problem. Premature ejaculation was reported in 48% of subjects in the 'patient group' and 40% in controls. Erectile impotence was reported in 27% and 13% in the 'patient group' and 'control group' respectively. 63.4% of subjects from the 'patient group' described their sexual relationship as unpleasant, compared to only 2.5% from the 'control group'.
Kumar and his colleagues (1983) [6] conducted a study on 40 married male neurotics and 22 healthy controls from a teaching hospital setting. They found that the sexual behavior of the neurotics was similar to that of healthy controls before the onset of illness. There was a significant decrease in the frequency of coitus, sexual satisfaction of self, perceived sexual satisfaction of the spouse and sexual adequacy.
Bagadia and his colleagues (1983) [7] used behavioral techniques to treat 26 married males with PME and secondary impotence; 58% of patients improved with those techniques. Gupta and her colleagues (1989) [8] described the application of the Modified Masters and Johnson technique in the treatment of sexual inadequacy in 21 married males. 76.2% of patients showed improvement after this technique.
Avasthi and his colleagues (1994) [9] conducted an outcome study of 66 male patients with psychosexual dysfunction in the context of socio-demographic and clinical variables. Short-term outcome (of one year duration) and long-term outcome (of seven years' duration) of those patients were recorded. Erectile dysfunction (ED), PME, and a combination of ED and PME were reported by 30, 12 and 45% of subjects respectively. Dhat syndrome, with ED/PME, was reported by 9% of the subjects. Nearly 38% of the patients dropped out of the treatment ('dropout group'). At one-year follow-up, nearly 44% of the patients perceived improvement ('improved at one year group'), while the rest did not ('no change at one year group'). At the end of seven years, nearly 70% of the original 66 patients could be recontacted. Significantly, a greater number of subjects from the 'dropout group' had active sexual dysfunction than in the other two groups. The study showed that improvement in the short-term outcome indicated a favorable long-term outcome.
Verma and his colleagues (1998) [10] analyzed data on 1000 consecutive patients with sexual disorders attending the psychosexual clinic at a tertiary care setting. They found premature ejaculation (77.6%) and nocturnal emission (71.3%) to be frequent problems, followed by a feeling of guilt about masturbation (33.4%), small size of the penis (30%) and erectile dysfunction (23.6%). Excessive worry about nocturnal emission, abnormal sensations in the genitals, and venereophobia were reported in 19.5%, 13.6% and 13% of patients, respectively.
A file review of 178 male patients with sexual dysfunction by Avasthi and his colleagues (2003) [11] revealed that high income, married status, presence of partner at evaluation, and liberal attitude towards sexuality increased the chances of selection of behavioral sex therapy. The outcome of therapy was associated with treatment adherence. Participation of the spouse resulted in lower dropout rates.
Kendurkar and his colleagues (2008) [13] assessed the pattern of sexual dysfunction in the patients attending a marriage and sex clinic from 1979 to 2005 by looking into their medical records. After reviewing the data of 1242 patients, they found premature ejaculation to be the most common complaint and the most commonly diagnosed clinical entity, followed by male erectile problems and Dhat syndrome.
Sexual dysfunction in females
As compared to male sexual dysfunction, only a few Indian studies are available in the area of female sexual dysfunction. This area remains largely unexplored. Agarwal (1977) [14] reported a study of 17 female cases of frigidity. All except one presented with neurotic or somatic symptoms. Frigidity was associated with ignorance regarding sexual activity, fear of pregnancy, marital disharmony, lack of emotional atmosphere, tiredness and poor precoital attention. Superficial psychotherapy and guidance helped 65% of the subjects with frigidity.
In the review by Kulhara and Avasthi (1995), [15] there was mention of one unpublished study from Chandigarh which documented 13 female patients out of 464 attenders of a special clinic dealing with marital and sexual dysfunctions. Vaginismus, dyspareunia and lack of sexual desire were the main problems reported.
Kar and Koola (2007) [16] conducted a postal survey among English-speaking persons from a south Indian town and found orgasmic difficulties in 28.6% of females. Moreover, almost 40% of females reported having never masturbated.
In a study among 100 consecutive women attending the Department of Pediatrics for the care of non-critical children in a tertiary care teaching hospital, Avasthi and his colleagues (2008) [17] found that 17% of the subjects encountered one or more difficulties during sexual activities. These difficulties were in the form of headache after sexual activity (10%), difficulty reaching orgasm (9%), painful intercourse (7%), lack of vaginal lubrication (5%), vaginal tightness (5%), bleeding after intercourse (3%) and vaginal infection (2%). 14% of subjects attributed these difficulties to their own health problems; further, lack of privacy (8%), spouse's health problems (4%) and conflict with spouse (4%) were the other cited reasons for those difficulties. None considered their sexual difficulty significant enough to demand a thorough clinical assessment.
In another cross-sectional survey of 149 married women in a medical outpatient clinic of a tertiary care hospital, Singh and his colleagues (2009) [18] reported female sexual dysfunction (FSD) in 73.2% of the sample. The complaints elicited were difficulties with desire in 77.2%, arousal in 91.3%, lubrication in 96.6%, orgasm, satisfaction in 81.2%, and pain in 64.4% of the subjects. Age above 40 years and fewer years of education were identified as contributory factors. Women attributed FSD to physical illness in the participant or partner, relationship problems, and cultural taboos, but none had sought professional help.
Behere and Natraj (1984) [21] and Bhatia and Malik (1991) [23] found that the patients with symptoms of Dhat syndrome were mostly young, recently married, poor, rural and from families with conservative attitudes towards sex. Most studies found that these patients lose semen during sleep, with urine, or through masturbation and hetero/homosexual sex.
Behere and Natraj (1984) [21] and Bhatia and Malik (1991) [23] explored the patients' beliefs regarding the composition of Dhat and found that the majority believed it to be semen, followed by pus, sugar, concentrated urine, infection or "not sure." The majority considered masturbation and/or excessive indulgence in sexual activities as the important causative factor, followed by venereal diseases, urinary tract infections, overeating, constipation or worm infestation, disturbed sleep or genetic factors.
Regarding management of Dhat syndrome, Wig (1960) [26] suggested empathic listening, reassurance and correction of erroneous beliefs. Avasthi and Gupta (1997), [27] in their manual, proposed that the management of Dhat syndrome involves sex education, relaxation therapy and medications.
Prakash and Meena (2007) [28] provided an explanation regarding this belief derived from the anatomy and physiology of the penis. They proposed that patients with Dhat syndrome believe that whatever blood is collected in the cavernous spaces during erection probably converts into semen. Hence, with every sexual activity they lose blood; as blood is their source of energy, they lose energy every day, becoming weaker and more lethargic.
CONCLUSION
This review highlights the available evidence in the field of psychosexual medicine in India. It is important to mention that all of the studies were from hospital settings and none from the community. Only a few studies explored female sexual dysfunction, and very few addressed management issues. Dhat syndrome could be an important diagnostic entity for further research. There is a strong need for studies in these areas.
|
v3-fos-license
|
2023-07-15T15:50:02.886Z
|
2023-07-01T00:00:00.000
|
259869086
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://direct.mit.edu/opmi/article-pdf/doi/10.1162/opmi_a_00091/2142018/opmi_a_00091.pdf",
"pdf_hash": "2cdb9c2c3fd84523e957ddb0dee41e168b436cbe",
"pdf_src": "MIT",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:630",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "60ce53bb969acc5d54447918dc1654d33694af73",
"year": 2023
}
|
pes2o/s2orc
|
Metacognitive Information Theory
Abstract The capacity that subjects have to rate confidence in their choices is a form of metacognition, and can be assessed according to bias, sensitivity and efficiency. Rich networks of domain-specific and domain-general regions of the brain are involved in the rating, and are associated with its quality and its use for regulating the processes of thinking and acting. Sensitivity and efficiency are often measured by quantities called meta–d′ and the M-ratio that are based on reverse engineering the potential accuracy of the original, primary, choice that is implied by the quality of the confidence judgements. Here, we advocate a straightforward measure of sensitivity, called meta–𝓘, which assesses the mutual information between the accuracy of the subject’s choices and the confidence reports, and two normalized versions of this measure that quantify efficiency in different regimes. Unlike most other measures, meta–𝓘-based quantities increase with the number of correctly assessed bins with which confidence is reported. We illustrate meta–𝓘 on data from a perceptual decision-making task, and via a simple form of simulated second-order metacognitive observer.
INTRODUCTION
The confidence that we apportion to our recollections, cognitions, decisions and actions can play a critical role in the preparations we make for success or failure; in determining whether we need to collect more external information or more samples of internal information before committing ourselves; in regulating the learning that we should do when outcomes are, or are not, as expected; and in communicating with others, for instance when engaging in collective decision-making (Bahrami et al., 2010;De Martino et al., 2013;Fleming, 2021;Kepecs & Mainen, 2012;Nelson & Narens, 1990;Schulz et al., 2020). Confidence, as one of the simplest forms of higher-order or self-reflective assessment about one's own cognitive processes, has also been (sometimes controversially) influential in modern theories of awareness (Fernandez-Duque et al., 2000;Lau, 2022;Lau & Rosenthal, 2011). Furthermore, various impairments of metacognition are central in a number of psychiatric conditions, for instance, with possibly exorbitant requirements for confidence helping underpin excessive checking in forms of obsessive compulsive disorder; or substantial over-confidence helping reinforce the persistent apparently erroneous conclusions drawn by those suffering from delusional disorders (Hoven et al., 2019;Rouault et al., 2018;Seow et al., 2021;Sun et al., 2017). Various regions of the prefrontal cortex, anterior cingulate cortex, insular cortex and the precuneus have been implicated in making such judgements and using them to control our cognition (for a meta-analysis of a wealth of studies, see Vaccaro & Fleming, 2018).
It has therefore long been recognized that it is critical to measure the nature and quality of confidence judgements. At stake are three related quantities: bias, sensitivity, and efficiency (Fleming & Lau, 2014; Galvin et al., 2003; Maniscalco & Lau, 2012; Nelson, 1984). For concreteness, consider a simple perceptual decision-making problem: judging whether a Gabor patch is tilted left (L) or right (R) of vertical. The sensory input α can abstractly be regarded as a noisy version of d = −1 (L) or d = 1 (R). On each trial, subjects report their decision (the 'action' a) about the tilt-this is often called a type 1 judgement-and also their degree of confidence in the rectitude or accuracy of that decision (the 'rating')-a type 2 judgement. For convenience, we consider the original report as coming from an 'actor', and the confidence judgement from a 'rater'; although these are, of course, the same individual (Schulz et al., 2023). The type 1 judgement is the topic of conventional signal detection theory (Green & Swets, 1966), with accuracy being quantified by such measures as type 1 sensitivity or d′. If the rating is interpreted as just the probability that the type 1 decision is correct (Pouget et al., 2016), then metacognitive bias measures the overall calibration of the rating-whether subjects tend to think that they are more or less accurate than they actually are. Of course, a subject could be metacognitively unbiased if she reported the correct overall probability of being correct on every trial, independent of the actual observation (like a well-calibrated, but useless, weather forecaster reporting on every day of the year the overall mean probability of rain; Dawid, 1982). Thus, metacognitive sensitivity measures the adaptability of the rating to the actual rectitude on a trial-by-trial basis-an ideal rater would have perfectly predictive error monitoring, rating correctly on a trial-by-trial basis whether the type 1 decision is going to be proved correct or incorrect. However, metacognitive sensitivity is not the whole story-the rater has a particularly easy job if the type 1 action is generally correct-it would be hard for the rater to be incorrect. Thus metacognitive efficiency attempts to correct the sensitivity for the quality of inference. Of course, metacognitive bias also has an impact: a thoroughly metacognitively biased rater who declares themselves fully confident on every trial, even when she in fact errs, would necessarily be fully insensitive and inefficient.
It might seem obvious, at least to the Bayesian decision theorists amongst us, that sensible observers would use all the information available to make their type 1 choice on a trial (a = 1 if P(d = 1|α) > 0.5), and the same information to make their type 2 rating (P(d = 1|α)) about their type 1 choice. This would be bias-free, and would leave sensitivity and efficiency at maximal values given the decision-maker's perceptual capacities (d′). This would render nugatory the metacognitive measures. However, empirical findings do not accord with this expectation (for instance, it is impossible for such a rating to imply a less than 50% chance of being correct, whereas subjects can actually be aware of upcoming errors before they occur (Gehring et al., 1993), as is also evident in signals that likely emanate from the anterior cingulate cortex (Botvinick et al., 2004; Carter et al., 1998; Dehaene et al., 1994; Kerns et al., 2004)). Thus, there are various accounts in which, for instance, in a so-called second order model, the internal rater has access to both additional information after the type 1 decision has been made (for instance from so-called post-decisional information, which we later call γ, that has not been processed at the time that the decision is registered), and/or only a noisy internal report (β) of the information α that the actor used in making the type 1 decision in the first place (Fleming & Daw, 2017; Jang et al., 2012). The rater could also suffer from noise in their metacognitive judgement or report (Guggenmos, 2022). In cases such as this, it is possible to have metacognitive hypo- or hyper-sensitivity, and for the rater to predict errors. One influential approach is to reverse engineer the type 1 choices that such a rater would have been able to make, and assess the notional type 1 sensitivity of this rater (Maniscalco & Lau, 2012). This value is called meta-d′, and has the attractive characteristic of being directly comparable to the actual type 1 sensitivity. Meta-d′ can be assessed by fitting the actual confidence statistics to the confidence statistics that would have been predicted by the imaginary type 1 choices of the rater. It is then possible to create a metacognitive efficiency measure that adjusts for the underlying ease of the decision-making problem by comparing meta-d′ to d′ either subtractively (meta-d′ − d′) or divisively (the so-called M-ratio, which is meta-d′/d′).
Meta-d 0 and the M-ratio are widely used as measures of meta-cognitive effectiveness (Barrett et al., 2013). However, along with some obvious assumptions (such as that ratings are monotonic in expected accuracy), they have some less desirable characteristics, including remaining dependency of the M-ratio on type 1 performance (Guggenmos, 2021, at least some aspects of which are, as noted above, inevitable), and on metacognitive bias (Xue et al., 2021, which is arguably less so). Here, along with the common observation that there is no reason to expect subjects' empirical confidence judgements to fit an assumed type 1 decision-process exactly, which means that the assessment of metacognitive efficiency could be inaccurate, we focus on the fact that the M-ratio does not take explicit account of the number of levels of confidence rating that subjects might be able to provide. A rater who can make a fine discrimination between being correct within the intervals [80, 85)% or [85, 90)% might reasonably demand to be considered more sensitive (and more efficient) than one with a single rating 'bucket' for the whole range [80, 90)%. Given a rater whose confidence is perfectly consistent with a type 1 decision, this excess discriminability will normally have no benefit from the perspective of meta-d 0 .
Here, we introduce and explore a natural alternative to meta-d 0 and the M-ratio, namely meta-I and two forms of a meta-I -ratio (called meta-I r 1 and meta-I r 2 ), which are based on the mutual information between the rectitude of the actor and the confidence ratings. The mutual information is straightforward to compute for conventional rating buckets, makes fewer assumptions about the ratings, other than that they are distinct and, ideally, suitably predictive of differences in accuracy, and increases naturally with the granularity of the ratings. The mutual information is related to measures that are based on the correlation between accuracy and confidence (Nelson, 1984), although it is, for instance, completely agnostic to any bias. We first illustrate meta-I -based measures on confidence data from Shekhar and Rahnev (2021). Then, to examine their properties in detail, we use a simple realization of a second-order rater (Fleming & Daw, 2017;Guggenmos, 2022;Jang et al., 2012;Mamassian & de Gardelle, 2022;Schulz et al., 2023), for which we can precisely unpick the nature of metacognitive sensitivity.
META-I
Consider a simple perceptual decision-making task such as that reported in Shekhar and Rahnev (2021). Here, on each of 2800 trials t, participants saw for just 100 ms a noisy Gabor patch (of one of three different contrasts, defining three conditions) that was tilted either to the left or right of vertical (we write this as d t = ±1), and used a single scale to report the direction of the tilt (a t = ±1) and a continuous confidence rating (c t ) about the accuracy of their choices (whose true value is r t = d t × a t ). Confidence reporting in this experiment restricted c t to being between 0.5 and 1.
To illustrate the issues for measuring the quality of metacognition, Figure 1 shows violin plots of the distributions of confidence reports for three selected subjects for incorrect (red) and correct (green) choices and for the three contrast conditions (1 is hardest; 3 is easiest). Subject 5 is biased to report low confidence; subject 18 to report high confidence; subject 1 is in the middle. We can see the reduction in incorrect responses with higher contrast (i.e., higher condition number); but also additional facets such as the spikes of very high confidence reports. Since confidence should provide information about accuracy, measures of meta-cognitive sensitivity report how closely related are c t and r t . In terms of the plots in Figure 1, we would seek the mass of the green distributions to be higher than those of the red ones, with higher confidence ratings when the answer is actually correct.
As mentioned above, there is a wide variety of such measures, some of which are based on process models of the way that participants make decisions and rate confidence (e.g., Desender et al., 2021; Maniscalco & Lau, 2012), whereas others are agnostic to the process by which confidence judgements are made, and depend on some form of correlation between accuracy and confidence (Nelson, 1984). In both cases, raw assessments are influenced by the absolute accuracy, since, for instance, if the decision-making task is very easy, there is little uncertainty to which confidence could be sensitive.

Figure 1. Confidence and accuracy for three selected subjects ('sub'). Each plot shows the distribution of confidence reports for incorrect (red) and correct (green) choices for each of the three contrast conditions ('con'). The total area of the violin plots is normalized, so the overall accuracy is evident in the sizes of the green areas. The magenta bars show the division of confidence ratings into two bins that approximately maximize meta-I; the lower and upper triangles show the same for three bins. The table provides the numerical values of the various sensitivity and efficiency measures for these subjects. md′ is meta-d′, mR is the M-ratio, mI[n] is meta-I for n approximately optimally-positioned confidence bins, and mI r 1[n] is meta-I r 1 for those same bins. Data from Shekhar and Rahnev (2021).

The table below the figure indicates d′, meta-d′ (written as md′) and the M-ratio (mR). Here, these quantities were calculated using maximum likelihood fitting routines from Lau (2012, 2014), in which the parameters of a naïve first order Bayesian rater are fit so that the distribution of its confidence ratings match those of each subject. From the M-ratio, we can see that, in this case, the order of the efficiency of these subjects is opposite to the order of their metacognitive bias. Figure 2A and B show meta-d′ and the M-ratio for all the subjects in Shekhar and Rahnev (2021), in the three contrast conditions. The subjects are sorted differently in each figure, in decreasing order of the sensitivity (Figure 2A) or efficiency (Figure 2B) for the most difficult condition (blue). We see that both measures tend to decrease together for all the contrast conditions, confirming past observations that there is something generalizable about sensitivity and efficiency, at least for such closely related problems. Figure 2A also shows the dependence of meta-d′ on d′: as noted, these values are distinctly greater for the higher contrast conditions. Figure 2B shows that this characteristic is largely abolished for the M-ratio, in which the rater's meta-d′ is normalized by the actor's d′.
Figure 2. Data from Shekhar and Rahnev (2021). (A) meta-d′ across the 20 subjects for the three contrast conditions (in order of contrast: blue, red, yellow), with the subjects sorted by the lowest contrast. (B) The M-ratio for the subjects, again sorted by the value in the lowest contrast condition. (C) meta-I for the subjects in the same sort order as in (A), for two (lower envelope in magenta) or three (upper envelope in green) confidence bins with optimized thresholds. The lozenges fill the areas between lower and upper envelope in the colours for the condition. (D) The two normalizers for assessing efficiency: meta-I(d′) (green; right axis, used to define meta-I r 1) and H_2(r) (black; left axis, used to define meta-I r 2) as a function of the actor's d′. (E) meta-I r 1 for the subjects in the same sort order as in (B), with the same plotting conventions as in (C). (F) The relationship between the M-ratio and meta-I r 1 for the 60 combinations of subjects and conditions, for two (magenta) and three (green) confidence bins, joined by thin black lines for clarity. (G) The average across subjects of the ratio between meta-I (or equivalently meta-I r 1 or meta-I r 2) for 2 … 10 confidence bins to meta-I for just 2 confidence bins, for the three conditions. This shows that the subjects are, on average, able to use multiple bins to some good effect. (H) meta-I r 2 for the subjects in the same sort order as in (B), with the same plotting conventions as in (C).

The measure meta-d′ (along with others mentioned above) is a model-based measure of sensitivity, in that one imagines that the confidence reports are the first order judgements for an actor with some particular, parametrized characteristics. By contrast, meta-I is a model-agnostic measure of metacognitive sensitivity which quantifies the mutual information between r_t and c_t. Take the case that confidence is discrete (in Shekhar and Rahnev (2021), it is measured in 1/1000ths) and that we had been able to measure the full joint distribution of rating and confidence, P(r, c), with P(r) = Σ_c P(r, c) and P(c) = Σ_r P(r, c). Then, the mutual information is the difference between two entropies (measured, for convenience, in bits). One entropy, which we write as H_2(r), is the overall uncertainty about the accuracy of the actor. For binary choice, this quantity varies between 0 bits, if the actor is perfectly accurate, as d′ → ∞ (or indeed if the actor is perfectly inaccurate, always getting the answer wrong), and 1 bit, if the action a is completely uncorrelated with the truth d, which happens as d′ → 0. The second entropy, H_2(r|c), is the weighted average uncertainty about the accuracy that remains after observing the confidence rating c, where the weights come from the probability of seeing that rating c. The confidence judgement is very sensitive and efficient if most of the initial uncertainty about the accuracy is removed by the rating, making this last term near 0.
More formally,

meta-I = I(r; c) = H_2(r) − H_2(r|c), (1)

where

H_2(r) = −Σ_r P(r) log_2 P(r) is the entropy of the accuracy; (2)

H_2(r|c) = −Σ_c P(c) Σ_r P(r|c) log_2 P(r|c) is the conditional entropy of accuracy given confidence. (3)

Table 1 provides an illustration of the way that one can calculate mutual information. Here, we show the four combinations of correct (r = 1) and incorrect (r = 0) and high (c = h) and low (c = l) confidence for condition 3 for subject 5 (see the rightmost violin plot in the first row of Figure 1). We binarized the subject's nearly continuous report of confidence at the optimal point shown by the magenta bar in the figure (i.e., a threshold of 0.72). In this case, the probability of being correct is P(r = 1) = 855/998, with an entropy of H_2(r) = 0.593 bits; the probabilities of high and low confidence are P(c = h) = 420/998 and P(c = l) = 578/998 respectively; the conditional probability of being correct given high confidence is P(r = 1|c = h) = 415/420, with an entropy of h_2[P(r|c = h)] = 0.093 bits; and the conditional probability of being correct given low confidence is P(r = 1|c = l) = 440/578, with an entropy of h_2[P(r|c = l)] = 0.793 bits. Thus the full mutual information is as in the figure.

Table 1. Calculation tableau for meta-I for condition 3 for subject 5 (see also Figure 1). Here, we see the prevalence of the four combinations of being correct (r = 1) or incorrect (r = 0) and having high (c = h) or low (c = l) confidence, dividing the subject's judgement at the single threshold of 0.72 shown by the magenta line in Figure 1.

rectitude | confidence | prevalence
r = 0 | c = l | 138
r = 0 | c = h | 5
r = 1 | c = l | 440
r = 1 | c = h | 415

Figure 2C generalizes this to show meta-I for two ways of turning the (nearly) continuous measure of confidence that the subjects reported into a set of mutually exclusive bins.¹ The lower (magenta) border of the lozenge for each condition (distinguished by the fill colour) is the result of choosing the best binarization of confidence (something that Shekhar and Rahnev (2021) explored explicitly). The short horizontal magenta lines on the violin plots of Figure 1 show where this binary separation falls for those three subjects-trying to separate green and red masses vertically. These thresholds are ultimately an expression of the bias in confidence reporting of the subjects-we see their levels roughly reflecting how the subjects employ the confidence scale. The upper (green) border of each lozenge shows the case of three optimized levels of confidence arising from the two thresholds shown as lower and upper triangles in Figure 1. First, note that meta-I values are (by construction) higher for the extra bins-a phenomenon we explore further below. Second, the subjects are sorted as in Figure 2A (by meta-d′ in the lowest contrast condition); however, the lozenge for this condition is almost monotonic, and the lozenges for the higher contrasts have a similar degree of monotonicity to those in meta-d′, suggesting that meta-I is somewhat consistent with meta-d′. Along with this, we see that meta-I also increases with d′-although we will later qualify this finding-it arises here partly because of the rather modest levels of the actors' d′s in this study (with most M-ratios being less than 1).

meta-I is a measure of metacognitive sensitivity. As for the relationship between meta-d′ and the M-ratio, measuring metacognitive efficiency requires normalizing for a quantification of potentially available information about confidence. Guggenmos (personal communication) thus suggested taking the analogous step of calculating a meta-I-ratio by normalizing meta-I.
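To make the tableau concrete, the calculation can be written in a few lines of code. The sketch below is not the authors' published analysis code; the function name meta_I and its interface are choices made here. Applied to the counts of Table 1, it gives roughly 0.09 bits, consistent with the entropies quoted above.

```python
import numpy as np

def meta_I(counts):
    """Mutual information (in bits) between accuracy r and confidence bin c.

    counts[i, j] = number of trials with accuracy r = i (0 incorrect, 1 correct)
    whose confidence fell in bin j.
    """
    counts = np.asarray(counts, dtype=float)
    p_rc = counts / counts.sum()            # joint P(r, c)
    p_r = p_rc.sum(axis=1)                  # marginal P(r)
    p_c = p_rc.sum(axis=0)                  # marginal P(c)

    def H(p):                               # entropy in bits, ignoring empty cells
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    H_r = H(p_r)                                            # H_2(r)
    H_r_given_c = sum(p_c[j] * H(p_rc[:, j] / p_c[j])       # H_2(r|c)
                      for j in range(p_rc.shape[1]) if p_c[j] > 0)
    return H_r - H_r_given_c

# Counts from Table 1 (subject 5, condition 3); columns are (low, high) confidence.
print(meta_I([[138, 5],        # incorrect
              [440, 415]]))    # correct -> about 0.09 bits
```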
One possible normalizer would be a quantity we write as meta-I(d′), which would arise as the value of meta-I for a first-order rater following a conventional signal detection theory analysis based on the actor's d′. The green curve in Figure 2D shows meta-I(d′) as a function of d′. For relatively low values of d′, as seen in Shekhar and Rahnev (2021), this increases with d′. However, for large d′, it decreases again, since meta-I is bounded above by the entropy of the accuracy H_2(r), and as d′ rises, the actor becomes increasingly accurate, and so this entropy decreases.
1 See Shekhar and Rahnev (2021) for their analysis of the quantization of the continuous confidence report.
They also performed a sophisticated examination of different models of metacognitive rating based on this (notably suggesting the influence of a particular sort of noise). However, for our present purpose of analyzing our model-free construct meta-I , we will make a simple comparison with a binarized estimate of meta-d 0 .
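As a rough illustration of the normalizer meta-I(d′), the sketch below estimates it by Monte Carlo simulation of an ideal first-order Bayesian rater rather than by the exact calculation used in the paper; the fine discretization of the continuous confidence report is an approximation, and the meta_I helper (and numpy import) are reused from the earlier sketch.

```python
def meta_I_sdt(d_prime, n_trials=500_000, n_bins=200, seed=0):
    """Monte-Carlo approximation of meta-I(d') for an ideal first-order rater.

    The actor observes alpha ~ N(d, sigma) with d = +/-1 and sigma = 2/d',
    chooses a = sign(alpha), and reports the Bayesian probability that this
    choice is correct; confidence is then discretized into n_bins bins.
    """
    rng = np.random.default_rng(seed)
    sigma = 2.0 / d_prime
    d = rng.choice([-1.0, 1.0], size=n_trials)
    alpha = d + sigma * rng.standard_normal(n_trials)
    a = np.where(alpha >= 0, 1.0, -1.0)
    r = (a == d).astype(int)
    conf = 1.0 / (1.0 + np.exp(-2.0 * np.abs(alpha) / sigma ** 2))  # P(correct | alpha)
    bins = np.minimum(((conf - 0.5) * 2 * n_bins).astype(int), n_bins - 1)
    counts = np.zeros((2, n_bins))
    np.add.at(counts, (r, bins), 1)
    return meta_I(counts)        # helper from the earlier sketch
```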
This manoeuvre of normalizing meta-I by meta-I (d 0 ) parallels the M-ratio's use of d 0 itself to normalize meta-d 0 . It captures the inability of a first order actor with poor perceptual abilities to judge confidence well; and the consequences for metacognitive sensitivity of the lack of variability in the rectitude of an actor with excellent perceptual abilities. We write meta-I r 1 = meta-I /meta-I (d 0 ), and show it in Figure 2E for the two and three confidence bins of Figure 2C, but now sorted by the M-ratio of the lowest contrast condition (i.e., the sort used in Figure 2B). As for the M-ratio, we see that this normalization has more nearly equated the estimates of meta-cognitive efficiency for the three conditions, to a roughly equivalent degree to the M-ratio. Figure 2F shows the relationship between the M-ratio and meta-I r 1 for all subjects and all conditions for the case of two (magenta) and three (green) confidence bins (with the cases joined by vertical lines). We can see that there is a very close relationship between the M-ratio and meta-I r 1 , at least in this regime of actors and raters, confirming the impression from comparing Figures 1B and 1D.
How, though, should we think about the fact that there are apparently different values of meta-I for different numbers of confidence bins? All else being equal, a rater that can accurately distinguish a larger number of levels of accuracy should reasonably be considered to be more metacognitively sensitive and efficient-since this rater can offer a finer perspective on the chance of failure. Equivalently, meta-I will benefit from the deblurring of the ratings that occurs when they are split into more levels, at least provided that these levels are used well. This is not a property of meta-d 0 or the M-ratio-the main consequence of increasing the granularity of the confidence report is to affect the fitting process for estimating the rater's equivalent d 0 -it has no direct bearing on that version of sensitivity or efficiency.
To assess the consequence of increasing granularity, we evaluated the average across the subjects in Shekhar and Rahnev (2021) of the ratio of meta-I for between two and ten confidence bins to meta-I for just two bins. Here, we approximately optimized the thresholds on a subject- and condition-specific basis (Figure 2G). From the increase with the number of bins, it is apparent that the subjects are able on average to report confidence at a relatively fine granularity-particularly in the most difficult (blue) contrast condition-but that this ultimately saturates (with many fewer than the ∼500 confidence bins of the experimental report). One wrinkle here is that we calculated the efficiency normalizer, meta-I(d′), assuming continuous confidence judgements can be made by a first-order rater (i.e., with an infinite number of correctly-employed confidence bins). This is reasonable, because this estimate is based on a model that allows calculation to arbitrary accuracy. However, it could be questioned as a comparator, and it would also be possible to normalize by a version of meta-I(d′, b) that uses to optimal effect the same number (b) of confidence bins as the empirical rater. Figure 2H reports the result of normalizing meta-I by the theoretical upper bound H_2(r) we mentioned above rather than meta-I(d′). We call the resulting measure of metacognitive efficiency meta-I r 2. H_2(r) is shown as a function of d′ in the black curve in Figure 2D. This also accounts well for the fact that, for high d′ for the actor, metacognitive sensitivity cannot be high, since, as we noted, there is little entropy in H_2(r) to reduce by H_2(r|c). However, in cases such as the second order model we consider later, in which the rater can have access to much better information than the actor, it allows us to assess the efficiency in absolute terms. The data in Figure 2H suggest that this regime is not relevant for the data in Shekhar and Rahnev (2021), in that the raters appear to be generally rather worse than the actors.
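A brute-force version of the threshold optimization described above might look like the following sketch; taking candidate thresholds at confidence quantiles and searching exhaustively are simplifications made here, not details taken from the paper, and meta_I is again the helper sketched earlier.

```python
from itertools import combinations

def best_thresholds(conf, correct, n_bins=3, n_candidates=40):
    """Exhaustively search quantile-based thresholds that maximize meta-I.

    conf: (near-)continuous confidence reports; correct: 0/1 accuracy.
    Returns the best meta-I and the corresponding n_bins - 1 thresholds.
    """
    cand = np.unique(np.quantile(conf, np.linspace(0.02, 0.98, n_candidates)))
    best_mi, best_thr = -np.inf, None
    for thr in combinations(cand, n_bins - 1):
        bins = np.digitize(conf, thr)                   # bin indices 0 .. n_bins-1
        counts = np.zeros((2, n_bins))
        np.add.at(counts, (np.asarray(correct, dtype=int), bins), 1)
        mi = meta_I(counts)
        if mi > best_mi:
            best_mi, best_thr = mi, thr
    return best_mi, best_thr
```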
A final issue with information theoretic measures concerns estimation of entropies and conditional entropies. The mutual information associated with continuous variables, such as confidence in some experiments, is known to be hard to estimate, because of biases, and so care is necessary (Kozachenko & Leonenko, 1987; Paninski, 2003; Panzeri & Treves, 1996; Witter & Houghton, 2021). Biases are typically weaker for discrete variables, which are employed in many experiments on confidence. Here, we use randomized or exact permutation methods as a simple way to correct for biases.
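One simple way to implement such a permutation correction (a sketch, not the authors' exact procedure) is to shuffle the confidence bins relative to accuracy and subtract the mean meta-I of the shuffled data from the raw estimate; meta_I is the helper sketched earlier.

```python
def meta_I_debiased(bins, correct, n_bins, n_perm=1000, seed=0):
    """Randomized-permutation bias correction for a meta-I estimate.

    Shuffling the confidence bins relative to accuracy destroys any true
    association, so the mean meta-I over shuffles estimates the small-sample
    bias, which is then subtracted from the raw value.
    """
    rng = np.random.default_rng(seed)
    correct = np.asarray(correct, dtype=int)

    def mi(b):
        counts = np.zeros((2, n_bins))
        np.add.at(counts, (correct, np.asarray(b)), 1)
        return meta_I(counts)

    raw = mi(bins)
    null = np.mean([mi(rng.permutation(bins)) for _ in range(n_perm)])
    return raw - null
```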
A SECOND-ORDER DECISION-MAKER
In order to examine meta-I and meta-I r{1,2} in more detail, we turned to a simulation which allows us to abstract the relevant factors away from the noise associated with the ratings of individuals. We simulate choices and ratings from a simple, realized form of second-order decision-making (Fleming & Daw, 2017; Jang et al., 2012; Mamassian & de Gardelle, 2022). On any trial, the actor and rater collectively receive three Gaussian distributed signals that bear on a true underlying quantity d = ±1 (Figure 3A-C); the actor generates a binary choice a (Figure 3A); and the rater generates one of a number of discrete confidence values c (Figure 3E and 3H). Here:

α = d + A ε_α is the primary decision variable, with type-1 sensitivity d′ = 2/A; (5)

β = α + B ε_β allows the rater partial insight into the basis of the actor's decision; (6)

γ = d + G ε_γ provides the rater with independent or unique information about d; (7)

where ε_α, ε_β and ε_γ are independent standard N(0, 1) distributed random variables. We assume that d = ±1 with equal probability, and that the actor is unbiased, and so makes a decision based on the sign of α, with a = −1 if α < 0, a = 1 if α > 0, and is indifferent if α = 0. The rater bases its choice on the observation of the action a and the random variables β and γ, whose combination arranges for potentially partial correlation between the actor's and rater's information about the true state of the stimulus, d, as in the standard second order model. Here, the rater's confidence c = P(a = d | a, β, γ) is the probability that the actor's action a was correct given all the information that the rater possesses.
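For readers who want to play with this generative model, the following is a minimal numerical sketch (not the authors' code) of the rater's confidence P(d = a | a, β, γ); the grid-based integration over α and the default parameters (A = 2, i.e. d′ = 1, B = 1, G = √5, matching the Figure 3 example) are choices made here for illustration.

```python
import numpy as np
from scipy.stats import norm

def rater_confidence(a, beta, gamma, A=2.0, B=1.0, G=np.sqrt(5.0)):
    """Confidence of the second-order rater: P(d = a | a, beta, gamma).

    The actor's private signal alpha is integrated out numerically on a grid
    restricted to the half-line consistent with the observed action a.
    Equal priors on d = -1 and d = 1 are assumed, so they cancel.
    """
    grid = np.linspace(0.0, 12.0, 2401)        # |alpha| values; sign fixed by a
    alphas = a * grid
    dx = grid[1] - grid[0]
    post = {}
    for d in (-1.0, 1.0):
        p_alpha = norm.pdf(alphas, loc=d, scale=A)    # P(alpha | d)
        p_beta = norm.pdf(beta, loc=alphas, scale=B)  # P(beta | alpha)
        p_gamma = norm.pdf(gamma, loc=d, scale=G)     # P(gamma | d)
        post[d] = np.trapz(p_alpha * p_beta, dx=dx) * p_gamma
    return post[a] / (post[-1.0] + post[1.0])
```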
Although for didactic convenience, the realization of the information structure relating actor and rater is different here from the canonical stochastic detection and retrieval model of Jang et al. (2012), the ultimate statistical relationships are the same; we provide a translation in the Supplement. Thus, briefly, and as discussed at length in Fleming and Daw (2017), this model has various natural limits of metacognitive interest. In the case that B → 0; G → ∞, the rater has exactly the same information as the actor, and so could act like the naive Bayesian decision theorist above. Then, meta-d 0 would be the same as d 0 , and the M-ratio would be 1. In the opposite case, B → ∞; G → 0, the rater would have perfect knowledge about the rectitude of the actor's choice (based on the equivalent of perfect post-decisional information, also known as a confidence 'boost'; Mamassian and de Gardelle (2022)) and so could be as sensitive as it is possible to be. If B → ∞; G → ∞, then the rater has no specific information about the basis of the actor's choice on a trial, and so, like the incompetent weather forecaster, could do no better than reporting the overall expected accuracy on each trial (which here is Φ(d 0 /2), where Φ is the cumulative distribution function for a normal distribution). In nonlimiting cases, β provides the rater with information about the data on which the actor based their choice, which she has to combine with her private, e.g., post-decisional, information about d.
It is didactically convenient to consider response-specific confidence, although here, the symmetry of the problem means that all the measures will be the same for a = −1 and a = 1. Figure 3D and 3E show the two critical quantities that govern confidence judgements for a = −1 (for the case that d′ = 1, B = 1, G = √5). First, Figure 3D shows the posterior density that the rater will observe β and γ given that a = −1. These values slightly favour the lower left quadrant (β < 0; γ < 0), since a = −1 implies that the actor saw α < 0. Note that this preference is stronger as a function of β than γ; this is because B is quite small, and we know that α < 0 (since a = −1). Second, Figure 3E shows the optimal confidence P(d = −1|a = −1, β, γ) that the rater would have about d = −1 given all the information in her possession. The plot shows contours at {0.1, 0.5, 0.9} to indicate more precisely the shape of this distribution. Coarsely, if β is very negative, which, because B is small, likely means that α was very negative, then the rater is rather confident that a = −1 is correct, unless γ is very large and positive, to counteract this. The slopes of the contours for negative β largely reflect the relative information about d in α and γ. If β is very positive, then since a = −1, it can only be that α is very close to 0, and so the rater has to rely on γ, implying that the contours run largely perpendicular to the γ axis.

Figure 3. (D) The density P(β, γ|a = −1) slightly favours the lower left quadrant, but with substantial noise. The distribution integrates to 1; color scale not shown for convenience. (E) The conditional probability P(d = −1 | a = −1, β, γ) is the accuracy afforded by the rater's information set (a = −1, β, γ). If β, γ ≪ 0, then the decision a = −1 is likely to be true. The contour lines show the boundaries where this objective confidence crosses the values shown-the enclosed regions are where objective confidence ratings would be provided. (F) If we consider the regions of β, γ that define these bins of confidence, we can assess the expected accuracy-defined by the combination of the probability of ending up in one of the confidence bins c(a = −1, β, γ) and the chance of being correct (white) or incorrect (black) in that bin. The mutual information I(r, c) between being correct and c (given a = −1) is 0.104 bits. (G-I) The same as (D-F) except for the case that a = 1. Since the problem is symmetric, this is essentially the same as for a = −1.

Figure 3F shows the consequence for the confidence ratings. Here, we consider the four bins implied by the contours in Figure 3E: {[0, 0.1), [0.1, 0.5), [0.5, 0.9), [0.9, 1.0]}. These bins were chosen to keep them separated on the plot; we consider issues of the nature of the bins later. The total height of each bar integrates the total probability mass (from Figure 3D) that ends up in each of the regions delineated in Figure 3E. This quantifies the fraction of confidence reports that will end up in each confidence bin. For each of these confidence reports, the actor could be correct or incorrect; we show the expected proportion of correct reports in white, and incorrect reports in black. If the confidence bins were very narrow, then since all calculations are probabilistically correct, the relative heights of black and white parts of a bar would be given by just the confidence level associated with this bar (since this is exactly what the confidence quantifies).
However, since the confidence regions are rather wide, we have to calculate a weighted average, where the weights are purely determined by the probability mass in Figure 3D and the quantity that is averaged is the precise confidence in Figure 3E. Thus, for instance, the mean accuracy in the [0.5, 0.9) bin is slightly less than the centre of this interval (0.75). In this instance, we can calculate meta-d′ = M-ratio = 1.7 based on the statistics in these confidence bins. Figure 3G-H show exactly the same as Figure 3D-F, but for the case that a = 1 instead. The distributional plots are mirror symmetric, favouring positive rather than negative values of β, γ.
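Under the same assumptions, the bar heights and black/white proportions of a Figure 3F-style tabulation can be approximated by simulation; this reuses numpy and the rater_confidence sketch above and is only meant to convey the bookkeeping, not to reproduce the paper's exact numbers.

```python
# Assumes numpy as np and rater_confidence from the sketches above.
rng = np.random.default_rng(1)
n = 20_000
d = rng.choice([-1.0, 1.0], size=n)
alpha = d + 2.0 * rng.standard_normal(n)            # A = 2, so d' = 1
beta = alpha + 1.0 * rng.standard_normal(n)         # B = 1
gamma = d + np.sqrt(5.0) * rng.standard_normal(n)   # G = sqrt(5)
a = np.where(alpha >= 0, 1.0, -1.0)

sel = a == -1.0                                      # response-specific analysis
conf = np.array([rater_confidence(-1.0, b, g) for b, g in zip(beta[sel], gamma[sel])])
correct = d[sel] == -1.0
for j, (lo, hi) in enumerate([(0.0, 0.1), (0.1, 0.5), (0.5, 0.9), (0.9, 1.0)]):
    m = (conf >= lo) & (conf < hi) if hi < 1.0 else (conf >= lo)
    print(f"bin {j}: P(bin) = {m.mean():.2f}, P(correct | bin) = {correct[m].mean():.2f}")
```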
The confidence values in Figure 3H are exactly the same as in Figure 3F, since this rater is just as good for a = 1 as for a = −1.
We now consider meta-I and meta-I r{1,2} for this case. First, the actor is 69% accurate (with d′ = 1), making the unconditional entropy H_2(r|a = −1) = 0.89 bits. Second, P(c|a = −1) is given by the total heights of the bars in Figure 3F; combining these yields the mutual information of 0.104 bits quoted in the caption of Figure 3F. Here, although the rater is therefore more accurate than the actor (as similarly reflected by the M-ratio), her absolute efficiency meta-I r 2 = 0.12 is rather low, since the rater's unique information γ is subject to quite some noise, with G being large.

Figure 4 shows the same as Figure 3, but for the case that the rater enjoys a much greater amount of unique, post-decisional, information (with G = √0.5; Figure 4C). Figure 4D now shows bimodality, since γ is only likely to be near d = ±1, and less likely to take a value near 0. The mode associated with γ = −1 has a greater mass than that for γ = 1, since a = −1. The conditional distribution in Figure 4E now shows a much starker contrast, with the highly accurate γ being the main determinant of the confidence in the actor's choice (so if γ > 0, then the rater is rather confident that the actor erred). Figure 4F shows the consequence of this for the rating buckets. Now, the extreme values are much more likely, and are duly more pure in the sense that the rater can be rather sure about the rectitude of the actor. Here, meta-d′ = M-ratio = 4.5, showing the benefit of the well-informed rater. Again, the case for a = 1 (Figure 4G-H) is the mirror image of the case for a = −1.
If we carry out the same calculations of meta-I and meta-I r{1,2} for this case, we observe that the individual entropy terms for the bars in Figure 4F are 0.15, 0.84, 0.82 and 0.11 bits respectively. However, the fact that most of the weight in the average is on the outer two bars means that the total remaining uncertainty about the accuracy is H_2(r|c, a = −1) = 0.27 bits. This makes meta-I = 0.62 bits, and meta-I r 1 = 12. In this regime, the M-ratio and meta-I r 1 diverge.
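As a quick check, the quantities quoted here and in the next paragraph fit together as:

$$
\text{meta-}I = H_2(r \mid a=-1) - H_2(r \mid c, a=-1) = 0.89 - 0.27 = 0.62\ \text{bits},
\qquad
\text{meta-}I_{r2} = \frac{0.62}{0.89} \approx 0.70 .
$$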
However, the actor's performance is not a good yardstick for the rater, since the rater has substantially more information. Thus, meta-I r 2 = 0.7 is a more useful measure of the high absolute efficiency of the rater, which reflects the high signal to noise ratio of the rater's unique information, with G being small. Figure 5 compares the various metacognitive sensitivity and efficiency measures for various values of the actor's type 1 sensitivity d 0 , and for different qualities of the unique information of the rater G 2 . Here, as in Figures 3 and 4, B = 1. By contrast with the earlier figures, however, the rater optimally deployed four confidence bins.
These plots cover the two regimes discussed earlier. In one, where G^2 is not too small, the rater is at least co-dependent on the information that the actor used. Here the M-ratio (B) and meta-I r 1 (D) largely agree (although, as we will see later, meta-I r 1 correctly exploits extra confidence bins). However, in the other regime, where the rater is mostly dependent on its own source of information (G^2 is small) and the actor's performance is poor, the M-ratio and meta-I r 1 diverge. Here, the absolute efficiency, meta-I r 2 (E), of the rater is more relevant. Indeed, we can see that meta-I r 2 is largely constant as a function of d′ for very low G^2. This property is shared by meta-d′; however meta-d′ lacks an appropriate scale (since the actor's d′ is not an appropriate baseline).

Figure 4. The same as Figure 3, except that the standard deviation of the rater's unique information γ is G = √0.5. The bimodal distributions in (D, G) come from the two narrow possibilities for γ, with the weight on γ being near to −1 being higher in (D), because a = −1 there. Now the confidence contours are defined almost exclusively by γ (the near vertical arrangements in E and H); and the accuracy bins are nearly exclusively one colour (when the rater's confidence is 0-0.1, the actor is almost always incorrect).
As we saw in Figure 2F and 2G, one particular issue that spurs the use of meta-I is the effect of increasing the granularity of the confidence rating. Figure 6 examines this issue from the perspective of our second-order rater, showing how meta-I (the same would be true of the efficiency measures) increases with the number of confidence levels for different qualities of actor and different amounts of independent information provided to the rater (quantified by G^2). Here, the thresholds defining the levels were again set to optimize meta-I. In this idealized case, the extra levels are never harmful, but the degree to which they are helpful varies quite substantially. As we saw in Figure 2G, the increase is greater for lower d′ (condition 1 in that figure); it also grows with G^2. Various factors are involved: for instance whether there is such certainty of the actor (d′ = 4) or the rater (G^2 ≃ 0.1) that two levels suffice. Note that these plots show the benefit as the ratios between 4, 8, 16 levels and 2 levels; as expected, the total meta-I decreases as G^2 gets larger.

Figure 5. Meta-cognitive sensitivities and efficiencies for the second order observer as a function of d′ and log_10 G^2 for the case that B = 1 and there are four confidence bins that are optimized to maximize meta-I (and so differ for each combination of d′ and G^2). (A, B) meta-d′ and the M-ratio, with the latter suggesting that the increase in meta-d′ for larger d′ is not a sign of efficiency. (C) meta-I, showing that as d′ gets large, the mutual information decreases, since the entropy of accuracy, H_2(r), decreases. (D) Normalizing meta-I by the mutual information meta-I(d′) leads to values that are close to the M-ratio (B) away from the regime in which G^2 is small, so that the rater has access to higher quality information than the actor. (E) Normalizing meta-I by the entropy of the accuracy H_2(r) provides a measure of absolute efficiency which is roughly constant for small G^2, as the rater's unique information dominates.
Even for a given number of levels, the mutual information can vary as a function of the actual confidence intervals. For instance, for the cases of Figures 3 and 4, the bins were chosen to keep them separated on the plot rather than to maximize meta-I.

The efficiency measures meta-I r 1 and meta-I r 2 are ways of measuring the hypo- and hyper-metacognitive sensitivity for meta-I. In Figure 7, we compare meta-I (A) and the efficiency ratios (B, C) for particular idealized actors and raters (green and cyan lines) with the same measures for the actual second order rater of the previous figures for the maximum number (16) of optimized confidence bins (magenta points). The green line comes from the same rater that defined meta-I(d′), i.e., the case of standard signal detection theory in which the actor discriminates two Gaussian distributions with means d = ±1 and common standard deviation σ = 2/d′, and the rater acts as a naive Bayesian type 1 rater with a continuous range of confidence levels. Thus, the green lines in Figure 7A are the same as in Figure 2D; and the green lines in Figure 7B are flat at 0, since meta-I r 1 is defined as the ratio between meta-I for a rater and meta-I(d′) itself. The absolute efficiency of this idealized rater decreases as d′ gets smaller, since the actor makes more errors, but the rater lacks the information to discriminate them.

Figure 7. meta-I (A), log meta-I r 1 (B) and log meta-I r 2 (C) for the naive first order Bayesian case with continuous confidence levels, across different values of d′, for the case of standard signal detection theory (green; discriminating two Gaussian distributions with means d = ±1 and standard deviation σ = 2/d′) and for the extreme mixture case (cyan; with an actor that is 50% correct with probability 2(1 − Φ(d′/2)) and 100% correct otherwise). The magenta points are from the second order model with the three values of B shown in the titles and the values of G^2 = {10, 3.7, 1, 0.27, 0.1}, from bottom to top, for the case of 16 optimally-spaced confidence levels. Points for the same value of G are connected by dotted lines for graphical convenience.
The cyan lines show meta-I and meta-I r{1,2} for the less standard model of actor and type 1 rater that was considered as a reductio ad absurdum by Rahnev and Fleming (2019), for which the error rate associated with a conventional d′ (which is p_err = 1 − Φ(d′/2)) comes from a mixture model in which the actor and rater are either completely guessing (with actual and believed probability of error 50%), happening with a mixture proportion of 2 p_err, or certain (actual and believed probability of error 0%), with a mixture proportion of 1 − 2 p_err. The values of meta-I and meta-I r{1,2} for this rater are higher than for the standard one, since there is extra information about the source of errors. This reminds us that d′ is an incomplete measure of the actor's process, again making it important to interpret cautiously measures such as the M-ratio and meta-I r 1 that use it directly for normalization. One could consider meta-I and meta-I r{1,2} values lower than these numbers to be hypo-efficient, and ones larger than these to be hyper-efficient-at least relative to these raters. The magenta points show that as G^2 decreases (bottom to top), the second order model generally goes from hypo- to hyper-efficiency; but B also plays a role. This is clearest for meta-I r 1, where for intermediate values of G, the rater becomes hypo-efficient as d′ grows for large B, when the rater cannot prosper from the extra information the actor enjoys.

Fleming and Lau (2014) noted that standard ways of quantifying metacognition are based on distributions such as those in Figures 1 and 3F, 3H (at least if extra factors such as the time participants take to rate confidence are not taken into account; Desender et al., 2022). Indeed, Nelson (1984) listed eight such measures, which do not include meta-d′ or meta-I or meta-I r{1,2}, the measures of metacognitive sensitivity and efficiency that we advocate here. meta-I is the mutual information between the actual accuracy of the choices of the actor and the confidence ratings produced by the rater about those choices. This is a simple function of the same statistics used to calculate meta-d′ and the M-ratio, and shares some of the desirable properties of those quantities (along, of course, with the less desirable ones rooted in assumptions such as that the decision-making process is stationary, with a fixed strength of evidence; Rahnev & Fleming, 2019). However, like correlation measures (Nelson, 1984), meta-I has the additional benefit of not depending on a potentially imperfect fit to a model of type 1 choice that might not be exactly appropriate, and it also scales appropriately with such factors as the number of levels of confidence. Note that Bowman et al. (2022) suggested using the mutual information to measure a form of type 1 sensitivity in a study of awareness rather than confidence.
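For the mixture rater considered above (the cyan lines), meta-I has a simple closed form, since the 'guessing' trials carry one full bit of residual uncertainty and the 'knowing' trials carry none. A small sketch under those assumptions (not code from the paper):

```python
import numpy as np
from scipy.stats import norm

def meta_I_mixture(d_prime):
    """meta-I for the guessing/knowing mixture actor-rater of Rahnev & Fleming (2019).

    With probability m = 2 * (1 - Phi(d'/2)) the actor guesses (accuracy 0.5,
    confidence 0.5); otherwise it is certainly correct (accuracy 1, confidence 1).
    """
    m = 2.0 * (1.0 - norm.cdf(d_prime / 2.0))
    p_correct = norm.cdf(d_prime / 2.0)        # overall accuracy
    ps = np.array([p_correct, 1.0 - p_correct])
    ps = ps[ps > 0]
    H_r = -(ps * np.log2(ps)).sum()            # H_2(r)
    H_r_given_c = m * 1.0                      # 1 bit inside the guessing bin, 0 otherwise
    return H_r - H_r_given_c
```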
DISCUSSION
That apparent metacognitive efficiency can increase with the number of levels with which subjects rate confidence suggests that in experiments collecting confidence ratings, it would be worth paying some extra attention to the way that reports are solicited. In the data from Shekhar and Rahnev (2021), metacognitive efficiency increased up to 10 levels of reporting-at least on average-something we could observe clearly by looking at alternative quantizations of the nearly continuous confidence data they had collected. It would be interesting to carry out this exercise on other data. It should be noted that evaluating mutual information measures for a truly continuous confidence report is tricky from modest amounts of data, because of known biases (Kozachenko & Leonenko, 1987; Paninski, 2003; Panzeri & Treves, 1996; Witter & Houghton, 2021); and so further study in particular cases would be most valuable.
We illustrated meta-I and meta-I r{1,2}, and compared them with the other measures, using both data from Shekhar and Rahnev (2021) and a simple case of a second order rater (Fleming & Daw, 2017), which is not restricted to having confidence at least as large as 50% (as would be true of a naive type 1 Bayesian), and can be either hypo- or hyper-metacognitively efficient.
The evaluation of the mutual information is completely bias-free. Of course, as noted in the Introduction, bias can affect the mutual information, by affecting the utilitization of the confidence levels, thereby increasing the conditional entropy of the accuracy given the rating (H(r|c, a)). However, like all information theoretic quantities (though unlike the M-ratio; Xue et al., 2021), meta-I is completely unaffected by the labels that are given to the confidence levels-it is only influenced by the conditional accuracy that these levels afford. Indeed, meta-I would be unaffected if the labels were scrambled, so that subjects notionally reported 'high confidence' in the actor's choice to mean that an error was likely, and 'low confidence' when an error was not. This is also unlike meta-d 0 , which makes assumptions about the appropriate monotonicity of the reporting levels in order to be able to calculate a notional type 1 d 0 . It might be possible to include in the optimization process that leads to the evaluation of meta-d 0 an additional reordering of the confidence levels-although this would then create a complex combinatorial optimization problem (with n! possible orders for n levels). It would be interesting to examine other information theoretic mechanisms that might preserve at least the order of the levels. For instance, one might imagine the act of reporting as being like a noisy channel, in which subjects can stochastically report levels that are somewhat different from their true confidence. A treatment that would have this effect (albeit, technically, by varying the confidence criteria rather than the confidence signal) is exactly what was suggested by Shekhar and Rahnev (2021) as a process model for metacognitive inefficiency in the data that we analyzed above (see also Guggenmos, 2022). One limit of such noisy processes might offer a good formalization of the common empirical practice of recording confidence on a continuous scale (e.g., using a slider), but then for the experimenter to create a set of bins whose width would be determined by the structure of the stochastic report.
We noted that different positioning of the bins of confidence (even keeping the number constant) could lead to different values of meta-I. We approximately optimized the bins on a case by case basis, 2 a bit like efficient coding of sensory information (Laughlin, 1981;Zhaoping, 2006), where coding levels are adjusted to reflect the 'natural' statistics of the information that is being coded. In the context of social inference, Bang et al. (2017) suggested maximising the entropy of confidence reports. The mutual information of Equation 1 can equivalently be written as the difference between the unconditional entropy of the confidence reports (given the action) and the conditional entropy given the accuracy (and the action). Thus maximizing the entropy can be beneficial for improving meta-I . However, the consequences for the second term of the mutual information, namely the conditional entropy of the confidence reports given the accuracy (and the action), also need to be taken into account. It would certainly be interesting to examine the optimal meta-I solutions in more detail. If, as in Bang et al. (2017), the confidence regions change over time (for instance, to optimize their utilization), then metacognitive effectiveness and meta-I will change too, and would need to be tracked in an appropriately dynamic manner, something that poses potential statistical problems. It has been noted that the test-retest reliability of the M-ratio is compromised when the number of rating levels increases, suggesting that it would be important to investigate the equivalent for meta-I and meta-I r 1;2 f g .
There is much discussion of the need for meta-d 0 to be corrected by the type 1 sensitivity d 0 in order to assess metacognitive efficiency-since cases with high d 0 are intrinsically easier. This has some undesirable consequences-for instance if d 0 is very low, then even a modest meta-d 0 can lead to an extremely large M-ratio, something that has inspired the use of the difference (meta-d 0 − d 0 ) rather than the M-ratio, or the logarithm of the M-ratio in such circumstances. We showed that meta-I r 1 has a similar issue and suggested that in the regime for the second-order model in which the rater is replete with its own sources of information that exceed those of the actor, it is more appropriate to consider the absolute efficiency of metacognition, normalizing meta-I instead by the unconditional entropy of the accuracy H 2 (r ), which is an upper bound to the mutual information, and is the total available variability that confidence could potentially rate. This alternative ratio meta-I r 2 assesses just how little of the available unconditional entropy of the accuracy is lost to a high conditional entropy (H 2 (r |c, a = −1)) in the mutual information equation.
One might look at both meta-I or the meta-I -ratio as potential correlates of brain structure and function (Baird et al., 2015;Fleming et al., 2010). Note that the informational quantities can formally also accommodate tasks which use multiple d 0 values (for instance, if the quality of sensory information is different from trial to trial, as in the mixture curve of Figure 7). However, interpretative care is necessary (Rahnev & Fleming, 2019).
Like other information theoretic proposals in neuroscience, meta-I arguably offers more insight into bounds on the nature and quality of the computations involved in metacognition than into the neural realization of these computations. Process models such as those in Desender et al. (2022), Guggenmos (2022), and Shekhar and Rahnev (2021) or even the simple second order model that we considered (Fleming & Daw, 2017;Jang et al., 2012) are an attractive alternative, albeit adopting far stronger assumptions. Nevertheless, meta-I or meta-I r 1;2 f g would be drop-in replacements for other measures such as the M-ratio for such assessments as volume-based morphometry for regions whose size is correlated with the quality of metacognition (Fleming et al., 2010). Of course, despite the attractive theoretical properties of information theoretic measures, they are far from unique in measuring the quality of raters. Indeed, so-called skill scores (a term of art in assessing the sort of probabilistic forecasters with which metacognition is concerned) can be based on (strictly) proper scoring rules (Gneiting & Raftery, 2007) (a class including the famous quadratic Brier score; Brier (1950); and the logarithmic scoring rule that underpins meta-I ). Correcting the evaluation of forecasters to reflect the difficulty of their forecasting tasks is also a concern in that literature.
In sum, the problem of confidence is inherently one of information-that the actor has about the true state of the stimulus; and that the rater has about the same quantity and about what the actor used. It therefore seems appropriate to use the methods of information theory to judge the relationship between the stimulus, the actor and the rater. This study reused data that had been made publicly available based on the study of Shekhar and Rahnev (2021).
Data and code for reproducing the figures are available at https://github.com/Peter-Dayan-TN/metainf.git.
|
v3-fos-license
|
2019-04-10T13:13:19.832Z
|
2019-02-08T00:00:00.000
|
104375111
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3263/9/2/81/pdf",
"pdf_hash": "e8196fcb132362271a009d43685a6cfc13f87e83",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:633",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Materials Science"
],
"sha1": "4a9f78f4d20a20de7fe8155a028ac66f096dea1a",
"year": 2019
}
|
pes2o/s2orc
|
Neptunium(V) and uranium(VI) reactions at the magnetite (111) surface.
: Neptunium and uranium are important radionuclides in many aspects of the nuclear fuel cycle and are often present in radioactive wastes which require long term management. Understanding the environmental behaviour and mobility of these actinides is essential in underpinning remediation strategies and safety assessments for wastes containing these radionuclides. By combining state-of-the-art X-ray techniques (synchrotron-based Grazing Incidence XAS, and XPS) with wet chemistry techniques (ICP-MS, liquid scintillation counting and UV-Vis spectroscopy), we determined that contrary to uranium(VI), neptunium(V) interaction with magnetite is not significantly affected by the presence of bicarbonate. Uranium interactions with a magnetite surface resulted in XAS and XPS signals dominated by surface complexes of U(VI), while neptunium on the surface of magnetite was dominated by Np(IV) species. UV-Vis spectroscopy on the aqueous Np(V) species before and after interaction with magnetite showed different speciation due to the presence of carbonate. Interestingly, in the presence of bicarbonate after equilibration with magnetite, an unknown aqueous NpO 2+ species was detected using UV-Vis spectroscopy, which we postulate is a ternary complex of Np(V) with carbonate and (likely) an iron species. Regardless, the Np speciation in the aqueous phase (Np(V)) and on the magnetite (111) surfaces (Np(IV)) indicate that with and without bicarbonate the interaction of Np(V) with magnetite proceeds via a surface mediated reduction mechanism. Overall, the results presented highlight the differences between uranium and neptunium interaction with magnetite, and reaffirm the potential importance of bicarbonate present in the aqueous phase.
Introduction
Uranium and neptunium are important radionuclides in many aspects of the nuclear fuel cycle [1] and are often present in higher activity radioactive wastes requiring long term management and disposal strategies [2][3][4]. In most countries, the preferred long term management strategy of radioactive wastes is through their disposal in deep geological facilities [1]. Additionally, uranium is present in many contaminated land situations associated with, for example, reprocessing and mining activities [5][6][7][8]. Understanding the environmental behaviour and mobility of these actinides is essential to underpin remediation strategies and safety assessments, and due to the prevalence of Np(V) in oxidized environments, Np could be a credible analogue for investigation of plutonium geochemistry.
In natural and engineered environments, iron oxide minerals are ubiquitous [9] and have the potential to affect the mobility of radionuclides significantly through both incorporation and surface complexation processes [10][11][12][13][14][15][16][17][18][19][20][21]. In aerobic environments the redox states of the actinides (U, Np and Pu) are dominated by relatively soluble hexa- and pentavalent species, while under anoxic and progressively more reducing conditions, poorly soluble tetra- and trivalent species dominate [22]. Moreover, under reducing conditions, magnetite (Fe(II)Fe(III)2O4) will be a significant iron oxide phase present in many geological disposal situations [9] and will form as a corrosion product from zero valent iron in construction materials and stored wastes. Magnetite has been shown to affect the mobility of uranium [12,17,[23][24][25][26][27], neptunium [16,28] and plutonium [13,17] through Fe(II) oxidation coupled to reduction of these actinides to less soluble (tri- and) tetravalent species, including incorporation (U) and adsorption (U, Np, Pu) processes. The reduction of U(VI) is postulated to occur concurrently with the oxidation of magnetite to maghemite [23,29,30] and reportedly proceeds through the initial adsorption and subsequent reduction of adsorbed U(VI) to pentavalent uranium (U(V)) coupled with the oxidation of Fe(II) to Fe(III) [26]. On the surface of magnetite, U(V) subsequently disproportionates to U(IV) and U(VI) [31][32][33][34] and forms, as a final reduction product, either surface bound U 4+ (at low surface coverage) or nanoparticulate uraninite (UO2) (at elevated surface coverage) [24]. Conversely, when U(V) incorporates into the structure of magnetite, this disproportionation to U(IV) and U(VI) has been shown to be inhibited [18,21]. Finally, the U(VI) reduction rates and the extent of reduction to U(IV) show a strong dependency on the stoichiometry of magnetite (Fe(II)/Fe(III)) [23] and are inhibited by aqueous bicarbonate through the formation of uranyl carbonate (surface) complexes [27,35].
The interaction of Np(V) with magnetite surfaces reportedly occurs through the initial adsorption of Np(V) followed by the reduction to surface Np 4+ or nanoparticulate NpO2 species [16,28,35]. In the case of plutonium, it is postulated that aqueous plutonium species (e.g., Pu 4+ and PuO2 + ) interact with redox active (magnetite) and inactive (e.g., goethite and montmorillonite) mineral surfaces through adsorption followed by reduction to tetravalent and trivalent plutonium [13,17,36,37]. Interestingly, no detailed information is available on the mechanisms of Np(V) reduction on magnetite surfaces, specifically on the influence of bicarbonate species in solution on these mechanisms.
In this study, we aimed to provide a direct comparison of the interactions of actinide species (specifically U(VI) and Np(V)) with magnetite in order to obtain detailed information on their mechanisms of interaction, including the influence of carbonate on these mechanisms. To achieve this, we performed experiments on U(VI) and Np(V) interactions with single crystal magnetite with near-atomically flat (111) crystal faces to enable surface specific analyses of these mechanisms. We used two surface sensitive X-ray techniques (grazing incidence X-ray absorption spectroscopy (GI-XAS) and X-ray photoemission spectroscopy (XPS)) to obtain detailed information on the mechanisms of interaction of U(VI) and Np(V) with magnetite (111) surfaces.
Materials and Methods
All experiments were performed in an anaerobic chamber with solutions that were purged with argon gas. Single (near-)stoichiometric, natural magnetite crystals of 10 by 10 by 0.5 mm with the (111) crystal face polished were purchased from SurfaceNet GmbH, Germany. The crystals were used in the experiments as received. In total 4 experiments were performed at pH 7 in different buffer systems. The buffer solutions used were: 10 mM 3-(N-morpholino)propanesulfonic acid (MOPS); and 5 mM NaHCO3. Two experiments were performed in buffer solutions (with either MOPS or NaHCO3 buffers) with 10 ppm (42 μM) depleted uranium (as U(VI)O2 2+ ) (U MOPS and U NaHCO3). A second set of two experiments was performed in the same buffer solutions with 6 ppm (25 μM) neptunium-237 as Np(V)O2 + (Np MOPS and Np NaHCO3) [38].
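As a quick sanity check on the quoted concentrations, the ppm (mg/L) values convert to the stated micromolar values using the molar masses of depleted uranium and 237Np; a short Python sketch (illustrative only, not part of the original methodology) is given below.

```python
# Convert actinide concentrations from ppm (mg/L) to micromolar.
ATOMIC_MASS = {"U-238 (depleted U)": 238.05, "Np-237": 237.05}  # g/mol

def ppm_to_micromolar(ppm, molar_mass):
    # ppm (mg/L) divided by molar mass (g/mol) gives mmol/L; x1000 gives umol/L.
    return ppm / molar_mass * 1000.0

print(ppm_to_micromolar(10, ATOMIC_MASS["U-238 (depleted U)"]))  # ~42 uM U(VI)
print(ppm_to_micromolar(6, ATOMIC_MASS["Np-237"]))               # ~25 uM Np(V)
```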
The 5 mM NaHCO3 and 10 mM MOPS buffer solutions were prepared by dissolving MOPS or NaHCO3 in 18.2 MΩ·cm (ultrapure) water in a controlled atmosphere (CO2 free anaerobic COY chamber) and the pH was adjusted to 7 using NaOH or HCl, respectively. The experiments were spiked with 600 ppm (2.5 mM) UO2 2+ (in ~6 mM HCl) or 20 kBq/mL (3 mM) Np(V)O2 + (in ~15 mM HCl, obtained from LEA-Cerca) stocks. At this stage magnetite single crystals were added to the buffer solutions and reacted for 14 days to ensure that the interaction of the actinides with the magnetite had completed [39]. Additionally, U(VI) and Np(V) spiked buffer solutions without magnetite were retained throughout the experiment to determine any removal of U(VI) and Np(V) from solution due to oversaturation. After 14 days, the magnetite crystals were removed from the aqueous phase and placed on aluminium holders with a Kapton cover and placed inside sealed PP bags and stored in a Kilner jar within the anaerobic chamber until further analyses (GI-XAS and XPS). Additionally, after 14 days, the aqueous phase of all buffer solutions was filtered through 0.2 μm nylon syringe filters and analysed for total uranium using an Agilent 7500cx ICP-MS and for neptunium using a Tri-Carb 2800tr liquid scintillation counter (LSC). The neptunium solution phases from experiments before and after reaction with magnetite single crystals were also analysed by UV-Vis spectroscopy on a Jenway 6850 spectrophotometer using a quartz cuvette to assess the speciation of aqueous neptunium in solutions equilibrated with magnetite or in the absence of magnetite. Combined, UV-Vis spectroscopy and electroplating followed by α-spectroscopy confirmed that the Np experiments were performed at 6 ppm (25 μM) NpO2 + . Interestingly, the (3 mM) Np(V) stock also had ultra-trace concentrations (several μM) of 239,240 Pu present, resulting in estimated concentrations of several ppb (tens of nM) of Pu in the Np MOPS and NaHCO3 experiments. This Pu trace in the NpO2 + stock solution (15 mM HCl) did not have a defined oxidation state speciation. PHREEQC [40] calculations were performed using the specific interaction theory database (sit.dat) [22] to evaluate the equilibria between different aqueous and solid (redox) species of Fe, U, Np and Pu at concentrations relevant to the experimental systems.
Grazing incidence X-ray Absorption Spectroscopy (GI-XAS).
Single magnetite crystals from the 14 day equilibrated U and Np experiments were analysed by fluorescence-detected GI-XAS [41] on the uranium and neptunium LIII-edges to determine the speciation of uranium and neptunium on the magnetite surfaces. These analyses were performed at beamline I18 at Diamond Light Source, UK, using a Si(111) monochromator and a 4-element silicon drift fluorescence detector [42]. The energy of the beam was calibrated using the K-edge spectrum of an yttrium foil. The samples were mounted on an eight-axis sample stage (Newport) and the crystals were aligned by positioning them parallel to the beam direction and attenuating to half of the beam intensity as measured by the ion chamber. Next, the crystals were rotated along the axis perpendicular to the beam direction to optimise the total external reflection and maximise the intensity of the uranium and neptunium Lα1 fluorescence lines. To minimize the potential for beam-induced changes to the magnetite (111) surface (e.g., drying and a subsequent decrease in the intensity of the detected fluorescence), the crystals were moved horizontally and realigned during analyses. The uranium and neptunium X-ray absorption spectra collected were inspected for Bragg peaks, merged, normalized and background subtracted using Athena [43]. Athena was also used to check for changes in the XANES spectra during the analyses, and confirmed that no beam-induced damage occurred during the XAS analyses. FEFF8 [44] was used to calculate theoretical scattering paths using the crystallographic information for schoepite [45], magnetite [46] and NpO2 [47], and the Fourier transforms (in R-space) of the EXAFS spectra were fitted using Artemis [43]; the amplitude correction factor (S0^2) was fixed at 1 throughout and the statistical validity of the addition of each scattering path was tested using the F-test [48].
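The edge-step values quoted later (Table 1) come from the normalization carried out in Athena; purely as an illustration of what such a normalization involves, the following Python sketch (synthetic spectrum, simple linear pre- and post-edge fits, not the Athena algorithm) estimates an edge step from an energy/absorption array.

```python
import numpy as np

def edge_step(energy, mu, e0, pre=(-150.0, -50.0), post=(50.0, 300.0)):
    """Crude XANES edge-step estimate: fit linear pre-edge and post-edge lines,
    extrapolate both to the absorption edge energy e0 and take the difference."""
    energy = np.asarray(energy, float)
    mu = np.asarray(mu, float)
    pre_mask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
    post_mask = (energy >= e0 + post[0]) & (energy <= e0 + post[1])
    pre_fit = np.polyfit(energy[pre_mask], mu[pre_mask], 1)
    post_fit = np.polyfit(energy[post_mask], mu[post_mask], 1)
    return np.polyval(post_fit, e0) - np.polyval(pre_fit, e0)

# Hypothetical U LIII-edge spectrum (e0 ~ 17166 eV) built from an arctan step.
e = np.linspace(17000.0, 17500.0, 1000)
e0 = 17166.0
mu = 0.02 + 1e-5 * (e - 17000) + 0.015 * (np.arctan((e - e0) / 5.0) / np.pi + 0.5)
print(f"estimated edge step: {edge_step(e, mu, e0):.3f}")  # ~0.014-0.015 for this synthetic curve
```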
X-ray Photoemission Spectroscopy (XPS).
In addition to the GI-XAS analyses, we performed additional analyses using X-ray photoemission spectroscopy (XPS) to provide further insight into the actinide oxidation state in the reacted samples [31,[49][50][51]. The magnetite crystals that were analysed using GI-XAS were returned to the labs and stored anaerobically at room temperature prior to XPS analysis (SPECS Near-Ambient Pressure XPS system); the XPS analyses were performed at 25 mbar. This allowed us to obtain higher resolution XPS data on the redox speciation of Fe, U, Np (and Pu) on the surface of the equilibrated magnetite crystals. As discussed earlier, trace (ppb) levels of Pu were also present in the Np(V) stock solution and, interestingly, these were also detected in the XPS analyses. For the XPS analyses, monochromatic Al Kα X-rays (1486.6 eV) were used. Wide scans were recorded with an analyser pass energy of 60 eV and a step size of 0.5 eV. Narrow scans were recorded with an analyser pass energy of 30 eV and a step size of 0.1 eV. The XPS spectra were processed and analysed using the CasaXPS software package (Casa Software Ltd., UK). The photoelectron binding energies were calibrated by setting the C 1s adventitious carbon peak to 284.8 eV [52]. A Shirley background was used for spectra with a high signal to noise ratio; otherwise, a linear background was used. The Fe 2p 3/2 transition was analysed for the Fe(II) and Fe(III) contributions using the methodology described by Biesinger et al. [53]. For the fits to the Fe 2p 3/2 transitions, a symmetrical Gaussian-Lorentzian peak shape (GL(30)) was used and, to minimize the number of independent variables, the relative peak positions and full width at half maximum of all peaks were constrained. Furthermore, the relative peak intensities of all Fe(II) peaks, as well as those of all Fe(III) peaks, were constrained to the relative peak intensities determined by Biesinger et al. [53]. The analyses of the actinide 4f transitions were based on the methodology described by Ilton et al. [32,51]. The primary peaks for the uranium, neptunium and plutonium 4f transitions were fitted using an asymmetric Gaussian/Lorentzian peak shape (uranium: A(0.35,0.5,0)GL(45); neptunium and plutonium: A(0.25,0.3,0)GL(45)). Any satellite peaks were fitted using a symmetrical Gaussian peak shape (GL(0)). The doublet separation between the actinide 4f 7/2 and 5/2 transitions was kept constant at 10.85, 11.8 and 12.875 eV for uranium, neptunium and plutonium, respectively [54][55][56][57], and the ratio of the peak areas of the 4f 7/2 and 5/2 transitions was fixed at 0.714 [54][55][56][57].
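To make the constraint scheme concrete, the following Python sketch (synthetic data and plain Gaussian peaks rather than the asymmetric Gaussian/Lorentzian line shapes and Shirley backgrounds used with CasaXPS) fits a U 4f doublet in which the 7/2-5/2 separation is fixed at 10.85 eV and the relative area of the smaller 4f component is fixed at 0.714 of the larger one, so that only the 4f 7/2 position, width and area (plus a flat background) are free parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

SEP = 10.85    # fixed U 4f doublet separation (eV), 5/2 at higher binding energy
RATIO = 0.714  # fixed 4f peak-area ratio quoted in the text (smaller component relative to 7/2)

def gauss(x, area, centre, fwhm):
    sigma = fwhm / 2.3548
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - centre) ** 2 / (2 * sigma ** 2))

def u4f_doublet(be, area72, centre72, fwhm, bg):
    """Constrained doublet: the 4f 5/2 peak is tied to the 4f 7/2 peak."""
    return (gauss(be, area72, centre72, fwhm)
            + gauss(be, RATIO * area72, centre72 + SEP, fwhm)
            + bg)

# Synthetic spectrum: a U(VI)-like doublet with the 4f 7/2 peak near 380.5 eV plus noise.
be = np.linspace(375.0, 400.0, 500)
rng = np.random.default_rng(0)
data = u4f_doublet(be, 100.0, 380.5, 2.0, 5.0) + rng.normal(0, 0.3, be.size)

popt, _ = curve_fit(u4f_doublet, be, data, p0=[80.0, 381.0, 1.5, 4.0])
print(dict(zip(["area72", "centre72", "fwhm", "background"], np.round(popt, 2))))
```

Tying the second peak to the first in this way is what "minimizing the number of independent variables" means in practice: the fit cannot trade intensity between the two spin-orbit components.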
Results and Discussion
Figure 1 shows the percentage of uranium and neptunium removed from solution in the MOPS and NaHCO3 buffered experiments. Without carbonate present in solution, U(VI) was almost completely removed from solution after 14 days (99%, U MOPS), while when carbonate was present in solution only ~7% of the added U(VI) was removed from solution after 14 days (U NaHCO3). PHREEQC modelling using the specific ion interaction theory database [22, and references therein] predicts that U(VI) exists dominantly as positively charged polynuclear uranyl hydroxide complexes (72%: (UO2)3(OH)5 + , and 22%: (UO2)4(OH)7 + ) during the U MOPS experiment and dominantly as negatively charged uranyl carbonate complexes (51%: UO2(CO3)2 2-, 34%: UO2(CO3)3 4-, and 10%: (UO2)2(CO3)(OH)3 -) during the U NaHCO3 experiment (Table S1).
Previous studies have shown that uranyl carbonate species inhibit reduction to U(IV) by decreasing the concentration of aqueous UO2 2+ [58]. Furthermore, at pH 7, the interaction of the negatively charged uranyl carbonate species with negatively charged surfaces is also expected to be inhibited [35]. Indeed, it is likely that the inhibition of adsorption and reduction due to the presence of uranyl carbonate complexes also occurs on the (111) magnetite surface in the presence of carbonate [59].
In both the Np MOPS and NaHCO3 experiments approximately 28% of the added Np(V) spike activity was removed after 14 days of reaction. PHREEQC using the specific ion interaction theory database [22, and references therein] was used to model the Np(V) speciation in both buffer solutions. Np(V) speciation was predicted to consist of 100% NpO2 + in the Np MOPS experiment and of 86% NpO2 + and 14% NpO2CO3 -in the carbonate experiment. Interestingly, the predicted presence of a significant but not dominant contribution from the negatively charged Np(V) carbonate species did not significantly influence the removal of Np(V) from solution ( Figure 1).
Uranium Interaction with Magnetite
The U LIII-edge GI-XAS (a-c) and the Fe 2p and U 4f XPS (d and e) results from the magnetite crystals equilibrated with the U MOPS and NaHCO3 buffer solutions are summarised in Figure 2. The XANES spectra from the magnetite crystals equilibrated with both the U MOPS and NaHCO3 buffer solutions were similar to the U(VI) standard rather than the U(IV) standard. This suggests that the uranium speciation on the magnetite crystals was dominated by U(VI), even though magnetite has previously been shown to be able to reduce U(VI) to U(IV) [12,33]. Furthermore, the best fits to the EXAFS analyses (Figure 2b,c, and Table 1) include 2 oxygen backscatterers at 1.77-1.79 Å (reducing the number of oxygen backscatterers resulted in an increase in the R-factor), diagnostic for uranyl speciation. Additionally, the fit to the EXAFS from the magnetite crystal equilibrated with the U NaHCO3 buffer solution included 5 oxygen backscatterers at 2.31 Å and the fit to the EXAFS from the magnetite crystal equilibrated with the U MOPS buffer solution included 6 oxygen backscatterers at 2.21 and 2.41 Å. The best fit to the U MOPS equilibrated magnetite crystal could also be fitted with 0.5 Fe backscatterers at 3.49 Å. This suggests that U(VI) interaction with the magnetite (111) surface occurred through the formation of an inner sphere complex, as predicted by Missana et al. [60] for uranyl adsorption onto magnetite and observed in iron oxides, for example for uranyl adsorption to ferrihydrite [20]. The data quality of the spectrum from the magnetite crystal equilibrated with the U NaHCO3 buffer did not allow detailed interpretation of the species on the surface of magnetite. As observed for the interaction of uranyl with sheet silicates in the presence of carbonate [35] and with ferrihydrite [20], this is likely a ternary uranyl-carbonate surface complex, which corresponds with the aqueous speciation of uranyl, which is dominated by uranyl carbonate species (Table S1). Overall, the XANES and EXAFS results indicate that uranium removal from solution occurred predominantly through surface complexation of uranyl (U(VI)) species to the magnetite surface.
Figure 2. (a) U LIII-edge XANES spectra of the U MOPS and U NaHCO3 samples together with the U(IV) and U(VI) standards; (b,c) the EXAFS (green lines) and corresponding fits (purple lines) to the U MOPS and U NaHCO3 samples; (d,e) XPS data for the iron 2p 3/2 transition and the uranium 4f 7/2 and 5/2 transitions (black lines) and the fits (coloured lines; blue and purple lines represent Fe(III) and Fe(II), respectively, and the orange and green lines represent U(VI) and U(IV), respectively) to the spectra of the U MOPS and U NaHCO3 samples; in the iron 2p 3/2 plot, the fitted Fe(II) and Fe(III) peaks have been merged for clarity; the U MOPS sample was analysed by XPS at two locations, denoted by -1 and -2.
Table 1. Summary of the fits to the uranium and neptunium LIII-edge EXAFS spectra for U MOPS and U NaHCO3 (Figure 2b,c) and Np MOPS and Np NaHCO3 (Figure 3b,c) (CN: coordination number, R: radial distance, σ^2: Debye-Waller factor, S0^2: amplitude correction factor, MS: multiple scattering); also included is the size of the XANES edge step before normalization (Figures 2a and 3a).
The XPS data and their respective fits are summarized in Figure 2d,e and Tables 2 and S2. The fits to the Fe 2p 3/2 transition are representative of magnetite and fitting, using the methodology by Biesinger et al. [53], resulted in the calculation of the relative amounts of Fe(II) and Fe(III) in magnetite, assuming the area under the fitted peaks is linearly proportional to the respective fraction of Fe. Table 2 shows that the Fe(II) contribution to the 2p 3/2 transition varied from 23.6% to 29.4%, compared to a predicted Fe(II) contribution of 33.3% in stoichiometric magnetite. This shows that the magnetite crystals were (slightly) sub-stoichiometric and suggests partial oxidation of the surface. A variation in the contribution of Fe(II) to the 2p 3/2 transition was also observed for two different spots on the same crystal (U MOPS-1: 23.6% and U MOPS-2: 26.5%). The (polished) crystals were prepared from natural magnetite samples and used as received; hence, we postulate that this variation resulted either from variations in the ratio of octahedrally and tetrahedrally coordinated iron on the surface [61], or from variations in magnetite oxidation to maghemite (i.e., variations in a magnetite-maghemite solid solution) at the surfaces of such a natural sample [29,30].
Table 2. Summary of the fits to the XPS results from the U (Figure 2d,e) and the Np (Figure 3d,e) samples; the values represent the percentages of the fitted areas of the peaks associated with the different redox states of Fe, U, Pu and Np; the U MOPS sample was analysed on two locations, denoted by -1 and -2.
The uranium 4f 7/2 and 5/2 transitions were fitted to obtain more detailed information on the uranium oxidation state (Figure 2a-c and Table 1) [49]. Here, the peaks of the uranium 4f 7/2 and 5/2 transitions were best fitted with two U species only: U(VI) and U(IV). The binding energies of the 4f 7/2 transition for these two species were fitted to be 380.2-380.7 eV and 381.6-382.2 eV (Table S2), with peak separations (ΔE) observed at 1.3-1.8 eV. These binding energies are within the range determined for U(IV) and U(VI) [27,32,51,55,59,62,63]. Additionally, the observed peak separations (1.3-1.8 eV) are larger than expected for a system with U(V) present (the observed ΔE of the U 4f 7/2 transitions for U(V) and U(VI) is 0.8-1.0 eV; ΔE for U(IV) and U(VI) is 1.3-1.5 eV), confirming our interpretation that the dominant species are U(VI) and U(IV) [52]. The observation of U(IV) on the magnetite (111) surface using XPS, compared to the lack of any observable U(IV) contribution to the EXAFS spectra, may be the result of heterogeneity, with the XPS analyses probing a much smaller area than the GI-XAS analyses. There are also often challenges associated with determining the uranium oxidation state using conventional XAS techniques [49]. For example, possible preferred orientation of the uranyl moiety on the magnetite surface combined with a (polarized) synchrotron X-ray beam could result in an overestimation of the U-Oax coordination number based on the EXAFS analyses [41]. The two techniques should be probing roughly the same depth. Singer et al. determined that on stoichiometric magnetite (111) surfaces (Fe(II): 33.3%), U(IV) accounts for 41-42% and 35-36% in the absence and presence of bicarbonate, respectively [27]. This is comparable to the presented results for U MOPS-1 (Fe(II): 26.5%; U(IV): 43.8%) and U NaHCO3 (Fe(II): 29.4%; U(IV): 37.5%) (Table 2).
On the other hand, the location on the magnetite crystal equilibrated with the U MOPS buffer solution with the lowest fitted Fe(II) contribution to the Fe 2p 3/2 transition (Fe(II) = 23.6%) shows a U(IV) contribution to the uranium 4f 7/2 and 5/2 transitions (15.5% U(IV)) significantly lower than that expected for stoichiometric magnetite and observed for U MOPS-1. These trends agree with past observations that lower Fe(II)/Fe(III) ratios at the surface of magnetite [23] and increased HCO3 - concentrations reduce the potential for U(VI) reduction to U(IV) on the (111) surface of magnetite [27]. Interestingly, no U(V) contribution to the U 4f 7/2 and 5/2 transitions could be fitted in these single crystal magnetite experiments, in contrast to nanoparticulate magnetite systems where U(V) was observed [32,34].
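As a small worked example of the stoichiometry argument, the Fe(II)/Fe(III) ratio implied by a fitted Fe(II) fraction of the Fe 2p 3/2 area can be computed directly (a sketch only; stoichiometric magnetite corresponds to Fe(II)/Fe(III) = 0.5, i.e., an Fe(II) fraction of 33.3%).

```python
# Fe(II)/Fe(III) ratio implied by the fitted Fe(II) fraction of the Fe 2p3/2 area,
# assuming peak areas are proportional to the Fe fractions (as stated in the text).
def fe_ratio(fe2_percent):
    f = fe2_percent / 100.0
    return f / (1.0 - f)

# Range of Fe(II) percentages reported in Table 2, plus the stoichiometric value.
for fe2 in (33.3, 29.4, 27.5, 26.9, 26.5, 23.6):
    print(f"Fe(II) = {fe2:4.1f}%  ->  Fe(II)/Fe(III) = {fe_ratio(fe2):.2f}")
```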
The iron and uranium redox couples were further examined using PHREEQC modelling in the presence of the MOPS and HCO3 - buffers and are summarized in Figure S1. This shows that U(VI) reduction to U(IV) as uraninite (crystalline UO2) coupled to iron oxidation is thermodynamically favourable (Figure S1). However, in natural systems the reduction of U(VI) to U(IV) generally favours the formation of nanocrystalline uraninite with a reported stability similar to amorphous UO2 phases [27,35]. PHREEQC modelling suggests that the reduction of U(VI) to amorphous UO2 coupled with Fe(II) oxidation to Fe(III) is thermodynamically unfavourable (Figure S1). This could explain the limited amount of uranium reduction on the magnetite surface (Figure 2a-c). Finally, the XPS data (Figure 2d,e) highlight the influence of the surface Fe(II) in the magnetite crystal on the extent of U(VI) reduction, consistent with the observations of Latta et al. [23].
Neptunium
Figure 3 summarizes the Np LIII-edge GI-XAS data (a-c), the Fe 2p, Np and Pu 4f XPS data (d,e) and the UV-Vis spectroscopy data (f) from the Np MOPS and NaHCO3 experiments. We also note the presence of Pu in the XPS data resulting from an impurity in the Np stock. This nanomolar Pu concentration gave an XPS signal which we were able to fit (Figure 3e).
Figure 3. (a) Np LIII-edge XANES spectra of the Np MOPS and Np NaHCO3 samples and the Np(IV) standard; (b,c) the EXAFS (green lines) and corresponding fits (purple lines) to the Np MOPS and Np NaHCO3 samples; (d,e) XPS data for the iron 2p 3/2 transition and the neptunium (and plutonium) 4f 7/2 and 5/2 transitions (black lines) and fits to the spectra (coloured lines: blue and purple lines represent Fe(III) and Fe(II), respectively, the purple and orange lines represent Pu(IV) and Pu(III), respectively, and the olive lines represent Np(IV)) for the Np MOPS and Np NaHCO3 samples; in the iron 2p 3/2 plot (d), the fitted Fe(II) and Fe(III) peaks have been merged for clarity; and (f) the UV-Vis spectra for a pH 4 NpO2 + (Np(V)) solution (prepared by diluting the stock solution in DI water only), two solutions with 10 ppm neptunium and 10 mM MOPS (Np MOPS no magnetite) or 5 mM NaHCO3 (Np NaHCO3 no magnetite), and the aqueous phase after 14 days equilibration with magnetite (Np MOPS and Np NaHCO3); the vertical grey lines represent the absorption bands observed in the spectra.
The XANES spectra from the magnetite crystals equilibrated with the Np MOPS and NaHCO3 buffers had similar edge steps (0.013 and 0.018, respectively, Table 1) suggesting that the magnitude of neptunium interaction with the magnetite (111) surfaces was similar. This confirms the aqueous geochemistry data where removal was approximately 28% from both buffer solutions (Figure 1). Additionally, the shape of the XANES from these two samples was comparable and very similar to the Np(IV) standard (Figure 3a) indicating that the oxidation state of neptunium on the magnetite (111) surface was dominated by Np(IV). Fitting the Np LIII-edge EXAFS spectra from the magnetite crystals equilibrated with the Np MOPS and Np NaHCO3 buffer solutions was possible with a single shell of 8 oxygen backscatterers at 2.32-2.34 Å (Figure 3b and c and Table 1). No statistical validity could be achieved by adding Np(V)-Oax scattering paths at 1.8-1.9 Å confirming no significant dioxygenyl Np(V) (NpO2 + ) was present on the magnetite crystals. Additionally, due to the data quality of the EXAFS spectra, no further backscatterers (Np or Fe) could be fit to the spectra and thus it was not possible to distinguish between nanoparticulate NpO2 or surface bound Np 4+ [16,35,38,65].
The fits to the neptunium 4f 7/2 and 5/2 transitions of the XPS spectra from both magnetite crystals (Figure 3e) yielded binding energies (Table S2) that, together with the presence of satellite peaks for the neptunium 4f transitions, match previously determined binding energies for Np(IV) (4f 7/2: 402.8-403.8 eV and 4f 5/2: 414.3-414.6 eV) [54,56,[66][67][68] and further confirm Np(IV) as the dominant redox state of neptunium, as observed from the GI-XAS analyses. Furthermore, the fits to the Fe 2p 3/2 transition are representative of magnetite, and fitting using the methodology by Biesinger et al. [53] resulted in relative amounts of Fe(II) and Fe(III) in magnetite, assuming the area under the fitted peaks is linearly proportional to the respective fraction of Fe. Table 2 shows Fe(II) contributions to the 2p 3/2 transition of 27.5% and 26.9% for the magnetite crystals equilibrated with the Np MOPS and Np NaHCO3 buffer solutions, respectively, compared to a predicted Fe(II) contribution of 33.3% in stoichiometric magnetite. This suggests (slightly) sub-stoichiometric magnetite, similar to the U experiments.
The results from the UV-Vis spectroscopy on the aqueous phase before and after equilibration with magnetite are plotted in Figure 3f and include the spectra of a Np(V) standard solution at pH 4 (0.1 mM HCl), the Np MOPS and Np NaHCO3 buffer solutions equilibrated with the magnetite single crystals, and parallel Np MOPS and Np NaHCO3 controls which were not equilibrated with the magnetite single crystals. The UV-Vis spectrum of the pH 4 Np(V) solution exhibits a single, intense absorption band at 981 nm, consistent with NpO2 + [69] and confirming the oxidation state of Np(V) in the stock solution. The UV-Vis spectra of both of the Np MOPS buffer solutions (with and without magnetite) also show this single absorption band at 981 nm, confirming that NpO2 + dominated in solution even after reaction with the magnetite crystal. The spectrum of the Np NaHCO3 buffer not equilibrated with magnetite shows a dominant absorption band at 991.5 nm and a shoulder at 981 nm (Figure 3f), indicative of both NpO2CO3 - and, to a lesser extent, NpO2 + [69]. Interestingly, the UV-Vis spectrum for the Np NaHCO3 buffer equilibrated with magnetite does not exhibit the NpO2 + absorption band at 981 nm; rather, an absorption band is observed at 1018 nm in addition to the NpO2CO3 - absorption band at 991.5 nm (Figure 3f). To the best of our knowledge, an absorption band at 1018 nm for neptunium has not been observed to date. The absorption band at 1018 nm is absent in the UV-Vis spectra for the solutions not reacted with magnetite (Figure 3f). This suggests that this absorption band could be a result of NpO2 + complexing with both CO3 2- and Fe(II) or Fe(III). The identification of such a potential ternary complex warrants further investigation, but it appears to be similar to the ternary complexes of hexavalent uranyl with carbonate and alkaline earth metals [70] and the postulated ternary complex of pentavalent uranyl with carbonate and iron [71].
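Once exported, the positions of such absorption bands can be located programmatically; the sketch below uses synthetic Gaussian bands near 981, 991.5 and 1018 nm as a stand-in for a real spectrum and scipy's peak finder to report them.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for a Np UV-Vis spectrum with bands near 981, 991.5 and 1018 nm.
wl = np.linspace(950.0, 1050.0, 2001)

def band(centre, height, width=2.0):
    return height * np.exp(-(wl - centre) ** 2 / (2 * width ** 2))

absorbance = band(981, 0.02) + band(991.5, 0.08) + band(1018, 0.05) + 0.002

peaks, props = find_peaks(absorbance, prominence=0.01)
for idx in peaks:
    print(f"band at {wl[idx]:.1f} nm, absorbance {absorbance[idx]:.3f}")
```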
Interestingly, the differences in the aqueous speciation of neptunyl observed in the UV-vis spectra did not affect the removal of Np from solution during the interaction with the magnetite crystals ( Figure 1). Finally, the UV-Vis spectra indicate that Np in solution is dominated by Np(V), and the GI-XAS and XPS data confirm that Np on the magnetite (111) surface after two weeks is dominated by Np(IV). Combined with the dominance of Np(V) species in solution in both buffer solutions, this indicates that reduction of Np(V) on magnetite appears surface mediated [16,28], regardless of the presence or absence of aqueous bicarbonate.
Plutonium
The XPS spectra of the Np MOPS and NaHCO3 samples also show clear peaks representing the Pu 4f 7/2 and 5/2 transitions at similar intensities to the Np 4f 7/2 and 5/2 transitions (Figure 3e). Taking into account that the experimental solutions contained on the order of 0.1% Pu by mass (compared to Np), this indicates that the interaction of Pu with magnetite is significantly stronger than that of Np(V). Furthermore, the plutonium 4f 7/2 and 5/2 transitions could be fit with two species (Table 2). The observation of Pu(III) and Pu(IV) on the surface of the magnetite crystals is compatible with previous observations of either Pu(III) or Pu(IV) on the surface of magnetite [13,17], while more focused experimental investigations are needed to determine the influence of HCO3 - on the surface reactivity and speciation of Pu(IV) and Pu(III), including their respective aqueous speciation (Table S1).
Conclusions
Here we show that uranium (as UO2 2+ complexes) interacts with magnetite surfaces through surface complexation of uranyl and partial reduction to U(IV). The surface complexation of UO2 2+ is highly dependent on the presence of bicarbonate in solution through the formation of negatively charged aqueous uranyl carbonate complexes. By complementing GI-XAS with XPS analyses, we show that uranium speciation on the magnetite crystals is dominated by U(VI) surface complexes (GI-XAS) and that the extent of uranium reduction is variable on a single magnetite crystal and dependent on the stoichiometry of magnetite (XPS). Also, there was no evidence for pentavalent uranium in these reactions with a (111) crystal face of magnetite, in contrast to interactions with nanoparticulate magnetite or during the formation of nanoparticulate magnetite. For neptunium, combining GI-XAS, XPS and UV-Vis spectroscopy, we determined that the interaction of neptunium (as NpO2 + ) is independent of the presence of carbonate in solution and the resulting aqueous complexes, with 28% of Np removed from solution in both the presence and absence of carbonate, and that neptunium exhibited a dominant oxidation state of Np(IV) on the magnetite crystals. Even though the extent of interaction between Np(V) and the magnetite (111) surface did not vary depending on the presence of bicarbonate, the aqueous Np speciation (as detected by UV-Vis spectroscopy) was significantly different, with 100% of the Np(V) present as NpO2 + in the absence of carbonate. The aqueous speciation in the presence of carbonate was dominated by NpO2CO3 -, while after the interaction with magnetite we suggest that a previously unidentified aqueous species appeared, as identified by the appearance of an absorption band at 1018 nm in the UV-Vis spectra. Based on the aqueous chemistry of the experiments, we propose that this is likely a ternary Np(V) complex with (bi)carbonate and iron. Finally, XPS analyses showed that Pu, from impurities in the Np stock, was strongly associated with the magnetite crystals and displayed both Pu(III) and Pu(IV) oxidation states. Overall, the presented results highlight the differences between uranium, neptunium and plutonium interaction with magnetite, and reaffirm the importance of bicarbonate present in the aqueous phase.
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: stability fields for the dominant redox species for Fe, U, Np and Pu based on geochemical speciation modeling using PHREEQC, Table S1: dominant aqueous speciation of U, Np and Pu based on the geochemical speciation modelling using PHREEQC, Table S2: summary of the binding energies (XPS) of the main peaks for the actinide 4f transitions.
|
v3-fos-license
|
2022-10-22T15:18:04.931Z
|
2022-10-20T00:00:00.000
|
253051514
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1057/s41267-022-00562-2.pdf",
"pdf_hash": "92209ed292a6df447a353d8196636c5c778e3fea",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:635",
"s2fieldsofstudy": [],
"sha1": "068e7f38ed479a92056e321056dac16f47ccc4be",
"year": 2022
}
|
pes2o/s2orc
|
Equivalence in international business research: A three-step approach
A primary research area within the field of international business (IB) is to establish the extent to which concepts, theories, and findings identified in one country are applicable to other contexts and which are unique and cannot be found in other contexts. Researchers in IB acknowledge the importance of the context in their studies, but the practice of assessing equivalence (or invariance) is not widely diffused within the community. We first discuss the components of equivalence (construct, method, and item equivalence), and we offer a three-step approach to address equivalence in the writing and revision of a paper. We aim to help editors, reviewers, and researchers produce more reliable research and navigate the tension between generalizable relationships and context-specific ones, both theoretically and empirically, before performing analysis and hypothesis testing. We then apply equivalence to the construct of firm economic performance as a case study, but the same logic can be applied to other constructs as well.
INTRODUCTION
A primary research area within the field of international business (IB) has been to establish the extent to which concepts, theories, and findings identified in one country are applicable to other contexts (Cuervo-Cazurra, 2012; Sekaran, 1983) or what is inherently unique in a context that cannot be found in other contexts (Teagarden, Von Glinow & Mellahi, 2018). Central to the search for standard or generic relationships or uniquely and fully contextualized relationships is equivalence. Neither fully contextualized nor generalizable relationships between constructs can be achieved unless researchers can clearly distinguish differences across research contexts and show that a construct is context-specific or context-invariant.
Researchers are aware that the measures and constructs used in IB research need to be made comparable across countries to make claims about the study findings. Despite this acknowledgement, the practice of assessing measurement equivalence (or invariance) is not widely diffused within the community (Hult, Ketchen, Griffith, Finnegan, Gonzalez-Padron, Harmancioglu, & Cavusgil, 2008). For example, we found that only about 15% of studies with a multi-country sample in JIBS in 2021 and 2020 assessed the equivalence of the constructs and the instruments. In addition, far fewer authors have assessed whether the methodology of the data collection was comparable across the samples. Therefore, although researchers know that the problem exists, the practice of assessing equivalence across countries has not spread broadly.
Failing to assess equivalence is the source of attenuated estimators, which reduce the power of statistical tests of hypotheses and provide misleading results (Davis, Douglas, & Silk, 1981;van de Vijver & Leung, 1997). This has contributed to a plethora of mixed findings in IB research. Frequently, when there are contrasting findings in the literature that do not converge over time, there is a methodological problem (Boyd, Gove, & Hitt, 2005;Ferguson & Ketchen, 1999;Short, Ketchen, & Palmer, 2002).
On the one hand, the increasing availability of large international archival databases of firms has increased the possibilities for academics interested in cross-country comparisons. Prominent examples are Compustat Global, ORBIS, fDi Intelligence, and IBES, as well as large multi-country surveys such as the International Social Survey Program (ISSP), European Values Study (EVS), World Values Survey (WVS), European Union Statistics, and OECD Databank. While the proliferation of these international sources of data makes multi-and cross-country comparisons easier (Boyd, Gove, & Solarino, 2017), it increases the risk that authors will not ascertain the equivalence of these data. The emergence of big data further complicates the picture, as big data are generally collected in automatic ways, possibly under different conditions, making the assessment of equivalence even more important.
With this commentary, we aim to suggest ways in which editors, reviewers, and researchers can produce more reliable research and navigate the tension between generalizable relationships and context-specific ones, both theoretically and empirically, before performing analysis and hypothesis testing. Therefore, we first discuss the components of equivalence (construct, method, and item equivalence), offering a list of key questions that should be addressed in the writing and revision of a paper.
We then apply equivalence to the construct of firm economic performance because it is one of the most common constructs used in IB, but the same logic can be applied to other constructs and measures, such as R&D intensity or entry modes.
EQUIVALENCE: KEY CONCEPTS
Before turning to a more formal definition of equivalence, we highlight the importance of the comparability of constructs and measures as well as the pitfalls involved when comparability is absent. When a researcher samples data from two or more countries, there is a chance that the results of the analysis might mistakenly make the researcher conclude that an effect is stronger in certain countries than in others. Because of the lack of equivalence in the constructs and measures, subsequent studies that try to replicate the findings are likely to arrive at diverging conclusions, giving rise to mixed findings. Overall, the lack of equivalence threatens the generalizability of the findings and our ability to develop cumulative knowledge.
Equivalence can be absent for a wide variety of reasons. Van de Vijver (1998) proposed a conceptual scheme that organizes the sources of nonequivalence into three categories: construct, method, and item bias. Construct bias is the most fundamental source of non-equivalence and recognizes that a construct itself might have different meanings across countries, posing a fundamental threat to international business research. A lack of theoretical validity results in comparing apples and oranges. Certain concepts have meanings that are strongly nation-or culture-specific. Such concepts are called emic concepts, in contrast to universal or etic concepts (Triandis, 1972). For instance, we know that there are several varieties of capitalism (Hall & Soskice, 2001;Witt & Redding, 2014). Differences in the capitalism system result in managers giving different meanings to firm performance and being satisfied with different performance outcomes (Makino & Yiu, 2014). Comparing firm performance between US and China (e.g., Chan, Makino, & Isobe, 2010) might not be informative, as the meaning of performance to managers in the two countries is different.
Method and item bias relate to measurement validity and can be assessed quantitatively (statistically; Meredith & Teresi, 2006;Riordan & Vandenberg, 1994). Method bias results from three sources of bias that can emerge from the sample, the administration, or the instrument used to collect the data. Samples might be different because they are taken from different populations, or because the subgroups within each population are not equally represented. Differences in sampling procedures could result in differences in the underlying population. These differences then become erroneously interpreted as substantive differences across countries. Different procedures in survey administration or in the instrument across countries can also affect the quality of the data collected and the results of the analysis. In psychology research, there is consensus that the means of data collection can affect the distribution of the responses and non-responses, as well as the response style (Billiet, Koch, & Philippens, 2007;Harzing, 2006). In the context of IB, the dimension of the sampling procedures across populations is particularly relevant (Häder and Gabler, 2003;Heeringa & O'Muircheartaigh, 2010).
Item bias refers to anomalies at the item level. Item performances depend on a person's status on a target dimension and the errors that exist around such a dimension. Errors can arise from poor translations of the items (Harkness, Villar, & Edwards, 2010) across groups, or because the item carries different meanings across subgroups. For instance, Makino and Yiu (2014) revealed that not only the means but also the kurtosis of ROA varies systematically across Asian countries. Consequently, unless these differences were appropriately considered, this item would result in biased regression coefficients.
MEASUREMENT EQUIVALENCE: IMPLEMENTATION
To establish equivalence in cross- and multi-country studies, researchers should follow a three-step approach. Table 1 displays the process and shows the key questions journal editors and reviewers should ask, what authors should do to prove equivalence to journal editors, and what actions authors can take if equivalence cannot be confirmed.
Step 1: Construct Equivalence
To obtain construct equivalence and avoid construct bias, it is crucial to draw thorough insights into cross-country similarities and differences in various phenomena. Concepts or constructs used in management research may be conceptually similar but do not perform the same function in all countries. The first question journal editors and reviewers should ask authors is whether the study adopted an emic or etic standpoint or where it lies on the emic-etic continuum. Were the authors trying to apply a foreign or universal construct to a local phenomenon, or were they aiming to identify or assess how a local construct is generalizable across multiple countries? An example is stock options. Stock options are tools of corporate governance aimed at aligning the interests of senior managers and shareholders and motivating managers to maximize shareholders' value. Most of the applications of stock options in cross-country studies have shown that they worked well in Anglo-Saxon countries but failed to deliver the same results in Europe (Zattoni, 2007). In China, stock options awarded to managers resulted in increased tunneling rather than minimizing the agency problem (Jiang, Kling, Bo, & Driver, 2017). The application of stock options as an etic construct with the same meaning that this governance tool has in Anglo-Saxon countries is likely to be the source of diverging findings. Applying a construct without considering the context results in an incomplete assessment.
Second, journal editors and reviewers should challenge authors to clarify the relationship between theory and context and contextualize theories and variables (theories in context) or theorize about context (theories of context; Whetten, 2009). Building on the example of stock options, different findings might be rooted in the structure of the firms awarding stock options: whether or not the firm has a controlling shareholder, and the nature of that controlling shareholder (state, family, etc.). In some countries and in the presence of a dominant shareholder, stock options are often used to reward loyalty rather than incentivize performance (Melis, Carta, & Gaia, 2012). In some circumstances, certain constructs can have partial equivalence, that is, similar but not identical meanings. Therefore, journal editors and reviewers should ask authors to clarify the extent of the construct equivalence.
To avoid construct bias, authors should obtain insight into cross-country similarities and differences in various phenomena. They could use expert panels and focus groups to identify cross-national differences (Harding, 2013;Johnson, 1998). An understanding of the legal and societal structures of the countries can help direct the areas of inquiry and find similar constructs. When authors cannot empirically prove that the data are comparable across countries, journal editors should ask them to investigate why this is the case. Exploring why variables across countries carry different meanings could be an important contribution and will help build cumulative knowledge. Journal editors should encourage authors to assess which institutional-level antecedents can be used as predictors of such differences (Cheung & Au, 2005).
Step 2: Method Equivalence
To minimize the risk of method bias and improve methodological equivalence, journal editors and reviewers should ask authors to define as clearly as possible the precise unit(s) of comparison across which they make final contrasts, and then demonstrate that, first, the samples and subsamples from the populations are similar enough to be compared.
Sample populations with different compositions might affect the results. For example, in a cross-country comparison of the role of women in society that includes countries with limited women's rights, having an equal representation of men and women might result in limited data reliability (Usunier, 1998). Some countries are deeply multicultural. For example, India is made up of highly diversified ethnic, religious, and linguistic groups. In contrast, others are explicitly multicultural, such as Switzerland, which emphasizes the defense of local particularisms also in politics and economics. Then there are cases like African countries where for historical reasons the ''ethnic'' dimension matters more than the ''national'' one.
Table 1. Steps and key questions for assessing measurement equivalence (the remedial actions listed in the table when equivalence cannot be established include looking for alternative measures, conducting separate analyses for each group, and suggesting how to refine the construct).
Editors and reviewers should always ask authors to clarify what they mean by ''representative sample''. Researchers often use random samples of listed multinationals (MNEs) in IB research. However, the differences in the composition of the populations of firms listed in different indexes across the world might have affected the results of the analysis. The population of MNEs on stock exchanges varies greatly across countries (e.g., smaller firms in Italy vs. larger firms in Germany). To avoid method bias, researchers must not only compare sampling procedures (similar sample sizes, stratified samples, etc.) but also understand the distribution of firm characteristics in the population to ensure that they are comparing apples to apples. Mandler, Bartsch, and Han (2021) paid particular attention to balancing the respondents to their survey in terms of sociodemographics to ensure that the subsamples captured the same population across countries, and that the differences were not due to one country having more respondents from a sociodemographic group compared to the other.
Sample equivalence is related not only to the subgroup population within a country but also to large imbalances across group sizes. Groups with larger sample sizes have more weight, masking the lack of equivalence between the groups. The imbalance affects the results of regressions and factorial invariance studies and manifests when one group is twice as large as another (Yoon & Lai, 2018).
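A simple diagnostic before any invariance testing is to inspect the group sizes and, if the largest group is more than about twice the smallest, to consider random subsampling; the pandas sketch below (hypothetical data frame and column names) illustrates one way to do this.

```python
import pandas as pd

def check_balance(df, group_col="country", max_ratio=2.0, seed=0):
    """Report group sizes and return a randomly downsampled, balanced copy
    if the largest group exceeds `max_ratio` times the smallest group."""
    sizes = df[group_col].value_counts()
    print(sizes)
    if sizes.max() / sizes.min() <= max_ratio:
        return df
    n = sizes.min()
    return (df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n=n, random_state=seed)))

# Hypothetical usage with a firm-level panel:
# balanced = check_balance(firm_data, group_col="country")
```

Downsampling trades statistical power for comparability, so it is best treated as a robustness check alongside the full-sample analysis rather than a replacement for it.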
Moreover, journal editors and reviewers should ask authors to demonstrate that in surveys and interviews, the administration conditions were comparable across informants. For example, the conditions in which a survey is completed can influence the responses (Edwards, 2008). Extensive literature covers survey and interview administration and describes how to assure honest and comparable responses (Aguinis, Villamor, & Ramani, 2021;Solarino & Aguinis, 2021). Finally, in survey research, authors should supply evidence that the survey was not affected by response style, item wording, reverse items, or issues with common method bias (Podsakoff, MacKenzie, & Podsakoff, 2012).
Step 3: Item Equivalence
Finally, assuming that construct and method biases are absent, journal editors and reviewers should ask authors to demonstrate that item bias is absent. Item bias should be addressed statistically. Table 2 presents a synthetic overview of the four methodologies discussed later in the commentary. Journal editors should recommend one of the following options when authors have not used one, depending on the nature of the study and the number of countries involved. We offer a guided application of the four methodologies in the next section. We briefly introduce them and describe their strengths and weaknesses.
The first approach is multi-group confirmatory factor analysis (MGCFA). MGCFA assesses whether a construct has the same underlying meaning across groups. This methodology is suitable for testing item equivalence across a limited number of groups (two to three, four at the most), as convergence can be an issue, especially for a comparison with more than two groups (Asparouhov & Muthén, 2014). Furthermore, authors should refrain from testing the equivalence using pairwise comparison between groups because it will increase type I errors. In some circumstances, when authors could not even establish partial equivalence, they analyzed the data separately across groups (Hirst, Budhwar, Cooper, West, Long, Chongyuan, & Shipton, 2008). However, this approach is appropriate only if the purpose of the study does not involve comparing groups (Somaraju, Nye, & Olenick, 2021), and the conclusions are specific for each group and not suitable for making comparisons among them.
The second approach relies on item response theory (IRT), which offers a suitable approach for examining the extent of the differential functioning of each item. Researchers should compare not the mean but the standard deviation of the performance measures across countries (Jebb, Morrison, Tay, & Diener, 2020). This method is particularly suitable for single-item indicators. If the theory supports the use of a specific performance measure, and its standard deviation is similar across countries, then the researcher can use that specific performance measure to perform the cross-country comparison. The difference in the means can then be explained using the predictor variable. This approach has the benefit of having no limitations on the number of groups that can be tested concurrently. However, it can test only one indicator at a time.
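In practice, the dispersion comparison this approach calls for can be set up with a simple group-by; the sketch below uses synthetic ROA values (not the study's data) to compare standard deviations and their ratio to the smallest group value.

```python
import numpy as np
import pandas as pd

# Synthetic firm-year panel; the comparison is of standard deviations, not means.
rng = np.random.default_rng(1)
panel = pd.DataFrame({
    "country": np.repeat(["China", "Hong Kong", "Singapore"], 200),
    "roa": np.concatenate([
        rng.normal(0.05, 0.08, 200),   # China
        rng.normal(0.04, 0.10, 200),   # Hong Kong
        rng.normal(0.05, 0.07, 200),   # Singapore
    ]),
})

sd = panel.groupby("country")["roa"].std().sort_values()
print(pd.DataFrame({"sd": sd, "ratio_to_min": sd / sd.min()}))
```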
Finally, we propose that researchers can adopt two network approaches to test for item equivalence. The first is comparable to a nomological network, where the behavior of the correlations between performance measures should be similar enough to allow for a comparison. The nomological network has been criticized for not being able to provide a practical and usable methodology for actually assessing construct equivalence and validity. We propose using exponential random graph (p*) models as a practical solution. This approach allows for a large number of indicators in the analysis. However, this is also a weakness, as it requires a large number of indicators to assess, build, and simulate the possible correlation networks (at least 20). If the networks created do not support similarity across countries, then researchers should try the second network approach we suggest.
This second network approach consists of using cluster analysis to identify those performance indicators that behave more similarly and thus belong to the same cluster. The benefit of this approach is that it can identify solutions that other methods cannot. The solution cannot be explained theoretically, only empirically. As with MGCFA, these methodologies are not suitable for comparing many groups. However, they can accommodate a larger number of groups than MGCFA. In the next section, we apply construct, method, and item bias to the case of firm performance.
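Before turning to the worked example, the following sketch illustrates the clustering idea just described: performance indicators are clustered from their correlation matrix (synthetic data; in an actual application the analysis would be run within each country and the cluster memberships compared across countries).

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
# Synthetic indicator panel: two accounting-like and two market-like measures.
base_acc = rng.normal(size=300)
base_mkt = rng.normal(size=300)
indicators = pd.DataFrame({
    "roa": base_acc + 0.3 * rng.normal(size=300),
    "roe": base_acc + 0.3 * rng.normal(size=300),
    "tsr": base_mkt + 0.3 * rng.normal(size=300),
    "tobins_q": base_mkt + 0.3 * rng.normal(size=300),
})

# Distance between indicators: 1 - |correlation|, then average-linkage clustering.
dist = 1.0 - indicators.corr().abs()
condensed = squareform(dist.values, checks=False)
clusters = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(dict(zip(indicators.columns, clusters)))  # indicators in the same cluster behave alike
```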
A Practical Example with Organizational Performance
As a practical example, we compared performance measures across all listed firms in Mainland China (hereafter China), Hong Kong-SAR (hereafter Hong Kong), and Singapore from 2009 to 2018. The sample was composed of 2451 firms and 26,145 firm-year observations. We chose this sample because previous studies have often compared Hong Kong and Singaporean firms with Chinese ones (Carlsson, Nordegren, & Sjöholm, 2005;Eng & Spickett-Jones, 2009;Huang, Kerstein, & Wang, 2018;Song, Zeng, & Zhou, 2021). Moreover, studies using Asian samples have become more numerous over the last decade (Bai, Du, & Solarino, 2018;Boyd & Solarino, 2016).
We chose performance as a case study because it is one of the most commonly assessed variables in business research (Boyd & Solarino, 2016;Combs, Crook, & Shook, 2005). However, the discussion below can be applied to any variable under investigation, whether innovation, job satisfaction, leadership, or something else. We collected data for the most commonly used performance measures, adapting the list from Combs et al. (2005). Table 3 reports the list of performance indicators. Table 4 reports the descriptive statistics of the performance measures by country.
Step 1: Construct equivalence
Previous studies have found that performance is a multidimensional construct (Hamann, Schiemann, Bellora, & Guenther, 2013;Rowe & Morrow, 1999;Tosi et al., 2000). Those studies were single-country studies. Not much has been discussed in the literature about how, and to what extent, the construct of organizational performance is valid and reliable across countries. There are good reasons to expect that organizational performance might not be fully equivalent across countries.
There are several ways for scholars to measure performance. Accounting measures are defined as the historical performance of organizations and are assessed through the use of accounting data (Fryxell & Barton, 1990). When observed longitudinally, they can be interpreted as growth measures. Growth measures, while logically distinct from accounting measures (Hamann et al., 2013), are often based on the latter. Therefore, they suffer from similar issues, and we treat them jointly.
Each country has its own accounting ''standards'' or generally accepted accounting principles (GAAP). For instance, some countries ask to account for specific balance sheet items in different ways (e.g., inventory and asset depreciation; Ball, Robin, & Sadka, 2008). The adoption of the International Accounting Standards has not improved the situation, with significant differences remaining across countries (Barth, Landsman, & Lang, 2008;De George, Li, & Shivakumar, 2016). Furthermore, differences arise among managers' use of earnings management techniques (Han, Kang, Salter, & Yoo, 2010) to smooth or accentuate profits in a given year, use which is susceptible to the cultural values that dominate the country. For instance, individualism is positively related to the magnitude of earnings discretion, while uncertainty avoidance is negatively related. Other studies have found that the quality of information reporting in financial statements is not consistent across countries because of differences in how institutions monitor and enforce adherence to reporting standards. In countries where regulatory enforcement is weaker, and penalties for false reporting are minimal (Holthausen, 2009), accounts are more easily manipulated. If the cultural dimension is not considered, accounting measures are only partially equivalent.
Market measures are computed using capital market indicators, such as total shareholder return (TSR) or stock price changes. Stock market performance reflects future opportunities and cash flows, in contrast with accounting returns, which entail a historical perspective. Market measures also differ across countries. Market liquidity, market size, market regulations, and transparency vary across stock markets. Market liquidity and size affect the valuation of stock prices, as shares listed in smaller and less liquid markets have lower valuations (Bleck & Liu, 2007). Moreover, market regulations affect the content of reported financial information (Alford, Jones, Leftwich, & Zmijewski, 1993). Market size, liquidity, and transparency of information are necessary for the correct functioning of the market. In a transparent market, shareholders are able to distinguish ''good'' from ''bad'' projects and thus achieve the first-best outcome by liquidating poor projects. Overall, as markets are highly sensitive to local dynamics, especially smaller ones, their indicators are not fully comparable. Finally, hybrid indicators, such as price-earnings (PE), Tobin's Q, and the market-to-book ratio, present the problems of both accounting and market measures of performance. Overall, we should not expect full equivalence of performance indicators across countries.
Step 2: Method equivalence The second step consisted of demonstrating that the methodology for collecting the data is the same across countries. This implies that the samples are not particular or skewed toward a particular dimension but are representative of the population. In our data collection, we did not impose any boundary conditions on the sample (industry, number of employees, etc.). We collected data from all listed firms in the Hong Kong, Singapore, and Shanghai stock exchanges' main listings. In our case, we did not collect a representative sample but population data. The Chinese and Hong Kong samples represent 45% and 38% of the overall observations, respectively, while the Singaporean sample represents the remaining 17%. Given these unbalanced sample sizes, the largest samples in the dataset will dominate the ''regression outputs,'' obscuring the contribution of the smaller sample, and the different composition of the samples due to the lack of boundary conditions will further bias the analysis and make the results harder to replicate.
Step 3: Item equivalence Finally, item bias can be mitigated by assessing which measures behave similarly across countries, as we show in the examples that follow. Measurement equivalence is a property of the instrument used to measure the desired variable and implies that the same concept is measured in the same way across subgroups. In our case, it is the performance of firms across countries. Put differently, this occurs when firms with the same standing on the latent trait (performance) but sampled from different groups (countries) have equal expected observed scores on the assessment (Drasgow, 1987;Mellenbergh, 1989).
MGCFA
The first method we used to test measurement equivalence was MGCFA (Joreskog, 1971; Steenkamp & Baumgartner, 1998). We followed Vandenberg and Lance's (2000) multi-step approach to assess measurement equivalence (or invariance) across groups. The steps include testing for (1) configurational equivalence, (2) metric equivalence, (3) scalar equivalence, and (4) measurement equivalence (or uniqueness equivalence). The first model tested for configurational equivalence with all the performance indicators loading on a single factor. The model did not converge. We then tested the model using three latent factors: one for accounting measures, one for market measures, and one for growth measures. Similarly, this model did not converge. As the final step, we followed Cheung and Lau's (2012) recommendation and tested a subset of the items. Table 5 reports the different models tested for configurational and metric equivalence. Despite several iterations, we were unable to determine which performance indicators were equivalent across China, Hong Kong, and Singapore.
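The MGCFA models were fitted with standard SEM software; purely as an illustration of the configurational step, the sketch below fits the same one-factor measurement model separately in each country with the Python package semopy and collects the fit indices. The indicator names, the data frame df, and the country column are hypothetical, and the published analysis may well have used different tools.

```python
# Sketch of a configurational-equivalence check: fit the same one-factor
# measurement model in every country and compare the resulting fit indices.
import pandas as pd
import semopy

# Hypothetical indicator names for a latent "performance" factor.
MODEL_DESC = "performance =~ roa + roe + ros + tsr"

def configural_fit_by_country(df: pd.DataFrame) -> pd.DataFrame:
    """Fit the same CFA in each country; if it converges with acceptable
    fit in all groups, configurational equivalence is plausible."""
    rows = []
    for country, group in df.groupby("country"):
        model = semopy.Model(MODEL_DESC)
        model.fit(group)                       # ML estimation on that country's firms
        stats = semopy.calc_stats(model)       # fit indices such as CFI, RMSEA, chi2
        rows.append(stats.assign(country=country))
    return pd.concat(rows)

# fit_table = configural_fit_by_country(df)
# print(fit_table[["country", "CFI", "RMSEA", "chi2"]])
```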
Equality of variance. The second approach tested for the equality (homogeneity) of the performance indicators' variance across countries. We applied Levene's test, an alternative to Bartlett's test that is less sensitive to skewness of the data (departures from normality). We also included the variation on the test suggested by Brown and Forsythe (1974). The results are presented in Table 6. Only a few performance measures had comparable variances across China, Hong Kong, and Singapore: net income growth, turnover growth, EPS (partially), and EPS growth. The most common performance indicators (e.g., ROA, ROE, Tobin's Q) did not have equal variance across countries. Therefore, only a few measures are equivalent.
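Because Levene's and Brown-Forsythe's tests are straightforward to reproduce, a minimal SciPy sketch follows; it assumes a pandas DataFrame df with a country column and hypothetical indicator columns, and uses the fact that the Brown-Forsythe variant is Levene's test computed around the group medians.

```python
# Minimal sketch: test homogeneity of variance of each performance
# indicator across countries with Levene's and Brown-Forsythe's tests.
import pandas as pd
from scipy import stats

INDICATORS = ["roa", "roe", "tobins_q", "eps_growth"]  # hypothetical column names

def variance_equivalence(df: pd.DataFrame) -> pd.DataFrame:
    results = []
    for ind in INDICATORS:
        # one array of values per country, missing values dropped
        groups = [g[ind].dropna().to_numpy() for _, g in df.groupby("country")]
        _, levene_p = stats.levene(*groups, center="mean")      # classic Levene
        _, bf_p = stats.levene(*groups, center="median")        # Brown-Forsythe
        results.append({"indicator": ind,
                        "levene_p": levene_p,
                        "brown_forsythe_p": bf_p})
    # p > 0.05 on both tests -> no evidence of unequal variances, i.e.,
    # the indicator passes this (partial) equivalence check
    return pd.DataFrame(results)
```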
A network perspective
The last methods we suggest are rooted in network analysis. The first method consists of building a network across the performance indicators for each country and then testing their similarity using an exponential random graph (p*) approach. Faust and Skvoretz (2002) developed a statistical approach for comparing networks independently of the underlying structure, node characteristics, or other characteristics. This approach is based on a set of parameter estimates used to predict tie probabilities, whose statistics are then compared and contrasted across n networks. This approach allowed us to compare structural network effects, escaping the assumption of dyadic independence commonly adopted in network analysis. The p* methodology can assess the statistical likelihood of specific network configurations and explicitly models nonindependence among dyads by including parameters for structural features that capture hypothesized dependencies among ties, making it a tool for addressing divergence among networks. In the p* framework, the probability of a digraph G is expressed as a log-linear function of a vector of parameters θ,¹ an associated vector of digraph statistics x(G), and a normalizing constant Z(θ): P*(G) = exp{θ′x(G)}/Z(θ). P* models have several benefits compared to traditional network analysis. They allow for the comparison of networks starting from their parameters (and do not assume that observations are independent), and they accommodate attributes and structural estimates as predictors of a given network (Snijders, Pattison, Robins, & Handcock, 2006). Using this logic, two networks, A and B, are similar if their structural tendencies and degrees are similar. If such a condition exists, it should be possible to predict tie probabilities in one network not only from its own parameter estimates but also from those estimated from the other network. Should the two networks require different θ for their estimation, then the networks are different.
In the case study, to assess whether the performance metrics were comparable across countries, we adopted a two-step process. The first step consisted of computing the correlation matrix across the performance indicators in each country to create the ''performance network.'' We selected only the correlations significant at the 5% level or better to establish a dyadic relationship between two performance indicators. In the second step, we used the networks generated from the performance indicators to predict the Chinese network of performance metrics (baseline network). The results are presented in Table 7. Only the Chinese network could predict itself. Therefore, the relationships between the performance measures in China could not be predicted by those of firms listed in Singapore or Hong Kong or by the performances of Chinese firms listed on the Hong Kong market. As a robustness check, we also tested whether the Chinese data network could be predicted using data from Taiwan, without success. Overall, this methodology suggests that the performance metrics between China, Hong Kong, and Singapore are not equivalent.
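The first step of this procedure, turning each country's correlation matrix into a ''performance network'' in which two indicators are tied only when their correlation is statistically significant, can be sketched as follows; the indicator names and the data frame are hypothetical, and the subsequent p* estimation was carried out with dedicated ERGM software.

```python
# Sketch: build a "performance network" per country, connecting two
# indicators only when their Pearson correlation is significant at 5%.
from itertools import combinations

import networkx as nx
import pandas as pd
from scipy import stats

INDICATORS = ["roa", "roe", "tsr", "pe", "ebitda_margin"]  # hypothetical names

def performance_network(df_country: pd.DataFrame, alpha: float = 0.05) -> nx.Graph:
    g = nx.Graph()
    g.add_nodes_from(INDICATORS)
    for a, b in combinations(INDICATORS, 2):
        pair = df_country[[a, b]].dropna()
        r, p = stats.pearsonr(pair[a], pair[b])
        if p <= alpha:                      # keep only significant ties
            g.add_edge(a, b, weight=r)
    return g

# one network per country, e.g.:
# networks = {c: performance_network(g) for c, g in df.groupby("country")}
```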
The second approach for identifying which performance measures behave most similarly across countries was likewise based on creating networks of performance measures, but then identifying which ''cliques of performance metrics'' appear in all countries. We used Ucinet 6 for the analysis, and the results of the clique analysis are displayed in Table 8. The results show that shareholder return, PE, and EBITDA margin belonged to the same clique in all three countries.
This allowed us to provide a solid narrative of why these three variables should be chosen as performance metrics. However, we have no theoretical justification for why these measures are more comparable than the other measures.
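As a rough illustration of the clique comparison (the published analysis used Ucinet 6), the following NetworkX sketch collects candidate cliques from each country's performance network and keeps only those indicator sets that are fully connected in every country.

```python
# Sketch: find sets of performance indicators that form a clique in the
# performance network of every country.
from itertools import combinations

import networkx as nx

def is_clique(g: nx.Graph, nodes) -> bool:
    """True if every pair of the given indicators is tied in network g."""
    return all(g.has_edge(a, b) for a, b in combinations(nodes, 2))

def common_cliques(networks: dict, min_size: int = 3) -> set:
    """Indicator sets that are maximal cliques in at least one country's
    network and fully connected in all the other countries' networks."""
    candidates = set()
    for g in networks.values():
        candidates |= {frozenset(c) for c in nx.find_cliques(g) if len(c) >= min_size}
    return {c for c in candidates
            if all(is_clique(g, c) for g in networks.values())}

# e.g. common_cliques(networks) could return something like
# {frozenset({"tsr", "pe", "ebitda_margin"})}
```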
DISCUSSION
If the goal of IB research is to examine the extent to which theories, models, and constructs are valid and applicable across countries or cultural contexts, scholars aiming to perform cross-country and multi-country studies should first assess whether equivalence exists for the desired construct and variables. While STEM disciplines and psychology have been at the forefront of establishing equivalence in multi-group/multi-country studies, IB and business research has been lagging behind. This lag contributes to problems related to the credibility of our findings (Aguinis, Cascio, & Ramani, 2017;Bergh, Sharp, Aguinis, & Li, 2017;Byington & Felps, 2017;Rynes, Colbert, & O'Boyle, 2018).
In this article, we raised the issue and suggested a series of steps that journal editors, reviewers, and authors should follow to assess the presence of equivalence and the absence of biases at the construct, method, and item levels. The lack of measurement equivalence has substantial implications for IB researchers. First, it can lead researchers to draw inaccurate or, worse, false conclusions, preventing the accumulation of findings and the solidification of knowledge. Many debates in IB, and in business more generally, would benefit from the assessment of measurement equivalence. Recent review papers in JIBS and other IB outlets have highlighted the prevalence of contrasting findings in our field. For example, a recent review of top management teams in international business (Cuypers, Patel, Ertug, Li, & Cuypers, 2022) found that in almost every area of the debate, there are mixed findings. For example, the top management team structure might or might not have a positive relationship with the performance of an international joint venture. Cuypers and colleagues rightfully suggested that we should further investigate the issue. We add that researchers should also assess whether the lack of convergent findings is due to a lack of equivalence. For example, respondents from international joint ventures often come from different countries and have possibly diverging strategic goals. Therefore, they might not interpret the performance of the international joint venture in the same way. These differences in the responses could be a source of mixed findings due to construct, method, or item bias, rather than due to an underlying latent theoretical issue we have not discovered yet.
Other examples of how the lack of equivalence could be driving the mixed findings in the literature arise from debates around family firms and internationalization. A large amount of attention in explaining differences in findings has been given to family characteristics (Arregle, Chirico, Kano, Kundu, Majocchi, & Schulze, 2021), including resources, compensation practices, and individual-level characteristics of family managers. Much less attention has been given to the equivalence of the measures across studies and multi-country studies, and to how the lack of equivalence in constructs, methods, and items contributes to the confusion about the relationship between family firms and internationalization. For example, studies on entry modes by family firms arrived at diverging conclusions regarding whether family firms entering new markets aim for low-commitment modes to minimize risks (Monreal-Pérez & Sánchez-Marín, 2017; Scholes, Mustafa, & Chen, 2016) or high-commitment modes to maximize control (Abdellatif, Amann, & Jaussaud, 2010; Pongelli, Calabrò, & Basco, 2019). Some researchers have attempted to explore the role of moderators, such as ownership structure (Pongelli, Caroli, & Cucculelli, 2016). A lack of measurement equivalence could be a possible source of these conflicting findings. The contextual factors in host countries that are missed because of the lack of equivalence could make family firms choose one or another type of entry mode.
This lack of measurement equivalence within and between studies has broader implications for replicating findings. Researchers who aim to replicate and build on previous literature first need to replicate the original study and then assess whether the research findings are due to methodological artefacts or are substantive. A lack of equivalence raises the possibility that there are biases in published research comparing outcomes across countries (see ''Making AIB and IB Relevant and Legitimate'' in AIB Insights 17 (2)), making replication difficult, if not impossible.
Measurement of Non-equivalence as a Research Area
International business research is often presented with the tension between context specificities and attempts to create generalizable knowledge that can be applied to other research contexts. The lack of accurate assessments of equivalence has resulted in mixed and contrasting findings that have impaired the field's ability to generate cumulative knowledge. In contrast, assessing the non-equivalence of constructs across countries could be a fruitful avenue of research for IB scholars. It will help researchers uncover truly emic dimensions and false etic constructs and understand, when theorizing about a phenomenon, what is truly etic and what is ''contextually specific''. Theories could then be developed by considering false etic constructs and exploring their sources of non-equivalence. This would help researchers generate studies that capture underlying effects with a greater degree of precision and avoid ecological fallacies by assessing and verifying that the underlying assumptions are comparable across countries and samples.
For example, comparative studies on board independence have failed to arrive at conclusive findings about the extent to which board independence matters to firm performance (Dalton, Daily, Ellstrand, & Johnson, 1998; Mutlu, Van Essen, Peng, Saleh, & Duran, 2018). On one side of the equation, there is board independence, which has different meanings in different countries. Some countries have gone all-out on board independence, requiring boards to be made up mostly of independent board members. Other countries have demanded only a few independent board members but granted them special and veto powers. Overall, the construct of board independence (and how it is measured today, as a percentage or number of independent board members) fails to capture that an independent board in the US has a different role and different power compared to one in Europe (Practical Law, 2022). Even worse, in some countries, independent boards have a more ceremonial role than the substantive one that Western theories assign them. Board independence and other ''good governance practices'' have been adopted in many countries. However, these practices are often not helpful in achieving the desired outcome (Chen, Li, & Shapiro, 2011). Assessing the etic and emic, or localized, value of board independence could be a way to develop context-specific research that can inform managers and policymakers. Therefore, researchers should question the emic and etic values of the construct and develop more nuanced theories that account for its etic or emic value. On the other side of the equation is firm performance. As discussed previously, this construct has limited equivalence across countries. Therefore, it is not surprising that the comparative literature on corporate governance and board independence has yet to find consensus, because of the lack of equivalence in the constructs under investigation. To solve this debate, like many others, researchers should identify which elements (of corporate governance) are truly generalizable across countries and which depend on the specific context of a country.
Contribution to the Performance Literature
The discussion of what constitutes ''performance'' for a firm has puzzled scholars for more than 30 years (Venkatraman & Grant, 1986), and the debate has yet to be settled (Hamann et al., 2013;Richard, Devinney, Yip, & Johnson, 2009). For instance, studies have found that company performance is represented by three, four, or even eight factors (Hamann et al., 2013;Rowe & Morrow, 1999;Tosi et al., 2000) not necessarily strongly related to each other. In IB, measurement of the performance construct across countries is further complicated by the fact that institutions differ between nations, and these differences substantially affect how companies report and disclose performance measures and subsequent market reactions (Kumar & Zattoni, 2016).
We contribute to the literature on the validity of construct performance by highlighting the importance of the context in which performance is assessed. The previously mentioned studies explored the multidimensionality of performance in a single country. Therefore, they were unable to capture how the same construct dimensionality would be transferable to other research settings. We demonstrated empirically that the construct of performance needs to be assessed, keeping in mind that it is partially equivalent across countries, at best, and that researchers need to find which measures are the most suitable for the analysis given the countries under investigation.
Equivalence and Big Data
The issue of equivalence will become more pressing as the use of big data becomes more widespread, as large amounts of data can be collected from people and firms from different countries. Depending on the origin, data processing technologies, and data collection methods, big data present the same issues that we have discussed. Furthermore, due to the automated approach to data collection, these issues are amplified. Big data research is developing its own measures of data quality, which include volume, variety, and velocity (Schroeck et al., 2012). IBM defined a fourth dimension of big data quality, veracity, which refers to ''the level of reliability associated with certain types of data'' including ''truthfulness, accuracy or precision, correctness'' (IBM, 2012; Schroeck et al., 2012). Big data researchers will need to develop an appropriate set of criteria rooted in existing debates on equivalence. Big data collected from authoritarian regimes might not be comparable to those in democratic countries. The conditions under which these data are collected differ, and users behave differently because of the limitations on individual freedoms in authoritarian countries. Existing recommendations suggest the use of automated deception detection techniques to increase objectivity by decreasing potential human bias, credibility tracking tools, and sensitivity to linguistic dimensions (Lukoianova & Rubin, 2014). Big data researchers in IB will have to take the use of these tools a step forward by assessing how and whether there are differences in deception, credibility, and sensitivity in expression in each country.

Leung and Bond (1989) and Hofstede warned us about the risk of a lack of equivalence, but IB studies have not been fully responsive to such calls. First, effective changes in publishing norms will require journal editors to recognize and emphasize that studies are not informative unless the researchers can prove that the variables under investigation carry the same meaning in each country. In many journals, we have seen the use of multi-country samples without an assessment of the extent to which the data collected from these samples were comparable. The problem of the validity of these multi-country samples is worsened by the use of large datasets and big data, where validity and equivalence issues are rarely discussed. Second, journal editors and reviewers should ask researchers to be more transparent with their methodology. Readers should be able to check whether the data were collected using the same procedures across different countries, and whether there are possible sources of bias that could affect the study outcomes. Have the measures been properly calibrated across countries so that they capture the desired property correctly? Are the samples comparable, and therefore no subsample carried more weight in the analysis, risking possible biased results? Or did the data come from similar sources so that the data were collected in a similar manner in all countries? Assessing and acknowledging partial, full, or lack of equivalence should be a distinctive feature of IB research. An analysis of equivalence should always precede any analysis in a multi- and cross-country study.
CONCLUSIONS
Authors can play a substantive role in the equivalence debate. Multi-and cross-country research in IB should serve the purpose of creating unique and novel insights, generating broader concepts, or identifying local constructs or phenomena that shape business, rather than purely comparing what is similar and what is different across countries. In this regard, authors have an advantage over journal editors in making the context come alive and telling stories that would otherwise be unknown. Extreme situations or boundary conditions on existing theories can act as a prompt for discussing why a study is needed, why the contextual differences matter, and why something works in one country but not in another.
A primary research aim within the field of international business (IB) is to establish the extent to which concepts, theories, and findings identified in one country are applicable to other contexts and which are unique and cannot be found in other contexts. Researchers in IB acknowledge the importance of the context in their studies, but the practice of assessing equivalence (or invariance) is not widely diffused within the community. We first discuss the components of equivalence (construct, method, and item equivalence), and we offer a three-step approach to addressing equivalence in the writing and revision of a paper. We aim to help editors, reviewers, and researchers produce more reliable research and navigate the tension between generalizable relationships and context-specific ones, both theoretically and empirically, before performing analysis and hypothesis testing. We then apply equivalence to the construct of firm economic performance as a case study, but the same logic can be applied to other variables as well.
OPEN ACCESS
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

NOTES
1 Different models can use different profiles of digraph properties.
|
v3-fos-license
|
2017-08-27T07:03:11.927Z
|
2016-04-28T00:00:00.000
|
13574307
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.cwejournal.org/pdf/vol11no1/Vol11_No1_p_47-55.pdf",
"pdf_hash": "d450f6abc2cd931b57a6c37ceafd27828c0b17b4",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:636",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "d450f6abc2cd931b57a6c37ceafd27828c0b17b4",
"year": 2016
}
|
pes2o/s2orc
|
Quality Assessment of Full-Scale Municipal Wastewater Treatment Plant consisting UASB Reactors and Polishing Ponds during its Start-up Phase in India
Amongst the available technologies, the upflow anaerobic sludge blanket (UASB) process has been one of the most widely applied methods for municipal wastewater treatment, especially in countries with warm climatic conditions like India. However, the past decade has witnessed a rapid decline in the popularity and implementation of UASB. There has been criticism from various sections on the performance of UASB reactors for not complying with the prescribed discharge standards. It is a general hypothesis that UASB reactors are not meant for diluted wastewater like municipal sewage, where typically the BOD is less than 150 mg/l, the COD less than 250 mg/l, and sulphates are more than 150 mg/l. An attempt has been made through this study to investigate the reasons on the basis of quality assessment and field observations on the UASB reactors and their post-treatment at a newly commissioned (start-up) municipal (sewage) wastewater treatment plant, commonly called an 'STP', having a capacity of 14 million litres per day (MLD). The study was aimed at identifying gaps during the commissioning stage which could be related to poor removal efficiencies. This paper briefly discusses some issues related to the operation and maintenance of UASB plants with a view to improvements.
INTRODUCTION
The UASB (Upflow Anaerobic Sludge Blanket Process) technology, based on anaerobic principles of wastewater treatment, has been widely used across the world, particularly in countries of warm climatic regions like India. It gained popularity because the technology offers moderate capital investment, low O&M cost, requires no energy for the process, is easy to implement, gives fairly good removal efficiencies, and has a moderate footprint. At present there are about 300 installations worldwide. Amongst the nations which favour UASB for sewage treatment, India, Brazil, Columbia, and Indonesia are among the leading countries in the world 1,2,3,4. In India alone, there are about 70 UASB installations for municipal application which are in operation, with a total flow handling capacity of approximately 3000 MLD (population equivalent 30 million). The Indian experience of UASB technology is very diverse and unique.
Like any other anaerobic treatment process, UASB effluent also needs an adequate post-treatment unit to further polish the effluent so as to meet the discharge norms for river discharge 5. In general, polishing ponds based on natural processes have been widely used as a secondary step (post-treatment) to further reduce pollutants in the effluent of UASB reactors. Though this combination of treatment removes organic and solids load without any energy input, it has its disadvantages as well. The polishing pond occupies a large surface area, which has made this combination less popular.
Aiming at monitoring of the plant during the initial phase of start-up and operational issues involved therein, an attempt has been made through this study to investigate the performances of UASB reactors and polishing ponds.The study is based on the quality assessment and field observations of recently commissioned full-scale UASB sewage treatment plant in Northern India.It discusses few basic lapses during the commissioning phase due to which not only the UASB reactors but the overall performance of the treatment plant is affected.The outcome of the present work could be useful to understand the fundamental reasons during the initial start-up of UASB reactors that may help to improve the performances.
UASB Technology in India and Operational Issues
The application of UASB technology in India was triggered by the successful introduction, demonstration, and performance of UASB plants under the Ganga Action Plan Phase-I at Kanpur and Mirzapur in the early 1990s. Under the Yamuna Action Plan (YAP), ten (10) UASB sewage treatment plants in the province of Haryana and five (05) in Uttar Pradesh (UP) were commissioned in one go in the late 1990s. Figure 1 shows a map of the Yamuna Basin with the places / cities where YAP was implemented.
From that time until 2010, UASB was the most preferred choice for sewage treatment in India. Apart from good removal efficiencies, the choice was also due to major advantages like no energy requirement, minimal O&M cost, less sludge production, and resource recovery in the form of biogas for electricity generation. However, during the last six years, i.e., 2010 onwards, there has been a rapid decline in UASB implementation. This may be attributed to several reasons like poor cases and stories pertaining to performance, lack of willingness to operate the plants properly, negligence on the part of operators and management, lack of technical know-how, resources, and motivation, and a failure to realize the importance of proper operation and maintenance.

The raw sewage is pumped to the inlet chamber of the STP from the main pumping station (MPS), which is located at about 3.50 km from the STP premises at Boodi Ka Nagla, Agra. This STP has two UASB reactors, each to handle a flow of 7 MLD. Treated effluent is discharged directly into the river Yamuna through the effluent channel, which flows adjacent to the STP.
The raw sewage first enters the inlet chamber and then overflows to the screen chamber through a rectangular notch. After screening, the sewage is passed through a mechanical grit chamber, with the manual grit channel kept as standby. The de-gritted sewage is passed through 2 division boxes and split into 4 streams uniformly distributed among 4 distribution boxes, each UASB reactor having 2 distribution boxes on either side. In one reactor there are 16 feeding boxes, and each distribution box conveys the wastewater to 8 feeding boxes which distribute the sewage uniformly over the bed of the reactor through down-take pipes. The treated effluent from the UASBRs is taken to polishing ponds for further treatment. The ponds are shallow basins having a retention time of one day. Finally, the treated effluent from these ponds is chlorinated for pathogen removal in the chlorine contact tank before it is discharged into the river Yamuna through a 200 m long concrete channel.
METHODOLOGY
Before sampling and analysis, a theoretical framework was developed for taking samples and analysing them for different parameters like pH, alkalinity, BOD, COD, TSS, sulphates, VFA, and, for sludge, TS, TSS, VSS, and ash content. Samples were collected from different locations within the STP units. Four sampling stations/locations, namely the inlet of the STP (S1), the grit chamber outlet (S2), the UASB reactor outlet (S3), and the final outlet of the STP (S4), were selected and are marked in Figure 3. A view of the 14 MLD UASB reactor and polishing ponds is given in Figure 4.
Digested sludge accumulated in the UASB reactor is drained and conveyed directly to the sludge drying beds for dewatering and drying.Biogas produced in the UASB reactor after mist elimination is collected into a biogas holder which has 8 hours gas holding capacity and excess biogas is metered and flared using biogas flaring system.The sizes of different units are given below:
(Table: Name of Unit — No. — Size / dimension (L × B × D); the tabulated values are not preserved in this copy.)

Three samples (morning, noon, and evening) were collected on a daily basis from each sampling station over a period of about eight months (February to September). These samples were collected on a "grab" basis, but before analysis the respective samples were mixed together for testing the various parameters like pH, alkalinity, TSS, BOD, COD, sulphates, and VFA. The presence of a high concentration of sulphates is inhibiting to biological activity, while the VFA concentration indicates the stability of the anaerobic process. Another important aspect of this study was to investigate the nature of the sludge in the UASB reactors. Sludge samples from the different sludge ports of the UASB reactors were also taken once a week. Sludge analysis was done for TS, VSS, and ash content. All the tests were conducted in accordance with the "Standard Methods for Water and Wastewater Examination" 6. The laboratory facility at the STP site and at AMU was used for conducting the analysis.
RESULTS
The summary of the data collected from on-site monitoring is presented in Figures 5, 6, 7, 8, 9, 10, and 11 and in Tables 1, 2, and 3.
DISCUSSION
The pH of the raw sewage ranges between 7.85 and 8.12, and that of the UASB effluent ranges between 7.69 and 8.03. The alkalinity of the raw sewage was found to be unexpectedly high, ranging between 790 and 880 mg/l. The incoming BOD to the UASBR varies between 162 mg/l and 186 mg/l, whereas the reactor was designed for an average BOD of 250 mg/l. There has been only 30-40% removal of BOD in the UASB reactors. The total COD in the UASB effluent varies between 204 and 252 mg/l, and the removal efficiency of the UASBR is very low, i.e., about 30%. The UASBR effluent has TSS values ranging from 219 to 242 mg/l. Generally it is observed that when the TSS value is high, the COD value is also high. The average TSS removal efficiency of the UASBR is found to be 45 percent. It can be clearly seen that the final effluent from the polishing pond does not comply with the discharge standards: the TSS concentration varies from 96 to 123 mg/l in the effluent finally discharged into the river Yamuna.
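The removal efficiencies quoted above follow directly from the influent and effluent concentrations; a minimal sketch of the calculation is given below, where the influent BOD is the mid-point of the range reported in this section and the effluent value is an assumed figure consistent with the reported 30-40% removal.

```python
# Removal efficiency (%) = (C_in - C_out) / C_in * 100.
def removal_efficiency(c_in: float, c_out: float) -> float:
    return (c_in - c_out) / c_in * 100.0

# Illustrative values only: influent BOD of ~174 mg/l is the mid-point of the
# range reported above; the effluent value of 115 mg/l is an assumed figure
# consistent with the reported 30-40% BOD removal across the UASB reactors.
print(f"BOD removal: {removal_efficiency(174.0, 115.0):.0f}%")  # ~34%
```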
The sulphate concentration in the raw sewage is also unexpectedly very high.The value ranges in between 365 to 410 mg/l.Sulphates are converted into hydrogen sulphide (H 2 S) which is readily soluble in water.Presence of sulphide is highly toxic to microorganisms and is a competitor for the consumption of oxygen 7 .The VFA concentration is within the limit but VFA to alkalinity ratio was found disturbed.This low value of VFA/alkalinity ratio is one of the inhibiting factors for degradation of organic compounds in the process of anaerobic digestion.
The sludge data revealed the presence of high concentrations of total solids in the sludge. Irregular withdrawal of sludge from the reactor has led to the accumulation of solids in the reactor, and the higher percentage of ash content indicates the presence of inert material which is restricting the biochemical reactions in the reactor. It is observed that the higher percentage of inert suspended solids entering the UASB has a direct impact on the steady-state VSS to TSS ratio in the reactor, and ash content to the tune of 60% is present in the UASB reactors. This indicates that only about 40% active biomass is present, which is not good enough to degrade the organic matter.
Field Observations and Conclusions
The following observations and conclusions are made:
i. No trained personnel were deputed on a full-time, regular basis for proper monitoring of the newly commissioned treatment facility.
ii. The operation and maintenance manual was not available at the plant site.
iii. The chemist who was deputed had no adequate knowledge of wastewater analysis, particularly of sludge analysis from UASBRs.
iv. The laboratory was not equipped with the chemicals, glassware, and instrumentation required for detailed physico-chemical and microbial analysis.
v. The operation of sludge withdrawal was irregular and unplanned. This was one of the main reasons for the high concentration of total solids and ash content in the UASBRs.
vi. The screens and grit removal facilities were not working properly. The presence of high ash content in the UASB sludge is one indication of the ill-functioning grit removal facility.
vii. The size of the down-take pipes (HDPE) in the UASB reactors is only 90 mm, which chokes the flow due to the presence of floating particles and solids.
viii. Choking of the down-take pipes is removed manually by inserting a rod with some force. This damages the HDPE pipe, which is never visible from the outside.
ix. Sulphate concentrations in the raw sewage were found to be extremely high, which clearly indicates that a wastewater survey was not done properly prior to the technology choice. UASB systems are inhibited by high concentrations of sulphates.
x. Despite some negligence, the overall performance of the treatment plant during its start-up phase was found to be good. This indicates that UASBRs can perform much better if proper attention is given to O&M.
xi. After stabilization, the performance of the plant was found satisfactory, but the effluent parameters did not conform to the prescribed discharge standards, as the polishing ponds are not able to deliver those values.
xii. New post-treatment alternatives like the Extended Aeration System, down-hanging sponged media system (DHS), Constructed Wetlands, etc., may be explored, replacing the polishing ponds.
ACKNOWLEDGMENT
The authors are thankful to UP Jal Nigam, Agra, for permission and for providing access to their laboratory and accommodation at the plant site during the course of the study.
|
v3-fos-license
|
2023-08-11T15:14:49.985Z
|
2023-08-01T00:00:00.000
|
260799044
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.journalofdairyscience.org/article/S0022030223004423/pdf",
"pdf_hash": "869e7811f44f3778eb28a8062fe0e6efc11f0d27",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:637",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "29ec274a4599fbfee2082713828c229837737bf0",
"year": 2023
}
|
pes2o/s2orc
|
Single-step genome-wide association analyses for selected infrared-predicted cheese-making traits in Walloon Holstein cows
This study aimed to perform genome-wide association study to identify genomic regions associated with milk production and cheese-making properties (CMP) in Walloon Holstein cows. The studied traits were milk yield, fat percentage, protein percentage, casein percentage (CNP), calcium content, somatic cell score (SCS), coagulation time, curd firmness after 30 min from rennet addition, and titratable acidity. The used data have been collected from 2014 to 2020 on 78,073 first-parity (485,218 test-day records), 48,766 second-parity (284,942 test-day records), and 21,948 third-parity (105,112 test-day records) Holstein cows distributed in 671 herds in the Walloon Region of Belgium. Data of 565,533 single nucleotide polymorphisms (SNP), located on 29 Bos taurus autosomes (BTA) of 6,617 animals (1,712 males), were used. Random regression test-day models were used to estimate genetic parameters through the Bayesian Gibbs sampling method. The SNP solutions were estimated using a single-step genomic BLUP approach. The proportion of the total additive genetic variance explained by windows of 50 consecutive SNPs (with an average size of ~216 KB) was calculated, and regions accounting for at least 1.0% of the total additive genetic variance were used to search for positional candidate genes. Heritability estimates for the studied traits ranged from 0.10 (SCS) to 0.53 (CNP), 0.10 (SCS) to 0.50 (CNP), and 0.12 (SCS) to 0.49 (CNP) in the first, second, and third parity, respectively. Genome-wide association analyses identified 6 genomic regions (BTA1, BTA14 [4 regions], and BTA20) associated with the considered traits.
INTRODUCTION
The cheese-making properties (CMP) of bovine milk are economically important for the dairy industry since a significant and growing fraction of the milk produced worldwide is used to make cheese (Wedholm et al., 2006;Cassandro et al., 2008; Food and Agriculture Organization of the United Nations, 2015; The World Dairy Situation, 2016).Milk coagulation is a key step that strongly influences the efficiency of cheese production.Furthermore, the composition, in particular casein and fat, and physicochemical properties of milk have considerable effects on its processing properties and the quality of resulting dairy products (Walstra et al., 2005;Visentin et al., 2017b;Nilsson et al., 2019).The coagulation ability of milk is also associated with the pH of milk, protein composition, SCC, milk fatty acids (FA), and milk calcium content (CC) (Bencini, 2002;Pastorino et al., 2003;Wedholm et al., 2006;Bobbo et al., 2016).The coagulation ability of milk can be evaluated using developed milk coagulation properties such as rennet coagulation time (RCT, min), which is the time from the addition of coagulant to milk to the beginning of coagulation, the time to a curd firmness of 20 mm, and curd firmness at 30 min after coagulant addition (a30, mm) (De Marchi et al., 2007;Bittante, 2011;Visentin et al., 2015).It has been documented that the CMP can be affected by environmental factors such as feeding, udder health, season, and physiological stage (e.g., parity, lactation stage), but they are also genetically influenced (De Marchi et al., 2007;Cassandro et al., 2008;Visentin et al., 2017a;Atashi et al., 2022a).Therefore, genetic selection can be used to improve cheese-making traits, and consequently, to improve the quality and the amount of cheese yield per volume of milk.
There are different laboratory methods developed for direct recording of individual milk coagulation properties (Pretto et al., 2011;Bittante et al., 2012;Bobbo et al., 2016); however, they are time consuming and expensive to implement on a large scale; then, genetic selection based on direct measures of milk coagulation properties is limited (Sanchez et al., 2018(Sanchez et al., , 2019)).Fourier-transform mid-infrared (MIR) spectroscopy has been proposed as an alternative method for prediction of various milk characteristics, including fractions of protein, fat, casein, minerals, and milk FA contents (De Marchi et al., 2009, 2014;Soyeurt et al., 2009Soyeurt et al., , 2011;;Visentin et al., 2015).The MIR technology also is considered as a cheap method to measure individual CMP on a large scale, which is needed for breeding programs in dairy cattle (De Marchi et al., 2009, 2014;Sanchez et al., 2018).
However, enough knowledge on the genetic background of cheese-making traits is needed before they are included into the breeding program.Furthermore, identification of genomic regions and individual genes responsible for genetic variation in CMP will improve our understanding about the biological pathways involved and can be used for improving cheese-making traits.Atashi et al. (2022a) investigated the genetic parameters and genomic regions associated with selected infrared-predicted cheese-making traits in the Dual-Purpose Belgian Blue (DPBB) population, which is the second most important cattle breed reared by dairy farmers in the Walloon Region of Belgium.However, to the best of our knowledge, no comprehensive study has been performed to investigate genetic background and the genetic architecture of cheese-making traits in Walloon Holstein cows.Therefore, the aim of this study was to estimate genetic parameters and identify genomic regions associated with milk production and selected infrared-predicted cheese-making traits in Walloon Holstein cows.
Phenotypic Data
No human or animal subjects were used, so this analysis did not require approval by an Institutional Animal Care and Use Committee or Institutional Review Board. The data used consisted of test-day records of traits including milk yield (MY), SCC, MIR-predicted fat percentage (FP), protein percentage (PP), casein percentage (CNP), CC, coagulation time (CT), a30, and titratable acidity (TA). The FP and PP records were generated by the official milk recording in the Walloon Region of Belgium using MIR spectrometry and commercially available instruments and calibrations from FOSS (Foss Electric A/S). Test-day SCC records were log-transformed to SCS based on the following equation: SCS = log2(SCC/100,000) + 3.
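The SCC-to-SCS transformation is a one-line calculation; a small sketch follows, with SCC expressed in cells/ml and arbitrary example values.

```python
# SCS = log2(SCC / 100,000) + 3, with SCC expressed in cells/ml.
import math

def somatic_cell_score(scc_cells_per_ml: float) -> float:
    return math.log2(scc_cells_per_ml / 100_000) + 3.0

# An SCC of 100,000 cells/ml maps to SCS = 3; 400,000 cells/ml maps to SCS = 5.
assert abs(somatic_cell_score(100_000) - 3.0) < 1e-9
assert abs(somatic_cell_score(400_000) - 5.0) < 1e-9
```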
The MIR prediction equations used to predict CNP, CC, CT, a30, and TA were obtained from various studies (Soyeurt et al., 2009;Colinet et al., 2010Colinet et al., , 2013Colinet et al., , 2015)).Table 1 shows the calibration and the cross-validation statistics of the used MIR prediction equations.Milk MIR spectra were obtained by the analysis of individual milk samples on MilkoScan FT6000 spectrometer (Foss, Hillerød, Denmark).The MIR spectra were preprocessed to remove baseline variation and then were standardized.The MIR prediction equations were applied on standardized spectra from individual milk samples collected in the frame of the Walloon milk recording scheme.Full cross-validation was performed to assess the accuracy of the developed equations.The cross-validation coefficients of determination were 0.95, 0.82, 0.63, 0.42, and 0.68 for CNP, CC, CT, a30, and TA, respectively.The ratio performance/deviation of cross-validation, the ratio of standard deviation (SD) to standard error of cross-validation, was 4.47, 2.34, 1.64, 1.31, and 1.77 for CNP, CC, CT, a30, and TA, respectively.
Data were edited to include only cows with known birth date, calving date, and parity number.Only records from the first 3 parities that had data for all included traits on a given test-day were kept.Records from DIM lower than 5 d and greater than 365 d were eliminated.Age at the first calving was calculated as the difference between birth date and first calving date and restricted to the range of 540 to 1,200 d.Daily MY, FP, and PP were restricted to the range from 3 to 99 kg, 1 to 9%, and 1 to 7%, respectively (ICAR, 2022).Test-day records of the other considered traits were edited to remove records outside the range of mean ± 5 SD.Within cow, if parity 3 was present, parities 1 and 2 were also present, and if parity 2 was present, parity 1 was also present.The number of test-day records in the first-, second-, and third-parity cows were 485,218 (on 78,073 cows), 284,942 (on 48,766 cows), and 105,112 (on 21,941 cows), respectively.The data were collected from 2014 to 2020 on 78,073 animals distributed in 671 herds in the Walloon Region of Belgium.On average across the data set, 5.88 test-day records were available per cow per lactation.Pedigree depth of the animals was traced back to 5 generations.The used pedigree consisted of 186,548 females and 10,076 males.
Genotypic Data
Genotypic data were available for 6,617 (1,712 males and 4,905 females) phenotyped animals or those animals included in the pedigree.Individuals were genotyped using the BovineSNP50 Beadchip v1 to v3 and EuroG MD (SI) v9 (Illumina, San Diego, CA).Single nucleotide polymorphisms in common among the 4 chips were kept.Nonmapped SNP, SNP located on sexual chromosomes, and triallelic SNPs were excluded.A minimum GenCall Score of 0.15 and a minimum GenTrain Score of 0.55 were used to keep SNP.Then, the genotypes were imputed to HD with a reference population of 4,352 HD individuals (1,046 males and 3,288 females) using FImpute V2.2 software (Sargolzaei et al., 2014).Single nucleotide polymorphisms with Mendelian conflicts and those with minor allele frequency (MAF) less than 5% were excluded.The difference between the observed and expected heterozygosity was estimated, and if the difference was greater than 0.15, the SNP was excluded (Wiggans et al., 2009).Finally, 565,533 SNPs located on 29 BTA were used in the genomic analyses.
Variance Component Estimation
The (co)variance components and breeding values for the considered cheese-making properties were estimated based on the integration of the random regression test-day model (RR-TDM) into the single-step GBLUP procedure (SS RR-TDM), using a single-trait, multiple-lactation (first 3 lactations) model (Paiva et al., 2022) in which the vectors of random regression coefficients for herd-year of calving (hy), permanent environmental (p), and additive genetic (a) effects were modeled using Legendre polynomials of order 3, and e_ijklmn is the residual effect. The herd-year of calving, permanent environmental, additive genetic, and residual (co)variances were assumed to be, respectively, Var(hy) = I ⊗ HY, Var(p) = I ⊗ P, Var(a) = H ⊗ Ga, and Var(e) = R, where HY is the 12 × 12 covariance matrix of the herd-year of calving regression coefficients; I is an identity matrix; ⊗ represents the Kronecker product function; P is the 12 × 12 covariance matrix of the permanent environmental regression coefficients; Ga is the 12 × 12 covariance matrix of the additive genetic regression coefficients; and R is the residual covariance matrix, whose blocks contain the residual variance (r_p) that depends on parity (p). Residual variance was the same within each parity. H is a matrix that combines pedigree and genomic relationships, and its inverse consists of the integration of the additive and genomic relationship matrices, A and G, respectively (Aguilar et al., 2010): H^-1 = A^-1 + [0 0; 0 G^-1 − A22^-1], where A is the numerator relationship matrix based on the pedigree for all animals, A22 is the numerator relationship matrix for genotyped animals, and G is the weighted genomic relationship matrix obtained as a weighted combination of the genomic relationship matrix G* and A22. The G* is the genomic relationship matrix obtained using the following function described by VanRaden (2008): G* = ZDZ′ / [2 Σ p_i(1 − p_i)], with the sum running over the M SNPs, where Z is a matrix of gene content adjusted for allele frequencies (0, 1, or 2 for aa, Aa, and AA, respectively); D is a diagonal matrix of weights for SNP variances (D = I); M is the number of SNPs; and p_i is the MAF of the ith SNP. The H matrix was built by scaling G based on A22, considering that the average of the diagonal of G is equal to the average of the diagonal of A22, and the average of the off-diagonal of G is equal to the average of the off-diagonal of A22.
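The construction of G* and of the inverse of the combined matrix H can be illustrated with a small NumPy sketch; the genotype coding, the use of observed allele frequencies, and the 5% blending of G* with A22 used to keep G invertible are illustrative assumptions rather than the exact settings of this study.

```python
# Sketch: VanRaden (2008) genomic relationship matrix and the inverse of the
# combined pedigree-genomic matrix H used in single-step GBLUP.
import numpy as np

def vanraden_g(genotypes: np.ndarray) -> np.ndarray:
    """genotypes: (n_animals, n_snp) matrix coded 0/1/2 copies of one allele."""
    p = genotypes.mean(axis=0) / 2.0                 # observed allele frequencies
    z = genotypes - 2.0 * p                          # center each SNP by 2p
    scale = 2.0 * np.sum(p * (1.0 - p))
    return z @ z.T / scale                           # G* = ZZ' / (2*sum(p(1-p))), D = I

def h_inverse(a_inv: np.ndarray, a22: np.ndarray, g_star: np.ndarray,
              genotyped_idx: np.ndarray, blend: float = 0.05) -> np.ndarray:
    """H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]] added on the genotyped block.
    The 5% blending of G* with A22 is an illustrative choice to keep G invertible."""
    g = (1.0 - blend) * g_star + blend * a22
    h_inv = a_inv.copy()
    block = np.ix_(genotyped_idx, genotyped_idx)
    h_inv[block] += np.linalg.inv(g) - np.linalg.inv(a22)
    return h_inv
```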
The (co)variance components were estimated by Bayesian inference using the GIBBS3F90 software (Aguilar et al., 2018).Gibbs sampling was used to obtain marginal posterior distributions for the various parameters using a single chain of 200,000 iterates with a sampling interval of 20 samples.The first 50,000 iterates of the chain were regarded as a burn-in period to allow sampling from the proper marginal distributions.Post-Gibbs analysis was performed using the software POSTGIBBSF90 (Aguilar et al., 2018), using the retained 7,500 samples.Genetic (co)variances on each test-day were calculated using the equation described by Jamrozik and Schaeffer (1997).Daily heritability was defined as the ratio of genetic variance to the sum of the additive genetic, permanent environmental, herd-year calving, and residual variances at a given DIM.
The vector of genomic estimated breeding values (GEBV) of the included traits for each animal i, which included daily GEBV for all DIM (5 to 365) in each parity, was estimated by multiplying the matrix of Legendre orthogonal polynomial covariates by the vector of predicted additive genetic regression coefficients; that is, GEBV_i = T ĝ_i, where ĝ_i is the vector of predicted additive genetic regression coefficients for animal i, and T is a matrix of orthogonal covariates associated with the Legendre orthogonal polynomial functions.
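Converting the predicted regression coefficients into daily GEBV only requires the Legendre covariate matrix T; the sketch below assumes the usual standardization of DIM to the interval [-1, 1] and normalized Legendre polynomials, which may differ in detail from the covariates used here, and also shows the lactation-stage averaging described below.

```python
# Sketch: daily GEBV over DIM 5-365 from order-3 Legendre regression coefficients.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim: np.ndarray, order: int = 3,
                        dim_min: int = 5, dim_max: int = 365) -> np.ndarray:
    """Rows = days in milk, columns = normalized Legendre polynomials 0..order."""
    x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0   # map DIM onto [-1, 1]
    t = np.zeros((dim.size, order + 1))
    for k in range(order + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0                                     # select P_k
        t[:, k] = np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, coeffs)
    return t

dim = np.arange(5, 366)
T = legendre_covariates(dim)                 # 361 x 4 covariate matrix
g_hat = np.array([10.0, -1.5, 0.8, -0.2])    # illustrative coefficients, one parity
daily_gebv = T @ g_hat                       # GEBV_i = T * g_i, one value per DIM

# Lactation-stage GEBV: average (or sum for milk yield) over DIM windows.
stage1 = daily_gebv[(dim >= 5) & (dim <= 60)].mean()
whole_lactation = daily_gebv.mean()
```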
Genome-Wide Association Study
The GWAS analyses were performed for all included traits in the first 3 parities considering the following 3 lactation stages: (1) from 5 to 60 DIM, representing the ascending production stage and lactation peak; (2) from 61 to 200 DIM, representing the middle lactation stage; and (3) from 201 to 365 DIM, representing the production decline up to the end of the lactation (Oliveira et al., 2019). Therefore, the GEBV for the first, second, and third lactation stages of each animal i (GEBV_i,1, GEBV_i,2, and GEBV_i,3, for each trait in each parity) were obtained by averaging (summing for MY) the daily GEBV solutions from 5 to 60, 61 to 200, and 201 to 365 DIM, respectively. Furthermore, the GEBV of animal i through the entire lactation (GEBVe_i) were obtained by averaging (summing for MY) the daily GEBV solutions from 5 to 365 DIM.
The SNP effects for each lactation stage were estimated individually for each trait in each parity using the postGSf90 software (Aguilar et al., 2014). The animal effects were decomposed into those for genotyped (a_g) and ungenotyped animals (a_n). The animal effects of the genotyped animals are a function of the SNP effects, a_g = Zu, where Z is a matrix relating the genotypes of each locus and u is a vector of SNP marker effects. The variance of the animal effects was assumed to be Var(a_g) = ZDZ′σ²_u, where D is a diagonal matrix of weights for the variances of markers (D = I), and σ²_u is the additive genetic variance captured by each SNP marker when the weighted relationship matrix (G) was built with no weight. The SNP effects were obtained using the following equation: û = λDZ′G*^-1 â_g, where λ was defined by VanRaden (2008) as a normalizing constant, as described below: λ = σ²_u / σ²_a = 1 / [2 Σ p_i(1 − p_i)]. The percentage of the total additive genetic variance explained by the ith genomic region was estimated as Var(a_i)/σ²_a × 100%, with a_i = Σ_j Z_j û_j, where a_i is the genetic value of the ith region that consists of 50 adjacent SNPs, σ²_a is the total additive genetic variance, Z_j is the vector of the SNP content of the jth SNP for all individuals, and û_j is the marker effect of the jth SNP within the ith region. The additive genetic variance explained by 50-SNP moving windows, with an average size of ~216 KB, was calculated across the whole genome, and those windows explaining at least 1.0% of the total additive genetic variance were considered promising regions and used to identify positional candidate genes. The concept of grouping SNPs into windows was adopted as a way to better capture genetic information such as the extent of linkage disequilibrium (LD) among neighboring SNPs (Habier et al., 2011).
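The back-solving of SNP effects and the 50-SNP window variances can be sketched in NumPy as follows; the matrix names follow the definitions above, and this outline only mirrors the postGSf90 logic rather than reproducing it.

```python
# Sketch: back-solve SNP effects from the GEBV of genotyped animals and compute
# the share of additive variance explained by sliding 50-SNP windows.
import numpy as np

def backsolve_snp_effects(z: np.ndarray, g_star: np.ndarray,
                          a_g: np.ndarray, p: np.ndarray) -> np.ndarray:
    """u_hat = lambda * D * Z' * G*^-1 * a_g with D = I and
    lambda = 1 / (2 * sum(p * (1 - p))) (VanRaden, 2008)."""
    lam = 1.0 / (2.0 * np.sum(p * (1.0 - p)))
    return lam * z.T @ np.linalg.solve(g_star, a_g)

def window_variance_share(z: np.ndarray, u_hat: np.ndarray,
                          sigma2_a: float, window: int = 50) -> np.ndarray:
    """Percentage of total additive variance explained by each window of
    `window` consecutive SNPs, moved one SNP at a time as in the text."""
    n_snp = z.shape[1]
    shares = np.empty(n_snp - window + 1)
    for start in range(n_snp - window + 1):
        cols = slice(start, start + window)
        a_i = z[:, cols] @ u_hat[cols]           # genetic value of the window
        shares[start] = np.var(a_i) / sigma2_a * 100.0
    return shares
```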
Identification of Positional Candidate Genes for the Studied Traits
The animals included in this study were genotyped using the BovineSNP50 Beadchip v1 to v3 and EuroG MD (SI) v9 (Illumina, San Diego, CA); the genotypes were then imputed to the BovineHD Beadchip, which is based on the bovine reference genome assembly UMD3.1. However, the new bovine reference genome assembly ARS-UCD1.2, assembled using long sequencing reads, filled gaps and resolved repetitive regions of the UMD3.1 assembly, and has more credible annotation information (Rosen et al., 2020). The Lift Genome Annotations tool, available through a simple web interface (https://genome.ucsc.edu/cgi-bin/hgLiftOver), was used to convert the coordinate ranges of the identified genomic regions from the UMD3.1 to the ARS-UCD1.2 assembly. Then, to identify possible candidate genes associated with the considered traits, genes located within the identified genomic regions (i.e., between the start and end genomic coordinates of the identified regions based on the ARS-UCD1.2 assembly) were further investigated. We identified genes using the National Center for Biotechnology Information (NCBI) Map Viewer tool with the ARS-UCD1.2 assembly as the reference map.
RESULTS AND DISCUSSION
The lactation curves of daily average of phenotypic records for the considered traits are presented in Figure 1.The peak of MY occurred at the DIM 41 (27.59 kg), 35 (35.03 kg), and 32 (39.40 kg) for the first, second, and third parity, respectively.The FP, PP, and CNP curves were observed to be lower for first-parity cows than for second-and third-parity cows.The TA curves were higher for first-parity cows than for second-and third-parity cows.The descriptive statistics for studied traits in the first 3 lactations are presented in Table 2. Daily MY averaged 24.2, 27.9, and 30.0 kg in the first 3 lactations, which is comparable to those previously reported for Walloon Holstein cows (Bastin et al., 2013).Somatic cell score was the trait with the greatest coefficient of variation (53.54 to 59.45%), whereas a30 had the lowest coefficient of variation (7.17 to 7.54%).Cassandro et al. (2008) reported that among milk coagulation and production traits (MY, FP, PP, CNP, SCS, TA, RCT, and TA) of Italian Holstein, SCS has the highest and casein percentage has the lowest coefficient of variation.The average a30 ranged from 32.11 to 32.33 mm, which is in agreement with previous studies (Cassandro et al., 2008;Atashi et al., 2022a).However, mean a30 reported for Finnish Ayrshire cows ranged from 25 to 27 mm (Ikonen et al., 2004).The average (SD) SCS were 2.25 (1.36), 2.46 (1.46), and 2.77 (1.48) in the first, second, and third parity, respectively.Averaged MY, and SCS were higher with increasing parity in line with previous studies (Bastin et al., 2013;Atashi and Hostens, 2021), whereas averaged TA was lower with increasing parity as has been reported for DPBB cows (Atashi et al., 2022a).
Average daily heritability (h 2 ) estimates for the studied traits are shown in Table 3. Heritability estimates ranged from 0.10 (SCS) to 0.53 (CNP), 0.10 (SCS) to 0.50 (CNP), and 0.12 (SCS) to 0.49 (CNP) in the first, second, and third parity, respectively.Heritability estimates were generally higher in first parity than in later parities.The mean daily h 2 estimates for CT, a30, and TA ranged from 0.47 to 0.52, 0.46 to 0.47, and 0.46 to 0.51 in agreement with previously reported estimates for dairy cattle (Vallas et al., 2010;Tiezzi et al., 2013).Atashi et al. (2022a) reported that h 2 of infraredpredicted CT, a30, and TA in DPBB ranged from 0.40 to 0.48, 0.36 to 0.39, and 0.41 to 0.50, respectively.The high h 2 estimated for milk coagulation properties supports possible genetic improvement of milk coagulation ability in the population of Walloon Holstein cows.Cecchinato et al. (2011) reported that h 2 of RCT (min), a30, and TA ranged, respectively, from 0.22 to 0.58, 0.05 to 0.32, and 0.11 to 0.42 in Italian Holstein cows.Colinet et al. (2012) reported that the daily h 2 of infrared-predicted TA in the first-parity Holstein cows in Wallonia ranged from 0.45 to 0.60 with an average of 0.57.Mean daily h 2 of CC ranged from 0.47 to 0.50, which is in line with those reported for Montbéliarde dairy cows (Sanchez et al., 2021).The variation found for h 2 of CMP in the literature can be explained by the differences in the studied breeds, structure of the data, number of records, statistical models, and the length of the period of data collection.
Typically, GWAS methods are based on testing the significance of SNP effects on the traits of interest. However, SNPs within a genomic region can be highly correlated and jointly influence the phenotype. Furthermore, the genetic information in neighboring SNPs, such as the extent of LD, is not used in GWAS based on single SNPs (Bao and Wang, 2017). Therefore, window-based GWAS procedures have been proposed as an effective way to estimate the combined effect of several consecutive SNPs in a specific region and to identify genomic regions explaining a given amount of genetic variance (Aguilar et al., 2019). Window-based GWAS may use different window types (distinct or sliding windows) and variable window sizes (defined as the number of SNPs or the number of base pairs). However, the absence of a universal approach for hypothesis testing is an important challenge of window-based GWAS, even though it is quite a common procedure in genetic studies. The common form for declaring significance is to use a threshold on the additive genetic variance explained by an individual window (Aguilar et al., 2019). However, it is unclear what window size is optimal, and no standard presently exists to define the threshold on explained genetic variance. Therefore, determining the proper window size is usually subjective, and researchers often have not justified their choices or sometimes have acknowledged that their choices are arbitrary (Beissinger et al., 2015). Medeiros de Oliveira Silva et al. (2017), using the BovineHD SNP panel, considered 50-adjacent-SNP windows (with an average size of 280 KB) that explained at least 0.50% of the additive genetic variance as the threshold to declare significance. Atashi et al. (2020), using the BovineHD SNP panel, considered 50-adjacent-SNP windows that explained more than 1% of the total additive genetic variance as the threshold to declare significance for milk production and lactation curve parameters. Han and Peñagaricano (2016) considered 1.5-MB windows that explained at least 0.50% of the total genetic variance as the threshold to declare significance. Suwannasing et al. (2018) considered windows that explained more than 1% of the total genetic variance as the threshold to declare significance. Tiezzi et al.
(2015) calculated the variance absorbed by 10-SNP moving windows and reported the 10 windows explaining the largest amount of genomic variance as the most important windows. In this study, a window-based GWAS through the single-step genomic best linear unbiased predictor (ssGBLUP) was used. The results are presented as the proportion of total genetic variance explained by windows of 50 adjacent SNPs with an average size of ~216 KB, and windows explaining at least 1.0% of the total additive genetic variance were used to search for positional candidate genes. We used 1 SNP as the moving step of the window, which ensured that we did not miss genomic regions potentially associated with the traits due to the combination of SNPs. General information (start and end SNP numbers, window size, start and end genomic positions, and the variance explained by each window) about the results of single-step GWAS for the included milk production and cheese-making traits is presented in Supplemental Data S1-S108 (9 traits [MY, FP, PP, CNP, CC, SCS, CT, a30, and TA] × 3 parities × 4 stages per parity; https://github.com/hadiatashi/Holstein-Cheese). The Manhattan plots of the proportion of total additive genetic variance explained by 50-SNP windows are shown in Supplemental Figures S1-S9 (https://github.com/hadiatashi/Holstein-Cheese). The windows associated with the studied traits, along with the corresponding genes, are presented in Table 4. In total, 6 genomic regions distributed over 3 chromosomes (BTA1, BTA14 [4 regions], and BTA20) were identified as associated with one or more of the included traits. However, no genomic region explained more than 1.0% of the total additive genetic variance of SCS. The following results are discussed by chromosome.
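As a rough sketch of how such window summaries can be computed (not the authors' actual ssGBLUP software pipeline), the proportion of additive genetic variance attributed to each sliding 50-SNP window can be derived from back-solved SNP effects and centered genotypes. All variable names and the simulated data below are illustrative assumptions only.

```python
import numpy as np

def window_variance_proportions(Z, snp_effects, window=50, step=1):
    """Proportion of total additive genetic variance explained by sliding SNP windows.

    Z           : (n_animals, n_snps) centered genotype matrix
    snp_effects : (n_snps,) SNP effects, e.g., back-solved from a genomic model
    """
    total_var = np.var(Z @ snp_effects)          # variance of total direct genomic values
    props = []
    for start in range(0, Z.shape[1] - window + 1, step):
        sl = slice(start, start + window)
        u_w = Z[:, sl] @ snp_effects[sl]         # genomic value contributed by this window
        props.append(np.var(u_w) / total_var)
    return np.array(props)

# Toy example with simulated data (purely illustrative).
rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(200, 500)).astype(float)
Z -= Z.mean(axis=0)                              # center the genotype codes
effects = rng.normal(scale=0.01, size=500)
props = window_variance_proportions(Z, effects)
print((props >= 0.01).sum(), "windows explain at least 1% of the total genetic variance")
```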
BTA1
The genomic region located from 144.38 to 144.47 MB (UMD3.1 assembly) on BTA1 was associated with TA and CT. This window was 84.82 KB in size and explained more than 2.0% of the total additive genetic variance of TA in all defined lactation stages and more than 1.0% of the total additive genetic variance of CT in the first 2 stages of the first 3 lactations. Moderate negative genetic correlations were found between TA and CT (−0.46 in the first 3 lactations), which may explain why these traits are affected by the same genomic region (Supplemental Tables S1-S3, https://github.com/hadiatashi/Holstein-Cheese). This region has been previously reported to be associated with PP, TA, CT, a30, and CNP in DPBB cows (Atashi et al., 2022a,b). Iung et al. (2019) reported that this region is associated with SCS and MY in the Brazilian Holstein population. Sanchez et al. (2021) reported that this region is associated with the milk mineral contents of magnesium (Mg), potassium (K), sodium (Na), and phosphorus (P) in Montbéliarde dairy cows. This region harbors the solute carrier family 37 member 1 (SLC37A1) gene. SLC37A1 is highly expressed in the mammary glands (Chamberlain et al., 2015; Raven et al., 2016) and encodes a glucose-6-phosphate transporter involved in blood glucose homeostasis and sugar transport (Pan et al., 2011). It has been reported that the SLC37A1 gene has a very strong effect on milk mineral contents (Sanchez et al., 2021; Zaalberg et al., 2021) and is associated with MY, FP, and PP (Kemper et al., 2015; Raven et al., 2016; Pausch et al., 2017; Sanchez et al., 2017), milk FA profile and SCS (Iung et al., 2019), and milk CMP (Sanchez et al., 2019) in dairy cows. Sanchez et al. (2019) reported that SLC37A1 plays a role in inorganic anion transport and is a good candidate for CMP and milk composition. This gene has also been shown to be associated with casein phosphorylation (Fang et al., 2019) and to participate in fat metabolism and mammary gland development in Holstein cows (Wang et al., 2022).
Footnotes to Table 4: 2 Positions of the identified genomic regions based on the UMD3.1 assembly. 3 Positions of the identified genomic regions based on the ARS-UCD1.2 assembly. 5 MY = milk yield; FP = fat percentage; PP = protein percentage; CNP = casein percentage; CC = calcium content (mg/kg milk); SCS = somatic cell score, defined as SCS = log2(SCC/100,000) + 3; CT = coagulation time, defined as the sum of the rennet coagulation time plus the time to a curd firmness of 20 mm measured by the computerized renneting meter; a30 = curd firmness, defined as the curd firmness measured 30 min after enzyme addition by the computerized renneting meter; and TA = milk titratable acidity measured in Dornic degrees (°D). 6 The GWAS analyses were performed for all considered traits in each parity, considering 4 stages of lactation: (1) from 5 to 60 DIM, representing the ascending production stage and lactation peak; (2) from 61 to 200 DIM, representing the lactation persistency stage; (3) from 201 to 365 DIM, representing the production decline up to the end of the lactation; and (4) from 5 to 365 DIM, representing the entire lactation.
BTA14
Four genomic regions located from 1.52 to 2.15 MB, 2.19 to 2.57 MB, 2.67 to 2.98 MB, and 3.13 to 3.38 MB (UMD3.1 assembly) on BTA14 were identified as associated with one or more of the included traits. Hereafter, these regions are identified as BTA14-I, BTA14-II, BTA14-III, and BTA14-IV. These 4 regions combined explained 23.82 to 26.92% of the total genetic variance of FP. Genomic regions BTA14-I, BTA14-II, and BTA14-III combined explained 7.5 to 10.87%, 7.09 to 9.73%, and 6.72 to 9.29% of the total genetic variance of CC, CNP, and PP, respectively. Genomic regions BTA14-I and BTA14-III combined explained 3.27 to 5.08% of the total genetic variance of MY. The association between this region (1.52 to 3.38 MB on BTA14) and milk production traits has been reported in previous studies (Pausch et al., 2015; Jiang et al., 2019; Pedrosa et al., 2021). Iung et al. (2019) reported that the region located from 1.19 to 3.70 MB (UMD3.1 assembly) on BTA14 explained the highest proportions of genetic variance for FP and milk FA profile in the Brazilian Holstein population. The following results are discussed by the regions identified on BTA14.
BTA14-I. The results showed moderate negative genetic correlations between CT and milk composition traits, whereas high positive genetic correlations were estimated between milk composition traits and a30 and TA. Therefore, these traits are correlated and can be affected by the same genes. Chen et al. (2023) reported that this region is associated with the nitrogen efficiency index and milk true protein nitrogen in Walloon Holstein cows. Clancey et al. (2019) reported that SNP inside this region are associated with MY in Holstein cows. This region was 633.27 KB in size and harbors 30 genes, including scratch family zinc finger 1 (SCRT1), diacylglycerol O-acyltransferase 1 (DGAT1), cleavage and polyadenylation specific factor 1 (CPSF1), tonsoku like, DNA repair protein (TONSL), and spermatogenesis and centriole associated 1 (SPATC1). Most of the positional candidate genes found in this region support results from previous studies for MY and milk composition in dairy cattle (Kolbehdari et al., 2009; Li et al., 2010; Capomaccio et al., 2015; Jiang et al., 2019). DGAT1, involved in the last step of triacylglycerol synthesis, has a major effect on milk production traits (Jiang et al., 2010; Maxa et al., 2012; Nayeri et al., 2016; Clancey et al., 2019; Cruz et al., 2019). Nayeri et al. (2016) reported that SNP located within CPSF1 and TONSL are associated with MY in Canadian Holstein dairy cattle. The SHANK associated RH domain interactor (SHARPIN) gene is also located in this region; its product is involved in the regulation of immune and inflammatory responses (Wang et al., 2012) and has been reported to be associated with colostrum and serum albumin concentrations in Holstein cows (Lin et al., 2020). Sanchez et al. (2017) reported DGAT1, maestro heat like repeat family member 1 (MROH1), and BOP1 ribosomal biogenesis factor (BOP1) as the most important genes explaining the majority of the variability in milk protein composition in Montbéliarde, Normande, and Holstein dairy cattle.
BTA14-II. Furthermore, the region located from 2.19 to 2.57 MB on BTA14 was associated with milk composition traits including FP, PP, CC, and CNP. This region explained 3.33 to 3.71%, 1.08 to 1.38%, 1.09 to 1.42%, and 1.24 to 1.67% of the total additive genetic variance of FP, PP, CNP, and CC, respectively. Milk composition traits including FP, PP, CNP, and CC are strongly correlated and are expected to be influenced by the same genomic regions. The genetic correlations estimated among FP, PP, CNP, and CC ranged from 0.56 (FP and CC) to 0.97 (PP and CNP), 0.61 (FP and CC) to 0.97 (PP and CNP), and 0.58 (FP and CC) to 0.97 (PP and CNP) for the first, second, and third lactations, respectively. Atashi et al. (2020) reported that the region located from 1.86 to 2.12 MB on BTA14 is associated with 305-d MY and lactation curve parameters in Holstein dairy cows. This region was 374.22 KB in size and harbors 22 genes. Among the genes inside this region, nicotinate phosphoribosyltransferase (NAPRT), family with sequence similarity 83 member H (FAM83H), eukaryotic translation elongation factor 1 delta (EEF1D), pyrroline-5-carboxylate reductase 3 (PYCR3), scribbled planar cell polarity protein (SCRIB), lymphocyte antigen 6 family member H (LY6H), and mitogen-activated protein kinase 15 (MAPK15) have been previously reported as candidate genes for milk production traits in Holstein cows (Li et al., 2010; Buitenhuis et al., 2014; Ning et al., 2017; Wang et al., 2019). MAPK15 may affect milk composition through downregulating transactivation of the glucocorticoid receptor, as glucocorticoid is an important hormone in maintaining milking (Saelzler et al., 2006). PYCR3 encodes a protein that belongs to the pyrroline-5-carboxylate reductase family of enzymes, which respond to inflammatory, nutrient, and oxidative stress (Kuo et al., 2016), and has been reported to be associated with colostrum and serum albumin concentrations in Holstein cows (Lin et al., 2020).
BTA14-III. An additional region explaining large proportions of the genetic variance of MY and milk composition (CC, CNP, FP, and PP) was found between 2.67 and 2.98 MB on BTA14, where 15 genes are located. Milk composition traits are strongly correlated and are expected to be affected by the same pleiotropic genes. This region contains the lymphocyte antigen 6 (LY6) gene complex, including LY6K, SLURP1, LYNX1, PSCA, LYPD2, and LY6D, which is involved in regulating the major histocompatibility complex. Tiezzi et al. (2015) reported that this region is associated with clinical mastitis in US Holsteins. Atashi et al. (2020) reported that this region is highly associated with 305-d MY and peak yield in Holstein cows. Among the genes inside this region, adhesion G protein-coupled receptor B1 (ADGRB1), glycosylphosphatidylinositol anchored molecule like (GML), and lymphocyte antigen 6 family member D (LY6D) have been reported as candidate genes for MY, FP, PP, fat yield (FY), protein yield (PY), and milk FA profile in Holstein cows (Cole et al., 2011; Buitenhuis et al., 2014; Ning et al., 2017; Jiang et al., 2019). Costa et al. (2019) reported that a variant within ADGRB1 accounted for 2.44% of the total additive genetic variance for lactose yield in Fleckvieh cattle. The LY6D gene has been reported to be associated with MY and FY in Holstein cows (Jiang et al., 2014; Suchocki et al., 2016). The cytochrome P450 family 11 subfamily B member 1 (CYP11B1) gene is involved in glucose and lipid metabolism and has been reported as a functional candidate gene for milk production in dairy cows (Bülow and Bernhardt, 2002; Kaupe et al., 2007).
BTA14-IV. The genomic region located from 3.13 to 3.38 MB on BTA14, where the t-SNARE domain containing 1 (TSNARE1) gene is located, was associated with FP. This region was 246.80 KB in size and explained 1.12 to 1.25% of the total additive genetic variance of FP. TSNARE1 plays a role in intracellular protein transport and synaptic vesicle exocytosis (Smith et al., 2012; Luo et al., 2021) and has been reported as a candidate gene for traits including MY, FY, FP, PY, PP, and milk FA profile in Holstein cows (Buitenhuis et al., 2014; Jiang et al., 2019; Freitas et al., 2020; Bohlouli et al., 2022). Luo et al. (2021) evaluated physiological indicators of heat stress in Holstein dairy cows and reported a strong association of TSNARE1 with rectal temperature.
BTA20
On BTA20, the window located from 58.87 to 58.97 MB (UMD3.1 assembly), where the trio Rho guanine nucleotide exchange factor (TRIO) gene is located, was associated with TA. TRIO encodes a large protein that functions as a guanosine diphosphate (GDP) to guanosine triphosphate (GTP) exchange factor (Bateman et al., 2000). It has also been shown that the TRIO gene is associated with FY and milk BHB concentration in Holstein dairy cattle (Nayeri et al., 2019). SNP inside this region have been reported to be associated with MY, FP, PP, FY, PY, SCS, and udder morphology traits in Holstein cows (Bennewitz et al., 2004; Schrooten et al., 2004; Höglund et al., 2009; Cole et al., 2011; Wang et al., 2022), and with weaning weight and MY in beef cattle (Michenet et al., 2016).
CONCLUSIONS
Milk CMP are among the important breeding traits in dairy cattle breeds, especially in modern animal husbandry environments. This study aimed to estimate genetic parameters and to identify genomic regions associated with milk cheese-making traits in Walloon Holstein cows. The findings showed that the included CMP are moderately heritable and could be included in the breeding program currently used for Walloon Holstein cows. Using the available genotyping data, additional analyses were done to identify genomic regions associated with milk cheese-making traits. Different milk production and cheese-making traits were associated with 6 genomic regions distributed over 3 chromosomes (BTA1, BTA14, and BTA20), which could further be used for genomic prediction purposes. The results confirmed most previously identified genes for milk production traits and identified several novel candidate genes (including SLC37A1, TRIO, and genes located from 1.52 to 2.15 MB on BTA14) for the studied cheese-making traits, including CT, a30, and TA. Future research based on gene enrichment analysis might complement the GWAS results and help to deepen the understanding of the biological pathways related to the studied cheese-making traits.

Genotyping was facilitated through the support of the Fonds de la Recherche Scientifique-FNRS under grant no. J.0174.18 (CDR "PREDICT-2"). Relevant information supporting the results not presented here is provided in supplemental data (https://github.com/hadiatashi/Holstein-Cheese). None of the data were deposited in an official repository because they are the property of the breeding organizations, and they are available upon reasonable request. Author contributions are as follows: Hadi Atashi: conceptualization, formal analysis, investigation, methodology, writing the original draft; Yansen Chen: reviewing and editing the draft; Hélène Wilmot: reviewing and editing the draft; Catherine Bastin: development of original trait definitions, reviewing and editing the draft; Sylvie Vanderick: reviewing and editing the draft; Xavier Hubin: data curation, reviewing and editing the draft; and Nicolas Gengler: conceptualization, funding acquisition, project administration, resources, supervision, and reviewing and editing the draft. The authors have not stated any conflicts of interest.
Figure 1. Lactation curves for milk yield (MY), fat percentage (FP), protein percentage (PP), casein percentage (CNP), milk calcium content (CC) expressed as milligrams per kilogram of milk, SCS, coagulation time (CT) expressed in minutes, curd firmness after 30 min from rennet addition (a30) expressed in millimeters, and titratable acidity (TA) in Dornic degrees for the first (blue), second (red), and third (green) parity in Walloon Holstein cows.
H. Atashi acknowledges the support of the Walloon Government (Service Public de Wallonie, Direction Générale Opérationnelle Agriculture, Ressources Naturelles et Environnement, SPW-DGARNE, Namur, Belgium) for its financial support facilitating his stay in Belgium through the WALLeSmart Project (D31-1392 and D65-1435). H. Wilmot, as a current research fellow, and N. Gengler, as a former senior research associate, acknowledge the support of the Fonds de la Recherche Scientifique-FNRS (Brussels, Belgium). The authors thank the INTERREG VA France-Wallonie-Vlaanderen program and the Walloon Government (Service Public de Wallonie, Direction Générale Opérationnelle Agriculture, Ressources Naturelles et Environnement, SPW-DGARNE, Namur, Belgium) for their financial support through the BlueSter project and previous projects. The authors also acknowledge the technical support of the Walloon Breeders Association (Elevéo, Ciney, Belgium). The University of Liège-Gembloux Agro-Bio Tech (Gembloux, Belgium) supported computations through the technical platform Calcul et Modélisation Informatique (CAMI) of the TERRA Teaching and Research Centre, partly supported by the Fonds de la Recherche Scientifique-FNRS under grant no. T.0095.19 (PDR "DEEPSELECT").
Table 1. The calibration and cross-validation statistics of the mid-infrared prediction equations used.
Table 2. Descriptive statistics for milk yield traits and cheese-making properties in Walloon Holstein cows.
Table 3. Mean (SD) daily heritability for milk yield traits and cheese-making properties estimated across the lactation in the first 3 parities in Walloon Holstein cows.
Table 4. Genomic regions associated with milk yield traits and cheese-making properties in the first 3 parities in Walloon Holstein cows.
|
v3-fos-license
|
2019-03-08T14:22:46.210Z
|
2012-05-10T00:00:00.000
|
71316264
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/archive/2012/251201.pdf",
"pdf_hash": "b8ebe37091a286c2133bd4e2ff1d32f8875b8efe",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:639",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "a464ff3d97e40ac4d8d6b4592d0eb11249bb5c5f",
"year": 2012
}
|
pes2o/s2orc
|
Associations of the Burden of Coal Abandoned Mine Lands with Three Dimensions of Community Context in Pennsylvania
1 Department of Environmental Health Sciences, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA 2 Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA 3 Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA 4 Center for Health Research, Geisinger Health System, Danville, PA 17822, USA
Introduction
Pennsylvania has long been a witness to the negative impacts of energy fuel extraction industries. The quest for fossil fuels began in 1761 with coal mining, followed by petroleum drilling in 1859, and now a growing and controversial interest in natural gas drilling from shale [1]. An extensive history of coal mining has left the state with the worst legacy of scarred and contaminated landscapes in the USA [2,3]. These vast expanses of coal abandoned mine lands (AMLs) encompass terrestrial or aquatic sites of ore or mineral extraction, beneficiation, or processing, and waste deposit locations [2]. Although the Surface Mining Control and Reclamation Act (SMCRA) of 1977 established a fund to reclaim coal mines abandoned prior to the statute, relatively little scientific evidence was used for priority classifications of sites based on public health protection [4].
The settings in which people live and work influence health [5,6]. Characteristics of communities that are external to the individual have important implications for health [7,8]. For example, communities that lack material and social resources have a higher prevalence of cardiovascular risk factors and diseases [9][10][11][12], higher rates of chronic kidney disease [13,14], poorer maternal and infant health [15,16], and poorer mental health outcomes [17][18][19], even after controlling for individual socioeconomic, lifestyle, and clinical factors. Communities characterized by visible environmental degradation from past coal mining activities may present a wide spectrum of external physical and psychosocial hazards that may compromise physical safety and expose persons to unattractive and contaminated landscapes that, encountered on a daily basis, could lead to impacts on health by modifying health-related behaviors or via other mechanisms [20,21]. There are no prior studies of the burden of AML left behind by the coal industry and its influences on community context, but current coal production is associated with increased community mortality from lung cancer, cardiovascular, respiratory, and renal disease, and hypertension [22][23][24][25].
We hypothesized that communities with a greater burden of AML would have greater socioeconomic deprivation, social disorganization, and physical disorder, three measures of community context that have been linked to adverse individual-level health outcomes [26][27][28][29][30][31][32]. The findings from such research could provide useful, public health-relevant guidance for reclamation strategies for coal AML and for regulation of current drilling for natural gas deposits in shale.
Methods
In this ecologic study, we examined 10 AML measures using Reclaimed Abandoned Mine Land Inventory System (RAMLIS) data in relation to 2000 Census-based measures of socioeconomic deprivation, social disorganization, and physical disorder across 1283 Pennsylvania communities. We used multiple linear and logistic regression to examine ecologic relations. We also conducted sensitivity analyses to evaluate whether reclamation status and spatial dependence influenced associations. ArcGIS 9 and ESRI ArcMap version 9.2 (Redlands, CA) were used to georeference data from all sources into one spatially linked database to create community context outcomes, AML exposure metrics, and maps.
Definition of Communities.
Because our study area was comprised of rural and urban areas, small towns, and villages, no single geography was ideal. Given this diversity, we implemented a mixed definition of community by combining minor civil divisions (MCDs) and census tracts (CTs), both of which honor county boundaries. MCDs are primary governmental divisions of a county categorized predominantly as townships, boroughs, or cities. MCD boundaries are too large and heterogeneous in cities and include dozens of urban CTs. In urban areas, CTs are small, relatively permanent statistical subdivisions that average about 4000 people, but rural CTs can be too large and heterogeneous (>100 miles²) and include many small towns. We chose a mixed definition of community that used the MCD boundaries for townships and boroughs, given their sociological validity, and used census tract boundaries in cities to capitalize on the greater spatial resolution for more densely populated areas.
Study Area.
The geographic study area included all 943 communities (i.e., township, borough, or census tract) that had at least one abandoned mine and a sample of 340 "control" communities that had no abandoned mines. To reduce residual influences from the AML communities, we defined the non-AML control communities as those sharing a border with a community that was also free of AML but immediately adjacent to an AML community (Figure 1). We used a sample of non-AML communities to exclude such areas as the urban region of Philadelphia, which may be less comparable to communities with AML on a variety of factors relevant to health.
2.3. Data Sources.
AML variables were derived from the RAMLIS based on national, state, and local data [3]. RAMLIS includes information on approximately 30 mine features for abandoned and reclaimed (i.e., those that had at least one feature with a completed remediation activity) coal mines. Each feature was characterized by up to six dimensions (i.e., area, length, volume, height, flow, and count), but we focused on the more comprehensive count and area dimensions to create standard measures of place [33][34][35]. For the dimensions of community context and key covariates of interest, data were abstracted from the USA Census 2000 short-form and long-form questionnaires, summary files 1 and 3, respectively. U.S. Census 2000 TIGER/Line files were used for cartographic community boundaries.
Outcome Variables.
From a combination of expert opinion and prior studies [30,[36][37][38][39][40], we used standard methods to generate summary scores for three dimensions of community context. The measure of socioeconomic deprivation was a modified version of the Townsend index, originally developed to measure material deprivation in urban environments in Great Britain and commonly used in epidemiologic studies [27,36]. Because this study involved rural areas and small towns and boroughs that presented contrasting community landscapes, we developed and validated an alternative measure by replacing crowding and home ownership with indicators of low education, poverty, public assistance, and labor force nonparticipation. Socioeconomic deprivation and social disorganization were conceptualized as scales based on prior theory and operationalized as the sum of the z-scores of the appropriately transformed indicators. Our measurement model was tested using maximum likelihood factor analyses for each scale to ensure an adequate model fit to a single factor. Physical disorder was treated as an index [41] and modeled as an ordinal variable (low, medium, high).
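To make the scale construction concrete, the sketch below shows one way such a summary score could be assembled from community-level indicators with pandas. The column names, the toy values, and the log transformation are illustrative assumptions, not the exact variables or transformations used in the study.

```python
import numpy as np
import pandas as pd

# Hypothetical community-level indicators (proportions per community).
df = pd.DataFrame({
    "low_education": [0.12, 0.30, 0.22],
    "poverty": [0.08, 0.25, 0.15],
    "public_assistance": [0.03, 0.10, 0.06],
    "labor_nonparticipation": [0.28, 0.45, 0.35],
})

# Transform skewed indicators (a log transform is shown here as an example),
# standardize each indicator to a z-score, and sum to obtain the summary score.
transformed = np.log(df + 0.01)
z = (transformed - transformed.mean()) / transformed.std(ddof=0)
df["deprivation_score"] = z.sum(axis=1)
print(df["deprivation_score"])
```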
Primary AML Exposure Variables.
To ensure sufficient spatial variability across the overall geography, we used only the 11 mine features that had at least 500 occurrences across the state (i.e., dry strip mines, flooded strip mines, abandoned structure and equipment, high walls, open mine shafts, subsidence prone areas, vertical mine shafts, acid mine drainage discharges, refuse piles, spoil piles, and untreated discharges). We derived measures of density, diversity, accessibility, and clustering for total abandoned and reclaimed features, reclaimed features only, and abandoned features only. These were then reduced to 10 primary AML variables, which were selected because they were generally orthogonal and represented a priori hypotheses about how AML may influence community context.
Three density metrics were formulated along a priori dimensions that captured aspects of AML burden relevant to (1) aesthetic quality; (2) physical hazards that have the potential to cause bodily harm; and (3) toxic contamination stemming from various sources within abandoned mines. The density of aesthetic quality features was calculated as the count of dry strip mines, flooded strip mines, and abandoned structures and equipment divided by the area of the community. The density of physical hazards used the count of high walls, open mine shafts, subsidence prone areas, and vertical mine shafts divided by the area. The density of toxic contamination features was calculated as the count of acid mine drainage discharges, refuse piles, spoil piles, and untreated discharges divided by the area. Four additional density measures were created to fully characterize the burden of AML in communities and to evaluate SMCRA priority problems. The density of abandoned mine areas (total area of all abandoned mine sites within a community divided by the area of the community) and the density of acid mine drainage-impacted streams (total distance of affected streams within the community divided by community area) were included as easily interpreted, readily visualized, community-wide measures. SMCRA defines 17 AML problems as "high priority" that pose a threat to the health, safety, and general welfare of people, so the density of SMCRA priority 2 and priority 3 areas was included to evaluate this scheme.
Three additional AML burden metrics were created. Diversity was measured as a count of the presence of the 11 mine features plus acid mine drainage-impacted streams. Accessibility characterized the "intensity of the possibility for interaction" between people and AML features [35]. A single metric was calculated as the nearest neighbor Euclidean distances from the population center of each community to each of the 11 mine features, summing the z-score-transformed distances for each feature, then standardizing for direction so that larger values represented increased accessibility. Finally, the mean of the interpoint squared distances between mine centroids measured the extent of clustering of abandoned mines; for a community with n abandoned mine centroids, this was computed as the average of (x_i − x_j)² + (y_i − y_j)² over all pairs of centroids i and j, where x is the longitude and y is the latitude of a mine centroid [42]. Clustering was included because communities could have similar density measures but have widely dispersed or tightly clustered abandoned mines. We hypothesized that more tightly clustered mines were worse for community context. Clustering could not be calculated for the 340 communities with no abandoned mines and mine features, 212 communities with no abandoned mine centroids, and 202 communities with one centroid. For the remaining 529 communities with two or more abandoned mines, the calculated clustering metric was negated so that larger values, indicating more highly clustered mines, represented greater AML burden.
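A minimal sketch of this clustering metric is given below, assuming centroid coordinates have already been projected to a planar system; the negation for communities with two or more mines follows the description above, and the variable names are illustrative.

```python
import numpy as np

def clustering_metric(coords):
    """Negated mean interpoint squared distance between abandoned mine centroids.

    coords : (n, 2) array of projected (x, y) centroid coordinates for one community.
    Returns None when fewer than two centroids exist (metric undefined).
    """
    coords = np.asarray(coords, dtype=float)
    n = coords.shape[0]
    if n < 2:
        return None
    diffs = coords[:, None, :] - coords[None, :, :]      # pairwise coordinate differences
    sq_dists = (diffs ** 2).sum(axis=-1)                 # n x n squared distances
    mean_sq = sq_dists[np.triu_indices(n, k=1)].mean()   # mean over unique pairs
    return -mean_sq                                      # negate: larger = more clustered

# Toy example: tightly clustered centroids score higher than dispersed ones.
print(clustering_metric([[0, 0], [1, 0], [0, 1]]))
print(clustering_metric([[0, 0], [10, 0], [0, 10]]))
```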
The final 10 AML exposure variables used in the analysis were thus density of abandoned mine areas, aesthetic quality features, physical hazards, toxic contamination features, priority 2 features, and priority 3 features; clustering, accessibility, and diversity; and density of acid mine drainage-impacted streams. There was large variation in the measures across communities; Figure 2 shows AML features for three selected communities representing low, moderate, and high burden of AML.
2.6. Data Analysis. The goals of the analysis were to (1) evaluate main effect associations among AML burden variables and three dimensions of community context; and (2) conduct sensitivity analyses to assess the effect of reclamation status on these associations. All data analyses were at the community level and performed with SAS version 9.1 (Cary, NC).
For the main analyses, we used variables created from all reclaimed and unreclaimed features. Multiple linear regression was used for socioeconomic deprivation and social disorganization, while polytomous logistic regression was used for physical disorder, assuming a nonmonotonic relationship with exposure and a three-level outcome (low (0-50th percentile), medium (51st-90th percentile), and high (≥91st percentile)). Separate regressions were conducted for each AML exposure variable among the 943 AML communities. Three of the AML variables were modeled as continuous variables (accessibility, diversity, and density of acid mine drainage-impacted streams) and the others were modeled as categorical variables (reference group of zero values, with nonzero values frequency-divided into three or four groups). Unadjusted regressions for each AML exposure variable were then adjusted for potential confounders added one at a time, including population density (population per square mile), proportion male, race/ethnicity (proportion white, nonwhite, or Hispanic), age (eight categories), proportion "urbanized areas and clusters" as defined by the U.S. Census Bureau (of the total land area), current mining employment, and density of active mines (per square mile). All these were derived from census data except the last two, which were obtained from RAMLIS. The final fully adjusted model (Model 1) was also evaluated within all 1283 AML and non-AML communities (Model 2). All models were evaluated for normality of residuals, homoscedasticity, linearity, and residual spatial variation (nonindependence) by examination of plots and similar diagnostic methods.
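The analyses themselves were run in SAS, but as a hedged illustration of the modeling setup, the sketch below shows analogous regressions in Python with statsmodels: ordinary least squares for a continuous community context score and a multinomial (polytomous) logit for the three-level physical disorder outcome. The data frame, column names, and covariate set are placeholders, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical community-level data (one row per community).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "deprivation": rng.normal(size=200),
    "disorder_level": rng.integers(0, 3, size=200),   # 0 = low, 1 = medium, 2 = high
    "aml_density": rng.exponential(size=200),
    "pop_density": rng.exponential(scale=500, size=200),
    "prop_male": rng.uniform(0.45, 0.55, size=200),
})

covariates = ["aml_density", "pop_density", "prop_male"]
X = sm.add_constant(df[covariates])

# Linear regression for the continuous community context score.
ols_fit = sm.OLS(df["deprivation"], X).fit()
print(ols_fit.params["aml_density"])

# Polytomous (multinomial) logistic regression for the three-level outcome.
mnlogit_fit = sm.MNLogit(df["disorder_level"], X).fit(disp=False)
print(mnlogit_fit.params)
```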
To assess whether reclamation decreased socioeconomic deprivation, a sensitivity analysis was conducted using the same modeling strategy but with AML burden variables created from reclaimed features only, adjusting for features that remained unreclaimed. Clustering and acid mine drainage-impacted streams had no analogous reclaimed status, so a total of eight AML variables were evaluated. These regressions were performed separately in the 510 communities with at least one reclaimed mine feature and in all 943 AML communities.
Results
Of the 943 communities with AML, 433 had only abandoned mine features while 510 communities had at least one reclaimed mine feature (Table 1). There was large variation in the three community context outcomes in all three community types, and large variation in AML exposure variables in AML communities.
Socioeconomic Deprivation.
With only AML communities included in the models, associations of the AML exposure variables with socioeconomic deprivation were attenuated, but remained significant, with increasing levels of covariate control. When control communities without AML were included, associations generally strengthened (Table 2, Model 2). The fully adjusted models showed several patterns of association (Table 2, Models 1 and 2). These included traditional "dose-response" (e.g., accessibility, density of acid mine drainage-impacted streams, and density of mine areas), "u-shaped" (e.g., density of physical hazards, density of toxic contamination features, and density of priority 2 areas and priority 3 areas, with tests for linear trend P < 0.05 except for density of priority 3 areas), and threshold (e.g., density of aesthetic quality features, test for linear trend P < 0.05) patterns.
Social Disorganization.
In fully adjusted models, associations between AML burden and social disorganization were of two main patterns (Table 2, Models 5 and 6). Higher clustering and density of acid mine drainage-impacted streams were associated with more social disorganization.
For the other AML variables, several followed the pattern of lower social disorganization with intermediate amounts of AML. Addition of control communities similarly tended to strengthen associations.
Physical Disorder.
There were trends of increasing physical disorder across categories of AML exposure, but in no case were we able to reject the null hypothesis of no association (all P > 0.05, data not shown).
Sensitivity Analysis.
The analysis of reclaimed mine features showed inconsistent results across AML variables (Table 2, Models 3 and 4) but generally suggested that reclaimed features had no or weakened associations with socioeconomic deprivation. Consistent with hypotheses, for mine area density, toxic contamination density, priority 2 density, and accessibility, reclaimed features were not associated with socioeconomic deprivation. For aesthetic quality density and physical hazards density, associations of the highest quartile of each with socioeconomic deprivation were weaker when reclaimed features only were included. In contrast to hypotheses, reclaimed priority 3 feature density showed stronger associations compared with the main analysis. There was no evidence that residual spatial correlation accounted for any of the results.
Discussion
Underground voids and vertical mine shafts, unpleasant views of old abandoned structures and spoil piles, and stretches of tainted streams serve as a backdrop for many communities across Pennsylvania. This study provides the first evidence to suggest that some of the legacy of AMLs was associated with higher socioeconomic deprivation. Relations between AML features and social disorganization were more complicated, in that a moderate amount of AML was associated with lower social disorganization (i.e., better community context). Finally, AML features were not associated with physical disorder. Because places influence individual health, it is important to understand how AML may contribute to community context, which can "get under the skin" to produce somatic responses [43]. Comprehending how uncommon exposures like AML may influence community context may allow novel insights about context in general. Growing evidence suggests that healthy communities are the product, in part, of features relevant to social functioning (i.e., perceived reputation of the area and positive sociocultural features of the community) and community material and institutional resources (i.e., quality of physical aspects of the environment, presence of healthy home, work, and recreational environments, and existence of public services available to all residents) [7]. Healthy communities, therefore, depend on a rich social fabric in combination with valuable material resources for members [44]. AML in and around communities, "problems of industrial and consequent social decay, like the parallel problem of urban slums" [45], may engender poor health outcomes via several mechanisms. The community burden of AML may modify physical activity behaviors because of aesthetics or concerns regarding exposures, a conclusion supported by stronger associations with the more perceptible aspects of AMLs. Additionally, lack of resources in communities with higher socioeconomic deprivation can limit access to health care, healthy food establishments, and recreational spaces and also promote poor health behaviors such as cigarette smoking, alcohol consumption, and decreased physical activity [20].
As with many such studies, problems arise regarding temporal ordering. It is not possible to determine whether an association between AML and degraded community context might be the result of the reverse process. However, in this case, we argue that unlike other environmental hazards such as landfills, factories, or hazardous waste incinerators preferentially sited in poor communities [46], the location of mining operations is entirely exogenous to the characteristics of the communities where they are placed and is a function of geologic features alone. For this reason, the potential for reverse causation is reduced. It is possible that the presence of mine operations shaped the socioeconomic characteristics of the community prior to the closing of the mine or that the cessation of mining activity itself led to community decline. These possibilities cannot be ruled out given the lack of data on the cessation of active mining.
Another relevant question is whether communities have greater socioeconomic deprivation simply because of the collapse of the mining industry. Disentangling this possibility with the reclamation analysis was difficult because communities with at least one reclaimed feature also had the greatest burden of AML. However, several observations suggested the weight of evidence supported the conclusion that the physical remains of past coal mining activities may have been a more important contributor to degraded community contexts than the collapse of the mining industry. First, the density of active coal mines and current mining occupation were associated with worse socioeconomic deprivation, suggesting that ongoing mining, not its disappearance, is associated with worse community socioeconomic deprivation. Second, several AML variables that would not be considered proxies for the collapse of the mining industry were associated with worse socioeconomic deprivation. For example, the density of acid mine drainage-impacted streams in communities or the accessibility of population centers to the nearest AML features would be difficult to link with the magnitude of economic collapse following the closure of coal mines, yet these were also associated with socioeconomic deprivation.
Social disorganization is generally considered as the "inability of a community structure to realize the common value of its residents and maintain effective social controls" [37,47]. The apparent lack of social controls in communities contributes to the unwillingness of residents to deal with signs of neighborhood disorder [48], or the lack of strong social ties to connect neighbors to one another may pave the way for crime and delinquency. There are several plausible explanations for the seemingly counterintuitive association of environmentally degraded communities with lower social disorganization. For example, having some level of AML burden in communities may help to bring residents together in their community, motivating them to work cooperatively toward a common goal [49,50]. In England, four studies have evaluated interactions between physical and social environments in disadvantaged neighborhoods and concluded that there remained strong feelings of mutual support and resilience [51]. People living in communities with a higher burden of AML may have greater attachment to place, richer social networks, and stronger ties to the community [50]. It is also possible that undesirable surroundings in communities may prompt people to desire to leave, but they may not have the freedom to do so, turning their attention to improving their communities, which may lead to stronger attachment [52].
The findings may have relevance to current natural gas drilling from the Marcellus shale, a process that requires considerable disruption of natural ecosystems for new road construction and drilling pads [53]. The vertical and horizontal drilling and hydraulic fracturing processes require large quantities of water and can lead to chemical, metal, and radioactive material contamination of surface and ground waters and soils [53]. Construction of miles of pipeline and compressor stations to transport the natural gas further compromises nearby air, water, and soil quality [54]. Like the legacy of AML, without careful attention, Marcellus shale activities could have impacts on communities similar to those identified herein.
Because of the substantial amount of funding provided by the 2006 Amendment to SMCRA for reclamation of coal AML and the relative lack of empirical, public health-oriented evidence for prioritizing areas for immediate reclamation, results from this study may be useful to identify communities with the greatest need for reclamation. Based on associations with community socioeconomic deprivation, the results suggest that communities in the highest quartile of mine area density or physical hazard density, for example, may be the best candidates for investing SMCRA funds for reclaiming AML if public health protection were to be, as stated in SMCRA, a primary goal.
Figure 1: The geographic study area of Pennsylvania representing the 943 communities with at least one abandoned mine and the selected 340 control communities with no abandoned mines.
Figure 2: Example of three selected geographies in the study area representing low (Polk Township-1), moderate (Municipality of Murraysville Borough-2), and high (Gaskill Township-3) burden of AML features in communities.
Table 2: Fully adjusted associations between burden of AML variables (reclaimed and unreclaimed) and socioeconomic deprivation and social disorganization.
Table 1: AML variables, dimensions of community context, and covariates across AML and non-AML communities.
d AQ = aesthetic quality, e PH = physical hazards, f TC = toxic contamination, g Italics indicates statistically significantly different from means of one or two other groups (P value at least <0.05), h Bold indicates means of all three groups statistically significantly differ from each other (P value at least <0.05).
|
v3-fos-license
|
2020-08-22T13:03:45.975Z
|
2020-08-21T00:00:00.000
|
221217680
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fsoc.2020.00046/pdf",
"pdf_hash": "181a0831d7124cffc17097ba263e6fed38b6a2aa",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:643",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "181a0831d7124cffc17097ba263e6fed38b6a2aa",
"year": 2020
}
|
pes2o/s2orc
|
The Past, the Present, and the Future: A Qualitative Study Exploring How Refugees' Experience of Time Influences Their Mental Health and Well-Being
The experience of time has a decisive influence on refugees' well-being and suffering in all phases of their flight experiences. Basic safety is connected both developmentally and in present life with a feeling of continuity and predictability. Refugees often experience disruption of this basic sense of time in their home country due to war, persecution, and often severe traumatization, during flight, due to unpredictable and dangerous circumstances in the hands of smugglers, and after flight, due to unpredictable circumstances in asylum centers, e.g., extended waiting time and idleness. These context-dependent disruptions of normal experiences of time may lead to disturbances in mental life and extreme difficulties in organizing one's daily life. This article is based on narrative interviews with 78 asylum seekers and refugees in asylum centers in Norway, exploring their experiences before, during, and after flight. The distinction between abstract, chronological time, and concrete time connecting situational experiences (daily activities, such as daily rhythm of sleep and wakefulness) proved important for understanding how the experiences became mentally disturbing and how people tried to cope with this experience more or often less successfully. Prominent findings were loss of future directedness, a feeling of being imprisoned or trapped, disempowerment, passivity and development of a negative view of self, memory disturbances with difficulty of placing oneself in time and space, disruptions of relations, and a feeling of loss of developmental possibilities. Some had developed resilient strategies, such as imagining the flight as a holiday trip, to cope with the challenges, but most participants felt deeply disempowered and often disorientated. The analysis pointed clearly to a profound context dependent time-disrupting aspect of the refugee experience. An insecure and undefined present made participants unable to visualize their future and integrate the future in their experience of the present. This was connected with the inherent passivity and undefined waiting in the centers and camps, and with previous near encounters with annihilation and death. A response was often withdrawal into passivity.
INTRODUCTION
"I am all that I inherited and all that I have acquired" (Mann, 1991).
Time is integrally bound up with our sense of identity, and different psychological definitions of personal identity tend to focus on the essential temporality of identity formation and identity development. This is, for example, described by Mann in his rather famous quote, which underlines the dynamics and continuity in identity formation. The subjective experience of time as a process is a basic human condition and connected with a feeling of being alive. This process gives a feeling of continuity and a feeling of some cohesion between the past, present, and future. This feeling of continuity is an experience both subjectively felt and socially and culturally determined by each group's relation to myths and rituals as well as socially organized linear time. The social organization of time gives the subjects the experience of developmental continuity connected with an experience of basic safety (Cipriani, 2013). Social phenomena and ways of organizing life specific to each culture create a frame of reference that expresses the culture's time frame. The experience of time is thus socially determined (Sorokin and Merton, 1937), and ruptures in the social context (e.g., war, persecution) may threaten the individual's security. Developmentally, an individual's security is established early in reasonably safe attachment relationships (Schore, 1994; Crittenden and Landini, 2011), mediated through careful handling of the child's needs. Given adequate family and social contexts, the child will achieve a sense of safety that ensures the child that frustration and hunger will pass and thus installs a feeling of hope for the future, a future-directed sense of time (Voort et al., 2014). A context-dependent basic sense of time is thus established from early development, with cultural and social differences.
When society and culture do not provide safe contexts, the family and, ultimately, the caregiver-child relation are put under stress. Wars, persecutions, and suppression create circumstances under which developmental possibilities, and thus a sense of predictability for children and adults in a society, deteriorate (Hollifield et al., 2018).
Forced displacement represents such a developmental risk. At present, the number of refugees, more than 70 million people (including the internally displaced), is the largest since World War II. The Balkan wars in the 1990s marked the beginning of an era with increasing numbers of people forced into flight. A significant proportion came to Western countries, even though the majority either fled within their own country (internally displaced) or fled to neighboring countries. The result was nevertheless a radical change in policies toward refugees in many parts of the Western world. For several reasons, the approach chosen by authorities was to frighten people away from coming to Western countries and to hinder them from crossing borders (Gammeltoft-Hansen, 2016). This has resulted in increased aid to refugees near their home countries, but many resources are now spent on border control and surveillance. Some refugees achieve the status of UN quota refugees and gain direct permission to stay, but the majority who try to reach Western countries have to rely on human smugglers. The consequence is that flight has become increasingly dangerous, with high mortality rates and high risks of traumatization (Silove et al., 2017; UNHCR, 2018).
People flee because they fear for their lives; often after their society has been severely damaged, relatives and friends killed, and life made impossible. Planning for the future becomes difficult, and basic feelings of safety and continuity of identity may be jeopardized. An experience of a possible future based on the experience of a reasonably safe past and a peaceful present that may give opportunities may seem impossible (Volkan, 2003). Thus, identity development and sense of time are put under pressure for large groups who are forced to flee or forcibly displaced.
Death is the ultimate end of time, and in itself unimaginable. When people die, they "pass out of earthly time" and in some cultures enter on a different journey where rebirth ensures a dimension of circularity of time (Gire, 2014). Under normal circumstances, the fact that people die is part of life, with appropriate explanations given, such as that it is "God's will," "destiny," or more technical medical explanations of the course of an illness. The fear of dying, of being annihilated at the hands of external malignant forces (terror state regimes, violent paramilitary groups, bombs from above, torturers, human smugglers, or drowning during flight), is qualitatively different from the fear of dying in times of peace, even if unforeseen. Such atrocities may evoke anxieties that may disturb identity development. We know that many torture victims live with a constant fear of annihilation and a fear that time will suddenly end (Lerner et al., 2016).
These fears and anxieties are connected with the wars and atrocities refugees have experienced in their home countries and during their flight through the deserts in Africa, boat voyages across the Mediterranean, imprisonment, slavery and sexual abuse along the borders of Western countries, and so forth (Grande, 2016;Silove et al., 2017). Such upheavals may be profoundly identity changing and cause a disturbance in the sense of time, often in the form of a shortened sense of a future (Beiser, 1987). Upon arrival in a potential host country, difficulties continue, especially in relation to the waiting time in asylum centers during the application process.
This article is part of a larger mixed method study seeking to identify resilience promoting and resilience inhibiting factors, on individual and contextual levels, among asylum seekers and refugees living in reception centers in Norway. Resilience is here defined as "the capacity of a biopsychosocial system (can include an individual person, a family, or a community) to navigate the resources necessary to sustain positive functioning under stress, as well as the capacity of systems to negotiate for resources to be provided in ways that are experienced as meaningful" (Ungar, 2019, p. 2). Resilience is thus a contextually dependent strategy to cope with hardship including temporal uncertainties (Ungar, 2008). During explorative qualitative interviews, the phenomenon of time was repeatedly addressed and strongly colored the interviews. It became clear that refugees' experience of time influences their mental health and well-being in various ways, and we decided that part of the study would explore refugees and asylum seekers' experiences during and after flight, with special focus on the role of time. It was also clear that there was a reciprocal influence between participants' mental health and the experience of time, but this was not focused on in this study.
PERCEPTION AND EXPERIENCE OF TIME: WHEN THE FEELING OF TIME BREAKS DOWN
To understand the role that the experience of time has in refugees' lives, we must look into some basic dimensions of time, especially the distinction between abstract, linear time, and concrete time related to daily, situational activities, which give an experience of regularity and predictability in daily life.
Abstract, quantifiable time as a basic dimension in our existence must be experienced and learned as a way of orientation. This time dimension provides conditions that enable us to intervene in the course of events by defining, realizing, densifying, and coordinating social activities (Johansen, 2001). Johansen (2001) underlines two salient characteristics of the modern form of time. First, time is homogeneous, implying that time is similar for all types of activities at all places. Thus, the present can be seen as a cross-sectional picture of a neutral time that encompasses the whole. Second, all modern time is continuous. It is not defined by actions or events, but is a dimension floating through and between activities in the form of periods or "rooms of time." In other words, time is uniform and linear and transcends any situation. Yet another characteristic is that humans, at least in the Western world, tend to think that time is available in a certain amount. Thus, our actions use parts of this amount, and we dispose, consume, exploit, or waste time. In other words, time is viewed as a resource that can (and should) be used in different ways. In modern, capitalist society, time is therefore a force to be exploited rather than part of an action or event (Johansen, 2001).
On the other hand, concrete time, often called premodern time, is heterogeneous and discontinuous. This is time as connected with concrete, situational experiences, such as preparing food, organizing the house, making furniture, harvesting the fields, and so on. Time, in this connection, is not an abstract and independent dimension, and the abstract dimension of time somehow falls apart or cannot be taken into account as the action takes the time it takes. Thus, every experience, every set of actions has its own time. In other words, the homogeneous frame that time provides in a modern way of thinking is absent, which complicates the possibility of acting according to a common criterion. Time is built by concrete experiences, by commonly known cultural events, such as marriages, or by commonly known references, such as natural events, a sunset, or something that happened "5 winters ago." This type of time cannot be separated from its content, since it is not anything but the events that it describes (Johansen, 2001).
These basic dimensions of time relate to the conceptions of synchronic and diachronic time. Synchronic time is the time that contains the flow of human action in a given situation, while diachronic time incorporates the movement in time (Nielsen, unpublished). Nielsen relates these dimensions to Johansen's conceptions of concrete (synchronic) and abstract (diachronic) time and points to the advantage of connecting concrete time to actions and events, since time is not just defined as a continuously moving point between past and future, but rather tied to the meaning of our actions. Thus, the present is an event or a social situation, not just a moment on a time arrow, a moment that in itself has temporality.
Nielsen also illustrates how, although abstract or diachronic time is linear, it is also heterogeneous, comprising a variety of "temporal strands, moving at different paces and in different spaces" (p. 5). Thus, diachronic time can be present in synchronic time and provide another depth and content. This idea of the multilayeredness of time and meaning is also found in psychoanalytical theories (Gutwinski-Jeggle, 1992). Thus, a person can be seen as layers of different interpretations, based on different points in time, existing as possible interpretations in a person's psychic universe (Nielsen, unpublished, p. 6). Memories of past experiences coexist with perceptions of present situations that are viewed in terms of representations of these earlier experiences, and earlier experiences may be reinterpreted in terms of new experiences (Varvin, 1997).
A person's subjective experience is, in other words, a temporal phenomenon, unfolding in the interaction between the past, the present, and the future (Nielsen, unpublished) in a specific cultural context. How these temporalities interact in a person's mind and how people interpret time are complex processes, personal, and idiosyncratic. In the interviews, however, we may observe the end result, the different ways our participants interpret both similar and dissimilar experiences of their flight destinies.
There are cultural differences in how time is conceptualized, which are reflected in social organizations of life, in myths, in language, and so forth (Boman, 1960;Ricoeur, 1984;Johansen, 2001). Most participants in our study come from non-Western cultures where, as in Semitic cultures, time is a part of life in the sense that what has happened tends to be an integral part of what is. In contrast, in Greek (and Western) thought, time is part of life that corrodes and finally destroys what is (Boman, 1960;Varvin, 2003). Even though cultural differences in how time is conceptualized among the participants are important, this has not been the focus in this study.
A person fleeing his/her home country needs, however, to adapt to situations with quite different and often strange (unknown) time frames, for example, situations where one has to act immediately to avoid danger, with no time to reflect; indeterminate waiting in queues, in camps in unknown places; and the experience of seemingly endless waiting for a decision on asylum applications. At the same time, the refugee needs to preserve a daily rhythm with sleep, eating, attending to bodily functions, and so forth. Such daily routines are related to home situations and family life and often difficult to maintain in life as a refugee (Varvin, 2016).
Thus, for refugees, the time dimension may be disturbed both on the abstract, chronological level, and on the concrete, action level.
Abstract, homogeneous time is disturbed during war and extreme upheaval as social institutions and coordination of social activities tend to be profoundly disrupted. This influences the organization of ordinary, daily life (concrete time) and also of maintaining a continuity of time, with a secure sense of connections between past, present, and future. Concrete time of action is made difficult or impossible in situations where daily activities may be dangerous or impossible. This becomes evident when homes are destroyed and people are forced to live in the streets or in shelters with constant dangers, or during flight, when refugees are dependent on others to organize daily life, often under dangerous and inhuman conditions.
AIM
As time appeared in the material as a category of overriding importance, the aim of this study is to explore in what way the experience of time influenced refugees' mental health and well-being during and after flight.
The Research Context
Asylum seekers apply for asylum upon arrival in Norway, and the waiting time varies from months to years, depending on the number of arrivals and country of origin. Reception centers in Norway are managed by non-government organizations (NGOs), private institutions, or the local municipality, and receive their commission and funding from the Norwegian Directorate of Immigration. Reception centers are located in all parts of the country. There are regular activities for the inhabitants, such as Norwegian courses, sports, handicrafts, etc., and kindergarten is available for a limited number of hours. However, activities vary from center to center, as do the quality and scope.
Asylum seekers are members of the National Insurance Scheme and are entitled to the same health services as nationals (Helsedirektoratet, 2017). Most centers have local nurses or interprofessional health teams, and primary health care functions (Rambøl, 2016;Helsedirektoratet, 2017). Formal and informal barriers to health care exist, however, especially regarding mental health care. These include inhabitants' lack of information on services, insufficient ability to express their mental health needs, health services' lack of ability to handle traumatized refugees, or rejection from the specialist health services on referrals (Varvin and Aasland, 2009).
METHODOLOGY
This study presents qualitative data from the Norwegian part of a mixed method study with participants recruited in Norway and Serbia. Qualitative methodology was used to identify resilience-promoting and resilience-inhibiting factors on both individual and contextual levels, for asylum seekers during their stay at asylum reception centers in Norway.
A short, open, semistructured interview guide was developed. The guide consisted of three main questions, created to gather the participants' narratives describing difficult and helpful pre-flight, flight, and post-flight experiences, including their perceived and experienced quality of life from the time they decided to leave their home and until the interviews were conducted. Participants were asked to provide examples of situations, persons, or activities that have had an impact on them during their refugee journey.
Seventy-eight participants were recruited at five reception centers for families and single adults in different counties, rural and urban, and also located in regions far away from the capital in order to achieve maximum sample variation (Patton, 2002). An information meeting about the study, with translators, was held in all centers. Participants volunteered in connection with the information meeting or were recruited through center administration and staff or during informal conversations when the researchers spent time in the reception centers.
The inclusion criteria were being a refugee or asylum seeker living in an asylum reception center and being above 18 years of age. The exclusion criteria were being below 18 years of age or too ill to be interviewed.
A purposeful sampling strategy, aiming at maximum variation, was applied to obtain as many different perspectives as possible, i.e., variation in age, gender, family situation, education level, and ethnic affiliation. This ensured inclusion of the subgroups: families with children, single mothers with small children, single women, and refugees arriving as unaccompanied minors but tested to be adults (above 18 years). Some of the participants were below 18 (minors) when they arrived in Norway, but had become 18 while waiting for their application to be treated. Asylum seekers from Syria, Iraq, and Afghanistan were of particular interest due to their common flight experiences, but all asylum seekers were invited to participate.
The participants consisted of 28 women and 50 men, and the mean age was 29.9 years. Even though we sought maximum variation, we recruited more men, as more men tended to participate in the information meetings and men were more accessible in the common rooms in the centers. Women, being less accessible, were also less available for the informal conversations that often led to recruitment.
The participants all perceived themselves as refugees, and all had applied for asylum (47% application pending, 24% granted asylum, and 29% refused asylum). Length of stay in Norway varied (3 months to 10 years). A study under review, based on the quantitative part of the study, found that the mental health of the participants who were refused was significantly worse than that of the others (Grøtvedt et al., 2020), which is in accordance with other studies (Hocking et al., 2015). As we found that the time dimension had other and often more serious psychological consequences for those being refused asylum, we have chosen to address this particular group in a separate paper.
Data Production
The data production lasted from 2016 to 2017, altogether 18 months. Translators were used when necessary, by phone or with the translator present (English and Norwegian were used without translator). Interviews took place in a quiet room at the asylum reception centers and lasted 1-3 h. All interviews were tape recorded, with the participants' consent, and transcribed verbatim. Field notes gave information on setting, participants' reactions during the interview, and researchers' reflections following the interview.
The research included participant observation/field notes in the selected reception centers. In this way, we could move beyond selective perceptions and discover issues that were overlooked during interviews. This contextual knowledge facilitated a better understanding of what had been expressed during interviews (Fangen, 2010), such as conditions influencing resilience. Spending time with the residents by participating in daily activities at the centers contributed to building rapport and facilitated informal and formal conversations/interviews.
Ethics
Informed written consent was sought from all participants. They were told that all information would be treated without name or any other direct identifiable information and that we would present the data in a way that secured confidentiality. They were informed that the research had no link to the asylum application process and that they could withdraw at any stage. Deidentification and confidentiality were ensured by using fictitious names, and other potential identifiers were also altered. Ethical approval for the study was obtained from the Regional Committee for Medical Research Ethics (REK) in Norway (REK 2016/65).
The Research Team
The research team consisted of six researchers with different backgrounds: psychiatry, nursing, anthropology, and master students (within the field of nursing) under supervision. The different backgrounds allowed for negotiation from different perspectives in the process of interpreting the material, basing our analysis on researcher triangulation.
Analysis
The chosen theoretical underpinning was phenomenology. This methodology attempts to understand the meaning of events and interactions within the framework of how individuals make sense of their world. As the interviews provided many and different expressions related to how time influences the participants' mental health and well-being, our intention was to explore the essential meaning of their descriptions. The data analysis was inspired by the principles of Giorgi's phenomenological analysis as modified by Malterud (Giorgi, 1985;Malterud, 2017).
We developed an analytical approach that relied on six interconnected stages: (a) familiarization; (b) indexing; (c) identification of a thematic framework; (d) interpretation: development of preliminary categories; (e) confrontation with existing theory on time experience by refugees; and (f) reinterpretation of themes, contextualization, and development of the final conceptual (or categories) framework.
The main researchers (authors of this article) also conducted interviews. In this way, the familiarization started during the interview process and was followed by in-depth reading of the interviews immediately after they were conducted. We all wrote down our first impressions and reflections on each interview, reflective notes that were available to all the authors. To ensure reliability, or consistency within the employed analytical procedures, identification of meaningful units and themes (indexing) was done through reading and rereading the interviews and notes in a collaborative process between the first and last author. Thus, this study is also inspired by hermeneutics, as we maintained a constant and dynamic dialogue with the material, moving between perceiving and analyzing the material at a detailed level and trying to understand how the single pieces constituted broader themes. Through an interpretive process, a preliminary network of categories was developed, clustered around concepts based on the research questions for this study. The research questions were subsequently confronted with the material and refined. As it was important to see the gathered passages in context, rereading of whole interviews and transcripts from participant observations was done when considered necessary. Time appeared in the material as a category of overriding importance, something we had not anticipated. Subsequently, we identified, reviewed, and discussed a variety of empirical and theoretical sources addressing the phenomenon of time, as we found that the discourse on time has many ramifications. An interdisciplinary approach seemed fruitful, allowing for psychological as well as sociological and anthropological perspectives and theories in the process of interpretation (Cipriani, 2013). We gathered the passages relating to time in a table, giving us the main basis for the analysis. The following subcategories of the time category were developed: loss of future directedness, entrapment, passivity and relation to self, memory, disempowerment, waiting for an answer, disrupted time and friendship, loss of development, and strategies to cope with time/interpretations of time. These categories will be elaborated below.
FINDINGS
In the material, refugees' encounter with the sudden and brutal possibility of "the end of time," or with time as an unclear, unpredictable, and even feared dimension, offers an analytical way of understanding the emergence of time as a recurrent theme. The extreme experience of losing a sense of future, characteristic for many inhabitants of asylum centers, must be understood partly on the background of the refugees' near encounter with the possibility of annihilation and death and also on the background of the inherent passivity and undefined waiting in the centers.
Loss of Future Directedness
One of the most common issues expressed by the participants was how an insecure and undefined future left them unable to visualize their future life and also to integrate the future in the experience of the present.
Adem, a well-educated Syrian man in his late 20s, waiting for a response to his asylum application, used the metaphor of being frozen in time. He combined this metaphor with animal metaphors to illustrate inactivity and thus the inhuman character of his present situation: Adem: Before, I was thinking about my future, right now I am not. Why do I have to think? I feel like I am a cow. Just eating and sleeping. I feel like a chicken, on the freezer. I am frozen. You know what they did-they (UDI) killed me, I am not kidding. UDI. . . UDI, they are not coming for me, like they say "okay, we will kill Adem" because they have something personal with me, no, but because they are really slow. Really slow. . . [. . . ] I will just say to you, what would happen to you if you would stay without anything (to do) for ten months? What would happen? You can? You can't? I mean, like, maybe, but for me, I can't. "Cause for the first time, I stayed this really long time without doing anything." Like many other participants, Adem turned his frustration toward the rather abstract body of UDI (The Norwegian Directorate for Immigration) and claimed they had "killed him, " in the sense of processing his application too slowly and at the same time preventing him from doing meaningful activities. This passivity made Adem and many other participants turn their days into night, a sign of depression, according to Adem: Adem: How do you know if someone is really depressed? I go to sleep at five o'clock in the morning and I wake at four o'clock in the evening. Do you think that is a life? It's not! I never used to be like this. I'm always working. Studying. [. . . ] I don't want them to just give me a job tomorrow, no, but at least finish my papers (residency). . . let me go to the school, at least. Let me do something.
Helen, a woman from Eritrea in her mid-20s, also waiting for response on her asylum application, said the waiting time and the lack of activity made her feel mentally ill. She compared the situation at home with the present situation, and even though she faced severe challenges in her home country, she still felt her day-to-day activities in Eritrea were meaningful: Helen: I don't know how long I have to wait, and I cannot realize my plans (for the future). Because, just now it's like I am drifting in the air. I am doing nothing. INT: How does it influence you that you are drifting in the air? I feel ill. It influences my everyday life. Even if I had great challenges in my home country, I was still very active. I was doing activities, and I had two jobs, and besides that I did some teaching. [. . . ] It would have been nice if we were enabled to give something to the society, or go to school, rather than just sitting here.
As described above, a recurrent theme was the need for doing something meaningful: activities, going to school, or working. The participants found it very difficult to be young and healthy and still just sit passively and wait without contributing to society in any way. Life tended to be described as "on hold," and although some activities were offered in most reception centers, many experienced these as without meaning or direction. The resulting passivity was described as a heavy burden by many participants and may be linked to the near-death experiences of Adem, Helen, and many others. Thus, both Helen and Adem are in a system that, rather than alleviating fears, actualizes fears connected with loss of a future perspective.
Entrapment
The prison metaphor was frequently used to describe the participants' feelings while waiting for a response to their application. Many compared this feeling to feelings they had in their home country or in camps in, e.g., Greece, of being trapped or imprisoned with none or very limited possibilities to act. Abdi, a Syrian man in his mid-30s, elaborated on how time spent in the asylum center compares to the feeling of being trapped that he and other fellow asylum seekers experienced in the country they fled: Abdi: The waiting is the worst for refugees. Some want to go to another country, being so tired of waiting. [. . . ] Some are thinking about going back to their home country, or find a country where they can work. . . . We were trapped in Syria. We did not dare to go out due to the situation there. [. . . ] Abdi seemed to say that, for him, being able to identify some point in time when he would be able to decide to pursue some kind of action helped him endure the present. Without really reflecting on the consequences, he seemed determined to do something soon with his own situation, something involving some kind of change to give him a feeling of being free.
Nizar, a Syrian man in his early 20s, was generally positive toward the asylum seeker system and focused throughout the interview on the many fortunate and good aspects of his situation. However, when asked to describe the daily life of an asylum seeker, he pointed to the paradox of feeling imprisoned, although he was not: Nizar: You (thinking). . . You think that you are imprisoned, imprisoned in a very large prison. But you are not a prisoner, you are free. But you cannot do anything. . . Thus, the feeling of being trapped or imprisoned was strongly tied to the feeling of being unable to change one's own situation, implying a strong sense of loss of autonomy.
Disempowerment and Unpredictability
A frequently recurring theme regarding time was the disempowerment inherent in having minimal influence on one's future, as well as one's present. The length of time spent prior to their potential recognition as refugees was experienced as a period that contained no predictability and allowed little or no influence in relation to decisions regarding the daily or future life. Firash, an Afghan man in his early 40s, provided the following perspective on his daily life: Firash: It is very tiresome (daily life). There is no specific plan for the days because we are not in a position to make plans; it is other people that are making the plans. INT: I see Firash: Five months ago I applied for a table and a chair for me and my daughter. . . I have still not heard anything from the reception. [. . . ] Firash continued to talk about his 10-year-old daughter, who he believed was depressed due to the situation. He said he had repeatedly asked for help but nothing had happened: Firash: My daughter has become depressed. . . She (a person in the reception centre) said that we should make an appointment and meet (to talk about the daughter), after that nothing happened. A couple of times she repeated that we were to have a meeting, but nothing happened. [. . . ] So everything has to wait. . . wait. . . wait. . . .
Another dimension of the disempowerment experience was the total unpredictability of the waiting time. Participants said they would always get the same answer when calling to inquire about the progress; no one could say anything about time, not even whether it would involve weeks, months, or a year. Berat, a Turkish political refugee in his early 40s, who came to Norway with his family, said he had tried to phone several times without getting any answers. At the time of the interview, they had moved between four different asylum centers over a period of 6 months, their kids changing schools each time. They also worried that the center they were staying in would close, forcing them to move again before the application was processed. A large number of participants emphasized that waiting without any known timeframe, knowing nothing about the application process or about any potential progress, was a source of extreme frustration.
Passivity and Relation to Self
Some participants, both young and old, linked the lengthy wait to changes in the way they perceived themselves. Strong, emotional words and descriptions were used, like "hating myself," "feeling hate toward myself," "feeling unsuccessful," "feeling like a loser." These feelings were sometimes linked to suicidal thoughts. Adnan, an 18-year-old Syrian boy, who came as a minor but was moved to a center for adults, said he had waited 1 year and 2 months for the application to be processed. He described how he gradually started to view himself: Adnan: You start to hate yourself by being here. You get tired of the clothes you are wearing. . . you get all these different thoughts. . . You want to kill yourself. You don't want to be here anymore. There is no school. I am new here. There is nothing to do. . . Like many others, he continued to describe his present life using animal metaphors. Adnan used the first person "I" when he was active, but the second person "you" when describing his passive position: Adnan: I have told UDI to either give me a denial (the application) or give me a residence permit now. I have been here in one year and two months and that is a long time, and I could have done a lot during this time. And I have lived in many different reception centres, and living in reception centres is like living as sheep. Eating, drinking and sleeping, eating, drinking and sleeping. . . and then you get tired in the head of it. Sometimes it seems like my head is about to burst. Yes, quite simply explode. . .
As described by Adnan and others, living in a reception center may evoke a feeling of reduced worth in the sense that your life is comparable to the way animals are being kept. This comparison must be seen on the background of being exiled from one's country, implying a loss of protection and basic human rights. Many described extreme conditions in their homeland before fleeing, and many had experienced hardship and dangers during flight.
Difficulties in Placing Oneself in Time and Space
Another topic related to time emerged in several participants' descriptions of confusion and lack of memory regarding both time and space. Awira, a Kurdish woman in her 30s, and her minor child had waited for their application to be treated for one and a half years. She struggled to place herself in time and space when conveying her flight narrative. She described the long journey, spending time in different places, experiencing many hardships, saying this made her confused as to when and where events took place, her head being "very busy." She could not remember when she left Kurdistan, but according to a date in her passport, it was about 3 years ago. She had problems describing where she had been at what point in time but elaborated on her time spent waiting in Greece, where she believed she had spent 1 year, but was not sure. She did, however, have a clear memory of the cold nights there, which reminded her of her present experiences. The similarities seemed to create a feeling of disintegration of time, and the continuation of the same experience was somehow transformed into a feeling of tiredness and self-hatred, similar to the feelings described by Adnan above: Awira: It is a very, very heavy strain, and very, very huge strain, and heavy (the waiting). Because, when we came to Norway, we got the message "You are to be in the transit reception for one week, and after some three months you will get an answer." But it became long. . . I think almost three months at the transit reception and. . . The inability to visualize a complete narrative also related to many participants' descriptions of forgetting the past. While some described not thinking about the past as an intentional strategy of forgetting the past, many talked about memory loss in the sense of not being able to recall their home, past events, or skills and competencies related to their previous occupation. Damir, a young Syrian man in his early 20s, elaborated on this in an interview where we talked about his home in Syria. This may be interpreted as a description of a past too painful to remember at the same time as the experience of loss is painfully present. One may call this a "known unknown," typical of the many who have experienced unbearable losses, which they are incapable of containing mentally and expressing in a meaningful way so as to give a sense of something past. Thus, such unbearable losses may become frozen, often wordless, images in the mind. Past painful experiences are, however, represented in implicit/procedural memory as shown through bodily pains, confusion, and sensations connected with their present situation, e.g., the coldness, as procedural memory has no time dimension.
Disrupted Time and Relations
Yet another dimension of time, outlined by young people in particular, was that even the social part of the time spent in reception centers did not necessarily improve the experience of the waiting time. The majority of asylum seekers we met had moved between reception centers up to three to four times a year, making it difficult to establish social relations. The possibility of continuous friendship, meeting regularly and expecting that one will meet again soon, was lost. Thus, this time-preserving part of everyday life was disrupted by constant relocation.
Nizar, a Syrian man in his early 20s, described the situation for his younger siblings: Nizar: My brothers are [. . . ] 12, 15, and 16 years old. They are mentally tired of being in a reception centre. They don't want to talk with other people, because they are afraid that when they get to know them and get friends here, they have to move somewhere else. So, it is very difficult for children to be here.
We also heard the stories of family members being sent to different parts of the country, although they begged to be allowed to stay together. One family chose to go to Norway because they already had close relatives living here. Upon arrival, they were told that they would be sent far north while their relatives, who already had residence permits, were living in the south. The family member participating in our study, Hassan, a well-educated man from Syria in his 40s, felt so frustrated and humiliated that he had decided to refuse any relocation: Hassan: I will never go (up north). Because of how they treat us. I am educated, and so are the rest of my family. We can do anything (work) here. We have a priority to stay here (coming from Syria). . . so why do they want to separate us?
He talked a lot about the response from the authorities to his decision not to move: Hassan: Everyone is saying that "you will be out of the system then, you will be cut, your money will be cut." But do you think I came here for the money? I did not! I am not here because of your money, I am not here because of your food. [. . . ] I am here because I am a human, and I want to be a human again. . . (referring to the war experiences) Being "human again" related to many aspects of his situation, but Hassan related it first to the possibility of reuniting with his (extended) family, not having to move far away and thus lengthen the time until the family could be reunited. Being treated in such an inhumane way also prolonged his experience of not being human.
Loss of Time and Development
Many participants described being unable to do anything, or to influence anything of importance, such as the application process, or work, as being similar to a "slow death." Orhan, another Syrian man in his 30s, described the waiting process as particularly difficult due to the combination of doing nothing and the insecurity and fear that all that waiting would not be rewarded with a residence permit. He and others described how similar situations in the past, including waiting and insecurity (during flight), tended to melt into the feelings of the present: Orhan: To live in this uncertainty is killing, it is extremely painful. . . And not being of any use. Get up in the morning, drink coffee, and then you wait for a whole day, and then you go to bed. . . The loss of time, through long waits and the postponement of future plans and dreams, was described as a source of anger and sorrow. Young people in particular often described time as a valuable asset to be protected, as something that could be "lost" and impossible to regain. Adnan, the 18-year-old Syrian boy who had waited for 14 months, expressed how time as a dimension in itself was perceived as valuable and how the loss of time was experienced as a loss of a part of life: Adnan: Before you come here (final destination) you think that life will change a lot. But when you come you become surprised that it is the opposite. . . that you continue to lose time. And that time is valuable, it is a part of our life.
Many refugees, from Syria in particular, categorized themselves as "legal refugees." They said they had expected to be processed more quickly due to their well-known refugee status and were disappointed by the extensive time used to clarify their status. Jamal, a Syrian man in his late 20s, said he gradually lost his feelings of motivation and happiness during the wait: Jamal: When we got to know that we were to go to Norway, we became very happy. And I started to rest a bit. . . watching different things about Norway on YouTube. And I started to learn some Norwegian on my own. [. . . ] But then we came to Norway, and everything is so slow here. The whole system in Europe is slow. So, our lives are put on hold and I have lost my courage. It's like I am not motivated to learn Norwegian now; I have become so disappointed over the situation. . . Some said they felt treated like "criminals" when the application process took so long even though all their papers "were in order" and the situation in their home country "well-known." Having to put life on hold this way was described as a provocation and a humiliation; to Adem, this justified his unwillingness to participate in activities at the reception center: Adem: I'm just like, they are cheating on me, okay? "We will keep him silent to just go to these activities and he will be happy." I feel like that. . . it doesn't mean that it's true, but I feel it's like that. . . Adem said he would not give in. He felt participation meant giving up his personal freedom, giving in to those he felt cheated him. As he was treated like someone undeserving of refugee status, as a potential criminal, he might as well behave like one by not complying with the expectations of the system. His reaction did, however, seem to represent a deeper form of resistance; he could not comply with a system that he perceived as suppressive and which he felt was destroying his possibility to be a person.
His strategy may be seen as counteracting the experience of being dehumanized.
Strategies to Cope With Temporal Uncertainties
There were several ways of dealing with the dimension of time and related feelings of insecurity, waste, passivity, sorrow, humiliation, and anger. Some emphasized preventing meaninglessness by maintaining a daily rhythm; others pointed to activities that facilitated group solidarity. Some underlined the need to make sense of the present, while others told how thinking of the (better) future was the only way to endure the present.
Adnan, the 18-year-old Syrian boy who came as a minor and had developed a negative view of himself and his life, told how he tried to handle all the difficult thoughts that had accumulated. Focusing on the future, thinking that everything might change tomorrow, he somehow managed to cope: Adnan: It is not like I am tired of my life all the time, I give myself a chance. I think about tomorrow; "I will get a residence permit tomorrow," and then the next day is coming and then I say "ok, I must give it another chance." Then (the next day) I will get a residence permit. . . While Adnan focused on tomorrow, Damir, the young Syrian man unable to remember Syria, also waiting for an answer on his application, exemplified another strategy. He emphasized the difference between him and his friends in trying to make sense of daily life by engaging in activities that connected him to the present as well as the future. Damir continued to talk about the importance of sleep, of creating a rhythm enabling him to be active during daytime; this enabled him to pursue activities that gave meaning to his everyday life, investing in the future by going to school and learning the language. He told how he worried about the passivity he saw not only among his friends but also within his own family, him being the only one who was active during daytime and the only one who had learned the language (the interview was done in Norwegian).
Karam, a Syrian man in his early 20s, developed a similar strategy. He had just received his residence permit and was engaged in different activities, such as a local sports club. He told how he had been thorough with his schoolwork, investing in his future, a future that had recently become clearer. He had, however, engaged in activities long before he got his residence permit. As illustrated by Karam, even with significant variation between the reception centers in the type and amount of activities being offered, some participants always seemed to find things to do, even when alone or apart from the group they were somehow associated with.
Like Karam, many emphasized the need to acknowledge the temporality of the situation, and ambitions and hope of a better future helped preserve a positive focus while waiting.
Yaser, a Syrian man in his early 30s, waiting for a response to his application, underlined the importance of enduring and accepting the situation, even if you suffer in the present: Yaser: When you have identified a goal you also have to endure a lot, and you cannot complain too much. You know it is a bad situation, but it is temporary.
Some developed strategies of reinterpreting time, aiming to experience the present as meaningful or less insecure and threatening. Carlito, a well-educated man from Syria in his late 20s, provided this perspective: Carlito: I remember that I stayed at one place, at the border between Greece and Macedonia, and there was rain. It did not stop for many days [. . . ] and we woke up all the time, the tent being full of water, and we had to leave the tent and try to get rid of the water. [. . . ] The conditions were very difficult, very difficult. It was something that I had not experienced before, but mentally I still felt very good there, actually, and the things we contributed with was very good. [. . . ] We organized many different activities, and we took responsibility for the cleaning for example. [. . . ] We identified those speaking good English and they taught refugees English. And, then some would teach Kurdish for example. Because the Kurds live in four different countries and are not allowed to go to schools that teach in Kurdish, and then we thought that it was good that we were teaching Kurdish.
[. . . ] Personally, every day I interpreted for people going to the doctor and followed two to three persons every day to the doctor.
[. . . ] There were also refugees that could play an instrument, that could sing, and then they would teach the others in singing, in dance and in music and playing instruments. And all these activities were for free. The reason why we did this was that we were told that the border was closed. So instead of just waiting for the border to reopen, we thought that life must go on. We cannot just sit here and wait for the borders to reopen, because that may take time. It can take one year, it can take two years, and if one start to think like this, you will have mental difficulties, and that is why we thought that we had to do something. We had to fill our time. We had a lot of time. We have to use our time in a sensible way, and that is what we did.
Carlito described trying to make people think differently about the situation, not only as a coping strategy but also as a way of viewing the present time and the present opportunities: Carlito: Ok, let's think that this is a tourist trip. We are on an adventure [. . . ], we have to learn something, we cannot sit here and wait for the borders to reopen [. . . ]. I went to a Greek school for four months and I learned to speak a bit Greek. And learning something new, it does something with a human being; that you have not "used up" (misused) your time, but that you have learned something. That is a very good feeling. And then I came here (Norway), and I think about all the good days there, even though there was little food, little money, suffering, and bad conditions. . . Sometimes I slept on an empty stomach. . . . But mentally I felt fine. . . .
Cemil, a Turkish political refugee in his late 30s, described a similar strategy; he and his wife changed the interpretation of time and space by pretending to be in some sort of holiday camp. This way they were able to fill the present with meaning and joy while gradually revealing the part of reality that included the future for their children: Cemil: We were playing a game with our children actually. Even while we are asking for asylum they were thinking it is a bank office or something. [. . . ] As a family, we encourage each other. We try to make each other strong [. . . ] so maybe it was a good thing we said we were camping (laughing). INT: And did they believe that? Cemil: Yes, they believed it is a kind of camping. After a time, they see the reality. . . . They speak with each other, what is happening, they ask questions. We try to explain them step-by-step.
Keeping focus on the here and now, in a temporary (holiday) camp setting, seemed to enable a focus on the positive aspects of the present, avoiding a continuous negative focus on an insecure and unpredictable future.
Carlito continued to elaborate on how he tried to encourage people to reinterpret their focus while spending time waiting for the application to be processed: Carlito: I came to this reception, and here there are many depressed asylum seekers and refugees. Many of them have waited long for a residence permit. [. . . ] I tried to encourage them by saying that residence permit is just a paper. It is nothing more than a paper. What will happen if you do not get a residence permit? Are life going to end? You have to live your life, you cannot just sit here and become depressed and wait for the permit.
Carlito's own coping mechanism was to focus on life, outside linear time, and realizing the future might not bring what he expected.
Perceptions of the Past, the Present, and the Future
The interviews showed time as an important dimension in how the participants viewed and interpreted their present situation, crucial for their well-being and mental health. The participants frequently expressed how an insecure and undefined present made them unable to visualize their future and to integrate the future in their experience of the present. The unpredictability in the post-migration phase clearly disturbed their ability to tie the past and the present to the future (Haberfeld et al., 2019). This must be understood not only on the background of the inherent passivity and undefined waiting in the centers but also on the background of refugees' previous near encounters with annihilation and death (Kovras and Robins, 2016; UNHCR, 2018). Such dangers evoke deep and overwhelming anxieties, which may or may not develop into a post-traumatic condition but will be stored in memory and may emerge later. Such anxieties force the person to focus on the present to remain attentive to possible dangers, causing emotional dysregulation, frequently seen in post-traumatic conditions (Nickerson et al., 2015a).
As a rule, such situations of imminent danger are radically different from earlier experiences. Our participants may not easily recognize these dangers as part of something known. Boat trips were, for example, described as utterly new and terrifying experiences, for which they had not developed coping strategies. Such situations were somehow similar to waiting in reception centers, with little or often no available mental guidance for evaluating the present situation. The focus on the present tended to dominate, detached from earlier experiences. Without guidance, it was difficult to know what might happen next, disturbing the sense of a reasonably secure future. This loss of anchoring in time and space, expressed by many of our participants, may signal mental disintegration in people on the verge of mental breakdown (Rosenbaum, 2000; Varvin, 2003).
Many described the future as somehow non-existing or unclear, making them unable to visualize a complete narrative of the past, present, and future (Varvin, 2003), like the Kurdish woman unable to remember any timeline for her flight or the Syrian boy insisting he had "forgotten" his home.
Being unable to tie the past to the present, and the present to the future, seemed to create a sense of uncertainty and meaninglessness, which may cause impairment in the sense of reality; cognitive organization of perceptual impulses becomes extremely demanding as it becomes difficult to distinguish important from unimportant (Puvimanasinghe et al., 2014). The material shows that being repeatedly trapped in similar situations (enduring long waits in a refugee camp, then in an asylum center; being as cold in the host country as in the refugee camp) seems to bring the past into the present, and the present into the past, in a way that melts and reinforces previous experiences (Palic et al., 2015). In other words, these multilayered experiences, connected to different time points in a linear time, melt together in a way that disturbs the possibility of situating the flow of actions both on the abstract, chronological level and on the concrete, action level.
Prisoners of an Unknown Time
A response to the confusing intrusions of past difficult experiences was often withdrawal into passivity, giving an impression of hopelessness. We saw how this condition was reinforced by inactivity and waiting, connected both with the flight experience and with life as an asylum seeker. Studies among asylum seekers show that high levels of depression and trauma are linked to the indefinite and temporary nature of the asylum process (Mansouri and Cauchi, 2007), and research shows, not only among refugees, that experiencing time as passing slowly is linked to mental suffering (Flaherty et al., 2005;Biehl, 2015;Horst and Grabska, 2015).
Some participants described life in a reception center as inducing a feeling of reduced worth, comparing their lives to those of animals. This must be seen on the background of being exiled from one's country, losing autonomy, protection, and basic human rights. Studies and reports show that refugees are regularly treated in inhuman ways and that such dehumanizing experiences may set their imprint on feelings of self-worth (Nickerson et al., 2015b; Grande, 2016; Varvin, 2017; Kingsley, 2018). Based on this, the use of animal metaphors to describe the passivity and hopelessness in asylum centers becomes more comprehensible: a continuing feeling of dehumanization, with no citizenship, autonomy, or control after arrival in the host country, may reinforce negative feelings of self-worth.
The prison metaphor was also commonly used when describing time spent waiting in different reception centers. This metaphor, like the animal metaphor, can be understood as illustrating a feeling of loss of autonomy and subsequent disempowerment, reduced possibilities to plan, to act, and to make decisions regarding the future. The metaphor also illustrates a feeling of unjustifiably being "criminalized." Studies on prison inmates show that prisoners experience time as a source of suffering in itself (Medlicott, 1999; Wilson, 2004), and prisoners' perceptions of the present, past, and the future show that the present tends to be extended and the past and the future tend to be distorted (Medlicott, 1999). As pointed out by Griffiths et al. (2013), asylum seekers differ from prisoners in that their future destiny, including when and if they are granted asylum, and in which context they are going to live, is highly uncertain. Additionally, the legal framework and related practice may suddenly change, the immigration system in Norway and other countries being in flux. In other words, in contrast to prisoners, familiar with both space and time, asylum seekers lack the privilege of having a "sentence" with time and space defined (Griffiths et al., 2013). Not having any idea of how long or under which conditions they are going to be detained significantly shapes the way they experience the waiting. An overwhelming majority in our material described the waiting time as painful, sometimes incomprehensible due to the well-known situation in their home country (e.g., Syria) and sometimes totally unbearable. These findings are consistent with other studies among refugees. For example, in a mixed method study from Attica, Epirus, and Samos in Greece, the epidemiological survey showed that between 73 and 100% of the refugees suffered from anxiety disorder. The qualitative part of the study showed how the refugees overwhelmingly reported experiencing uncertainty and lack of control over their current life and future, causing psychosocial distress and suffering (Bjertrup et al., 2018).
Similarly, in an ethnographic study on mental health implications connected to the process of resettlement among Iraqi refugees in Cairo, the refugees spoke of exile as "living in transit," a condition that led to an altered experience of time in which the future became particularly uncertain, and where life in general was perceived as highly unstable (El-Shaarawi, 2015).
Research on temporary protection mechanisms among refugees shows that immigration policies can restrict people by creating temporary migration stages (Gammeltoft-Hansen, 2016). All stages include waiting for decisions, be it when and to which asylum center one will move next, interviews with the authorities, needed technical aid, or an appointment with a doctor. The temporal uncertainty is reinforced when waiting for decisions related to the present or near future is added to the waiting for major decisions related to the future, such as whether the application is granted, when decisions are to be enacted, when identity documents are to be sent, if one is to have a future in the present country, and if so, where and under what circumstances (Griffiths et al., 2013). These latter time stages, however, are often so long and unpredictable that they may be characterized as permanent temporariness (Bailey et al., 2002; Simmelink, 2011). As outlined by Griffiths, a significant source of powerlessness and insecurity for the asylum center residents is the constant tension between anticipating time ruptures through constant changes (such as having to move to a new asylum center) and fearing indefinite and long periods of inactivity, uncertainty, and feeling unsafe. Both "suspended time" (lack of change/waiting) and "temporal rupture" (dramatic shifts) can, separately and combined, create chaos (Griffiths, 2014).
Daily routines and activities give a person a sense of continuity and safety, while disruptions of routines, as in situations of sensory deprivation, conversely create loss of contact with reality, a tendency toward autistic thinking and loss of continuity (Harrison and Newirth, 1990). There is thus ample evidence of how important it is for mental health to be able to maintain the "normality" of daily routines. Thus, when many refugees are trapped or "imprisoned" in a situation with some sort of suspension of time as the world around them continues forward (Griffiths et al., 2013), this may cause serious disturbances in aspects of mental functioning. In this trapped situation, the priority for many will be maintaining a daily rhythm, giving an experience of synchronic time. This will, however, prove extremely difficult, as some sense of linear time, with a feeling of a past and a future contextualizing the present, is necessary for the ability to uphold synchronically anchored time activities (Johansen, 2001).
Responses to Temporal Uncertainty
Our study shows the variety of ways in which the participants responded to their situation. Gasparini's (1995) distinctions between three different types of waiting may be useful in shedding light on these different responses. He describes how people may view waiting more or less purely as an obstacle to action; how waiting may constitute an experience filled with substitute meaning; and how waiting may be perceived as a meaningful experience. Waiting perceived as an obstacle to action, as a loss of time and a difficulty in maintaining a feeling of synchronic/cyclical time, as if life itself stopped, was by far the most common expression among the participants in our study. This loss was expressed through feelings of powerlessness, insecurity and fear, humiliation, as well as provocation. Lack of meaningful activities and enforced idleness seemed to reinforce these difficult feelings; many participants expressed what can be described as desperation over not being allowed to do something, paid or unpaid, while waiting.
Being unable to relate to the future, and consciously or unconsciously suppressing the past, many seemed to enter into some sort of empty synchronic (cyclic) time: time was repetitively empty, with no future-oriented activities. The social aspects of daily life also become difficult when moving frequently, causing unpredictable interruptions to social relations. Several participants described an intentional strategy of avoiding attachment or socializing with others at the reception centers, being afraid that such relations might soon end. Research has shown that inhabitants in asylum centers initially form family-like groups with brothers, sisters, and even some parental figures. The breaking up of these family groups is regularly experienced as a serious loss, which again may evoke experiences of earlier losses (Goosen et al., 2014). In the study from Attica, Epirus, and Samos in Greece, it was found that disruption of key social networks in addition to an absence of interactions with the surrounding Greek society led to psychosocial suffering. Feeling isolated combined with the general passivity of life in the refugee camps aggravated feelings of meaninglessness and powerlessness (Bjertrup et al., 2018). Thus, instead of representing a meaningful and time-preserving part of everyday life, social relations may represent a threat of loss, discontinuity, disruption, and feelings of not belonging.
Being hindered from living within a normal linear time perspective, from establishing social relations, and from some sort of production in accordance with time may not only cause a feeling of discontinuity and of being "in transit" but also represent a transformation into an abnormal liminal state. Such a liminal phase may influence a person's identity; the identity may partly be lost or disrupted and may remain unclear for an indefinite time (Ball and Moselle, 2016). An ethnographic study from Texas, USA, exploring Syrian refugees' narratives of forced displacement and resettlement, found that refugees continued to suffer while waiting for permanent resettlement. Many waited for years, as the application process was long and complex, and entered into a period of liminality. The concept is used to describe how the identity and well-being of a person suffer from having no ties to a place for an unpredictable period of time, and how the ambiguous state of the person causes feelings of loss and confusion regarding their identity markers (Mzayek, 2019).
If we return to Gasparini's three different ways of handling waiting, we see examples from the material that some participants somehow managed to accept being in a liminal phase, partly by acknowledging the temporality of the situation and partly because they adjusted by identifying activities that kept the present from being empty. Some were also, irrespective of their present status regarding the application, conscious of focusing on activities related to the future, such as learning the local language and prioritizing schoolwork. Thus, the waiting was a situation filled with substitute meaning, in combination with meaning that partly integrated a future perspective. This way of handling waiting was also illustrated in the study of Syrian refugees in Texas, where the long resettlement process gave several opportunities to develop what the author calls "resilient tactics." Many men would, for example, help sustain their families by continuously trying to engage in income-generating activities, while women supported their children by teaching them at home when they could not attend school. Some actively sought to establish friendships with local community members, trying to reconstruct their identities by incorporating identity markers from their temporary host countries (Mzayek, 2019).
Yet another approach, illustrated in the findings by some participants, was taken by those who described the waiting as a meaningful experience. A few participants told how they managed to find meaning by being immersed in the present, focusing on filling the present time with action, some even consciously separating themselves from thoughts oriented toward the past and the future. To understand this, and why it seemed to provide comfort and meaning, we can look at George H. Mead's (1863-1931) understanding of the relationship between action and time. He underlines that the present time can only provide meaning if it consists of temporal differences (past and future) and that human action can represent a tool that can achieve this temporal difference. Action is not to be understood as a movement in time per se, but rather as a means to manage time, develop time, and experience time. Any action produces time in the sense that it develops temporal differences between the present, the past, and the future; all actions, even though appearing in the present, immediately become an event and a reference point for oneself and for others. Such an event creates temporal differences, such as before/after, cause/effect, objective/agent, past/future, and the like, which encourages responses, reflections, and thus some kind of development of a personal narrative. Mead's main point is that without managing the present time, by creating these temporal differences, one may become pacified and paralyzed in the sense that one does not manage to connect events in daily life in a way that provides direction or meaning. This is somewhat similar to the point made by Nielsen (unpublished), who underlines the importance of connecting concrete time to actions and events. Time, in this context, does not consist of points on a time arrow but of situations that have their own temporality. Thus, time becomes not only a dimension that continuously moves between the past and the future but also something tied to the meaning of the action the person is conducting. In other words, the present is an event or a social situation that can be used to create temporality. We may say that some of the participants who seemed to manage well also seemed to be able to redefine time, or their understanding of (the value of) time, by using the present time actively. In this way, they seemed more capable of tying together concrete and abstract time, thus reducing some of the pain involved when synchronic time was interpreted as "lost." Norbert Elias (1897-1990) points to time as a tool to ensure a steady point for orientation in modern society. Time helps provide logic and meaningful contexts, for example in situations of suffering and lack of meaning in the present, converting these into an expectation of future happiness and meaning. In other words, time becomes a means through which we can achieve our goals; thus, time is also an important component in a (continuous) production of a personal narrative/identity. Elias sees time as "mentally civilizing" through its ability to structure and discipline modern human beings' orientation and self-regulation (in the Freudian sense). Thus, feeling like one is "drifting in the air," without managing to see or establish temporalities in present situations and to identify achievements or goals within a certain timeline, may help us understand the mental state of many of the participants.
This state was sometimes characterized by strong and emotional descriptions like "hating myself," "feeling hate toward myself," and "feeling like a loser," and was sometimes also linked with suicidal thoughts; this is in contrast to those who managed to find meaning and happiness and feel "mentally fine" by orienting themselves toward activities in the present.
As there are few studies on refugees' experience of time in relation to mental health/suffering, insights from studies of prisoners may throw light on refugee experiences. A qualitative study from the UK (Medlicott, 1999), exploring suicidal prisoners in the light of the pains of prison time, found that suicidal prisoners experience time as an acute source of suffering, strongly associated with the deterioration of their sense of self. Participants referred to the "pains of empty time," including the expectation of more of the same pain in the time to follow and "without any chronology of events" to mark or distinguish the time to come. The author underlines that the difficulty in managing time has no necessary relationship with the length of the sentence or the time spent in prison, because the ability to handle time varies significantly between individuals.
Medlicott refers to suicide statistics from the 1990s in which 40% of those who took their own lives in prison were on remand, meaning that they had not yet been convicted and might be acquitted or receive a non-custodial sentence. Those who managed to cope with the prison time, including those serving life sentences, were those who did not let time become an overruling dimension in their life, but who managed to exercise autonomy over their own time. They achieved this by accepting the timeframe as well as actively acquiring knowledge of what the frames, shaped by time and space, allowed in terms of self-development. This strategy is similar to that of the participants in our study who seemed to cope better than the others. Instead of allowing all influence and autonomy to be taken away, these individuals found creative and autonomy-preserving ways of still influencing the experience of their time and their situation, thus showing a capacity for future-directedness and a way of dealing with temporal uncertainties. Even though patterns of strategies were identified among our participants, it was difficult to see any relation to age, gender, or other markers explaining why some managed better than others. In a meta-ethnography of sources of resilience in young refugees, including 26 empirical studies, it was found that even though refugee groups from different countries show many similarities in their sources of resilience, the resilience processes have several individual and culturally based components. For example, for some of the young refugees, access to school provided both hope and social support, and they were positively challenged to succeed in their education despite adverse circumstances, while others became discouraged as adversity increased over time; these findings are in line with our study (Sleijpen et al., 2016).
Structural Power Relations
Another lens for understanding the different interpretations and strategies unfolded in this material is viewing time as a tool in structural power relations (Foucault, 1975). Foucault's description of "states of emergency," including strict physical regulations and loss of temporal autonomy in certain circumstances (e.g., outbreaks of contagious diseases, political riots), has similarities to what can be interpreted as space as well as time discipline in asylum centers: structural forces seeking to discipline people perceived as a threat to order and normality in a society, within the frames of segregated institutions that limit people's possibilities to maintain their autonomy, to act, and to invest in personal development, all exercised within a timeframe that one has no influence over and within a bewildering immigration system constantly in flux (Griffiths et al., 2013). Historically, such disciplinary measures have seldom been directed toward the privileged members of society, but rather imposed on groups without power, such as poor people, ethnic minorities, and refugees (Foucault, 1975; Lupton, 1995).
An example of the suppressive and disciplinary nature of the asylum system is addressed in Behrouz Boochani's autobiographical account, No Friend but the Mountains. A Kurdish journalist, Boochani sought asylum in Australia but was instead illegally imprisoned for 5 years in the country's most notorious detention center on Manus Island. He describes guards and nurses wearing uniforms reminiscent of dystopian regimes, and how the fenced and isolated detention center serves as a disciplinary measure, "pacifying even the most violent person. . ." Similar to a prison, the detention center maintains its power over time as well as the power to keep people in line. In his analysis, he draws attention to interconnected social systems of domination and oppression, illuminating how the subjugation and degradation of refugees is facilitated by Australia's broader colonial imaginary and the prevailing xenophobia in the society (Boochani, 2018).
In a study of refugee accommodation in Athens, Berlin, and Copenhagen, the term "Campization" was used to capture the structural forces embedded in the design of the accommodations (Kreichlauf, 2018). The description highlights that refugee migration is deeply related to discourses on crime, terror, and a general view that refugees represent a threat. Refugee accommodations are sites through which these logics materialize, and the secluded and segregated organization produces limited possibilities for life. As the length of stay is mostly unknown, the camps are described as existing between the temporary and the permanent (Kreichlauf, 2018).
Thus, time as a perceived tool of power can be used to understand the degradation and humiliation felt by many of the participants having to comply with the rules of the system, including long waits to have their applications processed, lack of information about their progress, a context of forced idleness, and having to move from place to place at short notice, often involuntarily separated from extended family members while waiting. Furthermore, as in prisons, the time discipline and regulation of bodies are combined with a ruthless emptying of time, the time being slow and relentless, representing a potent tool for what can be perceived as a measure of discipline or punishment by a powerful agent (Medlicott, 1999). This same authority has the power to fill or not fill the time with meaningful activities, to speed up the process of providing answers, and to help family members remain together.
As we have seen in this and other empirical studies, the possibilities for counteracting powerlessness and mobilizing resilience, in the sense of "navigate the resources necessary to sustain positive functioning under stress" (Ungar, 2019, p. 2), are present even during the most suppressive circumstances. Boochani, on Manus Island, demonstrated "resilient tactics" by refusing the role of supplicant in a suppressive system. Instead, he uses art to secure a connection with the future, as art will have the chance of living beyond its contemporary moment. He secures his identity as a journalist by reporting on the conditions on Manus Island via smuggled mobile phones. Similarly, as found in our study, resilience is expressed through people resisting the system by opposing decisions of movement in time and space, by creating alternative or imagined realities, and by actively searching for meaningful relations and activities related to the present and the future.
CONCLUDING REMARKS
Migration control authorities and related control policies can play a direct role in sustaining or creating temporal uncertainties and ruptures in the present as well as in the future. As argued by Griffiths et al. (2013), this power is masked as well as exacerbated by time-consuming bureaucratic procedures, which tend to impose strict categorization and evaluation of a person's immigrant status, while at the same time encouraging the picture of the "ideal immigrant" who patiently, submissively, and with discipline complies with a sequence of events from arrival to settlement. However, as noticed among the asylum seekers in our material, some wait for years to receive a final answer while others get their answer quickly, again in the context of a bewildering system over which one has no influence and in which one's future destiny lies in the hands of others. In other words, the troubled and emphasized relation to time must partly be understood as a continuation and reinforcement of unsafe and dehumanizing preflight and flight experiences. It must partly be understood as the result of an insecure and undefined present situation, making people lose the ability to visualize their future life, thus integrating the future into the experience of the present. Furthermore, it must partly be understood on the basis of timeframes being totally unpredictable and beyond individual control, implemented in a context that may be seen as a system with excessive use of disciplinary measures.
Further research is needed, however, to identify the conditions and circumstances that can make greater personal control and resilient development possible under the restricted conditions in which asylum seekers and refugees live.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Regional Committees for Medical and Health Research Ethics (REC). REC South East-Secretariat: 2016/651 Mental health and quality of life among asylum seekers and refugees. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
|
v3-fos-license
|
2018-04-03T03:26:13.772Z
|
2016-12-08T00:00:00.000
|
1382862
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0167903&type=printable",
"pdf_hash": "3504f14ddc773ca9b93e77156cafee6314f96be6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:645",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "3504f14ddc773ca9b93e77156cafee6314f96be6",
"year": 2016
}
|
pes2o/s2orc
|
In Vitro Antifungal Activity of Sertraline and Synergistic Effects in Combination with Antifungal Drugs against Planktonic Forms and Biofilms of Clinical Trichosporon asahii Isolates
Trichosporon asahii (T. asahii) is the major pathogen of invasive trichosporonosis, which occurs mostly in immunocompromised patients. The biofilm-forming ability of T. asahii may account for its resistance to antifungal drugs and results in a high mortality rate. Sertraline, a commonly prescribed antidepressant, has been demonstrated to show in vitro and in vivo antifungal activities against many kinds of pathogenic fungi, especially Cryptococcus species. In the present study, the in vitro activities of sertraline alone or combined with fluconazole, voriconazole, itraconazole, caspofungin and amphotericin B against planktonic forms and biofilms of 21 clinical T. asahii isolates were evaluated using the broth microdilution checkerboard method and the XTT reduction assay, respectively. The fractional inhibitory concentration index (FICI) was used to interpret drug interactions. Sertraline alone exhibited antifungal activities against both T. asahii planktonic cells (MICs, 4-8 μg/ml) and T. asahii biofilms (SMICs, 16-32 μg/ml). Furthermore, SRT exhibited synergistic effects against T. asahii planktonic cells in combination with amphotericin B, caspofungin or fluconazole (FICI ≤ 0.5) and synergistic effects against T. asahii biofilms in combination with amphotericin B (FICI ≤ 0.5). SRT exhibited mostly indifferent interactions against T. asahii biofilms in combination with the three azoles in this study. The sertraline-amphotericin B combination showed the highest percentage of synergistic effects against both T. asahii planktonic cells (90.5%) and T. asahii biofilms (81.0%). No antagonistic interaction was observed. Our study suggests the therapeutic potential of sertraline against invasive T. asahii infection, especially catheter-related T. asahii infection. Further in vivo studies are needed to validate our findings.
Introduction
Trichosporon asahii (T. asahii) is an opportunistic pathogen belonging to the basidiomycete yeast-like fungi and can cause invasive trichosporonosis in immunocompromised patients [1].
The incidence of invasive trichosporonosis has increased over the past four decades along with the growth of the immunocompromised population, mainly patients with hematological malignancies, AIDS patients and organ transplant recipients [2]. Additional risk factors include corticosteroid use, chemotherapy, and the use of implanted medical devices [2].
Various antifungal drugs have been used in the treatment of invasive trichosporonosis, including polyenes (such as amphotericin B), echinocandins (such as caspofungin) and the azoles (such as fluconazole, itraconazole and voriconazole). However, T. asahii often causes breakthrough infections in patients treated with AMB or echinocandins [3][4][5].
In a clinical guideline for the diagnosis and management of rare invasive yeast infections (including Trichosporon species), amphotericin B monotherapy is not recommended for invasive trichosporonosis because of its limited in vitro activity against T. asahii (MICs ≥ 2 mg/L) and poor response rates (between 16% and 24%) in trichosporonosis [6]. Echinocandins are also not recommended for treating invasive trichosporonosis since Trichosporon spp. are intrinsically resistant to this antifungal drug class [1,6].
The newer triazoles (such as voriconazole) are now considered to be the most effective drug class for the treatment of invasive trichosporonosis because they exhibit good in vitro and in vivo activity against Trichosporon spp. and result in good clinical outcomes [1,6]. However, the high cost of the newer triazoles impedes their widespread use in China. Furthermore, since azoles are all fungistatic, the sustained use of azole antifungal drugs may result in drug resistance, especially when used as low-dose prophylactic/empirical therapy. Indeed, decreased susceptibility of T. asahii to azoles has been reported, and multidrug-resistant Trichosporon strains have already been isolated [7,8].
Invasive T. asahii infections are usually associated with the use of implanted medical devices (such as central venous catheters, vesical catheters, and peritoneal catheter-related devices) [1]. The ability of T. asahii to form biofilms on implanted medical devices may account for the clinical resistance to antifungal drugs and results in a high mortality rate. Although the newer triazoles have been demonstrated to show excellent in vitro activity against T. asahii planktonic cells, they have been reported to fail to eradicate T. asahii biofilms and may result in treatment failure [9,10]. Thus, from the perspectives of drug resistance and pharmacoeconomics, it is necessary to develop new therapeutic approaches against T. asahii infection. To our knowledge, the combination of traditional antifungal drugs with non-antifungal agents has been proposed as a promising strategy to cope with resistant fungal infections [11,12]; this antifungal strategy may also be beneficial for coping with resistant T. asahii infections.
Sertraline (SRT) is a commonly prescribed antidepressant that belongs to the group of selective serotonin reuptake inhibitors [13]. It has been demonstrated that SRT exhibits antifungal activities against Candida spp., Aspergillus spp. and Cryptococcus species [14][15][16][17][18]. SRT has also been demonstrated to show in vitro synergistic effects in combination with antifungal drugs against Aspergillus spp. and Cryptococcus neoformans (C. neoformans) [19][20][21]. Furthermore, SRT was demonstrated to exhibit an adjunctive antifungal effect against HIV-associated cryptococcal meningitis clinically [22]. Considering that Trichosporon spp. are phylogenetically close to Cryptococcus species, we wondered whether SRT has similar antifungal activity and synergistic effects against T. asahii. To our knowledge, no studies have been conducted on the antifungal activity of SRT against Trichosporon species.
In the present study, the in vitro antifungal activities of SRT alone or in combination with clinically used antifungal drugs against the planktonic forms of 21 clinical T. asahii isolates were examined by a broth microdilution checkerboard method based on the M27-A3 reference method documented by the Clinical and Laboratory Standards Institute (CLSI) [23]. The in vitro anti-biofilm activities of SRT alone or in combination with antifungal drugs were examined by an XTT reduction assay. The results of our in vitro antifungal susceptibility testing against T. asahii may be helpful in evaluating the possible application of SRT in treating T. asahii infections.
All strains were removed from a -80°C freezer and then subcultured twice on Sabouraud dextrose agar (SDA, Merck KGaA, Darmstadt, Germany) at 35°C for 24 to 48 h to ensure purity and viability. The subcultures were further cultured overnight in yeast peptone dextrose (YPD, Oxoid Limited, England) liquid medium at 37°C in a rotating incubator at 130 rpm. Following growth, the cells were harvested by centrifugation and washed twice with sterile phosphate-buffered saline (PBS). The cells were resuspended in RPMI 1640 medium, adjusted to pH 7.0 with 0.165 M morpholinepropanesulfonic acid (MOPS) (Sigma-Aldrich, St Louis, MO, USA), to densities of 10³ CFU/ml for the in vitro susceptibility testing against planktonic cells and 10⁶ CFU/ml for the in vitro anti-biofilm susceptibility testing. Candida parapsilosis ATCC 22019 was included as the quality control strain for our in vitro susceptibility testing.
In vitro antifungal susceptibility testing against T. asahii planktonic cells
In vitro activities of FLC, ITC, VRC, CAS, AMB or SRT alone, and of combinations of SRT with antifungal drugs, against T. asahii planktonic cells were evaluated using the broth microdilution checkerboard method based on the M27-A3 reference method (CLSI, USA) [23]. All tested drugs were distributed in 96-well microtitre plates. The final drug concentrations ranged from 0.062 to 64 μg/ml for FLC, from 0.008 to 8 μg/ml for ITC and AMB, from 0.001 to 1 μg/ml for VRC, from 0.031 to 32 μg/ml for CAS, and from 0.5 to 32 μg/ml for SRT. T. asahii cell suspensions were adjusted to the transmittance of a 0.5 McFarland standard at a wavelength of 530 nm. The final inocula of T. asahii were approximately 1.0×10³ to 3.0×10³ CFU/ml in each well after serial dilution with RPMI 1640 broth medium. The plates were then incubated at 35°C for 48 h. Thereafter, the minimum inhibitory concentrations (MICs) were recorded according to the M27-A3 guideline [23]. The MICs of FLC, ITC, VRC and CAS were defined as a 50% reduction in turbidity compared to the growth control wells, and the MIC for AMB was defined as complete inhibition of growth. To investigate the possible fungicidal activity of SRT, both MIC-2 (50% reduction in turbidity compared to the growth control well) and MIC-0 (complete inhibition of growth) endpoints were used for SRT in this study. The MIC-2 endpoint was also used for AMB to allow the antifungal combination susceptibility testing to be comparable between all tested drugs. The MICs that inhibited 50% and 90% of the total isolates were defined as MIC50 and MIC90, respectively. RPMI 1640 medium without T. asahii cells and drug-free medium containing T. asahii cells were used as negative and positive controls, respectively. Experiments were repeated three times on different days.
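The stated concentration ranges are standard two-fold dilution series. As a minimal illustration (a sketch in Python; the number of dilution steps is inferred from the stated ranges, and the plate-layout details beyond the series themselves are not specified in the text), the tested concentrations for each drug can be reproduced as follows:

```python
def twofold_series(c_max, n_steps):
    """Two-fold serial dilution starting from the highest tested concentration (ug/ml)."""
    return [c_max / 2 ** i for i in range(n_steps)]

# Eleven two-fold steps reproduce the ranges given in the text for most drugs
# (e.g. FLC 64 -> ~0.062 ug/ml); SRT spans seven steps (32 -> 0.5 ug/ml).
drug_series = {
    "FLC": twofold_series(64, 11),   # 64 ... 0.0625
    "ITC": twofold_series(8, 11),    # 8 ... ~0.008
    "AMB": twofold_series(8, 11),    # 8 ... ~0.008
    "VRC": twofold_series(1, 11),    # 1 ... ~0.001
    "CAS": twofold_series(32, 11),   # 32 ... ~0.031
    "SRT": twofold_series(32, 7),    # 32 ... 0.5
}

for drug, series in drug_series.items():
    print(drug, [f"{c:g}" for c in series])
```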
Biofilms formation of T. asahii and in vitro anti-biofilms susceptibility testing
The biofilm formation of T. asahii was performed using a simple and reproducible 96-well plate-based method as previously described [24]. Briefly, 100 μl of adjusted T. asahii suspension (10⁶ CFU/ml) was added to 96-well plates. Wells containing RPMI 1640-MOPS medium without T. asahii cells were included as background controls. After 1 h of incubation (adhesion phase) at 37°C, each well was washed twice gently with sterile PBS to remove non-adherent cells; 200 μl of fresh RPMI 1640-MOPS medium was then added to each well and the plates were further incubated at 37°C for 24 h.
According to previous studies [10,23], the sessile MICs (SMICs) were defined as the lowest concentration causing a 50% decrease in absorbance compared to the growth control wells, as measured by the XTT reduction assay. Experiments were repeated three times on different days.
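To make the endpoint reading concrete, the following is a minimal sketch of how an SMIC could be read from XTT absorbance data; the absorbance values, the background-correction step, and the function name are illustrative assumptions rather than procedures or data from this study:

```python
def read_smic(concentrations, absorbances, growth_control, background=0.0):
    """Return the lowest tested concentration giving a >= 50% reduction in
    XTT absorbance relative to the drug-free growth control."""
    control_signal = growth_control - background
    for conc, od in sorted(zip(concentrations, absorbances)):
        if (od - background) <= 0.5 * control_signal:
            return conc
    return None  # the 50% endpoint was not reached at any tested concentration

# Hypothetical readings for one isolate (concentration in ug/ml vs. optical density)
concs = [4, 8, 16, 32, 64]
ods = [0.95, 0.80, 0.42, 0.20, 0.10]
print(read_smic(concs, ods, growth_control=1.00, background=0.05))  # -> 16
```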
Drug interaction analysis
Drug combination interactions were evaluated on the basis of the fractional inhibitory concentration index (FICI), which is the sum of the fractional inhibitory concentrations (FICs) of each drug [24]. The drug interaction was defined as follows: FICI ≤ 0.5, synergism; FICI > 0.5 to 4.0, indifference; FICI > 4.0, antagonism. The FICI values were calculated based on the MIC-2 endpoint for all drugs.
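As a worked example of this calculation (in Python; the MIC values below are hypothetical and chosen only to illustrate the arithmetic, not taken from the study's data):

```python
def fici(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """FICI = FIC_A + FIC_B, where each FIC is the MIC of a drug in
    combination divided by its MIC when tested alone."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index):
    """Interpretation thresholds used in this study."""
    if index <= 0.5:
        return "synergism"
    if index <= 4.0:
        return "indifference"
    return "antagonism"

# Hypothetical checkerboard result: SRT MIC falls from 8 to 2 ug/ml and
# AMB MIC falls from 4 to 0.5 ug/ml when the two drugs are combined.
value = fici(8, 2, 4, 0.5)
print(value, interpret(value))  # 0.375 synergism
```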
Results
To our knowledge, no standard interpretive breakpoints are available for in vitro antifungal susceptibility testing against Trichosporon species. However, the breakpoints for Candida species have previously been used in in vitro antifungal susceptibility testing against Trichosporon isolates [25,26]. Thus, the reference breakpoints for Candida species were cautiously used to interpret the results obtained in this study. To evaluate drug interactions, we compared the MICs of each antifungal drug alone to the MICs of the antifungal combinations with SRT and calculated the FICIs of the different antifungal combinations.
The MIC range, geometric mean (GM) and the MICs for 50% or 90% of the isolates (MIC50/MIC90) for T. asahii planktonic cells are presented in Table 1. The in vitro antifungal susceptibility results using both MIC-0 and MIC-2 endpoints for the anti-T. asahii activities of AMB and SRT are presented in Table 2. The interactions of each antifungal combination against T. asahii planktonic cells are presented in Table 3. The SMIC range, GM and the interactions of each antifungal combination against T. asahii biofilms are presented in Table 4.
The MIC-2 and MIC-0 ranges for SRT were 4-8 μg/ml and 8-32 μg/ml, respectively; the MIC50/MIC90 values for SRT by the MIC-2 endpoint were 4/8 μg/ml (results for both endpoints are given in Table 2). The results of our in vitro anti-biofilm susceptibility testing showed that the SMICs of most antifungal drugs against T. asahii biofilms increased up to 1000 times compared to the MICs against planktonic cells, except for CAS, whose SMICs increased only 2- to 4-fold. The SMICs (16-32 μg/ml) of SRT against T. asahii biofilms increased 4-fold compared to the MICs (MIC-2, 4-8 μg/ml) against planktonic cells, indicating the decreased susceptibility of T. asahii biofilms to SRT.
Discussion
T. asahii is the major pathogen of invasive trichosporonosis, which occurs mainly in immunocompromised patients. Despite the low incidence of invasive trichosporonosis, invasive Trichosporon infections lead to mortality rates of up to 80% even when treated with antifungal drugs [1].
To date, the treatment of invasive trichosporonosis remains a challenge. Most in vitro antifungal susceptibility tests have demonstrated high MICs of AMB and echinocandins against T. asahii, indicating drug resistance. In contrast, azole antifungal drugs, especially the newer triazoles, are the primary drug class for the treatment of invasive trichosporonosis based on available data. However, decreased susceptibility of T. asahii to azoles has been reported, including to the newer triazoles [7,8].
As expected, VRC (MICs, 0.031-0.25 μg/ml) was the most effective drug against T. asahii planktonic cells in this study. No significantly high MICs of the three azoles were observed against most T. asahii isolates. Most isolates remained susceptible to ITC (MICs, 0.25-1 μg/ml) and FLC (MICs, 1-16 μg/ml). All tested T. asahii isolates were resistant to AMB (MICs ≥ 2 μg/ml) and CAS (MICs ≥ 8 μg/ml). Our results were in agreement with previous data from China on the in vitro antifungal susceptibility of VRC, CAS and AMB against clinical T. asahii isolates [27].
A previous antifungal susceptibility assay demonstrated a remarkable rise in the sessile MICs of azoles against T. asahii biofilms (SMIC > 1024 μg/ml) compared to the MICs of planktonic cells; T. asahii biofilms were up to 16,000 times more resistant to VRC than planktonic cells [28]. In agreement with previous reports, T. asahii biofilms were resistant to all three azoles tested in this study, as the SMICs were up to 1000 times higher than the MICs of T. asahii planktonic cells. T. asahii biofilms were also more resistant to AMB than planktonic cells.
As is well known, T. asahii is intrinsically resistant to echinocandins [1,2]. However, CAS was observed to inhibit T. asahii biofilms at final concentrations from 16 to 64 μg/ml in this study, and the SMICs of CAS increased only 2- to 4-fold compared to the MICs of planktonic cells. The inhibitory effect on the synthesis of β-(1,3)-glucan of the fungal cell wall is believed to be one of the mechanisms by which CAS exerts anti-biofilm effects against C. albicans biofilms, since β-(1,3)-glucan is considered to be a major component of fungal biofilms [29,30]. Thus, the inhibitory effect on the synthesis of β-(1,3)-glucan may also account for the anti-biofilm effects of CAS against T. asahii in this study. The possible inhibitory effects of CAS against T. asahii biofilms need further studies to be validated.
Based on a review of 185 reported cases from 1975 to 2014, Trichosporon fungemia, including catheter-related fungemia, represents the main type of invasive Trichosporon infection [2]. However, disseminated trichosporonosis can involve most human organs and result in pneumonia, endocarditis, brain abscess, meningitis, arthritis, esophagitis, lymphadenopathy, liver infection, splenic abscess, uterine infection and soft tissue infection [1]. Invasive trichosporonosis is usually associated with the use of implanted medical devices [1,2]. Peritoneal dialysis can cause fungal peritonitis due to Trichosporon species [1]. Endocarditis due to Trichosporon spp. in cardiac valve replacement patients has been increasingly reported [1]. Urinary tract infections and renal dysfunction caused by Trichosporon spp. have also been reported, especially in patients with vesical catheterization [1]. Removal of infected catheters may increase the efficacy of antifungal drugs and improve clinical outcomes. Unfortunately, most severely ill patients are catheter-dependent, and removal of implanted catheters is a life-threatening matter [31]. Thus, the development of new antifungal agents with activity against T. asahii biofilms is necessary.
SRT, a commonly prescribed psychotropic drug, was selected as a potential antifungal agent against T. asahii in this study based on its reported in vitro and in vivo fungicidal activity, low toxicity and lack of drug interactions [14][15][16][17][18][19][20][21][22][32]. SRT was first reported to show antifungal activity against Candida species in 2001 [14]. Lass-Flörl et al. found that patients with recurrent vulvovaginal candidiasis were cured when treated with SRT for accompanying premenstrual dysphoric disorder [14]. Since then, the antifungal activities of SRT against Candida spp., Aspergillus spp. and Cryptococcus spp. have been extensively discussed [14][15][16][17][18][19][20][21]. The antifungal activity of SRT against C. neoformans has also been demonstrated in animal model studies [16,17]. SRT was demonstrated to reduce the fungal burden in the brain, kidney and spleen in murine models of systemic cryptococcosis at clinically relevant concentrations [16,17]. More importantly, the antifungal effect of SRT against Cryptococcus infection has been demonstrated clinically [22]. Thus, SRT was speculated to have a similar antifungal effect against T. asahii, which is phylogenetically close to C. neoformans. Considering the important role of fungal biofilms in drug resistance, the in vitro antifungal activities of SRT against both T. asahii planktonic cells and biofilms were evaluated in this study.
Our study demonstrated that SRT was fungicidal at high concentrations (MIC-0, 8-32 μg/ml). The MIC90 values for SRT by the MIC-2 and MIC-0 endpoints indicated that SRT could inhibit 90% of the total T. asahii isolates at a concentration of 8 μg/ml and kill 90% of the total isolates at a concentration of 32 μg/ml. The SMICs (16-32 μg/ml) of SRT demonstrated its inhibitory activity against T. asahii biofilms at concentrations of 16-32 μg/ml. Our results demonstrated that SRT exhibited fungicidal activity against T. asahii planktonic cells and inhibitory activity against T. asahii biofilms at relatively high concentrations.
To evaluate the clinical therapeutic potential of SRT in invasive T. asahii infections, we compared the pharmacokinetic data of SRT with our in vitro antifungal susceptibility data. The MICs (MIC-2 range, 4-8 μg/ml; MIC-0 range, 8-32 μg/ml) against T. asahii are much higher than the reported blood concentrations of SRT (55-250 ng/ml) [13,33]. Pharmacokinetic studies of SRT have also demonstrated that the concentrations of SRT in the brain are 20-50 times higher than blood concentrations [33]. Furthermore, the concentrations of SRT in the eyes, heart, lung, spleen, liver, kidney, stomach, small intestine, muscle and skin have also been demonstrated to be much higher than the blood concentrations, although the organ/blood concentration ratios for these organs have not been assayed statistically [33,34]. Thus, the much higher tissue and organ concentrations of SRT may be beneficial for treating disseminated trichosporonosis, since T. asahii often disseminates to most organs of patients [1].
Although it is difficult to achieve high blood concentrations of SRT administered orally, high concentrations may be attainable in catheters by way of intra-luminal lock therapy. Antifungal lock therapy uses high local concentrations of antimicrobial agents within an infected catheter in an attempt to sterilize the catheter [35]. The therapeutic potential of ethanol as lock therapy against T. asahii infection has been demonstrated by our research group previously [36]. Based on the anti-biofilm activity of SRT against T. asahii observed in this study, SRT may be used as a lock strategy with high local concentrations acting on infected catheters, which may facilitate the clearance of T. asahii biofilms and improve clinical outcomes.
In addition to the antifungal activity of SRT alone, SRT has also been demonstrated to exhibit in vitro synergistic effects in combination with antifungal drugs against Aspergillus spp. and C. neoformans. SRT was demonstrated to enhance the activity of AMB against Aspergillus spp. [19] and to exhibit in vitro and in vivo synergistic effects in combination with FLC against C. neoformans [16,20]. SRT also exhibited in vitro synergistic effects in combination with AMB against C. neoformans in another study [21]. Based on these previous studies, the possible synergistic effects of SRT in combination with antifungal drugs against both T. asahii planktonic cells and biofilms were further evaluated in this study.
Our results demonstrated that SRT indeed exhibited synergistic effects when combined with AMB, CAS or FLC against T. asahii planktonic cells. The combinations of SRT-AMB (90.5%), SRT-CAS (81.0%) and SRT-FLC (61.9%) yielded potent synergistic effects. In our anti-biofilm combination study, the SRT-AMB combination also showed the highest percentage of synergistic effects (81.0%). In contrast, SRT exhibited mostly indifferent interactions in combination with the three azoles. The SRT-CAS combination (47.6%) yielded a relatively lower proportion of synergistic effects against T. asahii biofilms compared to SRT-AMB.
Our antifungal combination study highlights the therapeutic potential of the SRT-AMB combination for T. asahii infection, since this combination yielded the highest percentage of synergistic effects against both T. asahii planktonic cells and biofilms. As is well known, AMB is a fungicidal drug with high toxicity. Based on our results and previous data [14][15][16][17][18], SRT is also fungicidal. Thus, SRT-AMB combination therapy may result in better therapeutic efficacy with reduced AMB toxicity. Furthermore, the SRT-AMB combination may be beneficial for reducing the emergence of drug resistance. Thus, the anti-biofilm effect of SRT alone and of the SRT-AMB combination on T. asahii highlights the potential utility of SRT or the SRT-AMB combination in invasive T. asahii infections, especially in patients with implanted medical devices.
The antifungal mechanisms of SRT were not investigated in this study. However, some possible antifungal mechanisms of SRT have been discussed by different research groups. A genetic study suggests that SRT may exert its antifungal effect by perturbing translation and inhibiting protein synthesis in fungi [16]. Rainey et al. demonstrated that SRT may exhibit antifungal activity by targeting intracellular vesiculogenic phospholipid membranes in fungi [37]. Another study demonstrated that SRT can perturb membrane permeability and inhibit sphingolipid biosynthesis in fungi [38]. These reported antifungal mechanisms of SRT may also account for its antifungal activity against T. asahii in this study.
The mechanisms of antifungal synergism have also been discussed previously [22]. The different antifungal mechanisms of FLC (inhibiting ergosterol synthesis) and SRT (inhibiting the translation of mRNA into protein) may account for their synergistic antifungal effects [22]. As is well known, AMB exerts its antifungal effect by binding ergosterol and forming channels in fungal cell membranes that cause rapid leakage of cell contents and subsequent fungal cell death. The different antifungal mechanisms of AMB and SRT may likewise account for their synergistic anti-T. asahii effects observed in this study.
In summary, our study demonstrates the in vitro antifungal activities of SRT against both T. asahii planktonic cells and biofilms and highlights the therapeutic potential of SRT against invasive T. asahii infections, especially for patients with catheter-related fungal infections. The use of SRT-AMB combination therapy may be advantageous in treating T. asahii infection based on their clear synergistic effects. The anti-biofilm activity of SRT against T. asahii may be helpful in controlling biofilm-related fungal infections, not only for T. asahii but also for other pathogenic fungi (such as C. neoformans). Considering that the clinical T. asahii isolates used in this study were mainly from China, the in vitro fungicidal and anti-biofilm activities of SRT against T. asahii should be confirmed by researchers outside China. The lack of in vivo anti-T. asahii data for SRT is a limitation of this study. Further animal models and clinical trials are needed to validate our findings. The precise antifungal mechanisms of SRT also warrant investigation.
|
v3-fos-license
|
2021-07-03T06:17:01.242Z
|
2021-06-22T00:00:00.000
|
235708310
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/1660-4601/18/13/6725/pdf",
"pdf_hash": "7556fc9087d1c2ead5fd42c3d2b5519737cea4b3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:646",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"sha1": "e876407bb9f2e7b339271e55221cc44651bc556b",
"year": 2021
}
|
pes2o/s2orc
|
Adverse Childhood Experiences and Problematic Media Use: Perceptions of Caregivers of High-Risk Youth
Youth with a history of adverse childhood experiences (ACEs) are more likely to develop risky health behaviors. With the increase of media use in the general population, it is likely that these high-risk youth are developing maladaptive behaviors associated with media use (i.e., problematic media use). The goals of this article are (1) to describe symptoms of problematic media use in high-risk youth and (2) to determine whether ACEs are related to problematic media use in this population. Data were collected through online questionnaires from 348 parents or legal guardians of children ages 5 to 18 years, the majority of whom had been adopted. Parents and guardians reported on the child’s history of ACEs and completed the Problematic Media Use Measure-Short Form (PMUM-SF). Almost half of the participants reported that their child had a history of four or more ACEs (48.9%). Caregivers of foster or adopted children reported more symptoms of problematic media use than those reporting on their biological children. After adjusting for covariates, the number of ACEs predicted problematic media use above and beyond variance explained by demographic factors or screen time amount. Children with a history of ACEs had higher problematic media use compared to children without ACEs.
Introduction
The influx of mobile devices over the past decade has modified daily life across generations, with their presence affecting almost all domains of functioning. As individuals increasingly own multiple screen media devices, the amount of time spent on the devices also continues to grow. Ownership of smartphones has steeply increased over the past decade, rising from merely 35% of Americans owning a device in 2011 to 81% currently owning at least one smartphone [1]. Due to the convenience of these handheld media devices, 48% of young adults report that they are online constantly [2]. These effects are consistent for younger demographics as well: 53% of 11-year-olds own a smartphone, increasing to 69% by the age of 12 [3]. Over a fourth of adolescents use screen media for more than 8 h each day, while 15% of 'tweens' report similar levels of use [3]. This is a significant increase in use over the past decade: In 2016, adolescents spent at least twice as much time online compared to 2006 [4]. Furthermore, this pattern was consistent across socioeconomic status, race/ethnicity, and gender [4].
With this increase in device ownership and usage, problematic media use among youth has become a prevalent concern [5]. As a whole, problematic media use is defined as using devices in such a way that it interferes with important daily living activities [6]. Increased mobile device use has been found to decrease quantity and quality of sleep, frequency of physical activity, and academic performance [7,8]. When media use interferes with these important domains of child and adolescent life, it can result in poor mental health and symptomatic behaviors such as difficulty in stopping media use, frustration when not able to use screen media, and thoughts focusing on use of devices [6]. Despite a societal focus on decreasing hours spent with screens, these problematic media use behaviors are more predictive of psychosocial difficulties than amount of screen time [6].
The majority of research about problematic media use has focused on typically developing youth or convenience samples (indeed, most research on children's media use relies on samples of typically developing children) [9]. However, research has identified a number of risk factors that are linked to problematic use. Being male and growing up in a single-parent household are both risk factors for developing video game addiction, while higher parent-adolescent conflict and overall family dysfunction are predictive of problematic internet use [10,11].
Similarly, another possible risk factor is a history of ACEs. ACEs, or adverse childhood experiences, are defined as traumatic events, such as sexual, physical, or emotional abuse, that can have negative and long-term effects on an individual's well-being [12]. Research suggests that adolescents who have experienced a higher number of ACEs have poorer self-regulatory abilities compared to those who have experienced fewer [13]. Individuals who have experienced ACEs are more likely to exhibit a variety of mental health concerns, including depressive symptoms, antisocial behavior, and suicidal ideation/attempts [14]. In addition to psychological consequences, ACEs are also related to negative health behaviors, such as smoking, binge drinking, and the use of other substances [15]. Similarly, a recent study using a national sample of youth demonstrated that those experiencing more ACEs were at a higher risk for heavy digital media use [16]. Specifically, youth who were reported to experience four or more ACEs were at least three times more likely to also have high levels of media use reported by caregivers (more than four hours of use on a typical weekday) [16]. However, reported amount of time with media devices is only one aspect of problematic media use, and does not capture the nuances of how devices interfere with daily life [6].
The current study seeks to determine if, similar to other negative health behaviors, there is an association between ACEs and problematic media use. Research has demonstrated that maltreated youth are at a disproportionate risk for problematic internet use and problematic social media use; however, the study of problematic media use as a whole is lacking for this age group [17,18]. With adults, evidence has been found, albeit retrospectively, that ACEs correlate with problematic media use [19]. Based on prior research, we thus predict that greater ACEs will associate with greater problematic media use. While the current study does not investigate mediators of this association, previous research has identified mechanisms that support this hypothesis. Poor self-regulation has been found to be associated both with number of ACEs and increased problematic media use [13,20]. Other potential mechanisms associated with problematic media use include lack of household structure, poor child self-efficacy, and strained parent-child relationships [21]. Additionally, parenting stress has been found to mediate the association between ACEs and increased media use [16].
A second gap in the literature that our study seeks to address regards sample characteristics. As we have mentioned, most research focuses on typically developing youth or samples of convenience [9]. We know that children involved in foster care or child welfare systems are more likely to have ACEs, yet this high-risk population has rarely been represented in research on children's media use [22]. As such, the goals of this article are (1) to describe the symptoms of problematic media use in high-risk youth and (2) to determine whether ACEs are related to problematic media use in this population. Exploring the links between adverse experiences during childhood and problematic media use could help inform intervention targets for agencies working with highly vulnerable youth.
Materials and Methods
From January 2019 through April 2019, recruitment notices were posted on the [blinded] institute's (coalition connecting more than 200 non-governmental organizations serving vulnerable children and families) website, distributed by email through parent and professional networks, and emailed to potential participants on [blinded] institute's distribution list. The recruitment information was further disseminated via snowball sampling. Parents could complete the questionnaires for any child between the ages of 5 and 18 years old who had been living with them for at least 6 months. All participants provided informed consent before completing the measures. The questionnaires were completed online and presented in random order. Surveys were taken via the Qualtrics website, with both desktop and smartphone versions available. Ethical approval was obtained from the author's Institutional Review Board.
Participants were 348 parents/legal guardians. Demographics for both parent/legal guardians and children can be found in Table 1. Children ranged in age from 5 to 18 years (M = 11.94; SD = 4.11). Over half the children were male (53.3%). The majority of the children were White (50.3%) with a minority of children being Black/African American (21.1%), Latino/a (15.0%), and Asian (11.8%; see Table 1). Nearly a quarter (22.1%; n = 77) were biological children of the parent/legal guardian. The remaining 271 children (77.9%) were adopted. Of those children who joined their family through adoption, 42.2% (n = 147) resided in US foster care prior to adoption, 25.0% (n = 87) resided in a non-US residential facility, and 10.6% (n = 37) were in non-US foster care. For children with a history of adoption, average age at entry into care was 14.75 months (SD = 29.31) and average amount of time in care was 25.61 months (SD = 29.54). All children had been residing with their adoptive caregiver for at least six months. The following demographic characteristics were assessed regarding the parent/ guardian: age, gender (coded as 1 = male, 2 = female), and race/ethnicity. Parents/guardians reported on child age, gender (coded as 1 = male, 2 = female), race/ethnicity (coded as 1 = White, 2 = other race/ethnicity), and the child's average weekly screen time. Additionally, for non-biological children, parents/legal guardians reported the child's type of placement prior to adoption, age of child when he/she entered alternative care (i.e., foster care, institutional care, etc.), and length of time in alternative care prior to adoption (see Table 1).
Parents/guardians reported on whether their child experienced any of 10 adverse childhood experience categories, including abuse, neglect, and loss of caregiver [12]. Parents were asked to complete the ACEs questionnaire to the best of their ability, regarding their child's history. Sample items include: "While your child was growing up, did they live with someone who had a substance abuse problem?" and "While your child was growing up, did they witness domestic violence (caregiver was pushed, grabbed, slapped)?" Pre-adoptive records for children may be incomplete, especially for children adopted internationally. See Table 1 for the frequency of ACEs experienced by children in this study. Approximately 49% of the sample had a history of four or more ACEs. See Table 2 for frequencies of different types of ACEs in this sample. The Problematic Media Use Measure-Short Form (PMUM-SF) was used [6]. The PMUM-SF consists of nine items (α = 0.94) derived from criteria for Internet Gaming Disorder as specified in the DSM-5. Parents answered questions about the frequency of their child's behavior on a 5-point Likert scale, ranging from 'never' to 'always'. Sample items include: "My child becomes frustrated when he/she cannot use screen media" and "My child's screen media use causes problems for the family." The mean score for this measure was used for analyses, with higher scores indicating more problematic media use.
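For concreteness, a minimal sketch (in Python) of how a PMUM-SF score could be computed from the nine parent-rated items follows; the 1-5 numeric coding of the 'never' to 'always' responses and the example ratings are assumptions for illustration only:

```python
def pmum_sf_score(item_responses):
    """Mean of the nine PMUM-SF items; higher scores indicate more problematic media use."""
    if len(item_responses) != 9:
        raise ValueError("PMUM-SF has nine items")
    if not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("items are rated on a 1-5 Likert scale")
    return sum(item_responses) / 9

# Example: a child rated mostly 'often' (4) with a few 'sometimes' (3)
print(pmum_sf_score([4, 4, 3, 4, 3, 4, 4, 3, 4]))  # ~3.67
```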
Descriptive statistics were calculated to provide average problematic media use scores by type of ACE. Bivariate correlations were then conducted to examine associations among ACEs, alternative care placement length, total weekly screen time, and problematic media use. Next, linear regression analyses were conducted, with Step 1 including the covariates of child age, child race/ethnicity, length of time in care, gender, and weekly screen time; in Step 2, number of ACEs were entered.
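The two-step regression described above could be sketched as follows; this assumes the statsmodels library, and the variable names and data frame are hypothetical stand-ins rather than the study's actual dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame):
    """Step 1: covariates only; Step 2: covariates plus number of ACEs."""
    covariates = ("child_age + C(child_race) + months_in_care "
                  "+ C(gender) + weekly_screen_time")
    step1 = smf.ols(f"pmum_sf ~ {covariates}", data=df).fit()
    step2 = smf.ols(f"pmum_sf ~ {covariates} + ace_count", data=df).fit()
    # Variance in problematic media use explained by ACEs beyond the covariates
    delta_r2 = step2.rsquared - step1.rsquared
    return step1, step2, delta_r2
```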
Mean PMUM levels by type of ACE were first examined. Across each type of ACE, children with the ACE had higher problematic media use scores compared to youth without the ACE (see Table 2). Based on bivariate correlations, greater ACEs and greater length of time in foster/institutional care were associated with higher problematic media use (r = 0.35, p < 0.01 and r = 0.30, p < 0.01, respectively). Adjusting for child age, child race/ethnicity, length of time in foster care, gender, and amount of children's weekly screen time, we found that total ACEs predicted problematic media use (B = 0.24, p < 0.01), above and beyond variance explained by child demographic factors and weekly screen time (see Table 3). Table 3. The association between ACEs and problematic media use.
Discussion
This study sought to examine problematic media use among high-risk youth and examine whether ACEs were associated with greater problematic media use. Children with any type of ACE had higher problematic media use compared to children without ACEs. Additionally, we found that ACEs emerged as a risk factor for problematic media use among high-risk youth, over and above important confounders. As such, vulnerable youth may be a population who should be targeted to receive assistance around managing media use, which may be interfering with their functioning.
Although mechanisms linking ACEs to problematic media use were not examined in this study, disrupted child self-regulation may account for the findings. A common risk factor for problematic media use is inhibited ability to self-regulate. As with other types of addiction, individuals with poor self-regulation skills may be more likely to develop internet gaming disorder [20]. It is possible that self-regulation difficulties may explain the association between ACEs and problematic media use. For example, prior research suggests that adolescents who have experienced greater ACEs have poorer self-regulatory abilities than those who have experienced fewer [13]. Future research on youth with histories of trauma and comorbid psychiatric conditions is strongly encouraged to replicate these findings and to examine the role of self-regulation as a mediator in a longitudinal research design.
Additionally, future research should address some of the limitations of this study. In particular, future research should examine dyadic processes and include multiple caregiver reports (e.g., both mothers and fathers, as well as other caregivers). Assessing adolescents' perceptions of their own problematic use may also be informative. Examining the mechanisms of influence (i.e., self-regulation deficits related to early disrupted attachment as well as those proposed by Domoff et al.) is critical to inform intervention [21]. Including parental monitoring or other media parenting practices is also recommended as these variables may moderate the impact of ACEs on problematic media use. Finally, we suggest future research include covariates not measured in this study, such as family socioeconomic status and parental education.
Although there are limitations to this study (i.e., all parent report, retrospective accounting of child history of trauma, which may be limited given the few biological parents in this study, no objective measurement of screen time), a strength of this study was the recruitment of caregivers of youth with significant histories of trauma and involvement with child welfare systems. These youth are under-represented in the research and may be more at risk for problematic media use.
Conclusions
Overall, the findings suggest that problems with managing media use could be greater for youth with ACEs. Although preliminary, these findings do support querying about foster/adopted youths' media use when providing care. Parents of these vulnerable youth may be a particular group for whom pediatricians should provide supports/guidance around media use, such as discussing a family media plan and recommending other online resources (e.g., Common Sense Media) [23].
Regarding clinical implications, our study supports screening youth with histories of adverse childhood experiences (or other early stressful life events) for problematic media use. In particular, the Problematic Media Use Measure (PMUM) and the Problematic Media Use Measure-Short Form (PMUM-SF) are brief screeners that mental health clinicians can use to identify whether vulnerable youth are exhibiting early signs of problematic media use [6]. In addition to dysregulated use, clinicians may seek to screen for other types of problematic use (e.g., online victimization or risky use) and media parenting practices to address in treatment. As access to digital media continues to grow (particularly during the pandemic), understanding how to promote safe and regulated media use will be critical [24].

Informed Consent Statement:

Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to participant confidentiality.
Conflicts of Interest:
Domoff is on the Board of the SmartGen Society, and regularly receives honoraria for speaking invitations to different academic and non-profit institutions. Domoff has received funding from the National Institutes of Health.
|
v3-fos-license
|
2021-06-03T06:17:15.938Z
|
2021-05-21T00:00:00.000
|
235302436
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/21/11/3574/pdf",
"pdf_hash": "ba75df4592322c2051fe56bec8a1684536fbd0f4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:647",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "fb92edcefab7f38e2e79c57e3d5f8377fcae656a",
"year": 2021
}
|
pes2o/s2orc
|
Dynamic Nanohybrid-Polysaccharide Hydrogels for Soft Wearable Strain Sensing
The development of electroconductive hydrogels with stimuli-free self-healing and self-recovery (SELF) properties and high mechanical strength for wearable strain sensors is an area of intensive research activity at the moment. Most electroconductive hydrogels, however, rely on static bonds for mechanical strength and dynamic bonds for SELF performance, making it challenging to combine both properties in a single hydrogel. An alternative strategy for successfully incorporating both properties into one system is the use of stiff or rigid, yet dynamic, nanomaterials. In this work, a nanohybrid modifier derived from nano-chitin coated with ferric ions and tannic acid (TA/Fe@ChNFs) is blended into a starch/polyvinyl alcohol/polyacrylic acid (St/PVA/PAA) hydrogel. It is hypothesized that the TA/Fe@ChNFs nanohybrid imparts both mechanical strength and stimuli-free SELF properties to the hydrogel via dynamic catecholato-metal coordination bonds. Additionally, the catechol groups of TA provide mussel-inspired adhesion properties to the hydrogel. Due to its electroconductivity, toughness, stimuli-free SELF properties, and self-adhesiveness, a prototype soft wearable strain sensor is created using this hydrogel and subsequently tested.
Introduction
Hydrogels are three-dimensional (3D) cross-linked hydrophilic polymers capable of retaining a large amount of water (or other biological fluids) [1][2][3], and they are widely used for the fabrication of sensors [4], scaffolds [5], wound healing substrates [6], actuators [7], and so on. However, a major disadvantage is that they tend to lose functionality under load due to the long-term damage inflicted on them. To increase their lifetime, therefore, and extend potential applications, introducing self-healing and self-recovery (SELF) properties into hydrogels is a promising new approach [8]. However, the requirement for increased molecular mobility to facilitate SELF properties is in opposition to the need for increased mechanical strength, because mobile crosslinks do not have enough structural integrity to keep the 3D structure of hydrogels while under load. Therefore, there is an urgent need to fabricate hydrogels that exhibit both SELF properties and adequate mechanical strength [9]. To achieve SELF properties within hydrogels, different methods have been proposed thus far, such as loading healing agents into hydrogels or incorporating dynamic covalent and/or non-covalent bonds, such as metal-ligand bonds [2,8,10,11].
Thus far, many hydrophilic polymers have been evaluated, with polysaccharides being the most promising candidates primarily because of their low cost, cytocompatibility, biocompatibility, and biodegradability [1,12]. Starch, for example, is an inexpensive and readily available polysaccharide with branched (amylopectin) and linear (amylose) polymeric forms [1,13]. However, its brittleness and moisture sensitivity render it difficult to employ in load-bearing applications [3,12,14,15], while its lack of electroconductivity makes it equally difficult to use in soft wearable strain sensors [16,17]. Chitin, found in the exoskeleton of arthropods and synthesized linearly from glucosamine monomers with β-(1-4)-N-acetyl glucosamine linkages, is the second most abundant polysaccharide in nature, and chitin nanofibers (ChNFs) isolated from it contain highly crystallized fibrous structures with diameters usually between 2-20 nm [18][19][20]. Similar to nanofibers isolated from cellulose, they have a highly modifiable surface area capable of imparting mechanical strength to hydrogels. It is therefore hypothesized that ChNFs modified with tannic acid (TA) and ferric ions (Fe 3+ ) would impart both SELF properties and mechanical strength to the hydrogel due to their stiff but dynamic structural motifs. Moreover, due to the presence of TA, the hydrogel may exhibit mussel-inspired adhesion and be capable of attaching to almost any surface while still peeling off without leaving any residue. This feature can make these hydrogels extremely versatile, because most soft wearable strain sensors require external adhesives to eliminate interfacial delamination between the sensor and substrate, especially under repeated deformation [21,22].
A dynamic mussel-inspired hydrogel with SELF properties and high mechanical strength exhibiting electro-conductivity and self-adhesion for use in soft wearable strain sensors has been designed and synthesized in this work. To do this, a TA/Fe@ChNFs nanohybrid modifier has been synthesized and used as a modifier to a starch-based polymer network. The TA/Fe@ChNFs nanohybrid functions not only as a nanofiller but also as a dynamic cross-linking agent, providing both SELF properties and mechanical strength to the hydrogel. To date, there is little if any research available using TA/Fe@ChNFs nanohybrids to fabricate such hydrogels. Furthermore, polyvinyl alcohol (PVA) is used as a non-toxic, biodegradable, and water-soluble polymer additive to plasticize and increase the molecular mobility of the polymer. Acrylic acid (AA) monomers are grafted to the starch to increase binding affinity and enhance energy dissipation and, thus, avoid the formation of irreversible, covalent crosslinks. The SELF performance and mechanical strength of the hydrogel are enhanced by adding TA/Fe@ChNFs nanohybrid as dynamic motifs into the hydrogel.
Fe/TA@ChNFs Nanohybrid Synthesis
The Fe/TA@ChNF nanohybrid was synthesized using a facile two-step procedure. In brief, 134.0 g of ChNFs suspension (1.5 wt.%, containing about 2.0 g ChNFs) was diluted using 500 mL of distilled water, followed by adjusting the pH to 8.5 by dropwise addition of a tris buffer solution (1 M) into the ChNF suspension. Afterward, 2.0 g of TA was added to the suspension while stirring at ambient room temperature for 12 h. Subsequently, excess TA was washed out using centrifugation and re-dispersion in distilled water to concentrate the TA-ChNFs to 3.0 wt.%. In order to introduce Fe nanoparticles into the TA-ChNFs suspension, 0.5 g of FeCl3·6H2O was dissolved in 100 mL of distilled water and stirred overnight. After that, 50.0 g of TA-ChNFs was introduced into the suspension and stirred for 6 h at ambient temperature, during which time the color of the suspension changed from white to red. Finally, the suspension was centrifuged and washed with distilled water to extract the excess Fe ion solution, followed by homogeneously re-dispersing the synthesized Fe/TA@ChNFs nanohybrid in distilled water using ultrasonic treatment. The Fe/TA@ChNFs suspension was finally sealed and cryopreserved at 4 °C.
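As a small illustration of the quantities quoted in this protocol, the arithmetic below checks the solids content of the starting suspension and the dilution implied by the stated 3.0 wt.% target. It is a back-of-the-envelope sketch, not part of the published method, and the assumption that the bound TA mass can be ignored is ours.

```python
# Back-of-the-envelope check of the masses quoted in the synthesis above.
# Assumption (ours): bound TA mass is ignored when estimating the 3.0 wt.% target.
suspension_mass_g = 134.0      # ChNF suspension used
chnf_fraction = 0.015          # 1.5 wt.%
chnf_mass_g = suspension_mass_g * chnf_fraction
print(f"ChNFs in suspension: {chnf_mass_g:.2f} g")     # ~2.0 g, matching the text

target_fraction = 0.03         # concentrate TA-ChNFs to 3.0 wt.%
total_mass_needed_g = chnf_mass_g / target_fraction
print(f"Suspension mass at 3.0 wt.%: {total_mass_needed_g:.1f} g")
```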
Fabrication of PVA-St-PAA-Fe/TA@ChNF Nanohybrid Hydrogels
The PVA-St-PAA hydrogels were synthesized based on the method developed by Hussain et al. for PVA-glycogen-PAA hydrogels [23]. Briefly, a certain content of St and PVA was dissolved in a 1:1 ratio in 100 mL of distilled water at 90 °C for 2 h. After cooling at ambient temperature, 1.0 mL of APS (1.0 g/10 mL) was added to the 5 mL St-PVA solution and stirred for 10 min to activate the PVA and St functional groups. Next, an adequate amount of AA monomer was added to the solution and stirred for another 10 min to polymerize PAA. Finally, Fe/TA@ChNF nanohybrid was gradually added to the mixture at different concentrations (0.5, 1, 1.5, and 2 wt.%) while stirring. To remove any formed bubbles, the mixture was sonicated for a few minutes and then left for 30 min at ambient temperature to allow hydrogelation to occur. The synthesized nanocomposite hydrogels contained 0.0, 0.5, 1, 1.5, and 2 wt.% Fe/TA@ChNF nanohybrid.
Hydrogel Characterization
The nanostructured morphologies of ChNFs and Fe/TA@ChNFs were evaluated using transmission electron microscopy, TEM (JEOL, 2100). To do this, each sample was diluted to about 0.001 wt.% using ethanol and sonicated to prepare a well-homogenized suspension. Next, the samples were cast onto perforated carbon-coated grids and the excess liquid was absorbed using filter paper. An image analyzer program (ImageJ) was used to analyze the resulting microscopic images. The microstructured morphologies of the freeze-dried hydrogels were evaluated using scanning electron microscopy, SEM (Zeiss, Supra, Oberkochen, Germany), before and after loading Fe/TA@ChNFs. ATR-FTIR was used to analyze the ChNFs, Fe/TA-ChNFs, and hydrogel using a Bruker Vertex 70 FTIR between 600 and 4000 cm−1. Mechanical properties were determined using a universal tensile testing machine equipped with a 200 N load cell (Instron, Norwood, MA, USA). The mechanical properties of the silicone-coated rectangular-shaped specimens were determined using a constant stretching rate of 60-160 mm/min and sample dimensions of 10 mm wide, 6 mm thick, and 35 mm long. The initial distance between the two clamps was 15 mm. A visual inspection of the self-healing performance of the hydrogel was performed by cutting it in half and immediately splicing the two halves together again to form the original shape. The strength ratio between the healed and original hydrogel was defined as the healing efficiency. Rheological measurements of the hydrogels were conducted using an oscillatory rheometer equipped with a parallel plate geometry with a 40-mm diameter at a gap of 49 µm and a frequency of 1.0 Hz (TA Instruments, HR-3, New Castle, DE, USA). The self-adhesive performance of the hydrogels was evaluated using different clean substrates with the same universal tensile testing machine at a crosshead speed of 10 mm/min until separation was achieved [22].
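Since the healing efficiency is defined here as the strength ratio between healed and original samples, a minimal helper to compute it might look like the sketch below; the numbers passed in are invented for illustration and are not measurements from this study.

```python
def healing_efficiency_percent(healed_strength_kpa: float, original_strength_kpa: float) -> float:
    """Healing efficiency as defined in the text: healed / original tensile strength, in percent."""
    return 100.0 * healed_strength_kpa / original_strength_kpa

# Illustrative values only (not data from the paper)
print(f"{healing_efficiency_percent(178.0, 184.1):.1f} %")
```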
Design and Synthesis of PVA-St-PAA-Fe/TA@ChNF Nanohybrid Hydrogels
In this research, a tough, self-healing, self-adhesive, and conductive hydrogel was synthesized via reversible catecholato-metal coordination bonds through the incorporation of Fe/TA@ChNF nanohybrid into the PVA-St-PAA network. To do this, TA and Fe nanoparticles were first introduced onto the surface of ChNFs, which, by virtue of their high surface area and abundance of available hydroxyl groups, were able to interact with the catechol groups within the dendritic structure of the TA (Figure 1A1). The oxidative polymerization of TA monomers on ChNFs under alkaline conditions changed the color of the ChNFs suspension from white to yellow and introduced ample catechol groups on ChNFs that were able to adsorb Fe3+ ions onto the TA@ChNFs surface via chelation interactions. The presence of TA in TA@ChNFs also facilitated the in situ formation of metal nanoparticles on the TA@ChNFs surface because of its high reduction capability. Immediately after treating TA@ChNF with ferric chloride, the color of the suspension changed from yellow to a dark red color (Figure 1A2). Based on the TEM images, the observed lengths of ChNFs and TA@ChNF were on the micrometer scale, while their diameters were on the nanoscale (62 ± 17 nm, Figure 1B and 69 ± 28 nm, Figure 1C). Successful deposition of Fe nanoparticles on ChNFs was confirmed using TEM. As seen in Figure 1D, dense but uniform Fe nanoparticles (15-25 nm) were observed due to the successful formation of Fe/TA@ChNF nanohybrid. According to the FTIR spectra (Figure 1E), the peaks at 3434 cm−1 in the ChNFs, TA@ChNF and Fe/TA@ChNFs spectra could be attributed to O-H stretching, while the bands in the 2800-3000 cm−1 region were attributed to -CH2 symmetrical and asymmetrical stretching vibrations of chitin. The amide I band of chitin was also observed at 1619 cm−1 and the CONH2 group of chitin was observed at 1307 cm−1. After polymerization of TA, stretching peaks at 813, 1531, and 1612 cm−1 were observed in TA@ChNFs, which are not present in pristine ChNFs and were attributed to the presence of stretching vibrations of C-C aromatic bonds. These findings complement those of similar studies on the deposition of TA onto the surface of cellulose nanocrystals [24]. A significant decrease in the intensity of the major peaks and broadening of the absorption peak at 3434 cm−1 for the Fe/TA@ChNFs nanohybrid was observed due to the consumption of hydroxyl groups by Fe nanoparticles. The effect of Fe nanoparticles on the thermal stability of TA@ChNFs was also studied (Figure 1F). After deposition of Fe nanoparticles, the decomposition of TA@ChNFs at 250 °C became slower than without Fe nanoparticles, which may be due to the interactions between Fe nanoparticles and TA@ChNFs.
It was hypothesized that the presence of both TA and Fe nanoparticles in ChNFs would bestow strong cohesiveness in the bulk to PVA-St-PAA via reversible dynamic catecholato-metal coordination bonds. Moreover, it was expected that PVA-St-PAA would show strong self-adhesion to different substrates via a mussel-inspired adhesion. After loading Fe/TA@ChNFs, we anticipated the formation of dynamic catecholato-metal coordination bonds between Fe/TA@ChNFs and PVA-St-PAA and ionic coordination bonding between Fe nanoparticles and the carboxylic groups of PAA and hydroxyl groups of St and PVA. These coordination modes, together with hydrogen bonding between the hydroxyl groups of Fe/TA@ChNFs and the functional groups of St, PVA, and PAA, formed a strong reversible 3D dynamic network that instantly formed a hydrogel. The influence of loading Fe/TA@ChNFs on the morphology of PVA-St-PAA was studied using SEM images (Figure 2A-E). As can be seen, the fracture surfaces of the freeze-dried hydrogels differed in morphology and pore size between the loaded and unloaded Fe/TA@ChNFs hydrogels. The unloaded sample showed a nonhomogeneous morphology (Figure 2A), while the Fe/TA@ChNFs-loaded counterparts showed small, dense, high-surface-area, and uniform pores (Figure 2B-E). Additionally, the pore sizes decreased with increasing Fe/TA@ChNFs concentration in the hydrogels. Hence, the incorporation of Fe/TA@ChNFs nanohybrids had a significant effect on the structural homogeneity of PVA-St-PAA due to the formation of reversible bonds between polymer chains. Figure 2F and Figure 2G show the 0.5 wt.% Fe/TA@ChNF-assisted PVA-St-PAA hydrogel.
Mechanical, Rheological, and SELF Properties
It is proposed, therefore, that inclusion of Fe/TA@ChNF nanohybrids with rigid but dynamic motifs not only improves the mechanical properties of the PVA-St-PAA hydrogel but also allows them to act as dynamic cross-linkers. Hence, they can be used instead of static cross-linkers like N,N′-methylene bisacrylamide during the polymerization of PAA. To demonstrate this, we prepared rectangular-shaped hydrogels followed by stretching them using a universal tensile test. The stress-strain curves of the hydrogels containing different percentages of Fe/TA@ChNF nanohybrid (0, 0.5, 1, 1.5, and 2 wt.%) are shown in Figure 3A. The stretchability, strength, and toughness of the hydrogels reached 1503%, 184.1 kPa, and 2.27 MJ/m3 by loading 1.5 wt.% Fe/TA@ChNF nanohybrid into the hydrogels, revealing that Fe/TA@ChNF nanohybrids can increase the mechanical properties of the hydrogels and create highly stretchable hydrogels. However, further increasing the Fe/TA@ChNF nanohybrid content (2 wt.%) drastically reduced the mechanical properties of the hydrogels. This can be related to the formation of aggregations within the hydrogel network acting as stress concentration points, thus reducing the mechanical properties of the hydrogel under stress. To evaluate the efficiency of the hydrogel against crack propagation while stretching, a notch was created in each hydrogel, and it was observed that the Fe/TA@ChNF nanohybrid greatly improved blunting of any propagating crack, with the highest improvement in toughness achieved for the 1.5 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogel (Figure 3B). To track the strain-rate dependency, mechanical tests were conducted on the 1.5 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogel at 60-160 mm/min stretching rates. It was found that higher strain rates led to higher breaking strain, although the strain-rate dependency disappeared after 120 mm/min because of the reduction in energy dissipation efficiency of the reversible dynamic catecholato-metal coordination bonds (Figure 3C) [24]. The effect of Fe/TA@ChNF nanohybrid on self-healing of the hydrogels was then quantified using the tensile strength ratio between the healed and original hydrogels (Figure 3D). As seen, with increasing Fe/TA@ChNF nanohybrid content the self-healing performance of the hydrogel increased from 80% for the 0.5 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogel to 96.5% for the 1 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogel after 60 min healing time, and then showed little further increase despite a maximum healing of 98% for the 1.5 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogel. As seen, the healed 1.5 wt.% Fe/TA@ChNF hydrogel almost recovered its initial mechanical properties after 60 min healing (Figure 3E). To demonstrate the time-dependent self-healing of the hydrogels, self-healing was measured for the 1.5 wt.% Fe/TA@ChNF hydrogels at different time frames (20, 40, and 60 min) (Figure 3F). As seen, the healing efficiency reached 79% after 20 min healing time and then slowed down, reaching 96.5% after 40 min and finally 98% after 60 min healing.
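Toughness values such as the 2.27 MJ/m3 quoted above are conventionally obtained as the area under the engineering stress-strain curve. The sketch below shows that integration on a synthetic curve; the functional form and numbers are placeholders, not the measured data from Figure 3A.

```python
import numpy as np

# Toughness (MJ/m^3) as the area under the engineering stress-strain curve.
# Toy curve only: strain up to 1503% (as a fraction) and stress saturating near 184.1 kPa.
strain = np.linspace(0.0, 15.03, 200)        # dimensionless strain
stress_mpa = 0.1841 * (1 - np.exp(-strain))  # stress in MPa (placeholder shape)

toughness_mj_per_m3 = np.trapz(stress_mpa, strain)  # MPa * strain = MJ/m^3
print(f"Toughness ≈ {toughness_mj_per_m3:.2f} MJ/m^3")
```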
Visual inspection of self-healing (Figure 3G-J) was performed using two 1.5 wt.% Fe/TA@ChNF nanohybrid-assisted hydrogels, one of which had been dyed with methylene blue (Figure 3H). The samples were cut in half and rejoined at the cutline using samples of different colors to clearly illustrate healing. The hydrogel showed a stimuli-free and rapid self-healing response with immediate stability and stretchability across the cutline thanks to the presence of strong dynamic bonds (Figure 3I and Video S1). Interestingly, the blue in the sample started to fade (Figure 3J), presumably as a result of the molecular mobility and the dynamic covalent bonding, such as the formation of ionic and metal chelating bonds (Figure 3G). To illustrate self-healing of the reinforced and unreinforced hydrogels at the microscopic level, the breakup and reformation of the hydrogel networks were explored using an alternate step strain test (strain = 1% and 1000%) at a fixed time interval (100 s) (Figure 3K,L). As seen, at low-amplitude oscillatory shear (strain = 1%) the storage moduli (G′) of both reinforced and unreinforced hydrogels are higher than their loss moduli (G″), indicating that the 3D network of the hydrogel remains intact under small oscillatory strain over time. After subjecting the hydrogel to high-amplitude oscillatory shear (strain = 1000%) over the next 100 s, however, G′ and G″ were inverted, meaning that the 3D hydrogel network had collapsed and transformed into a sol state. Although the unreinforced hydrogel re-formed a hydrogel state (G′ > G″) upon returning from high strain to low strain at a fixed frequency (1.0 Hz), it did not fully recover its disrupted 3D network due to a lack of dynamic bonds (Figure 3K). On the other hand, an instantaneous, repeatable, and complete self-recovery of the disrupted 3D network was observed for the reinforced hydrogel after disruption, indicating the excellent self-healing or recovery of the hydrogel due to the reversible bonds in the hydrogel structure (Figure 3L). To explain this, it is proposed that at a very high strain, the strong catecholato-metal coordination bonds start to unzip, causing the observed changes in G′ and G″ and the transformation from a hydrogel to a sol state (Figure 3M). Therefore, by combining SELF and mechanical properties, the hydrogels formed here are clearly very versatile and can potentially be employed for many practical applications.
Self-Adhesiveness, Electrical Conductivity, and Strain Sensing Test
In addition to imparting SELF, toughness, and stretchability, the presence of catechol moieties of TA can also impart mussel-inspired self-adhesiveness to the hydrogel. These properties, when combined with the observed electronic properties, can make the hydrogels developed here particularly unique for soft wearable strain sensing applications [21,[24][25][26]. To reveal the remarkable self-adhesive capability of the hydrogel on a wide range of substrates, the hydrogel was adhered to a range of different surfaces, including stone, gold, silver, glass, salmon tissue, plastic, cotton, and tree leaf. As seen in Figure 4A, the hydrogel was used to attach all of these substrates to a nitrile rubber glove and bore the weight of the item. To quantify this qualitative observation, a cyclic peel-off tensile test was performed four times by attaching the hydrogel onto different substrates (rubber, plastic, glass, metal, and leather) and the adhesive load during peeling was determined ( Figure 4B). The hydrogel adhered to all of these surfaces while showing a repeatable self-adhesive performance with the highest adhesion strength between the hydrogel and metal interface that can be related to the existence of metal complexation at the interface of them ( Figure 4C). Furthermore, π-π stacking or CH-π interaction can be considered as the mechanism of the adhesion between plastic and hydrogel. Similarly, hydrogen bonding can be deemed as the main adhesion mechanism between glass, leather, and rubber to the hydrogel [21,24]. The hydrogel was also attached to a wooden mannequin's elbow to observe stretchability and self-adhesiveness of the hydrogel. As seen in Video S2, the hydrogel can be stretched without detaching, thus showing simultaneously self-adhesion and stretchability suggesting that this hydrogel may be suitable for soft wearable strain sensors.
Polymers like PAA can be cross-linked by ions (Fe 3+ , Ca 2+ , and Li + ). Hence, the presence of ample cross-linking ions can impart excellent electrical conductivity and high strain-sensing performance to hydrogels. The electrical conductivity of the hydrogel was also evaluated by simply using a light-emitting diode (LED) indicator. As seen in Video S3, the LED bulb was immediately lit after connecting the hydrogel to the electric circuit, and it reversibly lit and darkened as the hydrogel was stretched and relaxed because of local disconnections. As seen in Figure 4D, the LED bulb lit after connecting the healed hydrogel to the electric circuit, showing that self-healing restored the electric circuit at the molecular level. Next, the hydrogel's resistance was measured with a multimeter at a probe distance of 1 cm while stretching the hydrogel up to 960%. As seen in Figure 4E, the resistance of the hydrogel increased with applied tension due to the increasing distance between the conductive segments of the 3D network and the increasing void content within the hydrogel network [21]. As a result, the electrical conductivity of the hydrogel, together with its self-healing, self-adhesiveness, toughness, and stretchability, increases the versatility of the hydrogel for many practical applications, including soft wearable strain sensors. Moreover, unlike most hydrogel-based strain sensors, which need external adhesives for attachment to the body, these hydrogels require no external adhesive thanks to their self-adhesive performance. Additionally, since static crosslinkers were not used during the fabrication of the hydrogel, it is not brittle and will not easily lose its functionality while under load [24]. To detect strains of the body, the hydrogel was used as a soft wearable strain sensor by adhering it to a human forefinger. Once attached, the original (Figure 4F) and healed (Figure 4G) sensors responded well to finger strain. It was observed that the resistance increased to higher values when bending the forefinger and stretching the sensor, in agreement with the results of Figure 4E. The resistance repeatedly recovered to its initial value when the forefinger was straightened while displaying an identical resistance pattern; importantly, the initial resistance was restorable. The relative resistance change of the hydrogel after 200 s was stable, albeit with some slight fluctuations (Figure 4F). As a result, due to a stable, reproducible, and detectable resistance change at different bending angles, the hydrogel was able to reliably monitor the real-time resistance, displaying remarkable suitability as a soft wearable strain sensor with high mechanical strength, SELF performance, and self-adhesion. The healed sensor was also attached to a human forefinger to evaluate the SELF performance of the sensor under the same strains. The results showed that the healed sensor was able to successfully detect patterns comparable to those detected by the original hydrogel (Figure 4G). This confirms that this hydrogel, thanks to its high healing efficiency, is suitable for long-term use in soft wearable strain sensors.
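For a resistive sensor of this kind, the readout is usually the relative resistance change ΔR/R0 as the hydrogel is stretched and released. The short sketch below computes that quantity from a resistance trace; the trace itself is synthetic and is not data from Figure 4.

```python
import numpy as np

def relative_resistance_change(resistance: np.ndarray) -> np.ndarray:
    """Return (R - R0) / R0, using the first (unstrained) value as the baseline R0."""
    r0 = resistance[0]
    return (resistance - r0) / r0

# Synthetic trace: resistance rising as the hydrogel is stretched, then recovering
r_trace_kohm = np.array([12.0, 14.5, 18.2, 25.0, 18.5, 12.1])
print(relative_resistance_change(r_trace_kohm))
```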
Conclusions
A nano-polysaccharide mixed hydrogel with excellent SELF, mechanical, electrical, and self-adhesiveness performance suitable for soft wearable strain sensors has been presented. For this purpose, ChNFs were used as the nano-polysaccharide and were coated with ferric ions and tannic acid (TA/Fe@ChNFs) to provide the dynamic covalent structural motifs. Using a facile method, the electroconductive polysaccharide-based hydrogel was synthesized from starch/polyvinyl alcohol/polyacrylic acid (St/PVA/PAA) hydrogels, followed by loading TA/Fe@ChNFs into the hydrogel network. Due to the incorporation of TA/Fe@ChNFs, which forms hydrogen bonds and chelation coordination bonds, the hydrogel was observed to have excellent SELF ability, without needing any external stimuli, whilst having reliable mechanical, electrical, and self-adhesion properties. Hence, this hydrogel is anticipated to be ideally suited for use in soft wearable strain sensors. For future work, extra attention can be paid to cost, toxicity, processability, and biocompatibility while designing such hydrogels to meet industrial and application requirements. Additionally, the mechanism of self-healing at the nanoscale should be assessed.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3 390/s21113574/s1, Video S1: Rapid self-healing response with immediate stability and stretchability; Video S2: Stretchability and self-adhesiveness of the hydrogel to a wooden mannequin's elbow; Video S3: Evaluation of the electrical conductivity of the hydrogel using a light-emitting diode (LED) indicator.
Author Contributions: Conceptualization, methodology, acquisition of data, analysis and interpretation of data, drafting of the manuscript, P.H.; review and resources, H.Y.; supervision, discussions, analysis and interpretation of data, review and editing, A.K.; review and editing, M.P.; review and editing, S.G.; supervision, discussions, analysis and interpretation of data, review and editing, R.J.V.; project administration, supervision, resources, review and editing, A.Z.K. All authors have read and agreed to the published version of the manuscript.
|
v3-fos-license
|
2023-02-21T14:43:39.926Z
|
2021-01-13T00:00:00.000
|
257044308
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-020-79656-6.pdf",
"pdf_hash": "48368679249ab29d23dc164831cbe3d180f406ea",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:648",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "48368679249ab29d23dc164831cbe3d180f406ea",
"year": 2021
}
|
pes2o/s2orc
|
Altered hippocampal gene expression, glial cell population, and neuronal excitability in aminopeptidase P1 deficiency
Inborn errors of metabolism are often associated with neurodevelopmental disorders and brain injury. A deficiency of aminopeptidase P1, a proline-specific endopeptidase encoded by the Xpnpep1 gene, causes neurological complications in both humans and mice. In addition, aminopeptidase P1-deficient mice exhibit hippocampal neurodegeneration and impaired hippocampus-dependent learning and memory. However, the molecular and cellular changes associated with hippocampal pathology in aminopeptidase P1 deficiency are unclear. We show here that a deficiency of aminopeptidase P1 modifies the glial population and neuronal excitability in the hippocampus. Microarray and real-time quantitative reverse transcription-polymerase chain reaction analyses identified 14 differentially expressed genes (Casp1, Ccnd1, Myoc, Opalin, Aldh1a2, Aspa, Spp1, Gstm6, Serpinb1a, Pdlim1, Dsp, Tnfaip6, Slc6a20a, Slc22a2) in the Xpnpep1−/− hippocampus. In the hippocampus, aminopeptidase P1-expression signals were mainly detected in neurons. However, deficiency of aminopeptidase P1 resulted in fewer hippocampal astrocytes and increased density of microglia in the hippocampal CA3 area. In addition, Xpnpep1−/− CA3b pyramidal neurons were more excitable than wild-type neurons. These results indicate that insufficient astrocytic neuroprotection and enhanced neuronal excitability may underlie neurodegeneration and hippocampal dysfunction in aminopeptidase P1 deficiency.
Results
Gene expression profile of the hippocampus in mice with aminopeptidase P1 deficiency. Xpnpep1 −/− mice exhibit hippocampal pathology and impaired hippocampus-dependent learning and memory 8 . To identify differentially expressed genes in the Xpnpep1 −/− hippocampus, we performed microarray analysis of gene expression profiles in the hippocampal samples from 5-week-old wild-type (WT; Xpnpep1 +/+ ) and Xpnpep1 −/− mice. To avoid possible false-positive interpretations caused by a variation in the individual sample contribution to the pooled sample, each sample was independently hybridized to a single microarray chip and the expression levels of each gene were compared among animals by statistical analyses 11 .
Real-time quantitative reverse transcription PCR (qRT-PCR) confirmed altered gene expression in hippocampal neurons and glial cells.
To validate the alterations in gene expression observed in the microarray, we further examined the mRNA levels of the 31 DEGs, except Xpnpep1, by real-time qRT-PCR, using a different cohort of mice. Primers for each gene were designed to generate 120-250 bp products at pre-determined annealing temperatures (Supplementary Table 1), and hippocampal cDNA was prepared from 5 independent WT and Xpnpep1 −/− mice (4-5 weeks of age). Expression levels of each target gene were determined by the ratio to the reference gene, Gapdh.
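The study reports each target gene as a ratio to the reference gene Gapdh. One common way to compute such ratios from Ct values is the 2^-ΔΔCt method, sketched below with made-up Ct values; this is an assumption about the calculation, not a reproduction of the authors' analysis.

```python
# Sketch of a ratio-to-reference calculation for qRT-PCR data (standard 2^-ΔΔCt),
# with Gapdh as the reference gene. Ct values are invented for illustration.
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_sample = ct_target - ct_gapdh            # normalize to Gapdh within the sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl # normalize within the control (e.g., WT)
    return 2.0 ** -(d_ct_sample - d_ct_control)   # fold change relative to control

print(relative_expression(ct_target=26.1, ct_gapdh=18.0,
                          ct_target_ctrl=24.9, ct_gapdh_ctrl=18.1))
```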
Expression patterns of aminopeptidase P1 in the hippocampus. Aminopeptidase P1 is widely expressed in brain neurons, including hippocampal principal neurons 7 . However, glial expression and subcellular distribution of aminopeptidase P1 in the hippocampus are unclear. Because our anti-aminopeptidase P1 antibody was not suitable for immunohistochemical staining and Xpnpep1 mutant mice express a β-galactosidase (lacZ) reporter under the control of the Xpnpep1 promoter, we labeled aminopeptidase P1-expressing cells in the hippocampal sections from 4- to 5-week-old Xpnpep1 +/− and Xpnpep1 −/− mice by X-gal staining. The sections were then immunostained with the neuronal marker NeuN, the astrocyte markers GFAP (glial fibrillary acidic protein) and S100β (S100 calcium-binding protein beta chain), the microglial marker Iba1 (ionized calcium binding adapter molecule 1), or the oligodendrocyte marker O4. In confocal fluorescence microscopy, we observed that each marker labeled distinct cell populations (Supplementary Fig. 1). Consistent with previous results 7 , the hippocampal principal cell layers, which contain mainly excitatory neurons, exhibited intense X-gal staining signals by light transmission microscopy (Supplementary Fig. 2a-d). Strong punctate signals were mostly detected within the cell body of principal neurons in both Xpnpep1 +/− and Xpnpep1 −/− mice. In addition, we detected dispersed X-gal-positive signals in the dentate gyrus (DG) hilus and outer molecular layer, CA3 stratum lucidum, and CA1 stratum radiatum (Supplementary Fig. 2b-d). When we immunostained the X-gal stained section with NeuN antibodies, we observed X-gal precipitates within the NeuN-positive neuronal somata (Fig. 3a-d and Supplementary Fig. 2g-i). The merged transmittance and fluorescence images further revealed that X-gal signals were present in MAP2-positive dendrites of neurons in the hippocampus (Fig. 3a-d). However, X-gal signals scarcely overlapped with the astrocyte markers GFAP and S100β (Fig. 3e-h). In addition, X-gal signals were rarely detected in the CA3 stratum radiatum, CA1 stratum lacunosum-moleculare, and stratum oriens of CA1 and CA3, whereas astrocytes were abundant in these areas (Supplementary Fig. 3). Similar to astrocytes, X-gal signals did not overlap with microglial (Fig. 3i,j) and oligodendrocyte (Fig. 3k,l) markers.
Previous RNA sequencing studies reported that Xpnpep1 transcripts were present in astrocytes, microglia, and neurons 18,25 . However, when we examined the protein expression levels of aminopeptidase P1 in hippocampal neuron cultures and glial cultures prepared from Xpnpep1 +/+ embryonic mice, the amount of aminopeptidase P1 was significantly higher (> 7 times) in the βIII-tubulin-enriched neuronal lysates than in the GFAP-enriched glial lysates (Supplementary Fig. 4). Collectively, these results suggest that aminopeptidase P1 is predominantly expressed in neurons in the hippocampus. X-gal signals in the hippocampal sections from Xpnpep1-mutant mice are produced by Xpnpep1-β-geo fusion proteins rather than endogenous aminopeptidase P1, indicating that the signal represents aminopeptidase P1-expressing cells but not the subcellular localization of aminopeptidase P1. As X-gal signals were mostly detected in the soma and dendrites of neurons, we further investigated the subcellular localization of aminopeptidase P1 in neurons by co-expressing FLAG- or HA-tagged aminopeptidase P1 with EGFP in cultured hippocampal neurons. To avoid nonspecific localization of aminopeptidase P1 caused by overexpression, we used the IRES (internal ribosome entry site)-containing bicistronic vector (Fig. 4a) because IRES-dependent gene expression is significantly weaker than cap-dependent expression 26 . In contrast to EGFP, which was distributed extensively throughout the transfected neurons, aminopeptidase P1 expressed by IRES-mediated translation was mainly detected in cell bodies and dendrites, but not in MAP2-negative axonal regions in the transfected neurons (Fig. 4b). Consistent with this result, endogenous aminopeptidase P1 was more distributed in the cytosolic fractions of hippocampal lysates (Fig. 4c). These results indicate that aminopeptidase P1 is a cytosolic protein that is mainly distributed in the somato-dendritic compartments of neurons.
Deficiency of aminopeptidase P1 modifies the population of glial cells in the hippocampus. Although aminopeptidase P1 is predominantly expressed in neurons, the DEGs identified by microarray and qRT-PCR indicate that aminopeptidase P1 deficiency affects glial cells in the hippocampus. Since astrocyte alterations are frequently associated with brain disorders 27 , we first determined the density of astrocytes in the hippocampus of 5-week-old mice using the astrocyte markers GFAP and S100β. GFAP expression in astrocytes is known not to be uniform. It is strong in astrocytes compartmentalized to the hippocampus or in reactive astrocytes, while astrocytes in other brain areas exhibit mild to weak GFAP expression in the normal state 28,29 . S100β is less specific than GFAP, but it is broadly expressed in astrocytes 30 . Although both genotypes showed similar cross-sectional areas in the hippocampus (Supplementary Fig. 5a and b), Xpnpep1 −/− mice exhibited fewer GFAP-positive cells in the hippocampus compared to WT mice (Fig. 5a-f). Similarly, the number of S100β-positive cells was decreased in the Xpnpep1 −/− hippocampus. Quantification of the numbers of GFAP- or S100β-positive cells revealed significant differences in the density of hippocampal astrocytes between the two genotypes (Fig. 5d-f). Notably, a reduction in astrocyte density was detected across all hippocampal subregions, including the CA3, CA1, and DG areas (Fig. 5a-f). Considering that many neurodegenerative diseases are accompanied by reactive astrogliosis and that reduced astrocyte density has been observed in major depressive disorder and starvation 27,31 , neurodegeneration accompanying astrocyte reduction in the hippocampus is a unique pathological change in aminopeptidase P1 deficiency. In addition, the morphology of astrocytes in the Xpnpep1 −/− CA3 subfields, determined by GFAP-immunoreactive signals distributed in the cellular processes, was similar to that of control astrocytes (Fig. 5g), despite the presence of vacuoles of varying sizes in the Xpnpep1 −/− CA3 subfields (Fig. 5a,g). Sholl analyses of astrocytic processes showed no difference between the two genotypes (Fig. 5h). This observation also eliminates the possibility of astrocytic gliosis in the Xpnpep1 −/− hippocampus. Consistent with the reduced density of astrocytes, protein expression levels of GFAP were significantly decreased in the Xpnpep1 −/− hippocampus (Fig. 5i,j). However, there is a possibility that GFAP expression in each astrocyte, in addition to the reduced astrocyte density, was reduced by the deficiency of aminopeptidase P1. We next examined microglial populations in the hippocampus using Iba1 and CD68 antibodies. Because Xpnpep1 −/− mice exhibit microcephaly 7 , we selected coronal brain sections that displayed similar hippocampus morphology and size for both genotypes (Supplementary Fig. 5c and d). Iba1-positive signals were detected in the cell body and thin processes of microglia, whereas immunoreactive signals for CD68, the reactive microglia marker 32 , were mainly detected as tiny puncta near the cell bodies but not in entire cell structures in both genotypes of mice (Fig. 6a,b). When we induced microglial activation in WT mice by intraperitoneal injections of lipopolysaccharide (LPS, 1 µg/kg; once daily for 4 days), intense immunoreactivity for CD68 was detected in the cell body and processes of microglia (Supplementary Fig. 6).
Punctate CD68-positive signals in Xpnpep1 −/− microglia suggest that deficiency of aminopeptidase P1 does not induce activation of microglia in the hippocampus (Fig. 6a,b). However, the density of microglia in the CA3 subfields of Xpnpep1 −/− mice was significantly higher than those in WT mice (Fig. 6a,c). Interestingly, both genotypes showed similar numbers of Iba1-positive cells in the DG and CA1 subfields of the hippocampus ( Fig. 6d-g). This finding is consistent with the neuropathology of Xpnpep1 −/− mice in that neurodegeneration was exclusively observed in the CA3 subfield 8 . In addition, some microglia were present but did not accumulate around vacuoles in the CA3 area of Xpnpep1 −/− mice ( Fig. 6a and Supplementary Fig. 7) 33 . When we examined the expression levels of Iba1 and CD68 proteins in the whole hippocampal extracts, the expression levels of these proteins were not significantly different between the two genotypes ( Fig. 6h,i). Collectively, these results suggest that a deficiency of aminopeptidase P1 selectively increases the number of microglial cells in the hippocampal CA3 subfield.
Aminopeptidase P1 deficiency enhances excitability of CA3 pyramidal neurons. Xpnpep1 −/− mice exhibit neurodegeneration and vacuolation in the CA3 area, but the pattern of alteration in the glial population observed in the Xpnpep1 −/− hippocampus is quite different from common neurodegenerative disorders. To understand neuronal changes associated with neurodegeneration, we examined the intrinsic excitability of pyramidal neurons in the CA3b region in the presence of the AMPAR blocker NBQX (10 μM) and the GABA A R blocker picrotoxin (50 μM). When we measured membrane potentials by whole-cell current clamp recording, there was no difference in the resting membrane potentials of CA3b pyramidal neurons between genotypes (+/+, − 77.35 ± 1.30 mV; −/−, − 74.60 ± 1.72 mV; t (22) = − 1.273, p = 0.22 by Student's t-test). However, the amplitude of the action potential (AP) elicited by the short (3-5 ms) depolarizing pulse was significantly lower in Xpnpep1 −/− CA3b neurons than in the WT neurons, while action potential duration (2.63 ± 0.07 vs. 2.55 ± 0.18 ms, t (22) = 0.411, p = 0.69 by Student's t-test) was similar in the two genotypes (Fig. 7a,b). The phase-plane (dV/dt) trajectories of APs recorded from Xpnpep1 −/− CA3 neurons show an apparent reduction of dV/dt in the overshoot phase (> 0 mV) and peak, whereas the AP threshold (− 41.48 ± 0.86 vs. − 41.46 ± 1.39 mV, t (22) = − 0.014, p = 0.98 by Student's t-test) and dV/dt during the initial depolarization and repolarization phases did not change (Fig. 7c). We next determined the passive membrane properties of CA3b pyramidal neurons by measuring the voltage response to a hyperpolarizing current pulse. Unexpectedly, Xpnpep1 −/− CA3b neurons exhibited more hyperpolarized potentials than Xpnpep1 +/+ neurons in response to the same current (− 100 pA) injection (Fig. 7d). The values of input resistance (R in ) for control CA3b neurons were similar to a previous observation 34 , whereas those of Xpnpep1 −/− CA3b neurons were significantly increased (Fig. 7e). In line with this observation, the membrane time constant (τ m ) was significantly greater in Xpnpep1 −/− CA3b neurons than in the WT neurons (Fig. 7f,g). However, the voltage sag ratio, indicative of the activation of hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, in response to the hyperpolarizing current step, did not differ between Xpnpep1 +/+ and Xpnpep1 −/− CA3b neurons, ruling out a reduction of the HCN current.
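Input resistance is obtained from the voltage deflection produced by a known current step (R_in = ΔV/I), and the membrane time constant from an exponential fit to the charging transient. A minimal sketch of the R_in calculation, with invented numbers, is given below.

```python
def input_resistance_mohm(delta_v_mv: float, injected_pa: float) -> float:
    """R_in = ΔV / I. mV per pA gives GΩ, so multiply by 1e3 to report MΩ."""
    return delta_v_mv / injected_pa * 1e3

# Illustrative values: a -15 mV deflection for a -100 pA step -> 150 MΩ.
# (τ_m would typically come from fitting V(t) = V_inf + A*exp(-t/τ) to the transient.)
print(input_resistance_mohm(delta_v_mv=-15.0, injected_pa=-100.0))
```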
Because a hallmark of CA3 pyramidal neurons is burst firing 35,36 , we examined repetitive firing of APs in response to long (500 ms) depolarizing current injections. The firing frequency increased with the intensity of the injected current in both genotypes (Fig. 7h). However, the same current injection produced more spikes in Xpnpep1 −/− CA3b neurons than WT neurons, and the relationship between the firing frequency and input current (F-I curve) shifted upward in the Xpnpep1 −/− CA3b neurons (Fig. 7i). The number of spikes elicited by current steps of 300, 400, and 500 pA was significantly different between the two genotypes. When the firing frequencies were fitted with linear regression between 100 and 300 pA of input current, the gain (slope) of the F-I curve was significantly higher in Xpnpep1 −/− CA3b neurons (0.061 ± 0.007 vs. 0.086 ± 0.008, t (22) = − 3.397, p = 0.025 by Student's t-test). In addition, there was a significant correlation (r 2 = 0.525, p = 0.008) between the slope of the F-I curve and R in of CA3 neurons (Fig. 7j). These results suggest that the increased R in enhances the gain of the input-output relationship and spike firing in Xpnpep1 −/− CA3 neurons. Previous studies have shown that hippocampal CA3 pyramidal neurons receive frequent and large spontaneous excitatory synaptic transmissions at mossy fiber-CA3 synapses and associational/commissural-CA3 synapses 37,38 . When we measured miniature excitatory postsynaptic currents (mEPSCs) in the principal neurons in the DG, CA1, and CA3 subfields of WT mice, both the amplitude and frequency of mEPSCs were significantly higher in CA3b pyramidal neurons compared with CA1 and DG principal cells (Supplementary Fig. 8). These results indicate that enhanced neuronal excitability in the Xpnpep1 −/− CA3 pyramidal neurons and vigorous synaptic excitation at mossy fiber-CA3 synapses and associational/commissural-CA3 synapses may lead to selective excitotoxic cell death of CA3 neurons in the Xpnpep1 −/− hippocampus.
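The gain of the F-I relationship reported above is the slope of a linear fit of firing frequency against injected current over the 100-300 pA range. A short sketch of that fit, using illustrative spike counts rather than the recorded data, is shown below.

```python
import numpy as np

# F-I gain: slope of firing frequency vs. injected current over 100-300 pA.
# The spike counts below are placeholders, not values from the recordings.
current_pa = np.array([100, 150, 200, 250, 300])
spikes_per_step = np.array([3, 6, 10, 13, 16])
firing_hz = spikes_per_step / 0.5            # 500 ms depolarizing steps

gain, intercept = np.polyfit(current_pa, firing_hz, 1)
print(f"F-I gain ≈ {gain:.3f} Hz/pA")
```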
Because the R in of a neuron reflects resting membrane conductance, we measured the membrane current elicited by voltage steps (20 mV increments) between − 120 and − 20 mV from a holding potential of − 70 mV (Fig. 7k). Xpnpep1 −/− CA3b neurons exhibited smaller membrane currents in response to hyperpolarizing (− 120 to − 80 mV) and near-rest depolarizing (− 60 mV) voltage steps, while the magnitudes of outward currents induced by higher depolarization (− 40 and − 20 mV) were not significantly different in CA3b neurons from both genotypes (Fig. 7l). This result indicates that reduction of channels contributing to the resting conductance is responsible for the enhanced R in of CA3 neurons in the Xpnpep1 −/− mice.
Discussion
Although inborn errors of metabolism (IEMs) are common causes of brain dysfunction and intellectual disability, the molecular and cellular changes associated with brain dysfunction are unknown in most metabolic diseases. The present study demonstrates previously unknown changes in the hippocampus in an inherited metabolic disease, in which long-term exposure of the brain to the altered metabolic status in aminopeptidase P1 deficiency modifies hippocampal gene expression, the glial population, and neuronal excitability in CA3 neurons. Specifically, the density of microglia in the hippocampal CA3 subfield was higher in Xpnpep1 −/− mice than in WT mice, whereas the mutant mice exhibited fewer astrocytes in the hippocampus. Interestingly, astrocyte activation or reactive gliosis was not detected in the Xpnpep1 −/− hippocampus. Xpnpep1 −/− CA3 pyramidal neurons exhibited enhanced gain in the input-output relationship, such that the same current injection produced more spikes in Xpnpep1 −/− neurons than in WT neurons. Thus, an aberrant glial environment and enhanced excitability in CA3 neurons might cause neurodegeneration and hippocampal dysfunction in aminopeptidase P1 deficiency. As IEMs are usually caused by defects in enzymes, accumulation of substrates for the defective enzyme in the cerebrospinal fluid is considered to cause brain dysfunction and injury, but long-term changes in brains exposed to the accumulated substrates are unclear. In the present study, we identified 14 down-regulated genes in the Xpnpep1 −/− hippocampus through microarray and qRT-PCR. There is a possibility that some downregulated genes that are abundant in astrocytes might be identified as DEGs due to the reduction of astrocyte density in the Xpnpep1 −/− hippocampus. Intriguingly, a previous report showed that Slc6a20 mRNA was mainly detected in microglia, meninges, and choroid plexuses in the brain 12 . Although Xpnpep1 −/− mice exhibit an increased number of microglia in the hippocampal CA3 subfields, the expression levels of Slc6a20a transcripts in the hippocampus were decreased. Slc6a20a transports imino acids, including proline and hydroxyproline, through Na+- and Cl−-dependent mechanisms 12,13 . The reduction of Slc6a20a in the Xpnpep1 −/− hippocampus probably indicates an altered imino acid gradient across the plasma membrane of brain cells and further suggests adaptive changes in the Xpnpep1 −/− brain. Similarly, the reduced mRNA expression of Slc22a2 (OCT2), a transporter for norepinephrine (NE) and serotonin (5-HT), seems to be associated with adaptive changes in the Xpnpep1 −/− brain. Slc22a2 is highly expressed in limbic neurons including CA1 and CA3 pyramidal neurons, but not in astrocytes in the brain, and contributes to the clearance of NE and 5-HT, which suppress the firing activity of CA3 pyramidal neurons 39 . Reduction of Slc22a2 may slow the clearance of NE and 5-HT, thereby intensifying the suppression of abnormally excitable CA3 neurons in Xpnpep1 −/− mice. Another interesting finding from the gene expression profile of the Xpnpep1 −/− hippocampus was the down-regulation of desmoplakin (Dsp). Dsp is exclusively expressed in dentate granule cells (GCs) in the hippocampus, and the expression level of Dsp increases with postnatal development 14,22 .
Considering that reduced Dsp expression is a sign of the "immature dentate gyrus (iDG)", which is frequently observed in genetically engineered mice with abnormal behaviors 40 , our results suggest that neurodegeneration and delayed neurodevelopment coincide in aminopeptidase P1 deficiency 41 . Intriguingly, mice with the iDG phenotype exhibit hyperexcitability of the dentate GCs. The existence of developmental retardation, hyperlocomotion, and impaired hippocampus-dependent learning in Xpnpep1 −/− mice 7,8 means it is likely that iDG, in addition to the hyperexcitable CA3 pyramidal neurons, contributes to hippocampal dysfunction. However, the excitability of Xpnpep1 −/− GCs requires further investigation.
Although Xpnpep1 −/− mice exhibit neurodegenerative cell death and vacuolation in the hippocampal CA3 region, glial alterations in the Xpnpep1 −/− hippocampus were quite different from those observed in common neurodegenerative diseases in that the density of astrocytes and expression levels of GFAP were reduced in the Xpnpep1 −/− hippocampus, while the density of microglia was increased specifically in the CA3 subfields. Intriguingly, microglial activation was not observed in the Xpnpep1 −/− mice. This observation suggests that the increased number of microglia in the Xpnpep1 −/− CA3 area might be associated with physiological housekeeping functions, such as migration to sites of neuronal death to phagocytose cellular debris or apoptotic cells, rather than initiation or exacerbation of neurodegeneration 33 .
Notably, consistent with the previous in situ hybridization results ( Supplementary Fig. 2) 42 , we observed predominant Xpnpep1-expression in neurons, but relatively weak expression in glial cells ( Fig. 3 and Supplementary Fig. 4). These observations indicate that the substrates of aminopeptidase P1 in the hippocampus are mostly cleared within the neurons and that accumulation of undigested peptide substrates containing a penultimate proline residue in the cerebrospinal fluid influences the glial population and neuronal excitability. Considering that a variety of oligopeptides with N-terminal X-pro motifs exhibit diverse biological activities 10 , long-term exposure of brain cells to the substrates of aminopeptidase P1 seems to result in altered gene expression in brain cells and neuro-glial interactions.
Activation of astrocytes with an increased expression of GFAP and S100β is a hallmark of brain diseases, including common late-onset neurodegenerative diseases, ischemic brain injuries, and epilepsy 27,30,43,44 . Moreover, phenylketonuria and homocystinuria, inborn errors of metabolism caused by the deficiency of phenylalanine hydroxylase and cystathionine β-synthase respectively, also exhibit gliosis in the brain 45,46 . Meanwhile, chronic unpredictable stress and starvation induced a reduction in astrocytes in the cerebral cortex 31,47 . A recent study showed that depletion of astrocytes by treatment with the gliotoxin L-α-aminoadipic acid (L-α-AAA) did not induce neuronal death in the hippocampal CA3 area without insults such as ischemia, but that the loss of astrocyte produced persistent Ca 2+ increase in the CA3 neurons after ischemia and reperfusion 48 . Thus, the reduction of astrocytes in the Xpnpep1 −/− hippocampus likely results in insufficient neuroprotection against intracellular Ca 2+ load during burst firing or the hyperactivation of neurons rather than direct induction of neurodegeneration in the Xpnpep1 −/− CA3 subfield. It has been suggested that S100β, an astrocytic calcium-binding protein, protects neurons against NMDA-induced cell death by activating nuclear factor κB (NF-κB) signaling 49 . In addition to neuroprotection, S100β released from astrocytes regulates neuronal activity and oscillations 50,51 . Therefore, fewer astrocytes and resultant insufficient S100β release seem to be associated with hippocampal dysfunction in Xpnpep1 −/− mice.
In the present study, we found that the excitability and AP waveform of CA3b neurons were changed by aminopeptidase P1 deficiency. The peak amplitudes of APs and the derivative (dV/dt) of membrane potentials during the overshoot phase were significantly reduced in the Xpnpep1 −/− CA3 neurons, whereas disruption of Xpnpep1 had no effect on other AP parameters including resting membrane potentials, AP threshold, AP duration, and dV/dt during the initial depolarization and repolarization phases. The reduction of AP peak amplitudes and dV/dt during the overshoot phase is likely caused by alterations in Na + currents but not by increased I A (A-type K + channel current) or I D (D-type K + channel current), because blockade of I A and I D slowed the repolarization of CA3 neurons without affecting the peak amplitude of the AP 52,53 . Despite the reduced AP amplitudes, the same current injection produced more spikes in Xpnpep1 −/− CA3b neurons than WT neurons because of the increased input resistance (R in ). The enhanced excitability with reduced AP amplitude resembles oxytocinergic modulation of the intrinsic properties of CA2 pyramidal neurons, in which the reduction of AP amplitude induced by the activation of the oxytocin receptor was blocked by intracellular calcium chelation 54 . In hippocampal CA3 pyramidal neurons, Na V 1.2 is distributed to the soma and along the axon evenly, while Na V 1.6 is concentrated at the distal axon initial segment (AIS) 55 . Based on this subcellular localization, Na V 1.2 is considered to play a critical role in AP propagation and somatic AP generation, whereas Na V 1.6 seems to determine the initiation and threshold of the AP 55,56 , indicating that Na V 1.2 is probably associated with reduced dV/dt during the overshoot phase in Xpnpep1 −/− CA3 neurons. Interestingly, haploinsufficiency of Na V 1.2 in excitatory but not inhibitory neurons resulted in absence-like seizures with epileptiform electrocorticography (ECoG) activities in mice 57 . This finding implies that reduction of Na + channel expression does not necessarily reduce excitability in neurons, but that reduced function of certain isoforms of Na + channels can enhance neuronal activity. However, it should be noted that the ionic mechanisms enhancing intrinsic excitability and reducing the amplitude of the AP in Xpnpep1 −/− CA3 neurons are dissociable. We observed that the slope (gain) of the firing frequency (output) versus injected current (input) relationship correlated well with the R in of CA3b neurons (Fig. 7). This observation indicates that the altered R in resulting from the decreased resting conductance is mainly responsible for enhanced AP firing in Xpnpep1 −/− CA3 neurons. As spontaneous excitatory synaptic inputs onto CA3 neurons are more frequent and larger than those onto DG or CA1 neurons (Supplementary Fig. 8), it is conceivable that vigorous spike firing by enhanced excitability and robust synaptic excitation result in selective neurodegeneration of CA3 neurons in Xpnpep1 −/− mice, despite the fact that they have fewer astrocytes in the entire hippocampus. Although the ion channels responsible for the reduced resting conductance in Xpnpep1 −/− CA3 neurons require further identification, the present study provides cellular mechanisms underlying hippocampal dysfunction and CA3 neurodegeneration in aminopeptidase P1 deficiency.
Methods
Animals. Generation and genotyping of Xpnpep1 knockout mice have been previously described 7 . Mice were backcrossed to C57BL/6J and 129S4/SvJae mice for at least 10 generations before use. All analyses were performed on littermates of both genotypes generated by intercrosses between C57BL/6J and 129S4/SvJae heterozygous parents. Animals were housed 4-5 per cage in a specific pathogen-free facility and maintained in a climate-controlled room with free access to food and water on a 12 h light/dark cycle with the light on at 7:00. Animal maintenance and all animal experiments were performed under protocols approved by the Institutional Animal Care and Use Committee (IACUC) at Seoul National University. All methods were carried out in accordance with the Guidelines for the Care and Use of Mammals in Neuroscience and Behavioral Research (National Research Council, US).
Microarray and qRT-PCR.
Hippocampi were removed from 4- to 5-week-old mice of both sexes and incubated in RNA stabilization reagent (RNAlater, Qiagen, USA) at 4 °C overnight. Total hippocampal RNA was prepared using QIAzol reagent and then cleaned using the RNeasy Mini Kit (Qiagen, USA) according to the manufacturer's protocol. To evaluate the integrity of the prepared RNA samples, the RNA integrity number (RIN) was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, USA), and samples with RIN (+/+, 7.37 ± 0.025; −/−, 7.25 ± 0.095) higher than 7.1 were used for further processing. The purity of RNA samples was evaluated using a spectrophotometer (NanoDrop ND-1000, Thermo Fisher Scientific, USA) by measuring the A260/A280 ratio (+/+, 2.057 ± 0.029; −/−, 2.06 ± 0.012) and the A260/A230 ratio (+/+, 1.93 ± 0.14; −/−, 1.92 ± 0.21). Synthesis of first- and second-strand (ss) cDNA, cRNA amplification and purification, second-cycle synthesis of ss-cDNA and purification, fragmentation of ss-cDNA, and biotinylation of the fragmented cDNA were conducted according to the Affymetrix GeneChip procedure. The labeled fragmented cDNA was hybridized to the microarray chip (GeneChip Mouse Gene 1.0 ST array, Affymetrix, USA) containing 28,853 gene-level probe sets (770,317 distinct probes), and the hybridized probe array was stained with streptavidin-coupled fluorescent dye. The stained arrays were scanned with an Affymetrix GeneChip 3000 scanner, and the signal intensity of the gene expression levels was determined using Affymetrix Expression Console software. Hierarchical clustering and heatmap generation were performed using Morpheus software (https://software.broadinstitute.org/morpheus/).
For qRT-PCR, first-strand cDNAs were synthesized from the total RNA with oligo (dT) primer using AMV reverse transcriptase (NEB, MA, USA) at 37 °C for 1 h. The cDNA templates were mixed with forward and reverse primers (Supplementary Table 1) and IQ SYBR Green Supermix (Bio-Rad, CA, USA). The real-time PCR analyses were performed using the CFX Connect Real-Time PCR detection system (Bio-Rad) using the following thermal cycling protocol: initial denaturation for 10 min at 95 °C; and 40 cycles alternating 15 s at 95 °C and 1 min at melting temperature. Melting curves and data analyses were performed with Precision Melt Analysis Software and CFX Manager software (Bio-Rad). The housekeeping gene Gapdh was used as a reference gene. The specificity and efficiency of all primer pairs were confirmed by RT-PCR and agarose gel electrophoresis ( Supplementary Fig. 9).
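For readers who wish to reproduce the downstream analysis, the relative expression values implied by the use of Gapdh as a reference gene can be computed with the standard 2^(−ΔΔCt) method. The short Python sketch below is illustrative only: the 2^(−ΔΔCt) formula, the function name, and the example Ct values are assumptions for illustration and are not taken from the study.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative quantification by the standard 2^(-ddCt) method (assumed here).

    ct_target / ct_reference: Ct values of the gene of interest and of the
    reference gene (here Gapdh) in the experimental sample.
    ct_*_ctrl: the corresponding Ct values in the control (calibrator) sample.
    """
    d_ct_sample = ct_target - ct_reference            # normalize to Gapdh
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                # compare to control sample
    return 2.0 ** (-dd_ct)                            # fold change vs. control

# Hypothetical example: target Ct 24.1 (KO) vs. 23.0 (WT), Gapdh Ct 18.0 in both
print(relative_expression(24.1, 18.0, 23.0, 18.0))    # ~0.47-fold of control
```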
Primary neuron culture and transfection. Hippocampi were collected from embryonic day 18-19 rats and were incubated in HBSS containing 2.5% trypsin at 37 °C for 20 min. After rinsing with HBSS 3 times, neurons were dissociated by repeated trituration with a fire-polished Pasteur pipet and were plated on coverslips coated with poly-l-lysine and laminin. Neurons were cultured in neurobasal medium supplemented with B27 (Invitrogen), 2 mM l-glutamine, 1% (v/v) penicillin/streptomycin (100 U/ml, Gibco), and 2% fetal bovine serum (Gibco) in a 10% CO 2 incubator. For the IRES-mediated expression of epitope (HA or Flag)-tagged aminopeptidase P1 in cultured neurons, the eGFP sequence was PCR-amplified from the pEGFP-N1 vector and inserted into the pGW1 vector at the HindIII and KpnI sites. The vector was serially digested with KpnI, Mung bean nuclease, and BglII to yield upstream blunt and downstream sticky ends. The IRES sequence was PCR-amplified from the pIRES2-EGFP vector using a 5′-phosphorylated forward primer and a reverse primer containing the recognition sequence for BglII, digested with BglII, and then ligated to the digested pGW1-eGFP vector. In parallel, Xpnpep1 cDNA was amplified from rat hippocampal cDNA using PCR primers containing the SalI recognition sequence (forward primer) and FLAG- or HA-EcoRI sequences (reverse primer). Epitope (HA or FLAG)-tagged Xpnpep1 was inserted into the pGW1-eGFP-IRES vector at the SalI and EcoRI sites. The sequence-verified constructs were transfected into cultured hippocampal neurons at DIV 12 using a calcium phosphate transfection kit (Takara Bio Inc. and Promega), and neurons were fixed with 4% paraformaldehyde/4% sucrose at DIV 14.
Glia-free hippocampal neuron cultures and neuron-free glial cultures were prepared from embryonic day 18-19 mice according to the protocol described above. To establish glia-free neuronal cultures, dissociated hippocampal cells were cultivated in serum-free neurobasal medium, and cells were treated with the antimitotic agent AraC (3 μM; Sigma) for 8 days from DIV 12. AraC was then removed from the culture by washing the cells with fresh neuron culture medium, and neurons were harvested at DIV 21. To obtain neuron-free glial cultures, dissociated hippocampal cells were cultured in Dulbecco's modified Eagle's medium containing 2.5 mM glucose, 4 mM l-glutamine, 3.7 g/L sodium bicarbonate, 10% (v/v) FBS, 1 mM sodium pyruvate, and 1% (v/v) penicillin/streptomycin. The cell culture medium was replaced with fresh media once every 3 days. To remove neurons, cells were detached with 0.25% trypsin-EDTA at DIV 9 and plated in a new culture dish. The cells were harvested at DIV 19 when they reached 90-100% confluence.

Western blot analysis. Protein samples were prepared in a sample buffer containing glycerol, 12.5 mM EDTA, 0.02% (w/v) bromophenol blue, and 5% (v/v) 2-mercaptoethanol. Samples containing 10-15 μg of protein were loaded onto SDS-PAGE gels. The separated proteins were transferred to nitrocellulose membranes. The membranes were blocked in Tris-buffered saline (TBST, 0.1% Tween 20) containing 5% skim milk for 30 min at room temperature, and then successively incubated with diluted primary and horseradish peroxidase (HRP)-conjugated secondary antibodies for 1 h at room temperature. After each step, the membranes were rinsed 3 times for 10 min with TBST. The HRP signals were detected by enhanced chemiluminescence (GE Healthcare, UK). The production of the polyclonal aminopeptidase P1 antibody has been described previously 7 . Anti-α-tubulin (Cat. # T5168) and anti-βIII-tubulin (Cat. # T8660) antibodies were purchased from Sigma-Aldrich (USA). All western blot experiments were independently repeated at least 3 times, and band intensities were quantified using MetaMorph software (Molecular Devices, Sunnyvale, USA).
Electrophysiology. Electrophysiological recordings from hippocampal slices were performed as described previously 59 . Briefly, hippocampal slices (400 μm) were prepared from 4- to 5-week-old male mice using a vibratome (VT1000S; Leica, Germany) in ice-cold dissection buffer (in mM: sucrose 213, NaHCO 3 26, KCl 2.5, NaH 2 PO 4 1.25, d-glucose 10, Na-pyruvate 2, Na-ascorbate 1.3, MgCl 2 3.5, and CaCl 2 0.5, bubbled with 95% O 2 /5% CO 2 ). The slices were recovered at 36 °C for 1 h in artificial cerebrospinal fluid (aCSF; in mM: NaCl 125, NaHCO 3 26, KCl 2.5, NaH 2 PO 4 1.25, d-glucose 10, MgCl 2 1.3, and CaCl 2 2.5), followed by maintenance at room temperature until use. Individual slices were then transferred to a recording chamber and perfused with aCSF at 30 °C. Whole-cell patch clamp recordings were performed using a MultiClamp 700B amplifier (Molecular Devices, USA). Signals were low-pass filtered at 2.8 kHz and digitized at 10 kHz using a Digidata 1440A digitizer (Molecular Devices, USA). Membrane potentials or currents of CA3b neurons were recorded using a pipette (3-4 MΩ) solution containing (in mM) K-gluconate 110, KCl 20, NaCl 8, HEPES 10, Mg-ATP 4, Na-GTP 0.3, and EGTA 0.5 (pH 7.25, 290 mOsm). During the recording, the GABA A R blocker picrotoxin (50 μM), the AMPAR blocker NBQX (10 μM), and the NMDAR blocker AP-5 (50 μM) were included in the aCSF. Recordings were started 10 min after establishment of the whole-cell configuration. The series resistance (< 10 MΩ) and seal resistance (> 1 GΩ) were monitored before and after recordings by applying a short (50 ms) hyperpolarizing voltage pulse (− 5 mV), and the data were discarded if the resistance changed by more than 20% during the recording. In the current clamp experiments, neurons displaying an unstable resting membrane potential (RMP) at the beginning of or during the recording were discarded. Action potentials (APs) were evoked by a brief (2-3 ms) minimal current (0.6-1 nA) injection. The amplitude of the AP was measured as the difference between the peak voltage of the spike and the baseline voltage (RMP). The AP threshold was defined as the membrane potential at the clear inflection point between the electrotonic potential and the AP. AP duration was measured from threshold to 90% repolarization. Miniature excitatory postsynaptic currents (mEPSCs) were recorded at a holding potential of − 60 mV in hippocampal principal cells in the presence of the GABA A R blocker picrotoxin (50 µM) and tetrodotoxin (1 µM) in the aCSF. All data were analyzed using custom macros written in Igor Pro 6 (WaveMetrics, OR, USA). All chemicals were purchased from Sigma-Aldrich (USA), except for picrotoxin, NBQX, and AP-5 (Tocris, UK).
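As a rough illustration of how the AP parameters defined above (amplitude relative to RMP, threshold at the inflection point, and duration from threshold to 90% repolarization) can be extracted from a digitized trace, the following Python sketch operates on a single-spike voltage recording sampled at 10 kHz. It is a minimal sketch under stated assumptions (a fixed baseline window and a fixed dV/dt criterion approximating the inflection-point threshold), not the custom Igor Pro macros used in the study.

```python
import numpy as np

def ap_features(v, dt=1e-4, dvdt_thresh=20.0):
    """Extract basic AP features from a single-spike voltage trace.

    v: membrane potential [mV], sampled every dt seconds (10 kHz -> dt = 1e-4 s).
    dvdt_thresh: dV/dt criterion [mV/ms] used here as a proxy for the
    inflection-point threshold definition (an assumption of this sketch).
    """
    rmp = v[:50].mean()                        # baseline (RMP) from the first 5 ms
    dvdt = np.gradient(v, dt) / 1000.0         # derivative in mV/ms
    thr_idx = np.argmax(dvdt >= dvdt_thresh)   # first sample crossing the criterion
    peak_idx = np.argmax(v)
    amplitude = v[peak_idx] - rmp              # peak amplitude relative to RMP
    threshold = v[thr_idx]
    # 90% repolarization of the threshold-to-peak excursion
    target = v[peak_idx] - 0.9 * (v[peak_idx] - threshold)
    repol_idx = peak_idx + np.argmax(v[peak_idx:] <= target)
    duration_ms = (repol_idx - thr_idx) * dt * 1000.0
    return {"RMP": rmp, "threshold": threshold,
            "amplitude": amplitude, "duration_ms": duration_ms}
```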
|
v3-fos-license
|
2021-05-17T13:20:59.071Z
|
2021-05-17T00:00:00.000
|
234683716
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnbot.2021.679570/pdf",
"pdf_hash": "51bf23037fec61bee1895bc2fffbc70fcb770c12",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:652",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "51bf23037fec61bee1895bc2fffbc70fcb770c12",
"year": 2021
}
|
pes2o/s2orc
|
Reproducing Human Arm Strategy and Its Contribution to Balance Recovery Through Model Predictive Control
The study of human balance recovery strategies is important for human balance rehabilitation and humanoid robot balance control. To date, many efforts have been made to improve balance during quiet standing and walking motions. Arm usage (the arm strategy) has been proposed in the literature to control balance during walking motion. However, limited research exists on the contributions of the arm strategy to balance recovery during quiet standing alongside the ankle and hip strategies. Therefore, in this study, we built a simplified model with arms and proposed a controller based on nonlinear model predictive control to achieve human-like balance control. Three arm states of the model, namely, active arms, passive arms, and fixed arms, were considered to discuss the contributions of arm usage to human balance recovery during quiet standing. Furthermore, various indexes, such as the root mean square deviation of the joint angles and the recovery energy consumption, were verified to reveal the mechanism behind arm strategy employment. In this study, we computationally reproduce human-like balance recovery with and without arm rotation during quiet standing while applying different magnitudes of perturbing forces to the upper body. In addition, the conducted human balance experiments are presented as supplementary information in this paper to demonstrate the concept with a typical example of the arm strategy.
INTRODUCTION
The balance control mechanism of humans has been researched to enhance the balance ability of humans and humanoid robots (Winter, 1995). Specifically, the principal balance recovery strategies, namely the ankle, hip, and stepping strategies, have been studied based on human experiments (Nashner, 1985; Horak and Nashner, 1986; Horak et al., 1990) and artificial systems (Kuo and Zajac, 1993; Kuo, 1995; Shen et al., 2020b). These strategies have been considered efficient means to help prevent falls and to analyze the mechanism of balance control during standing and walking motions in human rehabilitation and humanoid robot control. For instance, the dynamic stability of the human upright posture (UP) with a simplified inverted model or hip-ankle model has been studied based on bifurcation analyses to improve balance ability related to fall prevention and rehabilitation (Chagdes et al., 2013; Chumacero et al., 2018). Additionally, the arm strategy has been considered an efficient means to contribute to balance control and reduce the effects of a fall (Marigold and Patla, 2002; Roos et al., 2008; Pijnappels et al., 2010; Shen et al., 2020a).
Many studies related to the arm strategy have been conducted through human experiments and simulations. Cordo and Nashner (1982) studied rapid postural adjustment associated with a class of voluntary movements, including arm rotation, that disturb the postural balance. Ledebt (2000) concluded that arm postures help stabilize the body to maintain the upright position and that balance control improves because of the arm movement. Furthermore, he considered maximization of the gait efficiency based on an organism's propensity for convergence toward a stable coordination between the arms and legs. Atkeson and Stephens (2007) studied optimal control with boundary constraints from one optimization criterion to realize a multi-link model balance control and observed the movement of shoulder joints for the different perturbations. Aoustin et al. (2008) showed that arm swinging can help minimize the energy consumption during walking. Nakada et al. (2010) reviewed the mechanism of arm strategy for balance recovery and proposed Q-learning to produce appropriate arm control torques for humanoid. They concluded that the arm rotation strategy can widen the range of perturbation impulses. Bruijn et al. (2010) studied the influence of arm swinging on balance control for a perturbation as well as the local and global stability of the steady-state gait and concluded that arm movements contributed to the overall stability of human gait. Milosevic et al. (2011) estimated the effectiveness of arm motions in clinical balance and mobility. Boström et al. (2018) verified that in a dynamic balance task during challenged locomotion, the contribution of the upper body motions, particularly the one of arm movements, to human balance regulation increases with the difficulty of the task. The considered balance recovery tasks are in anteroposterior (A/P) direction. Objero et al. (2019) showed that arm movements are important for the control of mediolateral (M/L) postural sway, based on human experimental data.
It is worth noting that none of the previous works covered the verification of the arm strategy with multiple cases, e.g., active arms, passive arms, and fixed arms, in their human experiments to discuss the usefulness of arm rotations. To the best of our knowledge, these arm strategies are relevant for stability improvement and energy efficiency in human and humanoid/bipedal walking and standing. Furthermore, previous works did not leverage nonlinear model predictive control (NMPC) to address the multiple constraints on the ankle, hip, and arm joint angles and torques and to reproduce a human-like balance recovery controller in their artificial systems. The features of NMPC that are consistent with the capacity of the human body and brain, such as constraint handling, a predictive horizon, optimization, and robustness, were not well considered in previous work.

Therefore, we further investigated the mechanism of the arm strategy for balance recovery, building on previous works, and compared the results with human balance recovery experimental results. The contributions of our study are summarized as follows.

(1) A three-joint, five-link model is built to represent the human body structure for studying quiet-standing balance recovery in the A/P direction. This model includes the foot, the lower body, the upper body, and the arms. (2) An NMPC with system state and input constraints is proposed from a neuroscience perspective to reproduce the human-like balance behavior evoked by the human central nervous system. (3) Various indexes are verified to evaluate the capability of balance recovery. The root mean square (RMS) deviation and energy consumption are compared for the different cases, namely active arms, passive arms, and fixed arms; these three cases of arm usage are recruited for balance recovery.

The obtained data indicate that balance recovery with active arms is the most effective strategy and that balance control with arm usage is better than that without arm usage. (4) Phase portraits of the joint angles and of the whole-body center of mass (WB-CoM) are considered to analyze the control pattern of the balance recovery motion. (5) Ankle torque boundary constraints are set to different values, and the relationship between ankle capacity and active arm usage is discussed, since the ankle is easily injured in daily life and we want to observe how arm usage contributes to balance in this case. (6) By comparing the results of the numerical simulation and the human experiments, human-like balance recovery with the arm strategy is implemented, and arm movements are found to enhance the capability of balance recovery.

The paper is organized as follows. Section 2 first introduces the simplified model with the three different arm usages and its dynamic equations; the balance recovery controller based on NMPC is then proposed in Section 2.2. In Section 3, the results of the simulation and human experiments are discussed to verify whether actuated arm usage contributes to balance control. The conclusions of this study and future work are summarized in Section 4.
Dynamic Equation of Simplified Models
To achieve quiet standing balance control, we regard the human body structure as a simplified three-joint, five-link model consisting of a left-right arm joint, a hip joint, and an ankle joint, together with the right arm, left arm, upper body, lower body, and fixed foot (Figure 1). Table 1 summarizes the physical parameters of our model. Based on an existing anthropometric database (Kouchi et al., 2000) and the previous work (Atkeson and Stephens, 2007) dealing with an optimization-based balance recovery strategy, the height and mass of the whole body are 1.7 [m] and 69.3 [kg], respectively. Further, m 4 , m 3 , m 2 , m 1 , and m 0 represent the masses of the left arm, right arm, upper body, lower body, and foot, respectively; L 4 , L 3 , L 2 , L 1 , and L 0 represent the lengths of the left arm, right arm, upper body, lower body, and foot, respectively; and q 3 , q 2 , and q 1 represent the left-right arm angle, hip angle, and ankle angle, respectively.

FIGURE 1 | Structure of the three-joint and five-link model. m 4 , m 3 , m 2 , m 1 , and m 0 represent the masses of the left arm, right arm, upper body, lower body, and foot, respectively. L 4 , L 3 , L 2 , L 1 , and L 0 represent the lengths of the left arm, right arm, upper body, lower body, and foot, respectively. q 3 , q 2 , and q 1 represent the left-right arm angle, hip angle, and ankle angle, respectively. Here, the right arm and left arm share the same joint motor.

Note that the body segments between the head and the left-right arm joint, between the left-right arm joint and the hip joint, and between the hip joint and the ankle joint are ignored. First, the dynamic equations of motion for this three-joint, five-link model controlled by the arm, hip, and ankle joint torques are computed based on Lagrange mechanics (Paul, 1981). The Lagrange equations and dynamic equations of motion are derived separately for the model with three different arm states, namely active arms, passive arms, and fixed arms, as shown in Table 2.

In that table, T and V represent the kinetic and potential energy, respectively; τ arm , τ hip , and τ ankle represent the arm torque, hip torque, and ankle torque, respectively; M 11 , M 12 , M 13 , M 21 , M 22 , M 23 , M 31 , M 32 , and M 33 are the inertia terms; and C 1 , C 2 , and C 3 denote the total centrifugal, Coriolis, and gravity forces.
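For orientation, the dynamic equation of motion summarized in Table 2 can be written compactly in the standard manipulator form using the notation introduced above; the LaTeX sketch below is a generic template consistent with that notation rather than the paper's exact expressions for each arm state.

```latex
% Generic manipulator-form template for the three-joint model (illustrative)
M(q)\,\ddot{q} + C(q,\dot{q}) = \tau, \qquad
\begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}
\begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \\ \ddot{q}_3 \end{bmatrix}
+
\begin{bmatrix} C_1 \\ C_2 \\ C_3 \end{bmatrix}
=
\begin{bmatrix} \tau_{\mathrm{ankle}} \\ \tau_{\mathrm{hip}} \\ \tau_{\mathrm{arm}} \end{bmatrix}
```

For the passive-arm case, the arm torque entry would presumably be zero (the arm joint is unactuated), and for the fixed-arm case the model reduces to the ankle and hip joints only, which is consistent with the constraint dimensions used in the controller section below.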
Proposed NMPC for Balance Recovery
In this section, an NMPC scheme (Grüne and Pannek, 2017) is proposed to solve the balance recovery problem. This problem can be solved as an iterative open-loop optimal control problem with a finite horizon and observable initial states at each sampling time. The procedure of the NMPC with constraints is illustrated in Figure 2 to strengthen the explanation of the NMPC concept. For example, let the NMPC start at k = 0 with a prediction horizon N t (here N t = 5) and the initial states x(0) = x. The predictive optimal control sequence for the entire horizon can be obtained as

τ opt (0, N t − 1) = {τ opt (0), τ opt (1), …, τ opt (N t − 1)}.   (1)

The sequence of the predicted states is denoted by

x opt (0, N t ) = {x opt (0), x opt (1), …, x opt (N t )}.   (2)

Then, the first sample of the optimal control sequence, τ opt (0), is applied to the system to produce the state x(1), and the initial state is updated to x(1) for the new optimal control problem at the sampling time k = 1. The above-described optimization process is then repeated with the concept of a receding (moving) horizon to obtain a new optimal control sequence for the current system, and the new initial states are computed for the next optimization. Therefore, NMPC can be considered a receding-horizon iterative optimal control algorithm. The cost function considered in the optimal control problem of the NMPC is given by

J(x(0), τ(0, N t − 1)) = Σ_{k=0}^{N t − 1} [ x(k)ᵀ Q x(k) + τ(k)ᵀ R τ(k) ] + x(N t )ᵀ Q f x(N t ).   (3)

The penalty weighting dimensions and constraints of the NMPC differ for the model with the three different arm states (active arms, passive arms, and fixed arms). In the cost function (3), Q, Q f , and R are positive definite symmetric matrices. The states and the control torques can be penalized by tuning Q and R, respectively: increasing Q aims to minimize the state tracking error, while increasing R reduces the energy consumption. In this research, the ratio between Q and R for the three cases of arm usage is set to the same value, 10^3, referred to as one optimization criterion (Atkeson and Stephens, 2007). Further, the terminal weighting Q f = 10^5 can be used as a tuning parameter to penalize the terminal states and achieve stable NMPC performance.
The objective is to minimize the cost J(x(0), τ(0, N t − 1)) subject to the following control input and state bounds for the model with the three different arm strategies.

(1) NMPC for Model with Active Arms

For i a = 1, 2, 3, which represent the ankle, hip, and arm joints, respectively, and k a = 0, ..., N t − 1, the boundary settings of the control inputs were selected based on the work of Atkeson and Stephens (2007), in which a constraint-based optimization is proposed for a multi-strategy balance recovery.

TABLE 2 | Lagrange equations and dynamic equations of motion for the model with three arm states: (1) active arms; (2) passive arms; (3) fixed arms.

FIGURE 2 | Schematic description of the NMPC at time k. The proposed dynamic model is recruited to predict the future motion state sequence x opt of the model system and to compute the optimal control input sequence τ opt for balance recovery based on the current state by solving an optimization problem. For instance, the NMPC starts at k = 0 with a prediction horizon N t (here N t = 5) and the initial states x(0).
For all i a = 1, ..., 6 and k a = 0, ..., N t , the states, including the angles and angular velocities of the ankle, hip, and arm joints, are bounded. It is necessary to point out that the first three elements of x denote the joint angles and the last three elements represent the angular velocities; this is why the unit changes from [rad] to [rad/s]. For implementation purposes, we simply set the velocity boundaries to negative/positive infinity to keep a wide range of admissible velocity values. However, based on the obtained results, the evolution of the velocities remains very reasonable, i.e., within the interval [−1.2, 1.2] rad/s.

(2) NMPC for Model with Passive Arms

For i p = 1, 2, representing the ankle and hip joints, respectively, and k p = 0, ..., N t − 1, the control inputs are bounded. For all i p = 1, ..., 6, representing the joint angles and angular velocities of the ankle, hip, and arm, and the prediction horizon k p = 0, ..., N t , the states are bounded by the same constraint settings as in the case with active arms.

(3) NMPC for Model with Fixed Arms

For i f = 1, 2, representing the ankle and hip joints, respectively, and k f = 0, ..., N t − 1, the control inputs are bounded. For all i f = 1, ..., 4, representing the joint angles and angular velocities of the ankle and hip, and the prediction horizon k f = 0, ..., N t , the system states are bounded.

With the system states and the input constraints, the NMPC is proposed from a neuroscience perspective to reproduce the human-like balance behavior evoked by the human central nervous system. The proposed NMPC also has a predictive aspect, allowing it to predict the future behavior and compute an optimal balance control strategy by minimizing the systemic energy consumption of the whole body. Furthermore, the NMPC technique can handle state and input constraints simultaneously, which is important for meeting realistic requirements arising from the physical limitations of the human body, such as joint ranges and torque saturation. The previously proposed control techniques cannot take such constraints into account naturally, whereas the NMPC proposed in this research can. Different magnitudes of disturbing forces are applied to the model to observe the autonomous switch between the ankle, hip, and arm strategies and to examine the robustness of the proposed solution.
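To make the receding-horizon procedure and the quadratic cost (3) concrete, the following Python sketch implements a minimal NMPC loop with box-constrained inputs on a toy single-joint standing model. The dynamics, sampling period, horizon, weights, and torque bound are placeholder assumptions for illustration; they are not the five-link model, tuning, or solver used in this paper.

```python
import numpy as np
from scipy.optimize import minimize

DT = 0.05          # [s] sampling period (placeholder value)
TAU_MAX = 50.0     # [N*m] torque bound (placeholder, not the paper's limits)

def step(x, tau):
    """Toy single-joint 'standing body' model: x = [lean angle, angular velocity]."""
    q, dq = x
    ddq = 9.81 * q + tau                 # linearized gravity term plus joint torque
    return np.array([q + DT * dq, dq + DT * ddq])

def nmpc_control(x0, Nt=20, Q=1e3, R=1.0, Qf=1e5):
    """Solve the finite-horizon problem and return the first optimal torque."""
    def cost(tau_seq):
        x, J = np.array(x0, dtype=float), 0.0
        for tau in tau_seq:              # running cost over the prediction horizon
            J += Q * float(x @ x) + R * tau ** 2
            x = step(x, tau)
        return J + Qf * float(x @ x)     # terminal penalty on the predicted state
    bounds = [(-TAU_MAX, TAU_MAX)] * Nt  # input (torque) box constraints
    res = minimize(cost, np.zeros(Nt), bounds=bounds, method="SLSQP")
    return res.x[0]                      # apply only the first input (receding horizon)

x = np.array([0.05, 0.0])                # small forward lean right after a push
for k in range(60):                      # closed loop: re-plan at every sampling time
    x = step(x, nmpc_control(x))
print(x)                                 # the lean should decay back toward the origin
```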
Simulation Parameter Setting
In this section, we analyze the model motion intensity using the total RMS deviation of the joint angles to verify the effectiveness of the arm strategy. The simulation settings are shown in Figure 3. We pushed the center of mass of the upper body backward and forward with different disturbing forces for 1 [s], which differs from the perturbation setting with a balance board used in the previous study (Yang, 2019, 2020). The maximum simulation time is set to 4 [s], which allows the model to complete the balance recovery process. The magnitudes of the disturbing forces were selected following Atkeson and Stephens (2007).
Simulation Results and Discussion
The NMPC controller produces a predictive ankle-hip strategy after a perturbation, while the arm strategy can be employed only in the model with the active-arm setting. For the disturbing forces −80 [N] and 80 [N], only the model with active arms can realize balance recovery from the unstable states; the models with passive and fixed arms are unable to obtain a solution for balance control under the same disturbing forces. This indicates that the active arm rotation strategy widens the range of recoverable disturbing forces; this result is similar to the conclusions derived in Nakada et al. (2010) and Kuindersma et al. (2011).

The schematic of the movements of the models with active, passive, and fixed arms for a disturbing force of 70 [N] is illustrated in Figure 4. The figure shows that the model with active arms has a better ability to realize balance recovery than the other two models. This is because the deviation of the center of mass of the model with active arm usage in the x-axis direction (Figure 5) is smaller than that of the models with passive and fixed arm usage. (In the figure labels "adf," "pdf," and "fdf," the letters "a," "p," and "f" denote the cases with active, passive, and fixed arms, respectively, and "df" denotes the disturbing force.) Figure 6 shows that the center of mass for the three different arm states is located within the stable region, according to the evolution of the whole-body center of mass (CoM) velocity vs. its position. It is worth noting that the size of the CoM phase portraits for the model with active arm usage is smaller than that for the models with passive or fixed arm usage. This indicates that active arm usage can keep the center of mass of the body close to the origin (equilibrium point). From the stability standpoint, comparing the deviations of the center of mass shows that active arm usage is more advantageous in balance recovery tasks.
The total RMS deviation can be calculated as

RMS = √( (1/N) Σ_{t=1}^{N} [ q 1 (t)² + q 2 (t)² ] ),

where N denotes the total number of samples, which can be computed from the recovery time and the sampling period, and q 1 (t) and q 2 (t) represent the ankle and hip angles at each sampling point, respectively.
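Assuming the quadratic-sum form of the total RMS deviation written above, the index can be computed directly from the sampled ankle and hip trajectories; a minimal sketch with hypothetical array names:

```python
import numpy as np

def total_rms_deviation(q1, q2):
    """Total RMS deviation of the sampled ankle (q1) and hip (q2) angles [rad]."""
    q1, q2 = np.asarray(q1), np.asarray(q2)
    return float(np.sqrt(np.mean(q1 ** 2 + q2 ** 2)))
```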
The evolution of the total RMS deviation of the model for the three different arm states (active, passive, and fixed arms) and for different disturbing forces is illustrated in Figure 7. Here, the total RMS deviation is defined to represent the body motion intensity. Figure 7 shows that the total RMS deviation of the balance recovery motion with active arms is less than that with passive arms. Furthermore, the total RMS deviation of the balance recovery motion with passive arms is less than that with fixed arms for the considered disturbing forces. This indicates that arm movements contribute to human body balance control and reduce the motion intensity of the hip joint. This conclusion is in accordance with the one obtained from a human experiment (Boström et al., 2018). Besides, it is worth noting that the proposed NMPC-based model can recover after a wide range of perturbations; therefore, the robustness of the NMPC is verified as well. This is one of the advantages of the proposed controller with active arm usage. Figure 8 compares the energy consumption of the model for the three different arm states and for different disturbing forces. The energy consumption in this research is the joint mechanical energy, computed as the total actuator energy consumption of the ankle, hip, and arm joints. First, we observe that as the disturbing force intensifies, the balance recovery motion consumes more energy in each case. Most importantly, for the same amount of push, the energy consumption for balance recovery is lowest for the model with active arm rotation, followed by passive arm rotation, and is highest in the case without arm rotation. This clearly indicates that balance recovery with the arm strategy can reduce energy consumption, which is human-like and energy-efficient. Humans also optimize their motion behavior for balance recovery to save energy. Thus, the contribution of arm usage to human balance recovery can also be acknowledged from the perspective of energy efficiency. Furthermore, Figures 9-11 show consistent limit cycles of the balance recovery for the model with active arm usage over the different disturbing forces, indicating a natural temporal regulation of the coordination of the ankle, hip, and arm joints, respectively. This means that there is a temporal pattern of compensation against the disturbing forces, which can be viewed as a control strategy since the phase portraits take a similar form. It implies a consistent ankle-hip-arm control strategy for active arm usage. However, the portrait form is noticeably deformed in the passive- and fixed-arm cases.
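The joint mechanical energy compared in Figure 8 can be approximated from the simulated joint torques and angular velocities. The sketch below integrates the absolute mechanical power over time and joints; treating braking (negative) work as consumed energy is an assumption of this illustration, since the paper does not give the exact formula.

```python
import numpy as np

def joint_mechanical_energy(tau, dq, dt):
    """Approximate total actuator energy over a recovery trial.

    tau: array of shape (T, n_joints) with joint torques [N*m]
    dq:  array of shape (T, n_joints) with joint angular velocities [rad/s]
    dt:  sampling period [s]
    """
    power = np.abs(np.asarray(tau) * np.asarray(dq))  # |P| per joint, per sample
    return float(np.sum(power) * dt)                  # integrate over time and joints
```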
Similarly, the relationship of the ankle, hip, and arm angles shows an aligned spatial pattern across the joints, which represents synergetic joint coordination, and the maximum deviations exhibit linear approximations in Figure 12. The joint correlations of neighboring joints, such as ankle-hip and hip-arm, under the different disturbing forces are computed to confirm the existence of synergies (Latash and Zatsiorsky, 2016). The mean joint correlation between the ankle and the hip is 0.898, and the one between the hip and the arm is 0.966. Thus, the balance motion with active arm rotation is highly coordinated, which indicates good synergy performance. The synergy pattern here can represent the ability of task sharing and balance stabilization. However, the balance recovery motions for the models with passive and fixed arms did not exhibit similar synergy performance because of the absence of such patterns. From the synergy analysis perspective, the balance recovery for the model with active arm usage is better than that for the other two cases.

Table 3 shows the contribution of active arm usage to balance recovery under different ankle capacities. Since the ankle is the most commonly injured body site (Fong et al., 2007), we want to observe how arm usage improves the ability to maintain balance. Here, five cases are considered through different ankle torque constraints (tc) and disturbing forces (df) (Negahban et al., 2013). Comparing these ankle boundary constraint settings, we can note that the ankle capacity in case (1) is weaker than that in cases (2) and (3). For the same disturbing force df = 56 [N] applied to the center of mass of the upper body in cases (1), (2), and (3), the ankle capacity of case (1) reaches its maximum limit. Therefore, this model needs more effort for balance recovery, and a longer recovery time and a higher energy consumption are required for case (1) compared with cases (2) and (3). For a limited ankle capacity, such as in case (1), the active-arm RMS deviation is seven times that in cases (2) and (3), and the arm energy consumption is approximately 39 times that in cases (2) and (3). Similarly, for the same disturbing force of 70 [N], the active-arm RMS deviation in case (4) is six times that in cases (2) and (3), and the arm energy consumption is approximately 24 times that in cases (2) and (3). These observations show that for a limited ankle capacity, arm rotation contributes more effort to balance recovery. Furthermore, the ankle capacity in cases (2) and (3) for a disturbing force df = 56 [N] does not reach the maximum limit, and the balance recovery movements are almost the same. For the same ankle capacity in cases (2) and (4), the disturbing force df = 70 [N] in case (4) causes the ankle capacity to reach its maximum limit, and the active arms need more effort for balance recovery than in case (2). By comparing cases (3) and (5), although the ankle capacity does not reach the maximum limit in either case, more effort is required for a larger disturbing force.
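As a concrete illustration of the synergy check described above, the neighboring-joint correlations can be computed from the joint-angle trajectories. The sketch below uses the Pearson correlation coefficient, which is an assumption about the correlation measure used, with hypothetical array names:

```python
import numpy as np

def neighboring_joint_correlations(q_ankle, q_hip, q_arm):
    """Pearson correlations between neighboring joint-angle trajectories."""
    r_ankle_hip = np.corrcoef(q_ankle, q_hip)[0, 1]
    r_hip_arm = np.corrcoef(q_hip, q_arm)[0, 1]
    return r_ankle_hip, r_hip_arm

# Averaging these values over trials (one trajectory set per disturbing force)
# would then give numbers comparable to the reported means of 0.898 (ankle-hip)
# and 0.966 (hip-arm).
```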
In the simulation study, various indexes are verified to evaluate the capability of balance recovery. The obtained data indicate that balance recovery with active arms is the most effective strategy and that balance control with arm usage is better than that without arm usage. Besides, phase portraits of the joint angles are considered to analyze the control pattern of the balance recovery motion. Furthermore, the ankle torque boundary constraints are set to different values, and the relationship between ankle capacity and active arm usage is discussed, since the ankle is easily injured in daily life and we want to observe how arm usage contributes to balance in this case. Regarding the comparison of our work with previous studies dealing mainly with human balance control without an arm strategy, it is worth pointing out that our study considered three cases: (i) active, (ii) passive, and (iii) fixed arms; the last one corresponds to the case without an arm strategy from the literature. Indeed, the obtained results clearly show that the balance model with the active arm strategy leads to lower energy consumption, more robust control, more synergetic motion, and improved balance ability, compared to the case without an arm strategy.
Human Experimental Setting
Now, we apply three different magnitudes of a pushing force, namely, a small push, a medium push, and a large push, to the backs of the subjects to observe the contributions of arm rotation to human quiet-standing balance recovery. The magnitudes of the pushing forces are distinguished by the maximum position deviation of the marker on the subject's neck and by the ground reaction forces measured by two AMTI force plates. Furthermore, it is important to point out that even though the same pushing force is applied to all the subjects, there is no guarantee that the balance behavior of the subjects will be exactly the same. Consequently, we decided to quantify the levels of this pushing force and classify them into three levels (small, medium, and large). The key point behind this is to distinguish the subjects' balance recovery behavior based on these different levels of the pushing force. The subjects were five healthy men [mean age (25 ± 5) years, mean height (175 ± 10) cm, mean weight (70 ± 10) kg] without any known motor or neurological impairment. The protocols of the human experiments were designed according to the Declaration of Helsinki and approved by the Tohoku University ethics committee. The human experiments were conducted in two main stages. The first stage was a pre-training phase, in which the subjects were pushed with different forces (according to the three levels explained above, in the order small, then medium, then large, for safety purposes) to learn how to maintain their balance. In the second stage, the final experimental tests, the same pushing-force levels were applied while disturbing the subjects in a standing position, and their behavior data were recorded. The push force was applied to the upper back of the subject. For each push-force level, five repetitions were performed per subject. The motion of the subject was tracked using 42 markers in the OptiTrack system with eight cameras, and the ground reaction forces were measured using two force plates. We then exported the motion tracking data and ground reaction forces, converted them to a standard data format that can be used in OpenSim (Rajagopal et al., 2016), and obtained the joint angles and torques for each subject through model scaling, inverse kinematics, and inverse dynamics in OpenSim. Here, the inverse dynamics could be solved using the top-down method. These results can be used to analyze the balance recovery motion and the functions of the ankle, hip, and arm for different magnitudes of disturbing forces.
Comparison With Human Experimental Results
Our discussion in this paragraph focuses on the representative movements of Subject 1, since the trends discussed for this subject are consistent across all subjects. The balance recovery motion of Subject 1 for the three different magnitudes of push, (a) small push, (b) medium push, and (c) large push, is shown in Figure 13. Here, the active arm usage of Subject 1 is in good accordance with the behavior reproduced by the proposed NMPC in our simulation (Figure 4), where the arm and hip joint angles are in antiphase. Besides, as the push force increases, more arm usage can be recruited to improve the balance maintenance ability. Figure 14 shows the evolution of the joint angles and torques of the ankle, hip, and arm for the three different magnitudes of push. The ankle joint angles change only slightly, which is similar to the simulation results illustrated in Figure 9, because of the structural limitation of the ankle joint compared to the other joints. It is important to note that the ankle joint angle and torque do not change from the medium push to the large push, which means the ankle usage reaches saturation due to its limited capacity; we also observed this phenomenon in the simulation study. The hip joint rotates by a larger angle as the magnitude of the pushing force increases, illustrating that the hip joint plays a major role in balance recovery. Furthermore, the deviation of the arm joint angles and torques increases, because when the magnitude of the pushing force increases, Subject 1 attempts to recover balance through greater arm rotation effort. From Figure 14, we note that Subject 1 spends a longer time recovering balance for the large push. Table 4 presents the means of the peak-to-peak values of the joint angles and torques of the ankle, hip, and arm of the five subjects for different magnitudes of pushing force. Here, the deviation of the arm joint angles and torques is positively correlated with the magnitude of the push force. For the large push, the ankle joint angle increases by only 1 degree compared with the medium push case. This implies that the ankle usage is already near saturation due to mechanical constraints; thus, the arm strategy to compensate for the disturbance is essential for the large push. This process is consistent with the behavior observed in the proposed NMPC controller. Consequently, we conclude that active arm usage contributes to balance recovery in the human experiments and that the behavior is consistent between the predictive-controller study and the human experiments. This indicates that arm movements enhance the capability of balance recovery.
CONCLUSIONS AND FUTURE WORK
In this study, we built a simplified human model with arms and proposed an NMPC scheme to reproduce human balance behavior with arm usage. Three arm states, namely active arms, passive arms, and fixed arms, were considered to study the contributions of arm movements to balance recovery under different magnitudes of a disturbing force during quiet standing. The contribution of arm usage to human balance control was verified by comparing the total RMS deviation of the joint angles, and balance control with active arms was found to be the most effective in terms of energy consumption and disturbance-effect minimization. Furthermore, a synergetic motion pattern was observed in the kinematics during balance recovery with active arms; it was confirmed by the joint correlations together with the steady, smooth limit-cycle pattern, and the total energy consumption was compared across the arm states. Finally, the results of the human experiments were compared with the simulation to verify that active arm usage contributes to balance recovery. Our future work may focus on conducting more human balance recovery experiments and analyzing the synergy of body motion at the kinematic, kinetic, and muscle levels. This will help us gain a better understanding of the mechanism of quiet-standing balance with the arm strategy and develop an effective balance controller for rehabilitation.
DATA AVAILABILITY STATEMENT
The original contributions generated for the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Tohoku University ethics committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
KS, AC, and MH designed the study. KS implemented the experiments and processed the data. KS wrote the manuscript with the support of AC and MH. All authors have made contributions to the study and approved it for publication.
FUNDING
This work was supported in part by the GP-Mech Program of Tohoku University, Japan, and in part by the JSPS Grant-in-Aid for Scientific Research (B) under Grant 18H01399.
|
v3-fos-license
|
2022-01-30T06:19:58.432Z
|
2022-01-29T00:00:00.000
|
246387320
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00441-022-03587-z.pdf",
"pdf_hash": "6707db9656b8dd67556f4d3be5a22c05ced418b0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:653",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6dee96d503361cccea2c7a9396afc9def5fdd90e",
"year": 2022
}
|
pes2o/s2orc
|
Esomeprazole inhibits hypoxia/endothelial dysfunction–induced autophagy in preeclampsia
Preeclampsia (PE) affects 3 to 5% of pregnant women worldwide and is associated with fetal and maternal morbidity and mortality. Although a complete understanding of PE remains elusive, it has been widely accepted that a dysfunction of the placenta plays a key role in the pathogenesis of PE. In this study, we investigated the role of excessive placental autophagy during PE pathogenesis and explored whether esomeprazole ameliorates PE by inhibiting autophagy in the placenta. The PE cellular model was established by treating the cells with L-NAME and hypoxia. The PE mice model was established by L-NAME administration and was confirmed by the detected increases in systolic blood pressure (SBP) and urinary protein. Autophagy and key proteins were detected in human placental tissue, in cells, and in the mice model by Western blot and immunofluorescence staining. The results showed that excessive autophagy could be detected in human PE placental tissue, in the PE cellular model, and in the PE mice model. Hypoxia induces autophagy by activating AMPKα and inhibiting mTOR in vivo and in vitro. Esomeprazole inhibits L-NAME-induced autophagy in mice by inhibiting AMPKα and activating mTOR. In conclusion, this study demonstrates that the excessive autophagy induced by the SIRT1/AMPKα-mTOR pathway plays a significant role in the pathogenesis of PE. However, esomeprazole treatment inhibits AMPKα but activates mTOR, resulting in the inhibition of autophagy in the placenta and thereby mitigating PE symptoms.
Introduction
Preeclampsia (PE), characterized by hypertension and proteinuria in mid- or late-term pregnancy, is one of the most serious pregnancy complications, causing multiorgan injury (Mol et al. 2016a). Preeclampsia affects 3-5% of pregnancies worldwide and has been closely associated with fetal and maternal morbidity and mortality (Rana et al. 2019; Mol et al. 2016a).
An initial asymptomatic phase during the first trimester of gestation is widely accepted as the initial stage of preeclampsia pathogenesis; this phase is characterized by deficient trophoblast invasion and a spiral artery remodeling disorder (Phipps et al. 2019), followed by anoxia and malnutrition of the placenta, resulting in hypertension and proteinuria. Moreover, the maternal-fetal interface shows inflammatory overactivation and endothelial dysfunction. The placenta plays a vital role in the pathogenesis of preeclampsia.
Hypoxia, undernutrition, inflammatory overactivation, and endothelial dysfunction can induce autophagy (Sasaki et al. 2007; Hiby et al. 2010). Autophagy is an intracellular self-degrading system characterized by a widespread degradation process for long-lived proteins or cytoplasmic components during undernutrition (Nakashima et al. 2017; Glick et al. 2010). The regulatory mechanism of autophagy is complex, and its upstream signaling mainly involves an mTOR-dependent pathway and mTOR-independent pathways (AMPK, PI3K, Ras-MAPK, p53, PTEN, and endoplasmic reticulum stress). Cells use macroautophagy/autophagy to facilitate survival by maintaining cellular integrity when experiencing strong environmental stimuli such as hypoxia (Mazure and Pouysségur 2010). In the case of hypoxia, HIF-1α and HIF-2α are the mediators of the hypoxic stress signal, and the HIF-dependent induction of autophagy by hypoxia has been reported. Upon hypoxia-mediated autophagy induction, cells in which HIF-1α was eliminated showed a decrease in the levels of the well-known autophagy markers beclin-1 and LC3B-II (Ravanan et al. 2017). HIF activates autophagy through BNIP3 (Bcl-2/E1B 19 kDa-interacting protein 3). HIFα and IKK mediate the inflammatory and hypoxic signals, activating a downstream signaling cascade that triggers inflammation and autophagy as stress responses (Bellot et al. 2009). NF-κB links autophagy and inflammation via HIFα: mTOR inhibition activates NF-κB in the inflammatory pathway, while mTOR activation initiates the transcriptional activity of HIFα. Additionally, autophagy is necessary for embryonic development and placental implantation in mammals (Boya et al. 2018; Nakashima et al. 2019). For placental villi bathed in maternal blood, once the placenta is in an autophagy disorder, abundant toxic factors, such as soluble Fms-like tyrosine kinase-1 (sFlt-1), are released and damage the maternal vasculature (James et al. 2005). Thus, researchers have devoted increasing attention to the role of autophagy in the placenta.
The termination of the pregnancy is the first choice for the treatment of preeclampsia, which potentially generates iatrogenic prematurity in babies and affects neonatal outcomes (Tomimatsu et al. 2019). To prolong pregnancy, no treatment targeting the pathogenesis is available apart from symptomatic management. Esomeprazole, a proton-pump inhibitor, is used to treat reflux esophagitis and hyperemesis gravidarum (Malfertheiner et al. 2017). It has been found to significantly reduce the levels of sFlt-1 and sENG and to effectively prolong pregnancy duration (Onda et al. 2017). Esomeprazole also mitigates endothelial dysfunction by inhibiting the expression of tumor necrosis factor α, vascular cell adhesion molecule 1, and endothelin 1 (Sandrim et al. 2019). A randomized placebo-controlled trial found that esomeprazole could prolong gestation by three more days compared to the placebo in pregnancies with preterm preeclampsia (Cluver et al. 2018). Therefore, it has been suggested as a potential drug for treating preeclampsia. However, the mechanisms by which esomeprazole affects placental function remain unclear.
In this study, we report that normal autophagy is necessary for placental function. We investigate the effect of excessive autophagy and the effect of esomeprazole on PE. Finally, we find that esomeprazole inhibits hypoxia/endothelial dysfunction-induced autophagy via the AMPK/mTOR pathway.
Placental tissue collection
Human placental tissue was collected under appropriate Human Research and Ethics Committee approvals (KS1957). Written and informed consents were obtained from each patient before surgery.
One cubic centimeter of placental tissue was removed from the maternal side of the placenta (3 cm from the edge in the 3, 6, 9, and 12 o'clock directions). The PE group was selected based on relevant recommendations by the Chinese Medical Association. The control placental tissues were obtained from age-matched preterm pregnancies with normally developing fetuses that did not have signs of hypertension disorder or other pregnancy-related diseases.
Human umbilical vein endothelial cells (HUVECs) seeded at a density of 2 × 10 6 cells into 10-cm dishes were kept at 37 ℃ in 5% CO 2 and 20% O 2 and cultured in ECM media (Sciencell) with 5% Australian fetal bovine serum and EGF.
All cell experiments were repeated three times.
PE mice model
Eight-week-old female C57BL/6 mice (weighing 25-30 g) were used in this study. Male and female mice were mated in the same cage at a male-to-female ratio of 1:2. The time at which the vaginal plug was found was defined as GD0.5. Subsequently, pregnant mice were divided into three groups: the control group (normal pregnancy, n = 5), PE group (n = 5), and treated-PE group (esomeprazole 40 mg/ kg, n = 5). The PE and treated-PE groups were treated by continuous treatment with L-NAME dissolved in drinking water (1 g/L 10 mL/day; Sigma) beginning from GD7.5 to GD18.5. From GD7.5 to GD18.5, esomeprazole dissolved in 100 µl 1% DMSO (v/v) was injected intraperitoneally for the treated-PE group every day, and an equal volume of 1% DMSO (v/v) was injected intraperitoneally for the PE and control groups each day. At GD20, the mice were sacrificed, and the placenta and blood specimens were collected. All experimental protocols were approved by the ethics committee at Xinhua Hospital, Shanghai Jiaotong University School of Medicine.
Assessment of systolic blood pressure
The systolic blood pressure (SBP) of each mouse was measured noninvasively at GD18.5 via the Mouse and Rat Tail Cuff Blood Tail-Cuff Device (American IITC). Before each measurement was taken, each mouse was exposed to 37 ℃ for 5 min; measurements were taken three times for averaging purposes.
Placental histology
Placenta specimens from the separate groups were preserved in tissue fixatives, embedded in paraffin after 24 h, and cut into 5-µm sections. After hematoxylin and eosin staining and Masson trichrome staining, morphological evaluations were performed under light microscopy.
Western blot
Human tissue from control and PE placentas and cell lysates from HTR-8/SVneo cells, HUVECs, and mouse placentas from the control, PE, and treated-PE groups were analyzed by Western blotting, as previously described (Gu et al. 2019). Secondary antibodies were HRP-linked goat anti-rabbit and goat anti-mouse IgG antibodies (CST, 7074, 7076). The bands were visualized using an ECL kit. Band densitometry over equal-sized areas was evaluated with ImageJ. The intensities of both the target protein bands and the corresponding internal reference bands were quantified, and the intensity of each target protein band was normalized to that of the reference band.
Immunofluorescence staining
Immunofluorescence staining was performed as previously described. Placenta specimens in distinct groups were preserved in tissue fixatives, embedded in paraffin after 24 h, cut into 5-µm sections, and permeabilized with 0.2% Triton X-100 in PBS. HTR-8/SVneo cells and HUVEC cells were fixed with 4% PFA for 15 min and permeabilized with 0.2% Triton X-100 in PBS. Placental sections and cells were precultured in a blocking solution at room temperature to block nonspecific binding sites. They were then cultured with primary antibodies overnight at 4 ℃. Placental sections and cells were treated with DAPI (4′,6-diamidino-2-phenylindole) for nuclear detection. Fluorescence images were observed and captured using confocal microscopy.
Statistics
The data are represented as mean ± SD. One-way analysis of variance (ANOVA) was employed for multiple comparisons, followed by Tukey post hoc testing using SPSS 23.0 and GraphPad Prism 9. A P value or adjusted P value less than 0.05 was considered to be statistically significant.
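A scripted equivalent of this analysis (one-way ANOVA followed by Tukey's post hoc test) can be sketched in Python as follows; the group values are made-up illustrative numbers, and the scipy/statsmodels calls are an alternative to the SPSS/GraphPad workflow actually used, not a reproduction of it.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized band-intensity ratios for the three groups
control = np.array([1.00, 0.95, 1.05, 0.98])
pe      = np.array([1.60, 1.72, 1.55, 1.68])
treated = np.array([1.10, 1.18, 1.05, 1.12])

# One-way ANOVA across the three groups
F, p = f_oneway(control, pe, treated)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey HSD post hoc pairwise comparisons (adjusted P values per pair)
values = np.concatenate([control, pe, treated])
labels = ["control"] * 4 + ["PE"] * 4 + ["PE+ESO"] * 4
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```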
Increased autophagy in placentas of PE patients
We first histologically analyzed the structure of healthy and PE placentas. H&E and Trichrome Masson's staining of placental sections revealed that PE placentas had an abnormal structure compared to healthy controls, including reduced branching, compaction of the labyrinth area, increased fibrosis, and reduced vascularization (Fig. 1a-b′).
Previous studies have suggested that autophagy may play a role in PE pathogenesis; therefore, we analyzed the protein levels of LC3B, a well-characterized marker of autophagy, in placentas by Western blot. The results showed that LC3B expression was significantly higher in the PE placentas than in the control group (Fig. 1c, c′, P < 0.05). Immunofluorescence-staining analysis of PE and healthy placentas further confirmed the elevated expression of LC3B in PE placentas (Fig. 1d-e″, P < 0.05).
Hypoxia induces autophagy by activating AMPKα and inhibiting mTOR in PE human placenta
Interestingly, Western blot analysis showed that the HIF-1α protein level was significantly increased in PE placenta compared to that in the control group (Fig. 1f, g′, P < 0.05), indicating that PE placentas were in a state of hypoxia. Further analysis showed that the phosphorylation level of AMPKα was significantly elevated (P < 0.05). On the contrary, the phosphorylation of mTOR was significantly reduced (P < 0.05) (Fig. 1f-g‴). These data suggest that hypoxia induces autophagy by activating AMPKα and inhibiting mTOR in PE human placenta.
Esomeprazole inhibits hypoxia-induced autophagy in vitro
We first tested whether esomeprazole could inhibit autophagy using a cell-based assay approach. Both the transwell migration and the wound-healing assay showed that hypoxia and L-NAME (70 µM, 48 h) treatment inhibited the migration of HTR8/SVneo cells (P < 0.05), and esomeprazole treatment strongly rescued the migration of HTR8/SVneo cells (P < 0.05) (Fig. 2a-d). Western blot analysis revealed that LC3B protein levels were elevated in HTR8/SVneo cells subjected to hypoxia and L-NAME treatment (70 µM, 48 h) compared to cells under a normoxic environment (P < 0.05). Concurrently, the expression of PI3KC3 and P53 was also increased under hypoxia and L-NAME treatment. Nonetheless, esomeprazole treatment successfully reduced LC3B, P53, and PI3KC3 expression in hypoxia/L-NAME-treated HTR8/SVneo cells (Fig. 2e-f‴). This was further confirmed by immunofluorescence-staining analysis of LC3B and P53 (Fig. 3a-f″). In addition to HTR8/SVneo cells, we further validated these findings in HUVECs. As expected, HUVECs treated with hypoxia and L-NAME showed impaired migration ability, while esomeprazole treatment effectively restored HUVEC migration (Fig. 4a, b). Moreover, esomeprazole potently suppressed the hypoxia- and L-NAME-treatment-upregulated LC3B and P53 expression (Fig. 4c-i″). Collectively, these data showed that esomeprazole can abrogate hypoxia- and L-NAME-induced autophagy.
Esomeprazole reduces autophagy by inhibiting phosphorylation of AMPKα in vitro
Compared to normoxia, the phosphorylation of AMPKα was elevated in HTR8/SVneo cells under hypoxia and L-NAME treatment (70 µM, 48 h) (Fig. 5a-d). In contrast, the phosphorylation of mTOR was significantly inhibited (Fig. 5d-e‴).
We also validated these findings in HUVECs. Similar to HTR8/SVneo cells, HUVECs under hypoxia and L-NAME treatment showed significantly increased phospho-AMPKα and HIF1α levels and decreased PPARγ and phospho-mTOR levels, and these changes were restored by esomeprazole treatment (Fig. 6a-e‴). Therefore, these results suggest that esomeprazole inhibited hypoxia- and L-NAME-induced autophagy by activating mTOR and inhibiting AMPKα signaling.
Esomeprazole ameliorates preeclampsia-like symptoms in mice induced by L-NAME treatment
PE is characterized by significantly increased systolic blood pressure and the onset of proteinuria. The administration of L-NAME successfully led to increased systolic pressure and the presence of protein in the urine (Fig. 7a, b). In addition, L-NAME treatment also increased the sFLT-1 level in serum (Fig. 7c). Esomeprazole treatment significantly reduced systolic pressure, urinary protein, and serum sFLT-1 levels. Furthermore, both H&E-based histology analysis and Trichrome Masson's staining demonstrated that the esomeprazole treatment effectively preserved placenta structure and reduced intraplacental fibrosis (Fig. 7d-e″). These data strongly indicate that PE symptoms were reversed by esomeprazole treatment.
Esomeprazole protects the placenta from L-NAME-induced autophagy by upregulating PPARγ in vivo
To understand the molecular mechanisms by which esomeprazole inhibits autophagy in L-NAME-treated placenta, we first analyzed the protein level of LC3B. Western blot analysis revealed that LC3B protein levels were significantly elevated in the PE group compared to the control group, whereas esomeprazole treatment potently reduced the LC3B levels (Fig. 8a, a′). Comparable results were obtained comparing the LC3B expression level in control, diseased, or treated placentas based on immunofluorescence staining-based assays (Fig. 8b-d″). More importantly, we found that PPARγ protein levels were also decreased in the PE mice placentas (P < 0.05). Interestingly, the phosphorylation of AMPKα was elevated in the PE group; however, the phosphorylation of mTOR was significantly inhibited (P < 0.05). Esomeprazole inhibited the phosphorylation of AMPKα, whereas it increased the phosphorylation of mTOR ( Fig. 8e-f‴).
Discussion
PE has negative consequences for maternal and fetal health during pregnancy, including increased perinatal mortality, preterm births, infants who are small for gestational age, high rates of cesarean delivery, and other adverse outcomes, even in later postnatal periods (Antza et al. 2018; Gu et al. 2019). The relevant mechanism is not yet fully elucidated. Although many factors, including some not yet identified, contribute to preeclampsia, recent evidence suggests that insufficient trophoblast cell invasion causes disordered uterine spiral artery remodeling and shallow placental implantation (Mol et al. 2016b). This leads to placental ischemia and hypoxia, followed by excessive autophagy. We found that HIF-1α was elevated in the PE placenta, an accumulation that occurs only under prolonged hypoxic conditions. In addition to the resulting placental ischemia and hypoxia, bioactive factors released into the maternal blood circulation cause a systemic inflammatory reaction and vascular endothelial injury, inducing the clinical symptoms of PE. Soluble Fms-like tyrosine kinase-1 (sFlt-1), an antagonist of VEGF, has been shown to induce preeclampsia-like disease in rodents (Hastie et al. 2019). This has also been confirmed in our animal experiments. The placenta is undoubtedly involved in the pathogenesis, considering that termination of the pregnancy can eliminate the disease.
Our findings indicate that autophagy was increased in the PE placenta. During early pregnancy, a low-oxygen environment in the placenta is beneficial for the differentiation of trophoblast cells from trophoblastic stem cells, promoting trophoblastic cell migration and the remodeling of spiral arteries during this period in humans (Nakashima et al. 2017). Under these conditions, autophagy is activated (Yamanaka-Tatematsu et al. 2013). Proper autophagy is necessary to regulate protein quality control and maintain intracellular homeostasis (Levine and Kroemer 2019). However, excessive autophagy induced by long-term anoxia can lead to cell death and affect placental function. To better simulate the pathological conditions of preeclampsia, cells were treated with hypoxia and L-NAME for 48 h, which rapidly activated autophagy. Excessive autophagy disrupts invasion and vascular remodeling under hypoxia. We found that, with increased HIF-1α expression, placental autophagy was enhanced, manifesting as elevated LC3B levels. Excessive autophagy is accompanied by proliferation arrest and transformation to a pro-inflammatory phenotype (Nakashima et al. 2017). This placental autophagy disorder releases abundant toxic factors that damage the maternal vasculature.
Autophagy is controlled by multiple signaling pathways (Lamark et al. 2017). We found, in combination with increased HIF-1α expression, an increased phosphorylation of AMPKα via SIRT1 activation together with reduced phosphorylation and activity of mTOR, resulting in the enhancement of placental autophagy, which manifested as elevated LC3B levels. AMPK is best known as a protein kinase that regulates energy metabolism (Kim and Lee 2014). It is expressed in various metabolism-related organs and can be activated by various stimuli, including cellular stress, exercise, and numerous hormones and substances that affect cellular metabolism. AMPK activation is dependent on the SIRT1-mediated deacetylation of a lysine residue of liver kinase B1 (Ganesan et al. 2017). In our study, SIRT1 was also found to be highly activated in the PE placenta. SIRT1 exerts its cell-autonomous function by regulating many transcription factors, such as p53, in the nucleus, and it controls cellular-repair mechanisms, such as autophagy and mitochondrial biogenesis. The phosphorylation of AMPKα governs the formation of autophagy vesicles by the PI3KC3 complex (Hurley and Young 2017). Additionally, we found that the SIRT1-dependent activation of AMPK downregulates mTOR, which then initiates a cellular stress response that includes autophagy (Xu et al. 2012). The phosphorylation of mTOR increases the formation of the mTORC1 and mTORC2 complexes. The inhibition of mTORC1 activates autophagy, whereas its activation reduces autophagy. Our study provides evidence that, in the placenta, AMPK and mTOR are engaged in regulating autophagy. We found that excessive autophagy decreased the invasion of HTR8/SVneo cells. In addition, the hypoxia-induced phosphorylation of AMPKα reduces the expression of PPARγ. PPARγ is regulated in part by HIF-1α (Tache et al. 2013). PPARγ is an important regulator of spiral artery development and placental function (Liu et al. 2017), and its participation has been noted in the pathogenesis of intrauterine growth retardation and preeclampsia. This evidence suggests that PE placental autophagy is regulated by hypoxia-induced AMPKα phosphorylation.
Esomeprazole, a proton-pump inhibitor, is widely used to treat gastroesophageal reflux disease, a common condition associated with pregnancy (Galmiche et al. 2011). It has been shown that patients receiving esomeprazole experience less gestational hypertension and lower plasma ENG and sFLT-1 levels (Onda et al. 2017). Our animal experiments confirm this. A prospective Dutch cohort study treated 430 pregnant women with preeclampsia with PPIs, alpha methyldopa, steroid hormones, ferrous fumarate, polyethylene glycol, and nifedipine. This study found that PPIs effectively reduced sFlt-1 and sENG levels and prolonged gestation (Saleh et al. 2017). Previous research also found that PPIs could upregulate the key placental protective enzyme, heme oxygenase-1, thereby improving maternal antioxidant-defense function. Furthermore, esomeprazole also mitigates tumor necrosis factor-α-induced endothelial dysfunction. Esomeprazole reduces the expression of endothelial vascular cell adhesion molecule 1, prevents leukocyte adherence to the endothelium, and promotes angiogenesis (Brownfoot et al. 2016; Kaitu'u-Lino et al. 2017). In our PE mouse model, we also found that esomeprazole decreased serum sFlt-1 levels. It has been reported that heme oxygenase-1 (HO-1) may decrease sFlt-1 secretion (Cudmore et al. 2007), although Tong et al. came to the opposite conclusion (Tong et al. 2015). Moreover, researchers found that combining esomeprazole with other drugs, such as metformin and sulfasalazine, at lower concentrations caused an additive reduction in sFlt-1 secretion in primary cytotrophoblasts, placental explants, and endothelial cells (Binder et al. 2020; Kaitu'u-Lino et al. 2018). Additionally, esomeprazole treatment inhibited autophagy in the aortic tissue of pregnant L-NAME-treated mice (Zhang et al. 2021), which is similar to our finding that placental autophagy was inhibited by esomeprazole treatment. Therefore, we hypothesize that esomeprazole treatment may reduce the autophagy of trophoblasts and endothelial cells, thereby preventing the release of sFLT-1 into the maternal blood. In vitro, the autophagy of HTR-8/SVneo cells and HUVECs induced by hypoxia and L-NAME treatment was inhibited by esomeprazole. Under esomeprazole treatment, the expression of HIF-1α and the activation of AMPK were reduced in vivo and in vitro, evidence that cellular hypoxia and metabolic abnormalities were relieved. Esomeprazole may therefore prevent hypoxia-induced autophagy.
Conclusion
The PE placenta experiences excessive autophagy as a result of long-term anoxic conditions. This autophagy contributes to PE symptoms. Autophagy is regulated by the SIRT1/AMPKα-mTOR pathway, upon which esomeprazole acts. Thus, esomeprazole can suppress preeclampsia-like symptoms by inhibiting excessive placental autophagy in PE, acting via the SIRT1/AMPKα-mTOR pathway. Furthermore, esomeprazole may increase the expression of PPARγ, thereby protecting placental function.
(Fig. 8 legend: Esomeprazole suppressed L-NAME-induced autophagy via AMPKα-mTOR in PE mice. LC3B levels in placentas of control, L-NAME-treated, or L-NAME + esomeprazole-treated mice were detected by Western blot and immunofluorescence staining, and the levels of phospho-AMPKα, AMPKα, PPARγ, SIRT1, mTOR, and phospho-mTOR were detected by Western blot, with quantification; N = 3 mice per group, *P < 0.05, **P < 0.01.)
A Novel Method to Screen Strong Constitutive Promoters in Escherichia coli and Serratia marcescens for Industrial Applications
Simple Summary With the advancement of synthetic biology and metabolic engineering, regulatory elements applied for the accurate expression of target genes have become more important. Among them, due to their important role in regulating gene expression at the transcription level, a number of homologous or heterologous promoters have been used to improve the yield of target metabolites in different microorganisms. However, the method to isolate strong constitutive promoters in different microorganisms is still limited. Our work describes a novel approach to identify strong constitutive promoters in Escherichia coli and Serratia marcescens. The identified promoters were further used for fine-tuning gene expression and reprogramming the metabolic flux of L-valine and prodigiosin in E. coli and S. marcescens, respectively, and finally, the higher-level L-valine synthesis strain and prodigiosin production strain were isolated. The method shown in our study can also be a useful strategy to identify strong constitutive promoters in other bacteria and isolate other effective genetic regulatory elements, such as ribosome binding sites, terminators, and N-terminal coding sequences (NCS), for tuning gene expression in different microorganisms. Abstract Promoters serve as the switch of gene transcription, playing an important role in regulating gene expression and metabolites production. However, the approach to screening strong constitutive promoters in microorganisms is still limited. In this study, a novel method was designed to identify strong constitutive promoters in E. coli and S. marcescens based on random genomic interruption and fluorescence-activated cell sorting (FACS) technology. First, genomes of E. coli, Bacillus subtilis, and Corynebacterium glutamicum were randomly interrupted and inserted into the upstream of reporter gene gfp to construct three promoter libraries, and a potential strong constitutive promoter (PBS) suitable for E. coli was screened via FACS technology. Second, the core promoter sequence (PBS76) of the screened promoter was identified by sequence truncation. Third, a promoter library of PBS76 was constructed by installing degenerate bases via chemical synthesis for further improving its strength, and the intensity of the produced promoter PBS76-100 was 59.56 times higher than that of the promoter PBBa_J23118. Subsequently, promoters PBBa_J23118, PBS76, PBS76-50, PBS76-75, PBS76-85, and PBS76-100 with different strengths were applied to enhance the metabolic flux of L-valine synthesis, and the L-valine yield was significantly improved. Finally, a strong constitutive promoter suitable for S. marcescens was screened by a similar method and applied to enhance prodigiosin production by 34.81%. Taken together, the construction of a promoter library based on random genomic interruption was effective to screen the strong constitutive promoters for fine-tuning gene expression and reprogramming metabolic flux in various microorganisms.
Introduction
Microorganisms have been widely used to synthesize numerous chemicals and materials that have previously been produced from fossil resources, especially in the fermentative production of amino acids, antibiotics, enzymes, and biofuels [1][2][3][4]. To achieve product accumulation at the desired levels, efficient genetic tools for controlling the expression of key genes in the related biosynthetic pathways are indispensable [3,5,6]. Hence, with the demand for the precise quantification of target gene expression in the dynamic and fine-tuned regulation of metabolic flux in synthetic and systems biotechnology, gene expression tools and systems have been rapidly established and developed for mediating gene expression and balancing metabolic flux in various organisms [7,8]. Genetic regulatory elements are of great importance for metabolic engineering and synthetic biology applications, allowing the precise regulation of gene expression at the desired levels [9,10]. At present, genetic regulatory elements for mediating gene expression mainly act at the transcriptional, post-transcriptional, translational, and protein-degradation levels, and include promoters, ribosome binding sites, terminators, small regulatory RNAs, and so on [11][12][13][14][15]. Among them, promoter engineering has emerged as a powerful tool to strongly control gene expression at the level of transcription and has a significant role in tuning the expression of downstream genes; thus, it is widely employed for improving the yield of target metabolites in different microorganisms [16,17].
Generally, the transcriptional regulation of key genes is a direct and effective approach to balance metabolic flux and optimize gene expression [18,19]. To accurately regulate the metabolic flux toward the products, it is essential to construct an ingenious method to generate a promoter library covering a wide range of intensities, from which promoters of appropriate strength can be screened. In recent years, promoter libraries have been created by promoter modification, omics screening, de novo design, and so on [20,21]. Moreover, constitutive promoters have commonly acted as the regulatory elements used to realize the regulation of metabolic networks in multiple organisms, such as E. coli, C. glutamicum, B. subtilis, Streptomyces, and some non-model strains [22][23][24][25]. In Halomonas bluephagenesis, the promoter (Pporin) of the most strongly expressed protein, porin, was used to construct a library with a wide range of intensities. The promoter library was constructed via saturation mutagenesis in the core region of the promoter, covering relative transcriptional strengths from 40 to 14,000, and could work in both E. coli and H. bluephagenesis for the expression of heterologous genes [26]. Due to the deficiency of available promoters in Burkholderiales, an approach was carried out to screen strong constitutive promoters based on transcriptome sequencing. Thirty-seven promoters identified from the omics data were then cloned and characterized with a firefly luciferase reporter and applied to drive the 56 kb epothilone BGC and the 23 kb rhizomide BGC for the efficient production of epothilone and rhizomide [27]. In Bacillus licheniformis DW2, a gradient strength promoter library was obtained by coupling the bacitracin synthetase gene cluster promoter PbacA with various 5′-UTRs. The screened promoters with strong, intermediate, and weak intensities were then successfully used in protein expression and metabolic pathway optimization [28]. On the basis of the endogenous tandem promoter Pldh, a promoter library was developed to fine-tune the expression of the gene pyc to enhance malic acid biosynthesis in Bacillus coagulans [29]. Moreover, a flow cytometry-based quantitative method was constructed for gene expression, and 200 native or synthetic promoters were identified in a high-throughput platform in Streptomyces [17]. In addition, to realize the dynamic regulation of gene expression, exogenous inducers are required to control the timing of gene expression, but the inducers are usually expensive and unstable, making them unsuitable for industrial application. Alternatively, self-regulated promoters are able to dynamically modulate gene expression without the addition of any triggers, and have thus been widely used in balancing cell growth and product synthesis [30]. Above all, although promoters suitable for target gene expression could be successfully screened from the designed promoter libraries, the methods to screen strong constitutive promoters in different microorganisms are still limited, and most have been obtained by random mutation of reported promoters or by identification from omics data [31][32][33][34][35][36]. Hence, more effort should be made to design a novel method to identify strong constitutive promoters in different bacteria.
In this study, an approach based on random genomic interruption and FACS technology was designed to isolate strong constitutive promoters in E. coli and S. marcescens. Green fluorescent protein (GFP) was used as the reporter gene, and FACS technology was applied to sort high-intensity promoters. After identification, characterization, and modification of the strong constitutive promoters screened from these libraries, the final regulatory elements were applied to enhance the metabolic flux in the biosynthesis of L-valine and prodigiosin to confirm the strength of the identified promoters. The main workflow involved: (1) Randomly interrupting the genomes of E. coli, B. subtilis, and C. glutamicum to construct three different promoter libraries, respectively; (2) Characterizing the identified strong constitutive promoter; (3) Constructing a gradient strength promoter library based on the modification of promoter P BS76 ; (4) Applying the regulatory element to enhance the expression of the ilvCDE genes for L-valine overproduction in E. coli W3110; (5) Screening for strong constitutive promoters applicable to S. marcescens and utilizing the promoter to optimize the metabolic flux of prodigiosin synthesis in S. marcescens JNB5-1. Therefore, the method shown in our study can also be a useful strategy to identify strong constitutive promoters in other bacteria.
Strains, Plasmids, and Cultivation
Strains and plasmids used in this study are listed in Table S1. E. coli JM109 was used as the host for vector construction. E. coli MG1655, B. subtilis 168, and C. glutamicum ATCC13032 were employed as the targets for random genomic interruption manipulation. S. marcescens JNB5-1 was used to screen strong constitutive promoters for application in prodigiosin synthesis. Among them, E. coli and B. subtilis were grown on lysogeny broth (LB) medium (yeast extract 5 g/L, tryptone 10 g/L, NaCl 10 g/L) at 37 • C for 8-12 h. C. glutamicum was cultured in brain heart infusion (BHI) medium at 30 • C for 16-24 h. S. marcescens was cultivated overnight in LB medium at 30 • C. When necessary, 100 µg/mL ampicillin was added into the medium for selections.
Design and Construction of Random Library to Isolate Strong Constitutive Promoters Based on Random Genomic Interruption
To construct random libraries for isolating strong constitutive promoters in E. coli and S. marcescens, genome interruption was carried out. First, the genomes of E. coli MG1655, B. subtilis 168, C. glutamicum ATCC13032, and S. marcescens JNB5-1 were extracted using the TIANamp Bacteria DNA Kit. Two approaches (ultrasound and enzyme digestion) were used to randomly interrupt the genomes. In this study, the genomes of E. coli MG1655 and S. marcescens JNB5-1 were interrupted by ultrasonication. For genome fragmentation using an ultrasonic crusher, the output power was set at 20%, the ultrasonic time was 1 s, and the ultrasonic interval was set at 3 s. Ultrasound was performed a total of 30 times to break the genomes into random fragments ranging from 100 to 250 bp. Then, Klenow Fragment (Takara, Beijing, China) was used to repair and blunt the 5′ protruding ends of the double-stranded DNA. Specifically, the 20 µL reaction system was as follows: template DNA 25 ng, 10× Klenow Fragment buffer 2.5 µL, dNTP 2.5 µL, Klenow Fragment 1 µL, and ddH2O to 20 µL. After reaction at 37 °C for 3 h, the solution was then incubated at 65 °C for 5 min. Finally, the treated mixed fragments were inserted upstream of the reporter gene gfp and cloned into the vectors pUC19 and pUCP18 to form the plasmid libraries pUC19-P EC -gfp and pUCP18-P SM -gfp, respectively.
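As a conceptual aside (the fragmentation itself is a physical shearing step, not a computation), the outcome of this step, a pool of random 100-250 bp fragments drawn from across the genome, can be sketched in silico as follows; the genome string and fragment count are hypothetical placeholders.

```python
# In silico stand-in for random genomic fragmentation by sonication:
# sample random 100-250 bp windows from a (toy) genome sequence.
import random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(50_000))   # placeholder genome

def random_fragments(seq, n, min_len=100, max_len=250):
    """Return n random fragments of 100-250 bp sampled from seq."""
    frags = []
    for _ in range(n):
        length = random.randint(min_len, max_len)
        start = random.randint(0, len(seq) - length)
        frags.append(seq[start:start + length])
    return frags

library_inserts = random_fragments(genome, n=10_000)
mean_len = sum(map(len, library_inserts)) / len(library_inserts)
print(f"{len(library_inserts)} candidate promoter fragments, mean length {mean_len:.0f} bp")
```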
The genomes of B. subtilis 168 and C. glutamicum ATCC13032 were interrupted by enzyme digestion. Mbo I (Takara, Beijing, China) was used to achieve the interruption of the B. subtilis 168 and C. glutamicum ATCC13032 genomes. The restriction site recognized by Mbo I is GATC, so linear DNA fragments of different sizes are obtained wherever this site occurs in the genome. In the 20 µL reaction solution, 1 µL of Mbo I and 2 µL of 10× K buffer were added, less than 1 µg of DNA fragments was added, and the rest was supplemented with ddH2O. Then, the reaction solution was incubated at 37 °C for 20 min. The linearized fragments obtained by enzyme digestion needed to be dephosphorylated before being cloned into the vector. Thus, alkaline phosphatase (Takara, Beijing, China) was used to remove the phosphate groups at the 5′ ends of the DNA fragments. DNA fragments (1-10 pmol), 5 µL of alkaline phosphatase, and 10× SAP buffer were added to the reaction system, which was made up to 50 µL with sterile water. First, the solution was reacted at 37 °C for 15-30 min, and then it was incubated at 65 °C for 15 min to inactivate the enzyme. At this point, 2.5 µL of 3 M NaCl and precooled absolute ethanol were added, and the solution was placed at −20 °C for 1 h. The solution was centrifuged and washed with 200 µL of precooled 70% ethanol, and finally, the DNA was dried and dissolved in TE buffer. Similarly, random DNA fragments of the B. subtilis 168 and C. glutamicum ATCC13032 genomes were inserted into the plasmid pUC19-gfp to obtain the plasmid libraries pUC19-P BS -gfp and pUC19-P CG -gfp, respectively. Then, plasmids pUC19-P EC -gfp, pUC19-P BS -gfp, and pUC19-P CG -gfp were transformed into E. coli JM109 to isolate strong constitutive promoters in E. coli, as the expression of fluorescent GFP is driven by the random genomic regions. pUCP18-P SM -gfp was transformed into S. marcescens JNB5-1 to isolate strong constitutive promoters in S. marcescens. The transformants incubated on the LB plates were collected and washed with PBS buffer three times, the OD 600 was then diluted to about 0.3, and the cells were subjected to FACS sorting. The recombinant E. coli and S. marcescens strains with higher fluorescence were plated after FACS sorting for overnight culture. All the colonies grown on the plates were further inoculated into 96-deep-well plates and cultured at 37 °C. GFP fluorescence and optical density (OD 600 ) were detected using the microplate reader after 10 h of incubation.
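As a conceptual aside, the size distribution produced by Mbo I digestion follows directly from the spacing of GATC sites; a minimal sketch of that cutting logic (toy sequence, cut position simplified to the start of the recognition site) is shown below.

```python
# Sketch of Mbo I digestion in silico: split a sequence wherever GATC occurs,
# so fragment sizes reflect the spacing of GATC sites. Toy sequence only.
def mbo_i_digest(seq, site="GATC"):
    """Return the fragments obtained by cutting immediately before every GATC site."""
    cut_positions = [0]
    idx = seq.find(site)
    while idx != -1:
        cut_positions.append(idx)
        idx = seq.find(site, idx + 1)
    cut_positions.append(len(seq))
    return [seq[a:b] for a, b in zip(cut_positions, cut_positions[1:]) if b > a]

toy = "ATGCGATCTTACGGATCAAGTTGATCCCGTA"
fragments = mbo_i_digest(toy)
print([len(f) for f in fragments])   # fragment sizes between successive GATC sites
```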
Construction of the Promoter P BS76 Library
The truncated promoter P BS76 was applied to construct four random mutation libraries. Thus, P BS76 was ligated with the reporter gene gfp and cloned into vector pUC19 to construct the basic plasmid for subsequent library construction. Four mutation libraries were constructed using the basic plasmid pUC19-P BS76 -gfp as a template through PCR with primers containing degenerate bases. The PCR ligation product was then transformed into the competent cells of E. coli JM109. All the colonies on the LB plates were collected and subjected to plasmid extraction, and then the plasmid libraries were electroporated into E. coli MG1655. The strains incubated on the plates were collected and washed with PBS buffer three times, and then the OD 600 was diluted to about 0.3, followed by FACS sorting. Colonies with higher fluorescence intensity than that of the control were selected and spread on the LB plates containing ampicillin (100 µg/mL). The recombinant E. coli strains containing the fluorescence plasmids grown on the LB agar plates were collected and cultured in 96-deep-well plates at 37 • C for the second-round screening with enhanced GFP fluorescence. After overnight incubation, GFP fluorescence and optical density (OD 600 ) were measured using the microplate reader.
Construction of Recombinant Strains
The vector used for promoter screening and validation was pUC19, and the reporter gene gfp was used to characterize promoter strength. The construction procedure of the gfp expression plasmid pUC19-P BBa_J23118 -gfp serves as an example. In brief, the promoter P BBa_J23118 and the gene gfp were amplified with the corresponding primers (Table S2) and fused by overlap extension PCR. Then, the fused fragment was inserted into the linearized vector pUC19 using the ClonExpress One Step Cloning Kit (Vazyme Biotech Co., Nanjing, China), resulting in plasmid pUC19-P BBa_J23118 -gfp. The recombinant plasmid was then transformed into E. coli JM109 and selected on an LB agar plate containing ampicillin (100 µg/mL). Similarly, the other GFP expression vectors under the control of different promoters were constructed using this approach. As for the gene expression plasmids constructed based on the expression vectors pTrc99a and pUCP18 for the overexpression of the genes ilvCDE and pigFN, respectively, the genes ilvCDE and pigFN were amplified and purified, and the purified PCR products were then ligated with the linearized plasmids pTrc99a and pUCP18 through Gibson assembly (ClonExpress® II One Step Cloning Kit, Vazyme Bio Inc., Nanjing, China). The recombinant plasmids were transformed into strains E. coli W3110 and S. marcescens JNB5-1 by electroporation and selected on LB agar plates containing ampicillin (100 µg/mL).
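As an illustrative aside (this is not the chemistry of the cloning kit itself), both overlap extension PCR and Gibson assembly rely on terminal sequence homology between the fragments being joined; a minimal sketch of that joining logic, using toy sequences, is shown below.

```python
# Toy illustration of joining two fragments that share terminal homology,
# as exploited by overlap extension PCR and Gibson assembly. Sequences are made up.
def fuse_by_overlap(upstream, downstream, min_overlap=15):
    """Join two fragments at the longest shared terminal overlap, if one exists."""
    for k in range(min(len(upstream), len(downstream)), min_overlap - 1, -1):
        if upstream[-k:] == downstream[:k]:
            return upstream + downstream[k:]
    raise ValueError("no terminal overlap of sufficient length")

promoter_frag = "ATGCGTACGTTGACAGCTAGCTCAGTCC"       # toy promoter fragment
gfp_frag      = "GCTAGCTCAGTCCATGAGTAAAGGAGAAGAACT"  # shared overlap + toy gfp start
print(fuse_by_overlap(promoter_frag, gfp_frag, min_overlap=10))
```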
Fluorescence Assays
The recombinant E. coli strains that contained the fluorescence plasmids or the plasmid libraries were inoculated in LB medium and cultured at 37 °C and 200 rpm for 10 h in a plate shaker. The cultures were diluted with 0.1 M phosphate-buffered saline (PBS, pH 7.4) to guarantee an OD 600 value of about 0.5 before fluorescence detection. Then, 200 µL of each culture was transferred into 96-deep-well black plates, and the whole plate was measured using the microplate reader (Cytation 3, BioTek Instruments, Inc., Winooski, VT, USA) at an excitation wavelength of 490 nm and an emission wavelength of 530 nm at 25 °C. Fluorescence intensity was determined by dividing the fluorescence by the OD 600 . The background fluorescence value of strains without fluorescence expression (FP bg ) and the background OD 600 of the medium (OD bg ) were corrected, and Equation (1) was used to calculate the relative fluorescence intensities.
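Equation (1) itself did not survive extraction of the text; assuming it corresponds to the background-corrected ratio described in the preceding sentences, the calculation can be sketched as follows, with hypothetical plate-reader readings.

```python
# Sketch of the relative fluorescence calculation described above (assumed form of
# Equation (1)): background-corrected fluorescence divided by background-corrected OD600.
def relative_fluorescence(fp, od600, fp_bg, od_bg):
    """(FP - FP_bg) / (OD600 - OD_bg)."""
    return (fp - fp_bg) / (od600 - od_bg)

fp, od600 = 25400.0, 0.52     # hypothetical sample well readings
fp_bg, od_bg = 310.0, 0.04    # non-fluorescent strain / blank medium
print(f"relative fluorescence = {relative_fluorescence(fp, od600, fp_bg, od_bg):.1f} a.u.")
```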
Cultivation in Shake Flasks
The engineered strains modified for L-valine production were first activated on LB agar plates, then inoculated into 30 mL seed medium in the 500 mL shake flask and cultivated for 10 h at 37 • C and 230 rpm. The seed medium contained 20 g/L glucose, 10 g/L yeast extract, 5 g/L peptone, 1.2 g/L KH 2 PO 4 , 1.2 g/L MgSO 4 ·7H 2 O, 10 mg/L FeSO 4 ·7H 2 O, 10 mg/L MnSO 4 ·H 2 O, 1.3 mg/L VB 1 , and 0.3 mg/L V H . Then, 5 mL seed culture was transferred into 25 mL fermentation medium in a 500 mL shake flask and cultivated for 24 h at 37 • C and 230 rpm. The fermentation medium contained 20 g/L glucose, 2 g/L yeast extract, 4 g/L peptone, 1 g/L NaCl, 2 g/L KH 2 PO 4 , 0.7 g/L MgSO 4 ·7H 2 O, 0.1 g/L FeSO 4 ·7H 2 O, 0.1 g/L MnSO 4 ·H 2 O, 0.8 mg/L VB 1 , and 0.2 mg/L V H . During the fermentation process, pH was maintained at 7.0 by adding NH 4 OH with phenol red as an indicator and adding 60% glucose solution when the glucose concentration was low.
For prodigiosin production, the engineered strains were grown overnight at 30 • C and 200 rpm in a rotary shaker, then transferred into the 250 mL shake flask containing 30 mL fermentation medium for 72 h cultivation at 30 • C. The fermentation medium contained 20 g/L sucrose, 15 g/L beef extract, 10 g/L CaCl 2 , 7.5 g/L L-proline, 0.2 g/L MgSO 4 ·7H 2 O, and 6 mg/L FeSO 4 ·7H 2 O.
Analytical Methods
Cell growth was measured with a UV spectrophotometer at OD 600 . L-valine production was determined by HPLC using acetonitrile/water (10:90 v/v) as the mobile phase; the flow rate was 1 mL/min, and the detection wavelength was set at 278 nm. Glucose concentration was detected using a glucose biosensor (SBA-40C, Shandong, China). Prodigiosin was measured as previously reported, with minor modifications [37]. The fermentation broth was mixed with acidic ethanol (pH 3.0) and left for 8 h until the pigment was fully dissolved. Then, the samples were centrifuged at 10,000 rpm for 10 min. Finally, the absorbance of the supernatant was determined at 535 nm (A 535 ). Fluorescence characterization by flow cytometry was performed with a BD FACS AriaII cell sorter (BD FACSDiscover™ S8, Becton, Dickinson and Company, Franklin Lakes, NJ, USA). The collected cells were diluted into cold PBS before detection, loaded, and run at a rate of 0.5 µL/s. A total of 10,000 events were captured for subsequent sorting.
All the experiments were carried out in triplicate, and all the data are expressed as the mean ± standard deviation. One-way ANOVA was used to compare statistical differences between the groups of experimental data.
Screening of Potential Strong Constitutive Promoters Based on a Novel Approach of Random Genomic Interruption and FACS Technology
In microorganisms, protein expression level is affected by many factors, such as transcription initiation, transcription termination, translation regulation, and RNA stability, among which transcription initiation plays an essential role. Promoters, as the switch that controls transcription initiation, play a key role in regulating the output level of genes. Therefore, promoters are often modified to achieve the appropriate expression levels of the target gene. Currently, screening promoters of appropriate strength by constructing various promoter libraries is the most common approach. However, the methods for isolating strong constitutive promoters in microorganisms are still limited, and are mainly based on promoter modification and omics screening. In this study, a novel approach to screen strong constitutive promoters based on random genomic disruption and FACS technology was designed (Figure 1A). Genomes of E. coli MG1655, B. subtilis 168, and C. glutamicum ATCC13032 were extracted, randomly interrupted, and inserted into the upstream of the reporter gene gfp for further construction of promoter libraries. Specifically, the genomes of B. subtilis 168 and C. glutamicum ATCC13032 were first digested using the enzyme Mbo I, and the reaction solution was incubated at 37 °C for 20 min. As shown in Figure 1B, the broken genomic fragments were mainly concentrated around 100-250 bp, as shown in lanes 2 and 3. The genome of E. coli MG1655 was fragmented by ultrasound, and the power was set to 20%. After 30 s of fragmentation, the band shown in lane 1 was obtained. The resulting fragments from the three sources were inserted into vector pUC19, upstream of the reporter gene gfp, respectively. At this point, as the expression of fluorescent GFP was driven by the random genomic regions, three random promoter libraries were generated (Library1-EC, Library2-BS, Library3-CG). Then, all transformants grown on the LB plates were collected and subjected to FACS sorting. E. coli/pUC19-P BBa_J23118 -gfp was used as the control strain to screen mutants with significantly higher fluorescence intensity. In Library1-EC, the proportion of strains with a significantly higher fluorescence value accounted for 0.88% of the total library, while in Library2-BS and Library3-CG, the proportions were 1.52% and 0.1% of the total library, respectively (Figure 1C). The sorted strains were then coated on LB plates and inoculated into 96-deep-well plates for rescreening. Obviously, the fluorescence values of about 90% of the mutants were higher than that of the control strain. Moreover, in each library, there was one mutant with a significantly higher fluorescence intensity than that of the other mutants (Figure 1D). As shown in Figure 1E, on the LB plates, the color of the three mutants with the stronger fluorescence intensities was significantly greener than that of the control. Fluorescence microscopy also showed that the fluorescence intensity of the mutant cells was significantly higher than that of the control strain. Finally, the fluorescence values of the three mutants that contained the gfp gene under the control of promoters from different sources were compared. The results showed that the fluorescence intensity of the mutant containing the gfp gene under the control of the promoter from B. subtilis was significantly higher than that of the other two mutants, and about 19.7 times higher than that of the control strain (Figure 1F).
Thus, the screened random genomic region that contained sequences of the strong constitutive promoter from B. subtilis 168 was selected for further study. In conclusion, the construction of the promoter library based on random genomic interruption and FACS technology could be effectively applied in the screening of strong constitutive promoters in E. coli.
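As an illustrative aside, the gating step that yields the sorted fractions mentioned above amounts to counting library events whose fluorescence exceeds a threshold set from the control strain; the distributions below are synthetic stand-ins, not the measured cytometry data.

```python
# Synthetic illustration of FACS gating: fraction of library events brighter than a
# gate placed above nearly all control-strain events. Distributions are made up.
import numpy as np

rng = np.random.default_rng(1)
control = rng.lognormal(mean=6.0, sigma=0.5, size=10_000)   # control fluorescence
library = rng.lognormal(mean=6.3, sigma=0.9, size=10_000)   # promoter library

gate = np.percentile(control, 99.9)        # gate above ~99.9% of control events
sorted_fraction = np.mean(library > gate)
print(f"events above gate: {sorted_fraction:.2%}")
```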
Characterization of the Identified Strong Constitutive Promoter from B. subtilis 168
In order to further analyze and identify the random sequence obtained from B. subtilis 168, the plasmid of the mutant was sequenced. The results showed that the random sequence upstream of the reporter gene gfp was the 136 bp sequence upstream of the gene encoding the transcription factor AcoR, which regulates acetoin synthesis in B. subtilis. Subsequently, the 136 bp sequence was shortened from both ends to identify the sequences of the promoter region of the acoR gene. As shown in Figure 2A, the sequence was truncated from the 5′ end to the 3′ end in 20 bp steps until the sequence was truncated to 16 bp, and a total of six truncated DNA fragments (P1-P6) were obtained. Similarly, the sequence was truncated from the 3′ end to the 5′ end by 20 bp each time, and six other DNA fragments (P7-P12) were also obtained. The truncated DNA fragments were ligated upstream of the reporter gene gfp by PCR amplification, cloned into the vector pUC19, and then transformed into E. coli JM109 for verification. The data showed that as the sequence was truncated from the 5′ end to the 3′ end to 116 bp, 96 bp, and 76 bp, the fluorescence intensity was basically consistent with that of the original DNA fragment. However, when the sequence was truncated to 56 bp, 36 bp, and 16 bp, the expression level of GFP decreased significantly. In addition, when the sequence was truncated from the 3′ end to the 5′ end, only DNA fragment P7 had the same fluorescence intensity as the original DNA fragment; the other truncated sequences showed a significantly lower fluorescence value than the original sequence (Figure 2B). Subsequently, the fluorescence intensity of DNA fragments P1-P6 was further analyzed by FACS to verify the above experimental results. The fluorescence intensity of P1, P2, and P3 was consistent with that of the original DNA fragment, while the fluorescence value of P4, P5, and P6 was significantly weaker (Figure 2C). These results indicated that the sequence located between positions −82 and −7 relative to the TIS of the acoR gene contained the identified strong constitutive promoter (here named P BS76 ). These data are consistent with the results of Ali et al. [38]. The identified P BS76 was used for subsequent study. Above all, sequence truncation is an effective way to identify the promoter region of a screened sequence.
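The truncation scheme itself is straightforward to formalize; as a sketch (using a placeholder 136 bp string rather than the actual acoR upstream sequence), trimming 20 bp at a time from either end yields exactly the fragment lengths quoted above.

```python
# Sketch of the truncation series: trim a 136 bp region in 20 bp steps from the
# 5' end (P1-P6) or the 3' end (P7-P12). The sequence is a random placeholder.
import random

random.seed(2)
region_136bp = "".join(random.choice("ACGT") for _ in range(136))

five_prime_series  = [region_136bp[20 * i:] for i in range(1, 7)]   # trimmed from 5' end
three_prime_series = [region_136bp[:-20 * i] for i in range(1, 7)]  # trimmed from 3' end

print([len(s) for s in five_prime_series])    # [116, 96, 76, 56, 36, 16]
print([len(s) for s in three_prime_series])   # [116, 96, 76, 56, 36, 16]
```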
Construction of a Gradient Promoter Library via Modifying Promoter P BS76
To achieve high yields of target metabolites, appropriate promoters should be screened and used for fine-tuning gene expression and reprogramming the metabolic flux of target products. Theoretically, the strength of the promoter is determined by the adjacent sequences from the −35 region to the transcription start site. Hence, to better regulate the expression of key enzymes and enhance the metabolic flux of target product synthesis, the screened strong constitutive promoter P BS76 was further modified by the construction of promoter libraries. The workflow of the construction and screening of the promoter random mutagenesis library is shown in Figure 3A. As shown in Figure 3B, four promoter P BS76 mutation libraries were designed and constructed with the sequences of the −35 region (Library 1), the spacer sequences between the −35 and −10 regions (Library 2), the sequences of the −10 region (Library 3), and the sequences between the −10 region and the transcription start site (Library 4) modified by the introduction of random base "N", respectively. Firstly, random base "N" was introduced into the four regions of the P BS76 promoter in plasmid pUC19-P BS76 -gfp by inverse PCR, and the resulting plasmids containing the mutation libraries were transferred into E. coli JM109, respectively. Then, FACS sorting technology was applied to screen the mutant strains with a higher fluorescence intensity than that of the control. The results showed that the portions with a higher fluorescence intensity in Library 1, Library 2, Library 3, and Library 4 accounted for 0.9%, 1.6%, 1.1%, and 2.1% of the total library, respectively. As shown in Figure 3C, the fluorescence intensity of Library 1 and Library 3 mainly fluctuated in the range of 0-10⁴ (a.u.), and the activity of most promoters was concentrated between 0 and 10³ (a.u.). The proportion of mutant strains with a high fluorescence value in Library 1 and Library 3 was relatively small. The fluorescence range of mutants in Library 2 and Library 4 was mainly 10³-10⁵ (a.u.), which was higher than that in Library 1 and Library 3. Thus, mutants with a significantly higher fluorescence intensity than that of the control could be screened more easily from Library 2 and Library 4. The mutants with a higher fluorescence intensity sorted from Library 2 and Library 4 were coated on LB plates for overnight cultivation. The grown colonies were randomly selected and inoculated into 96-deep-well plates, and fluorescence intensity was measured after 8 h of incubation. The results indicated that a mutant (P BS76-variant ) with a significantly higher fluorescence intensity than that of the others was screened using promoter P BS76 as the control (Figure 3D). The intensity of P BS76-variant was 59.56 times higher than that of promoter P BBa_J23118 (Figure 3E). In total, four promoter libraries were constructed to obtain a series of promoters with a gradient of intensity, among which the strongest one was 59.56 times stronger than promoter P BBa_J23118 and 3.03 times stronger than promoter P BS76 . In the following experiments, to further confirm the strength of the promoters identified in our study, six promoters, P BBa_J23118 , P BS76 , P BS76-50 , P BS76-75 , P BS76-85 , and P BS76-100 , with different strengths were selected for the regulation of the expression levels of key genes and metabolic flux in L-valine synthesis.
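As an illustrative aside, the effect of the degenerate "N" primers can be mimicked by replacing a chosen promoter window with random bases; the promoter string and mutated window below are placeholders, not the real P BS76 sequence.

```python
# Sketch of building a randomized-region promoter library: replace one window
# (e.g., the spacer between the -35 and -10 boxes) with random bases, mimicking
# degenerate "N" primers. The promoter string is a placeholder, not P_BS76.
import random

random.seed(3)
promoter = "TTGACAGCTAGCTCAGTCCTAGGTATAATGCTAGC"   # toy promoter-like sequence
window = slice(6, 23)                             # 17 bp spacer between -35 and -10

def randomize_window(seq, win, n_variants):
    variants = []
    for _ in range(n_variants):
        fill = "".join(random.choice("ACGT") for _ in range(win.stop - win.start))
        variants.append(seq[:win.start] + fill + seq[win.stop:])
    return variants

library = randomize_window(promoter, window, n_variants=1000)
print(len(set(library)), "unique spacer variants generated")
```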
Application of the Identified Regulatory Elements for L-Valine Overproduction in E. coli W3110
To obtain higher titers of target products, proper metabolic engineering should be carried out for the target metabolite. Here, to further validate the promoter activities identified in our study, L-valine, a branched-chain amino acid widely used in nutrient supplements and the pharmaceutical industry, served as the target metabolite [39,40]. Based on the synthetic pathway of L-valine, three key enzymes involved in L-valine biosynthesis (acetohydroxy acid isomeroreductase, encoded by ilvC; branched-chain amino acid aminotransferase, encoded by ilvE; and dihydroxy acid dehydratase, encoded by ilvD) were manipulated for better production of L-valine (Figure 4A). An L-valine-producing strain (Val01) obtained by ARTP mutagenesis of E. coli W3110 was used as the starting strain [41]. The L-valine production of Val01 reached 2.90 g/L after 24 h of fermentation in shake flasks. Promoters P BBa_J23118 , P BS76 , P BS76-50 , P BS76-75 , P BS76-85 , and P BS76-100 with different strengths were applied to enhance the metabolic flux of L-valine synthesis. Specifically, the key ilvCDE genes were cloned into the vector pTrc99a under the control of promoters P BBa_J23118 , P BS76 , P BS76-50 , P BS76-75 , P BS76-85 , and P BS76-100 , respectively, and then transformed into E. coli JM109. The transformants were selected and verified by colony PCR and sequencing. The verified plasmids were then transformed into strain Val01, resulting in the recombinant strains Val02, Val03, Val04, Val05, Val06, and Val07 (Figure 4B). Subsequently, the recombinant strains were subjected to shake-flask fermentation for 24 h to compare their ability to produce L-valine. Based on our results, the cell biomass of strains Val02, Val03, Val04, Val05, and Val06 was not different from that of strain Val01 after 24 h of shake-flask fermentation, while the OD 600 of strain Val07 only reached 28. The promoter intensity was positively correlated with the L-valine yield of these seven strains. Among them, the L-valine yield of Val07 reached 7.92 g/L, while 3.98, 5.12, 6.55, 7.03, and 7.53 g/L of L-valine were attained by strains Val02, Val03, Val04, Val05, and Val06, respectively (Figure 4C). Strain Val07 showed the highest L-valine yield, 173.1% higher than that of strain Val01. Promoter P BS76-100 was so strong that it disturbed intracellular metabolism and burdened cell growth, so cell growth was severely inhibited; however, due to the enhanced metabolic flux of L-valine synthesis driven by promoter P BS76-100 , the L-valine yield was still improved. Taken together, these results further confirmed that promoters P BS76 , P BS76-50 , P BS76-75 , P BS76-85 , and P BS76-100 are strong promoters compared to promoter P BBa_J23118 and could be used to improve the production of target products.
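The relative improvements quoted above can be checked directly from the reported titers; a quick arithmetic check (values copied from the text):

```python
# Arithmetic check of the reported L-valine titer gains relative to strain Val01.
titers = {"Val01": 2.90, "Val02": 3.98, "Val03": 5.12, "Val04": 6.55,
          "Val05": 7.03, "Val06": 7.53, "Val07": 7.92}   # g/L, shake-flask values

baseline = titers["Val01"]
for strain, titer in titers.items():
    gain = (titer - baseline) / baseline * 100
    print(f"{strain}: {titer:.2f} g/L ({gain:+.1f}% vs Val01)")
# Val07: (7.92 - 2.90) / 2.90 = +173.1%, matching the value quoted above.
```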
Screening of Strong Constitutive Promoter Applicable for S. marcescens by Random Genomic Disruption and FACS Technology
Prodigiosin is a red pigment with great economic value and widely promising applications due to its immunosuppressive and anticancer activities [37]. S. marcescens is the main producing strain for prodigiosin synthesis. Research on this model organism for prodigiosin synthesis has mainly focused on transcriptional regulation and the functions of two-component systems and quorum sensing systems [42][43][44]. However, due to the unclear genetic background and the lack of gene-editing approaches for S. marcescens, the improvement of prodigiosin production has been severely limited. Moreover, the shortage of an effective synthetic biology toolbox to precisely regulate gene expression has seriously impeded its development. Here, to further confirm the general applicability of the method conducted in our study to screen strong constitutive promoters in different bacteria, a gradient strength promoter library was constructed via random genomic disruption for screening strong constitutive promoters in S. marcescens, as described above (Figure 5A). First, the genome of S. marcescens JNB5-1 was extracted and interrupted by ultrasonication. The output power of the ultrasonic crusher was set at 20%, the ultrasonic time was 1 s, and the ultrasonic interval was set at 3 s. A total of 30 repeats of ultrasound were performed to break the genome into random fragments ranging from 100 to 300 bp (Figure 5B). The 5′ ends of the fragments were repaired and blunted with Klenow Fragment. Then, the random fragments were inserted upstream of the reporter gene gfp and cloned into the vector pUCP18, resulting in the plasmid library pUCP18-P SM -gfp. The constructed plasmid library was then transformed into S. marcescens JNB5-1, and the transformants grown on the LB plates were collected for further FACS sorting. S. marcescens JNB5-1 containing plasmid pUCP18-P pig -gfp, which expresses the gene gfp under the control of the native promoter P pig of the prodigiosin synthesis gene cluster, was used as the control. Mutants with a higher GFP fluorescence than that of the control were collected for further analysis. The results showed that the fluorescence intensity varied over a 1000-fold range from 10² to 10⁵ (a.u.), and the activities of most promoters were mainly concentrated between 10³ and 10⁴ (a.u.) (Figure 5C). Moreover, the sorted strains were then coated on LB plates, and the colonies were inoculated into 96-deep-well plates for the second round of screening. Among them, the GFP expression intensity of 95% of the mutants was higher than that of the control (Figure 5D). The fluorescence intensity of one of the mutants, containing the gfp gene under the control of the promoter P SM (the promoter of the operon rpsQ-rpsJ), was significantly higher than that of the others. Therefore, the random sequence of this mutant was selected for further research.
(Figure 4. Optimization of the metabolic flux of the L-valine pathway using the screened gradient promoters in E. coli W3110. (A) L-valine synthesis pathway in E. coli W3110: ilvIH, acetolactate synthase; ilvC, acetohydroxy acid isomeroreductase; ilvD, dihydroxy acid dehydratase; ilvE, branched-chain amino acid aminotransferase; the dotted line indicates feedback inhibition. (B) Enhancement of ilvCDE expression using the gradient strength promoters; strains Val02 to Val07 were constructed by overexpressing the ilvCDE genes with promoters P BBa_J23118 , P BS76 , P BS76-50 , P BS76-75 , P BS76-85 , and P BS76-100 in strain Val01. (C) L-valine titers of strains Val01 to Val07 in shake-flask fermentation; error bars indicate standard deviations; one-way ANOVA, **** p < 0.001, *** p < 0.005.)
Nowadays, prodigiosin attracts much attention due to its multiple biological activities. In S. marcescens, the synthesis of prodigiosin is achieved by the pig gene cluster. 2-Methyl-3-n-amyl-pyrrole (MAP) and 4-methoxy-2,2′-bipyrrole-5-carbaldehyde (MBC) are the two major precursors and are condensed into prodigiosin via the PEP-utilizing enzyme. Importantly, the O-methyl transferase (encoded by pigF) and oxidoreductase (encoded by pigN) that catalyze the conversion of 4-hydroxy-2,2′-bipyrrole-5-carbaldehyde (HBC) into MBC were confirmed in our previous study to play the most significant role in prodigiosin synthesis among the pig gene cluster genes (Figure 5E). In addition, substituting the promoter of a gene cluster with strong constitutive promoters has been proven to be a feasible strategy for improving metabolite production. Here, to study the application of the strong constitutive promoter P SM screened from S. marcescens in the optimization of the prodigiosin metabolic pathway, the expression of the pigFN genes was regulated by promoters of different strengths. The native promoter P pig of the pig gene cluster, an endogenous promoter P RplJ from S. marcescens found using RNA-Seq [37], and the novel constitutive promoter P SM screened based on random genomic interruption were selected to mediate the expression of pigFN. The expression vectors of the pigFN genes mediated by promoters P pig , P RplJ , and P SM were then transferred into S. marcescens JNB5-1, resulting in strains SM01, SM02, and SM03. These strains were cultivated in fermentation medium for 72 h. As shown in Figure 5F, after 72 h of fermentation, the prodigiosin production of the wild-type JNB5-1 was 6.32 g/L. The yield of prodigiosin in strains SM01 and SM02 reached 7.50 g/L and 7.91 g/L, respectively. The maximum prodigiosin titer of strain SM03 reached 8.52 g/L, which was 34.81% higher than that produced by the wild-type strain. Meanwhile, the activity of promoter P SM was stronger than that of promoter P RplJ in the S. marcescens expression system. Altogether, these results suggest that the method based on random genomic disruption and FACS technology could also be a valuable approach to screen strong constitutive promoters in S. marcescens.
Discussion
The transcription of DNA to RNA is the first step in protein expression, and promoters, RNA polymerase, and sigma factors are typically involved in this process. Among them, promoters directly affect the transcription rate of the downstream gene, and hence they are common targets used for engineering the biosynthesis of natural and non-natural products in model or non-model strains [45]. To improve L-proline production in C. glutamicum, tailored promoter libraries were used to fine-tune the expression levels of the target gdh, pyc, and proB genes, and finally, L-proline production was significantly increased [46]. To balance the metabolic flux of the naringenin biosynthesis pathway in E. coli, constitutive promoters with a gradient of intensity were randomly selected to control the expression levels of the pathway genes of naringenin. After screening more than 1200 candidates via the ultraviolet spectrophotometry-fluorescence spectrophotometry high-throughput method, the metabolic flux of the naringenin synthetic pathway was appropriately balanced, and finally, naringenin production was significantly improved [47]. Song et al. screened promoters from B. subtilis in various conditions, and a promoter stronger than P43, PtrnQ, was isolated and used to elevate the final production of both cytoplasmic BgaB and secreted protein α-amylase [36]. Promoter engineering has also been used to improve the production of 3-aminopropionic acid [48] and myo-inositol in E. coli [49], pullulanase [31], methyl parathion hydrolase, and chlorothalonil hydrolytic dehalogenase in B. subtilis [50], L-lysine in C. glutamicum [51], medium-chain-length polyhydroxyalkanoates in Pseudomonas mendocina [32], co-enzyme Q 10 in Rhodobacter sphaeroides [33], polycyclic tetramate macrolactam in Streptomyces albus [34], co-enzyme A in Corynebacterium ammoniagenes [35], H 2 in Thermococcus onnurineus [52], and so on. However, although promoters suitable for target gene expression and product production can be successfully screened from random mutation of reported promoters or identified from omics data, the current methods to screen strong constitutive promoters in different microorganisms are limited. In this study, a novel approach based on random genomic interruption was designed for the construction of promoter libraries, and strong constitutive promoters suitable for E. coli and S. marcescens were obtained by using FACS technology to sort high-intensity promoter sequences (Figures 1 and 5). Further, to confirm the strength of the identified promoters, the promoters obtained in our study were applied to increase the expression of the ilvCDE genes in E. coli W3110 to enhance L-valine production, and to increase expression of the pigFN genes in S. marcescens JNB5-1 to improve prodigiosin synthesis. As shown in Figures 4 and 5, the L-valine production in strain Val07 and prodigiosin synthesis in strain SM03 were significantly increased compared to E. coli Val01 and S. marcescens JNB5-1. As far as we know, our study is the first to identify strong constitutive promoters through the method of random genomic interruption and FACS technology.
L-valine, an essential amino acid, is one of the three branched-chain amino acids (BCAAs), and studies on L-valine have shown that it is widely used in many industrial fields, such as pharmaceuticals, cosmetics, food, and feed [53]. Moreover, with the growing world market for L-valine, microbial cell factories for the efficient synthesis of L-valine have become of increasing interest. C. glutamicum is the most commonly used industrial microorganism for producing L-valine. Due to the advantages of a clear genetic background and easy genetic manipulation, E. coli has become another important host for the synthesis of L-valine in recent years [54,55]. However, possibly due to the more complex regulatory mechanism for L-valine biosynthesis in E. coli, there are fewer reports of L-valine-producing E. coli strains than there are of L-valine-producing C. glutamicum strains. Park et al. developed an engineered L-valine-producing strain through systematic metabolic engineering of E. coli W, and finally, a high-level production of L-valine (60.7 g/L) with a yield of 0.22 g/g glucose was obtained [54]. Considering that cofactor balance is another important factor required for L-valine biosynthesis, Savrasova et al. constructed an E. coli MG1655-based L-valine-producing strain by replacing the native NADPH-dependent aminotransferase with a heterologous NADH-dependent leucine dehydrogenase [56]. By systematic metabolic engineering, Hao et al. constructed a chromosomally engineered L-valine-producing strain, and the final strain could produce 84 g/L L-valine with a yield and productivity of 0.41 g/g glucose and 2.33 g/L/h, respectively, in a 5 L bioreactor through a two-stage fed-batch fermentation [57]. In our previous study, we constructed a high-level L-valine-producing E. coli strain (92 g/L) using multimodular engineering [41]. In this study, to further confirm the strength of the promoters identified in our study, promoters P BBa_J23118 , P BS76 (identified by the method based on random genomic interruption and FACS technology), and four promoters, P BS76-50 (weakest), P BS76-75 (middle), P BS76-85 (strong), and P BS76-100 (strongest), from our isolated P BS76 promoter library were applied to increase the expression levels of ilvCDE genes, and strains Val02, Val03, Val04, Val05, Val06, and Val07 were obtained, respectively. The results of shake flask fermentation showed that the L-valine production in strains Val02, Val03, Val04, Val05, Val06, and Val07 was significantly enhanced compared to the original strain Val01 (Figure 4). These results further support that the method based on random genomic interruption and FACS technology is a very effective method to screen strong constitutive promoters in different bacteria.
Prodigiosin (PG), a red linear tripyrrole pigment, is the most prominent member of the prodiginine family and is produced by S. marcescens, Serratia rubidaea, Streptomyces coelicolor, Streptomyces griseoviridis, Serratia nematodiphila, and so on [58]. Among them, S. marcescens is the most widely studied prodigiosin-producing strain. Studies on prodigiosin have shown that it has important antimicrobial, anticancer, and immunosuppressive properties, and hence prodigiosin has received widespread attention in the last few decades [59]. Because it is economical and more environmentally friendly, the biotechnological production of prodigiosin by S. marcescens has recently attracted a great deal of interest. However, the high-efficiency production of prodigiosin by S. marcescens for commercial purposes is still a challenge. The prodigiosin biosynthesis pathway in S. marcescens is encoded by the pigABCDEFGHIJKLMN genes, a total of 14 genes, which are transcribed as a polycistronic mRNA from a promoter upstream of the pigA gene. Among them, the genes pigB, pigD, and pigE are involved in the biosynthesis of 2-methyl-3-n-amyl-pyrrole (MAP), while the genes pigA, pigF, pigG, pigH, pigI, pigJ, pigK, pigL, pigM, and pigN use L-proline as the substrate to synthesize 4-methoxy-2,2′-bipyrrole-5-carbaldehyde (MBC). Finally, the terminal condensing enzyme PigC, encoded by the pigC gene, condenses MAP and MBC to prodigiosin [58]. In our previous study, via transcriptomics and proteomics, we identified that the genes pigN and pigF probably play the most important role in prodigiosin biosynthesis in the pig gene cluster [60]. In addition, through the introduction of a polynucleotide fragment into the pigN 3′ untranslated region and disulfide bonds into the O-methyl transferase (PigF), the prodigiosin production in strain S. marcescens JNB5-1 was significantly enhanced [61]. In this study, to verify the universality of the method of screening strong constitutive promoters based on random genomic interruption and FACS technology, a strong constitutive promoter P SM was first isolated based on the approach shown in our study. Then, to improve the production of prodigiosin in strain JNB5-1, the genes pigN and pigF were overexpressed under the control of the promoters P pig , P RplJ , and P SM , and strains SM01, SM02, and SM03 were obtained, respectively. The results showed that the prodigiosin titer of these strains was positively correlated with the strength of the promoters. Among them, the prodigiosin titer of strain SM03 increased to 8.52 g/L, which was 1.35-fold that of the original strain JNB5-1 (6.32 g/L; Figure 5). This result further suggests that random genomic interruption and FACS technology is an effective method to identify strong constitutive promoters in a host of interest.
Conclusions
This work describes a novel method to identify strong constitutive promoters in E. coli and S. marcescens based on random genomic interruption and FACS technology, and the identified promoters were further used in fine-tuning gene expression and reprogramming metabolic flux for higher-level production of L-valine and prodigiosin in E. coli and S. marcescens, respectively. The method shown in our study can also be a useful strategy for isolating other effective genetic regulatory elements, such as ribosome binding sites, terminators, and N-terminal coding sequences (NCS), in different microorganisms.
Supplementary Materials:
The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/biology12010071/s1, Table S1: Strains and plasmids used in this study; Table S2: Primers used in this study.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
v3-fos-license
|
2017-11-08T22:01:18.833Z
|
2013-10-18T00:00:00.000
|
22843938
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.nature.com/articles/cdd2013142.pdf",
"pdf_hash": "d3e6d8c8ec2d74476a47cf3d0766d89ef86d5983",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:656",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "4bdd5eb73c18327c0bdb37133ee0bfcd073c084d",
"year": 2013
}
|
pes2o/s2orc
|
Perturbation of Hoxb5 signaling in vagal and trunk neural crest cells causes apoptosis and neurocristopathies in mice
Neural crest cells (NCCs) migrate from different regions along the anterior–posterior axis of the neural tube (NT) to form different structures. Defective NCC development causes congenital neurocristopathies affecting multiple NCC-derived tissues in human. Perturbed Hoxb5 signaling in vagal NCC causes enteric nervous system (ENS) defects. This study aims to further investigate if perturbed Hoxb5 signaling in trunk NCC contributes to defects of other NCC-derived tissues besides the ENS. We perturbed Hoxb5 signaling in NCC from the entire NT, and investigated its impact in the development of tissues derived from these cells in mice. Perturbation of Hoxb5 signaling in these NCC resulted in Sox9 downregulation, NCC apoptosis, hypoplastic sympathetic and dorsal root ganglia, hypopigmentation and ENS defects. Mutant mice with NCC-specific Sox9 deletion also displayed some of these phenotypes. In vitro and in vivo assays indicated that the Sox9 promoter was bound and trans-activated by Hoxb5. In ovo studies further revealed that Sox9 alleviated apoptosis induced by perturbed Hoxb5 signaling, and Hoxb5 induced ectopic Sox9 expression in chick NT. This study demonstrates that Hoxb5 regulates Sox9 expression in NCC and disruption of this signaling causes Sox9 downregulation, NCC apoptosis and multiple NCC-developmental defects. Phenotypes such as ENS deficiency, hypopigmentation and some of the neurological defects are reported in patients with Hirschsprung disease (HSCR). Whether dysregulation of Hoxb5 signaling and early depletion of NCC contribute to ENS defect and other neurocristopathies in HSCR patients deserves further investigation.
Neural crest cells (NCCs) are generated from the dorsal neural tube (NT), migrate to the periphery and give rise to diverse cell lineages in many tissues according to the anterior-posterior (A-P) level of the NT from which they originate. [1][2][3] Cranial NCCs differentiate into teeth, bone, cartilage and connective tissue in the head; vagal NCCs contribute to the enteric nervous system (ENS) and cardiac outflow tracts; trunk NCCs give rise to sympathetic ganglia and norepinephrine-producing cells in the adrenal gland; sacral NCC contributes to the ENS of distal gut. [4][5][6][7][8][9] Skin pigment cells are derived from NCC from all A-P levels. Defective NCC development causes neurocristopathies.
Hox genes encode transcription factors with a DNA-binding homeodomain that recognizes a specific sequence and thereby mediates transcriptional regulation of target genes in numerous developmental processes. In mammals, 39 Hox genes are separated into four clusters, Hoxa, Hoxb, Hoxc and Hoxd, on four different chromosomes. Hox genes are subdivided into 13 paralogous groups, and these Hox genes are expressed in overlapping domains with spatially staggered anterior expression boundaries along the NT. [10][11][12][13] The different combinations of Hox genes expressed in these domains are mirrored in the NCC derived from those domains. [14][15][16] So far, our knowledge regarding the role of Hox genes during NCC development comes mostly from loss-of-function experiments; however, the overlapping expression and extensive functional redundancy among Hox genes preclude detailed investigations of the developmental functions of individual Hox genes in this way.
Among the Hox genes expressed in the developing gut, Hoxb5 expression pattern is intimately associated with vagal NCC and ENS development. [17][18][19][20][21][22][23][24] Deletion of Hoxb5 caused a rostral shift of the shoulder girdle in mice, implying a patterning role of Hoxb5 in specifying the position of limbs on the body axis. 25 However, no abnormal development was observed in NCC or other tissues that express Hoxb5, probably due to functional redundancy among paralogous Hox members with overlapping expression domains.
To circumvent the problem of functional redundancy and investigate the function of Hoxb5 in NCC, we generated transgenic mice that express a dominant-negative chimeric protein, engrailed-Hoxb5 (enb5), upon Cre-induction.
This enb5 repressor competes with Hoxb5 for binding to target genes, thereby disrupting the developmental pathways that require Hoxb5. Using these mice, we previously showed that blocking Hoxb5 signaling in vagal NCC causes reduced Ret expression, retarded NCC migration and ENS defects, 22 indicating that Hoxb5 regulates vagal NCC and ENS development.
Hoxb5 is also expressed in trunk NCC. To investigate whether Hoxb5 regulates trunk NCC development, in this study we crossed enb5 mice with Wnt1-Cre mice to induce enb5 expression in NCC from the full length of the NT, and looked for NCC abnormalities. We showed that Hoxb5 regulates the expression of Sox9 in trunk NCC, and that perturbation of Hoxb5 signaling in NCC causes downregulation of Sox9, apoptosis of NCC and neurocristopathies in mice.
Results
Hoxb5 perturbation in NCC causes developmental defects. To investigate the function of Hoxb5 in NCC derived from the entire A-P axis of the NT, we crossed enb5 mice with Wnt1-Cre mice. 26 This induced the expression of enb5 protein, perturbing Hoxb5 function in NCC derived from all along the NT. We then studied the effects on the development of NCC and NCC-derived structures.
To trace NCC in Wnt1-Cre/enb5 mice, we used Rosa26R (R26R) Cre activity-reporter mice in our crosses. At E9.0, beta-galactosidase (X-gal)-positive cells were located in the midbrain, frontal-nasal region, first and second branchial arches in both Wnt1-Cre/R26R and Wnt1-Cre/R26R/enb5 (Figure 1a). By E9.5, X-gal-positive cells were also detected in the hindbrain, third and fourth branchial arches, circumpharyngeal ridge, frontal-nasal process and the entire NT. As the embryos developed further, more X-gal-positive cells were found populating the midbrain, hindbrain, NT and NCC-derived structures including the branchial arches, dorsal root ganglion (DRG), sympathetic ganglion and the frontal-nasal region at E10.5. X-gal-positive NCC populated similar regions in Wnt1-Cre/R26R and Wnt1-Cre/R26R/enb5 embryos; however, the intensity of X-gal staining in the NT, DRG and sympathetic ganglion was generally weaker in the Wnt1-Cre/R26R/enb5 embryos, indicating that fewer X-gal-positive cells were present.
At the trunk level (just distal to forelimb) of E12.5 Wnt1-Cre/ R26R embryos, X-gal-positive cells were localized in the dorsal NT, DRG, in the myenteric region of the small intestine and the segmental nerves of the tail (Figure 1b). At the same A-P level of the E12.5 Wnt1-Cre/R26R/enb5 embryos, only very faint X-gal staining was detected in the dorsal NT and there was no X-gal staining in the DRG (Figure 1b). Furthermore, the DRG in Wnt1-Cre/R26R/enb5 was smaller than that in Wnt1-Cre/R26R. Reduced X-gal staining was also observed in the myenteric region of the small intestine in Wnt1-Cre/R26R/enb5.
NCC-derived melanoblasts were present between the skin epidermis and the dermis in Wnt1-Cre/R26R at E12.5 (Figure 2b; arrowheads), but not in Wnt1-Cre/R26R/enb5.

Defects in cranial, sympathetic and DRG in Wnt1-Cre/enb5. Glia and neuronal differentiation in the cranial, sympathetic and DRG in enb5 and Wnt1-Cre/enb5 embryos were analyzed using Sox10 (NCC and glia marker) and Islet1/2 (neuron marker). Both transcripts were detected in the trigeminal ganglion, facio-acoustic ganglion, glossopharyngeal ganglion, vagus ganglion, otic vesicle, sympathetic ganglion and DRG in both enb5 and Wnt1-Cre/enb5 at E9.5 and E10.5 (Figures 1c and d). The temporal and spatial expression patterns of Sox10 and Islet1/2 in the cranial, sympathetic and DRG were comparable, but the intensity was lower in Wnt1-Cre/enb5, in which Hoxb5 signaling was perturbed in all NCC.
We determined the total volume of the DRG from 300 consecutive transverse sections from the trunk level (between the forelimb bud and the hindlimb bud) of E11.5 enb5 and Wnt1-Cre/enb5 embryos. The average volume of DRG in Wnt1-Cre/enb5 was only one-quarter of that in enb5 (Figure 1f). Immunostaining showed Islet1/2 protein expression in neurons in the DRG and the lateral motor column of enb5 and Wnt1-Cre/enb5 mice (Figure 1e); however, the DRG was markedly smaller in Wnt1-Cre/enb5. In both genotypes, the percentages of neurons in DRG were comparable (Wnt1-Cre/enb5: 68.5 ± 5.2%; enb5: 73.3 ± 2.9% (mean ± S.D.); Figure 1f), suggesting that the shrinkage of DRG was not solely attributable to loss of neurons.
Defects in skin melanoblasts in Wnt1-Cre/enb5. Trunk NCCs migrate dorsal-laterally between the surface ectoderm and the dorsal surface of the somites, then differentiate into melanoblasts expressing the melanin-producing enzyme dopachrome tautomerase (Dct). These melanoblasts are precursors of pigment cells in the skin and the iris of the eye. In enb5 embryos, at E9.5, a cluster of Dct-positive melanoblasts appeared at the cervical region posterior to the otic vesicle (Figure 2a). By E10.5, this cervical cluster dispersed and migrated ventrally, populating the branchial arches and anterior trunk (Figure 2a). Strong Dct expression was also detected in the developing eye and the telencephalon. In Wnt1-Cre/enb5 embryos, there was a drastic reduction in the number of Dct-expressing melanoblasts in the cervical region at E9.5 (Figure 2a). By E10.5, fewer melanoblasts had populated the branchial arches, and there were no detectable melanoblasts in the anterior trunk region (Figure 2a). In contrast, Dct expression in the developing eye and in the telencephalon in Wnt1-Cre/enb5 embryos was comparable to that in enb5 embryos. This absence of melanoblasts during early development led to skin hypopigmentation in Wnt1-Cre/enb5 mice at postnatal day 3, when fur started to form (Figure 2c). By postnatal day 21, patches of white fur were obvious (Figure 2c). In adults, melanocytes were detected in the hair follicles in Wnt1-Cre/R26R and in the hair follicles of the skin with normal pigmentation in Wnt1-Cre/enb5, but not in the hair follicles in the white patches (Figure 2d).
Defects in ENS in Wnt1-Cre/enb5. We investigated the colonization of the intestine by NCC using X-gal. By E12.5, X-gal-positive neuroblasts colonized the small intestine of both Wnt1-Cre/R26R and Wnt1-Cre/R26R/enb5 embryos ( Figure 1b). However, the intensity in the small intestine was lower in Wnt1-Cre/R26R/enb5, which indicated that there were fewer neuroblasts. By E16.5, neuroblasts had completed populating the entire gut, reaching the end of the colon in Wnt1-Cre/R26R embryos (Figure 2e). In contrast, in five out of six Wnt1-Cre/R26R/enb5 embryos of the same litter, the neuroblasts had colonized only as far as the cecum, or up to half of the length or anterior two-thirds of the colon (Figure 2e). This defective colonization of the intestine by neuroblasts resulted in ENS anomaly in postnatal mice. At P28, a phenotype of megacolon, which resembled HSCR in humans, was observed in Wnt1-Cre/enb5 (Figure 2f). Immunostaining for Tuj1 (a pan-neuronal marker) detected no enteric neurons in the distal part of the affected colon in Wnt1-Cre/enb5, whereas the proximal part was normally populated (Figure 2f).
In summary, disruption of Hoxb5 signaling by enb5 in NCC from the entire A-P axis of the NT resulted in multiple NCC-developmental defects including hypoplasia of cranial, sympathetic and DRG, hypopigmentation and ENS defects in mice.
Hoxb5 function is required for the survival of vagal and trunk NCC. The developmental defects observed in multiple NCC-derived lineages in Wnt1-Cre/enb5 mice prompted us to examine whether the NCC population was depleted in these embryos. Using TUNEL assay, we detected apoptotic cells in the dorsal NT and the regions between the cardinal vein and dorsal aorta at the levels of both the vagal and trunk regions in Wnt1-Cre/enb5 at E9.5 and E10.5 (Figures 3b and d), whereas apoptotic cells were rarely detected in enb5 embryos (Figures 3a and c). The majority of the apoptotic cells were immunostained for p75 NTR (Figure 3e), a marker of NCC. Thus, blocking the activity of Hoxb5 in NCC along the whole A-P axis of the NT leads to apoptosis of NCC. Apoptosis of NCC was also reported in mice nullified for Sox9. 27

Deletion of Sox9 in Wnt1-Cre expressing cells resulted in vagal and trunk NCC apoptosis and developmental defects. Sox9 is an NCC marker, and deletion of Sox9 specifically in NCC by the Wnt1-Cre system resulted in cell apoptosis in the ventral and dorsal NT, along the NCC migration pathways, in E9.5 Wnt1-Cre/Sox9 flox/flox embryos (Figure 4a). Apoptotic cells were not detected in Sox9 flox/flox control littermates (Figure 4a).
In Wnt1-Cre/R26R embryos, X-gal-positive NCCs were found populating the trigeminal and facio-acoustic ganglion (arrows; Figure 4b), NT and DRG at E9.5; and sympathetic ganglia and DRG at E10.5 ( Figure 4b). However, the blue staining in the corresponding structures at both stages of Wnt1-Cre/R26R/Sox9 flox/flox embryos was relatively less extensive and less intense ( Figure 4b). We determined the total volume of the DRG from the trunk level (between the forelimb bud and the hindlimb bud) of E11.5 Wnt1-Cre/ Sox9 flox/flox and Sox9 flox/flox embryos, and found that DRG volume in Wnt1-Cre/Sox9 flox/flox was only 60% of that in Sox9 flox/flox (Supplementary Figure 1A). The percentages of neurons in DRG in both genotypes were comparable (Supplementary Figure 1B), implying that, as in the conditional Hoxb5 mutants, the shrinkage of DRG in the absence of Sox9 activity was not solely attributable to loss of neurons.
We also investigated the migration of enteric neuroblasts and skin melanoblasts in Wnt1-Cre/Sox9 flox/flox using X-gal staining. At E14.5, neuroblasts had colonized the entire gut and reached the distal colon in Wnt1-Cre/R26R (Figure 4c). In contrast, the neuroblasts had populated only up to the cecum and aganglionosis was observed in the distal colon of Wnt1-Cre/R26R/Sox9 flox/flox embryos (Figure 4c).
The expression of Dct in the skin epidermis of Wnt1-Cre/ Sox9 flox/flox and Sox9 flox/flox embryos at E10.5 was comparable in terms of both pattern and intensity ( Figure 4d). By E12.5, melanoblasts were found between the epidermis and dermis of both Wnt1-Cre/R26R/Sox9 flox/flox and Wnt1-Cre/ R26R at the trunk level (compare Figures 4e and 2b), indicating that melanoblast development was largely unaffected by the Sox9 deletion in NCC.
Perturbation of Hoxb5 function disrupts Sox9 expression in NCC. Blocking Sox9 activity in NCC along the A-P axis of the NT leads to NCC apoptosis and developmental defects of DRG, sympathetic ganglion and ENS, which resemble most of the developmental defects observed when Hoxb5 activity is blocked in the same regions. This suggested that Hoxb5 and Sox9 might function in the same developmental pathway.
To investigate the relationship between Hoxb5 and Sox9 in our mutants, we assessed Sox9 expression. In E9.5 enb5, Sox9-expressing NCCs were localized in the dorsal NT, along the dorsal-lateral migration pathway under the surface ectoderm and ventral-medial migration pathway to the cardinal vein ( Figure 5a). In contrast, in Wnt1-Cre/enb5 mice, no Sox9 immuno-positive NCCs were seen in the dorsal NT or under the surface ectoderm; and only a few Sox9-expressing NCCs were found in the vicinity of the cardinal vein ( Figure 5b). These results suggested that Sox9 might act downstream of Hoxb5 in the same signaling pathway.
HOXB5 binds to SOX9 promoter. In silico analysis predicted three potential HOX-binding sites in the SOX9 promoter: R1A (−483 to −466), R1B (−468 to −450) and R2 (−35 to −16) (Figure 6a). We evaluated the binding of HOXB5 to the SOX9 promoter using electro-mobility shift assay (EMSA) with a glutathione S-transferase (GST)-HOXB5 fusion protein and polymerase chain reaction (PCR) products spanning these binding sequences (Figure 6a). A retarded band was observed only in the lanes containing the GST-HOXB5 and the PCR fragments (probe) of R1 and R2. No retarded band was observed if GST-HOXB5 was replaced with GST protein, or if excess unlabeled probe was included in the reaction mixtures. More importantly, the specific binding between GST-HOXB5 and the probe was completely abolished if the respective HOX-binding sequence was deleted from the probe.

Figure 1 Reduction of LacZ-expressing cells and shrinkage of DRG in Wnt1-Cre/enb5 mice. (a) LacZ-expressing cells (stained blue) in Wnt1-Cre/R26R and Wnt1-Cre/R26R/enb5 embryos (E9.0-E10.5) were localized by whole-mount X-gal staining for β-galactosidase. (b) X-gal-stained E12.5 embryos of Wnt1-Cre/R26R and Wnt1-Cre/R26R/enb5 embryos were sectioned to reveal the spatial distribution of LacZ-expressing cells. Boxed regions were magnified and shown on the right. Arrowheads indicated the LacZ-expressing cells at the dorsal NT. DRG was demarcated with broken line. Expression of Sox10 (purple; c) and Islet1/2 (purple; d) in enb5 and Wnt1-Cre/enb5 embryos was analyzed by whole-mount in situ hybridization. (e) Expression of Islet1/2 (green) on sections of E11.5 enb5 and Wnt1-Cre/enb5 embryos was analyzed by immunofluorescence. Dorsal root ganglion was demarcated with broken line. (f) Average total volume of DRG (mean±S.D.) from the trunk level of E11.5 enb5 and Wnt1-Cre/enb5 embryos was determined and compared. Volume of DRGs in enb5 embryo was taken arbitrarily as 100%. Average % of Islet1/2 immuno-positive neurons versus total number of cells (mean ± S.D.) in DRG of E11.5 enb5 and Wnt1-Cre/enb5 embryos was determined and compared. Number of embryos analyzed was indicated by 'n'. ba, branchial arch; cr, circumpharyngeal ridge; drg, dorsal root ganglion; fa, facio-acoustic ganglion; fgt, foregut; gg, glossopharyngeal ganglion; li, liver; lmc, lateral motor column; lu, lung; mb, midbrain; nt, neural tube; ov, otic vesicle; sg, sympathetic ganglion; si, small intestine; sn, segmental nerve; t, tail; tg, trigeminal ganglion; vg, vagus ganglion
We extended this analysis by looking at the binding of HOXB5 to the SOX9 promoter in a neuroblastoma cell line (SK-N-SH) transfected with HOXB5, using chromatin immunoprecipitation (ChIP) followed by quantitative PCR. Binding of HOXB5 to R1 and R2 was significantly enriched compared with the nonspecific IgG control (Figure 6b). Furthermore, we confirmed the binding of Hoxb5 to the Sox9 promoter in vivo in the central nervous system (CNS) of E9.5 wild-type embryos (Figure 6c).
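The ChIP-qPCR enrichment over the IgG control can be quantified in several ways, and the exact scheme is given in the cited methods rather than here. The Python sketch below therefore only illustrates one common percent-of-input approach, with hypothetical Ct values and an assumed 1% input fraction:

```python
# Illustrative ChIP-qPCR fold-enrichment calculation (an assumed scheme, not the
# authors' exact pipeline): normalize each pull-down to its input, then compare
# the specific antibody pull-down with the nonspecific IgG control.

import math

def percent_of_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-of-input for one amplicon (e.g., R1 or R2), assuming ~100% primer efficiency."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)  # scale input Ct to 100%
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

def fold_enrichment(ct_ab, ct_igg, ct_input, input_fraction=0.01):
    """Fold enrichment of the specific antibody over the IgG control."""
    return (percent_of_input(ct_ab, ct_input, input_fraction)
            / percent_of_input(ct_igg, ct_input, input_fraction))

# Hypothetical Ct values for one amplicon (illustration only).
print(fold_enrichment(ct_ab=27.5, ct_igg=30.1, ct_input=24.0))  # ~6-fold enrichment
```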
HOXB5 trans-activation from SOX9 promoter is suppressed by enb5. To study whether HOXB5 protein transactivates the SOX9 promoter, we used a luciferase reporter construct consisting of 1100 bp (−1034 to +67) of the SOX9 gene 5′ of the luciferase gene in SK-N-SH cells. HOXB5 increased the transcription by 5.8±0.1 (mean ± S.D.) fold compared with the pRC/CMV control (Figure 6d). Deletion of R1A and R1B from the SOX9 promoter reduced the induction (to 4.98±0.10 and 3.61±0.20 (mean±S.D.) fold, respectively). Conversely, deletion of R2 increased the induction to 13.52 ± 1.52 (mean±S.D.) fold. Our data revealed that binding of HOXB5 to these elements of the SOX9 promoter increases overall transcription from the SOX9 promoter, which could be attributed to the differential binding of HOXB5 onto R1 and R2.
Transfection of the dominant-negative form of Hoxb5, enb5, prominently suppressed the trans-activation of SOX9 by HOXB5 (Figure 6e). To test the mechanism, we generated a vector expressing a Flag-tagged enb5 (Flag-enb5). As expected, Flag-enb5 suppressed HOXB5-transactivated SOX9 promoter activity by 50% (Figure 6e). ChIP and quantitative PCR analysis revealed a modest but significant enrichment (1.98±0.08 and 1.25±0.06 (mean±S.E.M.) fold for R1 and R2, respectively) of binding of Flag-enb5 to the SOX9 promoter compared with a nonspecific IgG control. This suggested that Flag-enb5, thus enb5, binds to the same regions of the SOX9 promoter as HOXB5 (Figure 6f). Therefore, HOXB5 binds to the SOX9 promoter and this can be suppressed by the dominant-negative chimera, enb5.
Sox9 alleviates enb5-induced cell death and Hoxb5 induces Sox9 expression in ovo. In ovo electroporation of chick NT with enb5 resulted in obvious cell death in the transfected region but only a few apoptotic cells were detected in the contralateral non-electroporated side. Co-electroporation of Hoxb5 or Sox9 notably reduced enb5-induced cell death (Figure 7a), again suggesting that Sox9 and Hoxb5 act in the same signaling pathway. When chick NT was electroporated with Hoxb5, ectopic expression of Sox9 was induced on the transfected side at 6-12 h, confirming the trans-activation of Sox9 by Hoxb5 (Figure 7b).

Figure 2 (caption, continued) The proximal colon of Wnt1-Cre/enb5 mice was swollen but the distal colon was constricted. Sections of the proximal and distal colons of enb5 and Wnt1-Cre/enb5 mice were immunostained (green) for Tuj1 to localize enteric neurons. The levels from which the transverse sections were analyzed were indicated by 'a' and 'b'. Number of gut tissues, embryos or mice analyzed was indicated by 'n'. ce, cecum; cm, circular muscle; co, colon; der, dermis; e, eye; epi, epidermis; mu, mucosa; lm, longitudinal muscle; ov, otic vesicle; st, stomach; si, small intestine

Figure 3 Apoptosis of NCCs in Wnt1-Cre/enb5 mice. (a-d) TUNEL staining was performed to localize apoptotic cells (green) on Wnt1-Cre or enb5 embryos and Wnt1-Cre/enb5 sections of the trunk level. Boxed regions were magnified and shown as insets. (e) Section of E9.5 Wnt1-Cre/enb5 embryo was examined by TUNEL staining (green) and immunofluorescence (red) using anti-p75 NTR serum. Number of embryos analyzed was indicated by 'n'. cv, cardinal vein; da, dorsal aorta; ht, heart; pe, pharyngeal ectoderm

(Figure caption fragment) … embryos were localized by X-gal staining. Number of embryos analyzed was indicated by 'n'. ba, branchial arch; ce, cecum; co, colon; cv, cardinal vein; da, dorsal aorta; der, dermis; drg, dorsal root ganglion; e, eye; epi, epidermis; ht, heart; nt, neural tube; se, surface ectoderm; sg, sympathetic ganglion; st, stomach; si, small intestine; te, telencephalon
Discussion
In this study, we used mice expressing a dominant-negative form of Hoxb5 (enb5) in NCC from all along the NT to investigate the impact of abnormal Hoxb5 signaling in the development of NCC-derived tissues. These Wnt1-Cre/enb5 mice displayed defects in multiple NCC-derived tissues, suggesting that NCCs were affected before lineage specification. TUNEL analysis revealed that pre-migratory and migratory NCC underwent apoptosis in E9.5 Wnt1-Cre/enb5 embryos. Taken together, our previous report 22 and this study indicate that Hoxb5 regulates the survival of vagal and trunk NCC.
In mouse embryos, vagal NCCs enter the foregut at E9.5 and colonize the gut reaching distal hindgut at E14.5. During migration, NCCs proliferate, interact with gut mesenchyme, and differentiate into neurons and glia. NCC growth defects and errors in their interactions with gut mesenchyme contribute to abnormal ENS. 28,29 In Wnt1-Cre/enb5 embryos, expression of enb5 by Wnt1-Cre led to apoptosis of premigratory and early migratory NCC. Expression of enb5 in vagal NCC by another Cre mouse (b3-IIIa-Cre) resulted in NCC migration defect in the intestine. 22 In b3-IIIa-Cre/enb5 mice, enb5-expressing neuroblasts survive, proliferate and differentiate normally in the intestine, and NCC apoptosis was not observed. The lack of NCC apoptosis in b3-IIIa-Cre/enb5 embryos is attributed to a more restricted and late expression of enb5 in b3-IIIa-Cre/enb5 22 than that in Wnt1-Cre/enb5. All these indicate that the differentiation and proliferation of neuroblasts in the intestine is largely unaffected by enb5. ENS defect in Wnt1-Cre/enb5 mice is mainly attributed to NCC apoptosis, which leads to reduction of the population size of the NCC entering the gut. Whether enb5 affects NCC formation requires further investigation. Hoxb5 is also expressed in gut mesenchyme, suggesting it may influence the gut environment. 17 However, enb5 is only expressed in NCC in this and previous 22 studies, indicating that ENS anomalies in these mice are due to defective NCC growth.
Sox9 is required for NCC development, and we have demonstrated that blocking Hoxb5 downregulated Sox9 expression in these cells. HOXB5 binds to SOX9 promoter and induces SOX9 expression, and this induction of SOX9 promoter activity was substantially reduced by enb5. The human and mouse Sox9 promoters share high sequence conservation (>70%), and Hoxb5-binding sites were also identified within the mouse Sox9 promoter (Supplementary Figure 2). ChIP assay on CNS confirmed that Hoxb5 bound to the mouse Sox9 promoter. More importantly, expression of enb5 in NCC reduced Sox9 expression in pre-migratory and migratory NCC in Wnt1-Cre/enb5 embryos. Our findings revealed that apoptosis of trunk NCC and defects in multiple trunk NCC-derived tissues in Wnt1-Cre/enb5 were consequences of downregulated Sox9 expression in NCC.
Wnt1-Cre/Sox9 flox/flox mice displayed craniofacial defects resembling campomelic dysplasia in humans. 30 Cranial NCC-derived cartilages and endochondral bones of the head were completely absent in Wnt1-Cre/Sox9 flox/flox mice, but development of trunk NCC-derived structures was not addressed. Wnt1-Cre/enb5 mice also display a domed skull and short snout, as do Wnt1-Cre/Sox9 flox/flox mice, but the cranial NCC-derived skeletal elements are not absent but malformed in Wnt1-Cre/enb5 heads (unpublished data). Different fates of NCC are dictated by both intrinsic properties and external environmental factors. Trunk NCCs show overlapping Hoxb5 and Sox9 expression patterns, and they commit to apoptosis when Hoxb5 or Sox9 function is lost in the respective mutant mice. However, in cranial NCC, the expression patterns of Hoxb5 20,21,24 and Sox9 31,32 in mice are not completely overlapping, which indicates that Hoxb5 alone is neither sufficient nor essential for Sox9 expression. Moreover, the apoptotic patterns seen in the developing heads of Wnt1-Cre/Sox9 flox/flox and Wnt1-Cre/enb5 mice were not exactly the same either (unpublished data). These observations suggest that the regulation of Sox9 by Hoxb5 in trunk NCC may not be identical to that in cranial NCC. Therefore, it is not totally unexpected to observe phenotypic differences in some NCC-derived tissues between Wnt1-Cre/Sox9 flox/flox and Wnt1-Cre/enb5 mice, and Sox9 deletion or enb5 expression by Wnt1-Cre may affect different developmental aspects of cranial NCC.
Sox9 provides the competence for trunk NCC to migrate from the NT; it is also required for NCC survival. 27 However, not all the trunk NCC died in Sox9−/− mutants, as some NCC-derived neuro-glial components were still present in the DRG and peripheral nerve as shown by the residual expression of Brn3.0 (neuron marker) and Sox10 (glia marker), 27 indicating that the generation and survival of a subpopulation of trunk NCC was independent of Sox9 function.
Wnt1-Cre/enb5 and Wnt1-Cre/Sox9 flox/flox mutants display some common phenotypes including NCC apoptosis, defective ENS, and reduction of DRG and sympathetic ganglia; however, the pigmentation defect was observed only in Wnt1-Cre/enb5. The ENS defects in Wnt1-Cre/Sox9 flox/flox mutants denoted that Sox9 was also required for vagal NCC development in addition to its inductive and survival roles in trunk NCC. In mouse, trunk NCCs migrate to their target organs in three waves: the first two migrate ventral-medially to generate the neuro-glial derivatives at E9.0-E10.5, including the DRG and sympathetic ganglion; the third wave migrates dorsal-laterally along the surface ectoderm and dermomyotome to form the skin melanoblasts at E10.5-E13.5. 33 In line with the role of Sox9 in NCC survival, deletion of Sox9 in Wnt1-Cre expressing cells led to NCC apoptosis, DRG, sympathetic ganglion and ENS defects. The lack of a pigmentation defect in Wnt1-Cre/Sox9 flox/flox mice implies that the survival of trunk NCC at the later embryonic stage, which give rise to skin pigment cells, is not so dependent on Sox9.

Figure 5 Reduction of Sox9 expression in Wnt1-Cre/enb5 mice. (a and b) Sox9-expressing cells (green; arrows) were localized in E9.5 Wnt1-Cre or enb5 (a) and Wnt1-Cre/enb5 (b) embryos by immunofluorescence using anti-Sox9 serum on sections of the trunk level. Regions highlighted were magnified and shown as insets. Arrowhead denoted the notochord. Number of embryos analyzed was indicated by 'n'. cv, cardinal vein; da, dorsal aorta; ht, heart; nt, neural tube
Hirschsprung disease (HSCR, MIM142623) is a neurocristopathy of ENS deficiency, characterized by the absence of enteric ganglia from variable lengths of the gut. Approximately 30% of HSCR patients exhibit additional NCC-associated anomalies, known as syndromic HSCR. 28,[38][39][40] Mutations in SOX10, EDN3 or EDNRB have been identified to be responsible for 65-85% of the syndromic HSCR cases with pigmentation defects, whereas the genes responsible for the remaining cases remain undetermined. 41 Especially for EDN3, a reduction in spinal sensory innervation of the rectum was reported in mice with disruption of Edn3 gene expression. 42 In line with this, this study also showed that perturbation of Hoxb5 function in NCC of mice causes multiple neurological phenotypes and hypopigmentation, resembling some of the phenotypes of syndromic HSCR. Nevertheless, the causal relationship between dysregulated Hoxb5 signaling and HSCR in human patients remains to be established.

Determination of the sizes of DRG in embryos. Transverse sections (10 μm in thickness) from the trunk level (between the forelimb bud and the hindlimb bud) were mounted onto glass slides and stained with hematoxylin and eosin. Photos of 300 consecutive sections of each embryo were taken at 100× on a Nikon Eclipse E600 microscope (Tokyo, Japan) fitted with a Sony Digital Camera DSM1200F (Tokyo, Japan). DRG on either side of the NT in the photos were demarcated manually. The area of DRG on each section was determined using the software ImageJ (NIH, Bethesda, MD, USA) and an arbitrary value was given. The volume of DRG on each section was then determined by multiplying the total area of the DRG (on each side of the NT) by 10 (each section was 10 μm in thickness).
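The volume estimate just described amounts to summing area × section thickness over all sections. A minimal sketch, assuming the per-section DRG areas have already been exported from ImageJ to a CSV with an 'area_um2' column (the file names and column label are illustrative):

```python
# Sketch of the DRG volume estimate: for each embryo, the DRG area is measured
# on 300 consecutive 10-um transverse sections and the volume is approximated
# as the sum of (area x section thickness).

import csv

SECTION_THICKNESS_UM = 10.0  # each section is 10 um thick

def drg_volume(csv_path: str) -> float:
    """Sum area * thickness over all sections listed in a CSV with an 'area_um2' column."""
    total = 0.0
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            total += float(row["area_um2"]) * SECTION_THICKNESS_UM
    return total  # volume in um^3

# Hypothetical files: express the mutant DRG volume relative to the control,
# with the control taken as 100% as in Figure 1f.
control = drg_volume("enb5_drg_areas.csv")
mutant = drg_volume("wnt1cre_enb5_drg_areas.csv")
print(f"Mutant DRG volume: {100 * mutant / control:.1f}% of control")
```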
In situ hybridization. In situ hybridization was performed on whole embryos to detect the expression of Sox10 and Islet1/2 as previously described. 45

Electro-mobility shift assay. PCR products spanning the putative HOX-binding sites of the human SOX9 promoter were labeled with Biotin-11-UTP (Pierce, Thermo Fisher Scientific, Rockland, MA, USA), and EMSA was performed as previously described. 46

ChIP and quantitative PCR assay. ChIP assay on the human neuroblastoma cell line SK-N-SH (#HTB-11) (ATCC, Manassas, VA, USA) transfected with HOXB5 or Flag-enb5 was performed as previously described. 46 ChIP assay on mouse developing CNS was performed with head and NT tissues from 27 E9.5 WT mouse embryos with minor modifications. 46,47 See Supplementary Information for quantitative PCR primers for human and mouse Sox9 promoters.
Transient transfection and dual-luciferase reporter assay. A SOX9-luciferase reporter construct containing the 1100 bp (−1034 to +36) DNA fragment of the human SOX9 promoter 48 and the luciferase gene was used for the study. Mutated SOX9-luciferase constructs with the predicted HOX-binding sites deleted were generated using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, Santa Clara, CA, USA). The dual-luciferase reporter assay was performed as previously described. 46 At least two independent triplicate or quintuplicate experiments were performed, and the luciferase activity was presented as relative luciferase units normalized with the Renilla luciferase internal control.
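To make the reporter readout concrete, the sketch below shows how Renilla-normalized relative luciferase units and fold induction over the empty-vector (pRC/CMV) control could be computed; the readings and variable names are hypothetical, not the study's raw data:

```python
# Dual-luciferase normalization sketch: firefly signal (SOX9 promoter-luciferase)
# is divided by the Renilla internal control in each well, then fold induction
# is taken relative to the empty-vector (pRC/CMV) condition.

from statistics import mean

def relative_luciferase_units(firefly, renilla):
    """Per-well firefly/Renilla ratios."""
    return [f / r for f, r in zip(firefly, renilla)]

def fold_induction(test_rlu, control_rlu):
    """Mean fold induction of a test condition over the control condition."""
    return mean(test_rlu) / mean(control_rlu)

# Hypothetical triplicate readings (arbitrary luminometer units).
control = relative_luciferase_units([1200, 1150, 1250], [900, 880, 910])
hoxb5 = relative_luciferase_units([7100, 6900, 7300], [920, 900, 930])

print(f"Fold induction by HOXB5: {fold_induction(hoxb5, control):.1f}x")
```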
Statistical analyses. ANOVA was performed for all experiments to calculate the differences between groups, and a P-value <0.05 was regarded as statistically significant.
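A one-way ANOVA of this kind can be run in a few lines; the sketch below uses hypothetical replicate values and scipy, which may differ from the software actually used in the study:

```python
# One-way ANOVA sketch for comparing group means (e.g., DRG volumes across
# genotypes), with p < 0.05 taken as significant. Values are hypothetical.

from scipy.stats import f_oneway

enb5 = [100.0, 96.0, 104.0]            # control embryos (arbitrary units)
wnt1cre_enb5 = [26.0, 23.0, 28.0]      # conditional mutant embryos

f_stat, p_value = f_oneway(enb5, wnt1cre_enb5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```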
Conflict of Interest
The authors declare no conflict of interest.
|
v3-fos-license
|
2022-05-19T14:32:14.121Z
|
2022-05-18T00:00:00.000
|
248867114
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://substanceabusepolicy.biomedcentral.com/track/pdf/10.1186/s13011-022-00469-z",
"pdf_hash": "35cd2faee8337e31d16dcf734c2616d6374f5650",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:658",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"sha1": "35cd2faee8337e31d16dcf734c2616d6374f5650",
"year": 2022
}
|
pes2o/s2orc
|
Certificate-of-need laws and substance use treatment
Background Certificate-of-need (CON) laws in place in most US states require healthcare providers to prove to a state board that their proposed services are necessary in order to be allowed to open or expand. While CON laws most commonly target hospital and nursing home beds, many states require CONs for other types of healthcare providers and services. As of 2020, 23 states retain CON laws specifically for substance use treatment, requiring providers to prove their “economic necessity” before opening or expanding. In contrast to the extensive academic literature on how hospital and nursing home CON laws affect costs and access, substance use CON laws are essentially unstudied. Methods Using 2002–19 data on substance use treatment facilities from the Substance Abuse and Mental Health Services Administration’s National Survey of Substance Abuse Treatment Services, we measure the effect of CON laws on access to substance use treatment. Using fixed-effects analysis of states enacting and repealing substance use CON laws, we measure how CON laws affect the number of substance use treatment facilities and beds per capita in a state. Results We find that CON laws have no statistically significant effect on the number of facilities, beds, or clients and no significant effect on the acceptance of Medicare. However, they reduce the acceptance of private insurance by a statistically significant 6.0%. Conclusions Policy makers may wish to reconsider whether substance use CON laws are promoting their goals. Supplementary Information The online version contains supplementary material available at 10.1186/s13011-022-00469-z.
Introduction
Substance use disorders (SUDs) are chronic health conditions characterized by clinically significant impairment, including health problems, disability, engaging in unintended risky behaviors, and failure to meet major responsibilities at work, school, or home, related to the use of alcohol and/or illicit drugs [1]. These conditions impose substantial costs on both affected individuals and the nation as a whole in terms of lost lives, lost health, lost productivity, and crime. Data from the National Survey on Drug Use and Health suggest that, in 2019, 7.4% of the population (20.4 million people aged 12 or older) had a SUD in the past year, and even this may be an underestimate [2]. In the same year, there were 70,630 deaths from drug overdose [3]. Although SUDs are preventable and treatable health conditions and treatment has been shown to reduce SUDs and their associated harms, a treatment gap continues to exist [4]. Common SUDs are alcohol, cannabis, stimulants, and opioids [5]. Estimates suggest that less than 20% of those with SUDs received any treatment in the past year [6]. Furthermore, these conditions are most prevalent among low-income and uninsured individuals [7], implying that taxpayers finance a large share of the costs associated with SUDs. For example, while Medicaid covered 16% of the nonelderly adult population, Medicaid covered 38% of nonelderly adults with SUD in 2017 [7].

Certificate-of-need (CON) laws in place in most US states require healthcare providers to prove to a state board that their proposed services are necessary in order to be allowed to open or expand. Conover and Bailey [8] provide extensive background on their history, intent, and effects. While CON laws most commonly target hospital and nursing home beds, data from the American Health Planning Association show that some states require CONs for up to 28 separate types of healthcare providers and services. In the face of a national opioid epidemic, 22 states and the District of Columbia retain CON laws specifically for SUD treatment, requiring providers to prove their "economic necessity" before opening or expanding. In contrast to the extensive academic literature on how hospital and nursing home CON laws affect costs and access, substance use CON laws are essentially unstudied. Only one prior article has studied the effect of substance use CON laws, and only one outcome has been studied. Noh and Brown [9] found that CON laws led to fewer SUD treatment facilities per capita.
Given that SUDs place a great burden on both the affected individual and society with the annual economic costs of SUDs being $555 billion in 2019 dollars [4], understanding how CON laws for SUD services affect access to treatment is important. One of the commonly cited barriers to accessing services is the lack of available treatment providers or programs [6]. If CON laws promote access to care for poor and underserved communities, as one of its intended justifications, CON laws may increase access to treatment among low-income populations. However, if CON laws act as restrictions on entry into a market and reduce competition, CON laws may decrease access to treatment through reduced facilities and/or available beds.
Using data on CONs from the American Health Planning Association and the Mercatus Center together with 2002-19 data on treatment facilities from the Substance Abuse and Mental Health Services Administration's (SAMHSA's) National Survey of Substance Abuse Treatment Services (N-SSATS), we measure the effect of CON laws on access to substance use treatment. Using fixed-effects analysis of states enacting and repealing substance use CON laws, we measure how CON laws affect the number of SUD treatment facilities, beds per capita, and clients per capita in a state. In addition, we measure the effect of CON laws on the forms of payment that treatment facilities accept, with a large share of cash-only facilities serving as a proxy for excess demand.
Data
Data on CON laws come from the American Health Planning Association (AHPA) [10] and the Mercatus Center [11]. Different states wrote their CON laws to apply to different types of treatments, capital equipment, and health facilities. From 1992 to 2016, AHPA tracked which states required a CON for each of 28 different types of healthcare, ranging from acute-care hospital beds, MRIs, neonatal intensive care units, to SUD treatment facilities. The most common types of CON restrictions in 2016 were for acute-care hospital beds (27 states), long-term acute-care beds (26 states), and ambulatory surgery centers (26 states). The data sources make it clear that CON typically applies to both proposed new facilities and to expansions of existing facilities, particularly adding new beds. The sources do not make it clear which states require CON for all substance use treatment and which states exempt outpatient facilities. While AHPA has not updated its data since 2016, other organizations began tracking CONs more recently. The National Council of State Legislatures [12] and the Institute for Justice [13] provided snapshots of 2019, and the Mercatus Center provided the most recent CON census for 2020 [11].
When there were discrepancies between data sets, we examined the relevant state statutes and regulations to determine when exactly states passed or repealed substance use CON laws; see Additional file 1: Appendix Table 1 for details. During the period of our study, our data show 2 states repealing substance use CON laws (Alaska and Nevada), 1 state adding one (Kentucky), and 2 states both adding and repealing them (Connecticut and Washington DC). Figure 1 shows which states had substance use CON laws in place as of 2020.
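One way to turn the enactment and repeal dates collected above into a state-by-year CON indicator for the panel is sketched below; the specific years shown are placeholders, not the dates verified from the statutes in Appendix Table 1:

```python
# Sketch of building a state-by-year panel indicator for substance use CON laws
# from (year_effective, year_repealed) spells. Years below are placeholders only.

import pandas as pd

# {state: list of (year_effective, year_repealed or None)} -- hypothetical values
con_spells = {
    "AK": [(1990, 2005)],   # example: repealed during the sample period
    "KY": [(2010, None)],   # example: enacted during the sample period
    "OH": [(1980, None)],   # example: in force throughout
}

years = range(2002, 2020)
rows = []
for state, spells in con_spells.items():
    for year in years:
        has_con = any(start <= year and (end is None or year < end)
                      for start, end in spells)
        rows.append({"state": state, "year": year, "con": int(has_con)})

panel = pd.DataFrame(rows)
print(panel.groupby("state")["con"].sum())  # number of sample years each state had a CON law
```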
All data on SUD treatment facilities are from the 2002-19 National Survey of Substance Abuse Treatment Services (N-SSATS), a survey conducted by SAMHSA [14]. The survey is completed by specialty SUD treatment facilities, with response rates of approximately 89%. A specialty SUD treatment facility is defined by SAMHSA as a hospital (including VA), residential facility, outpatient treatment facility, or other facility with an SUD treatment program that offers the following services: outpatient, inpatient, or residential rehabilitation treatment; detoxification; opioid-use treatment; and halfway-house services. Treatment in specialty settings accounts for approximately 70% of SUD expenditures in 2015 [14].
The survey asks a wide variety of questions including how many clients the facility saw last year, how many beds it has, and what forms of payment it accepts. However, not every question was asked every year. While the survey began in 1997, we use data from 2002 and on, as such omissions were particularly common between 1997 and 2001. While facility-level responses are publicly available, we use the state-level aggregate data provided by N-SSATS, given that our goal is to determine the effects of CON laws at the state level. The N-SSATS provides raw counts of the total number of facilities, beds, clients, and facilities accepting various forms of payment in each state; we have rescaled these variables. We calculate the percentage of facilities accepting certain forms of payment (Medicaid, Medicare, private insurance) by dividing the number of facilities that accept each form of payment by the total number of facilities in a state. We calculate per capita versions of the facilities, beds, and clients variables using data on total state population from the Current Population Survey: facilities per 100,000 residents, beds per 100,000 residents, and clients per 1000 residents. For the facilities-per-state variable, we use the number of facilities eligible to be surveyed, regardless of whether they responded to the survey. Client counts in the N-SSATS represent a snapshot of the number of clients on an average day, not annual totals. State-level demographic control variables come from the Current Population Survey and were collected via the Integrated Public Use Microdata Survey [15]. We also control for two relevant state-level policy variables. Data on Prescription Drug Monitoring Programs (PDMP) come from Horwitz et al. 2021 [16]. Data on health insurance benefit mandates for drug treatment are from the Blue Cross Blue Shield Association [17]. These laws require private health insurers to cover drug treatment; see Bailey 2022 for a discussion of how they affect healthcare finance [18]. Table 1 shows the summary statistics for all variables used in our analysis.
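The variable construction just described amounts to a few ratio calculations; a pandas sketch with assumed column names (the N-SSATS state aggregates do not necessarily use these exact labels, and the values shown are made up) is:

```python
# Sketch of the outcome variables described above, from state-year aggregates:
# per-capita counts and the share of facilities accepting each form of payment.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "state": ["AK", "AK"], "year": [2002, 2003],
    "facilities": [95, 98], "beds": [700, 720], "clients": [3500, 3600],
    "facilities_medicaid": [60, 63], "facilities_medicare": [40, 41],
    "facilities_private": [70, 72], "population": [642000, 648000],
})

df["facilities_per_100k"] = 100_000 * df["facilities"] / df["population"]
df["beds_per_100k"] = 100_000 * df["beds"] / df["population"]
df["clients_per_1k"] = 1_000 * df["clients"] / df["population"]
for payer in ["medicaid", "medicare", "private"]:
    df[f"pct_{payer}"] = 100 * df[f"facilities_{payer}"] / df["facilities"]

# Natural logs of the per-capita measures, as used in the regressions.
for col in ["facilities_per_100k", "beds_per_100k", "clients_per_1k"]:
    df[f"ln_{col}"] = np.log(df[col])
print(df.head())
```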
Methods
We estimate fixed-effects regressions of the following form:

Y_st = β CON_st + X_st γ + α_s + τ_t + ε_st

where the dependent variables Y_st in the various regressions include the natural log of facilities per 100,000 residents, the natural log of beds per 100,000 residents, the natural log of clients per 1,000 residents, and the percentage of facilities accepting Medicaid, Medicare, and private insurance; CON_st indicates whether state s has a substance use CON law in year t; X_st contains the state-level demographic and policy controls described above; and α_s and τ_t are state and year fixed effects.

Results

Table 2 shows the results of the regressions described above. CON laws have no statistically significant effect on the number of facilities, beds, or clients and no significant effect on the acceptance of Medicare. The effect on Medicaid acceptance is not statistically significant at conventional levels. However, CON reduces the acceptance of private insurance by a statistically significant 6.0%. Table 3 repeats the regressions from Table 2, but includes a state-specific linear time trend in each one. Results remain similar; we still find that CON is associated with a statistically significant reduction in the acceptance of private insurance, now slightly larger at 6.25%. We still find no other effects of CON at conventional levels of statistical significance.
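A minimal sketch of estimating a specification like the one above, assuming a panel data frame such as the one constructed earlier and using state and year dummies for the fixed effects (the controls, software, and standard-error treatment used in the paper are not specified here, so this is illustrative only):

```python
# Illustrative two-way fixed-effects estimate of the CON coefficient, using
# statsmodels with state and year dummies; column names match the earlier
# sketches and are assumptions, and clustering by state is one common choice.

import statsmodels.formula.api as smf

def estimate_con_effect(df, outcome):
    """Regress an outcome on the CON indicator with state and year fixed effects."""
    formula = f"{outcome} ~ con + C(state) + C(year)"
    model = smf.ols(formula, data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# Example: effect of CON on the share of facilities accepting private insurance.
# result = estimate_con_effect(panel_df, "pct_private")
# print(result.params["con"], result.pvalues["con"])
```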
Conclusion
States adding substance use CON laws are associated with a lower likelihood of facilities accepting private insurance, with no statistically significant effect on the number of facilities, beds, or clients per capita and no significant effect on acceptance of Medicare or Medicaid. These results are somewhat puzzling, as we expected that the mechanism by which CON laws lead to fewer forms of payment being accepted is by reducing the number of facilities in the market and so reducing competition. But we found no significant effect on the number of facilities. When controlling for state-specific time trends, the estimated effect for facilities did turn negative and close to significant, while the coefficients on beds and clients were positive and the coefficients for all forms of payment were negative. These coefficients are consistent with CON leading to fewer but larger facilities, with each facility being more selective about which forms of payment they accept. But with our data and empirical strategy, only the reduction in acceptance of private insurance is statistically significant at conventional levels.
One limitation of our study is that endogeneity is pervasive in this setting: states might pass or repeal CON laws based on their expectations of the need for treatment facilities, and the demand for care varies with substance use levels, which we do not control for directly and which themselves may depend on the availability and effectiveness of treatment facilities.
A further limitation, shared by most work on substance use, is that we rely on surveys for our key variables. Correspondence with the data-set creators concerning potential mistaken responses noted that "N-SSATS is a voluntary survey that substance use facilities complete to the best of their ability based on their understanding of the questions.. .. we have noted this possible discrepancy and will consider implementing procedures to identify and impute these variables in future surveys. " Likewise, we noted discrepancies between two versions of the AHPA data on substance use CON laws; AHPA advised us that the "matrix" version we used should be trusted over the map version, which has since been removed from their website. Future work should consider the use of administrative data where possible.
While these limitations mean that our estimates lack precision, it remains the case that the evidence for CON requirements for SUD treatment facilities is either nonexistent or negative. The stated intention of CON laws is to promote access to care [8], but the only previous study on them [9] found that they reduce the number of SUD facilities, and we find that their only significant effect is to reduce the forms of payment accepted by facilities. Given that taxpayers finance a large share of the costs associated with SUDs through the funding of public insurance programs, free treatment paid through government grants and contracts, and cost shifting, policy makers may wish to reconsider whether substance use CON laws are promoting their intended goals.
|
v3-fos-license
|
2021-05-07T13:17:54.900Z
|
2021-05-07T00:00:00.000
|
233874428
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.607085/pdf",
"pdf_hash": "b0bda3b48cbe8700ed2b8119a1fa3d25c54837fc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:659",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b0bda3b48cbe8700ed2b8119a1fa3d25c54837fc",
"year": 2021
}
|
pes2o/s2orc
|
Novel Compound Heterozygous Pathogenic Variants in SUOX Cause Isolated Sulfite Oxidase Deficiency in a Chinese Han Family
Aim To explore the clinical, imaging, laboratory, and genetic characteristics of a newborn boy with isolated sulfite oxidase deficiency (ISOD) in a Chinese mainland cohort. Methods Homocysteine and uric acid in plasma, and cysteine and total homocysteine in the blood spot, were assessed in a Chinese newborn patient with progressive encephalopathy, tonic seizures, abnormal muscle tone, and feeding difficulties. Whole exome sequencing and Sanger sequencing facilitated an accurate diagnosis. Pathogenicity predictions and conservation analysis of the identified mutations were conducted with bioinformatics tools. Results Low total homocysteine was detected in the blood spot, while homocysteine and uric acid levels were normal in the plasma. S-sulfocysteine was abnormally elevated in urine. A follow-up examination revealed several progressive neuropathological findings. Intermittent convulsions and axial dystonia were also observed. However, the coordination of sucking and swallowing was slightly improved. A novel paternal nonsense variant c.475G > T (p.Glu159∗) and a novel maternal missense variant c.1201A > G (p.Lys401Glu) in SUOX were identified in this case by co-segregation verification. Conclusion This is the second report of an early-onset ISOD case in a non-consanguineous family from the Chinese mainland. Combined with the clinical characteristics and biochemical indexes, we speculate that these two novel pathogenic variants of the SUOX gene underlie the cause of the disease in this patient. Next-generation sequencing (NGS) and Sanger sequencing provided a reliable basis for clinical and prenatal diagnoses in this family and enriched the mutation spectrum of the SUOX gene.
INTRODUCTION
Isolated sulfite oxidase deficiency (ISOD, OMIM: 272300) is an autosomal recessive inherited neurometabolic disease caused by deficient activity of sulfite oxidase. It is characterized by severe neurological symptoms, including seizures that are often unresponsive to anticonvulsant medications, and rapidly progressive encephalopathy resulting in a condition resembling neonatal hypoxic ischemia. The majority of patients develop microcephaly, feeding difficulties, and dislocated ocular lenses. Tissue accumulation and high urinary excretion of sulfite, thiosulfate, and S-sulfocysteine are the main biochemical features of the disease (Tan et al., 2005). The time of onset is the neonatal or early infantile period. The incidence of ISOD has not been reported epidemiologically. To date, < 50 cases have been reported worldwide (van der Klei-van Moorsel et al., 1991; Rupar et al., 1996; Garrett et al., 1998; Johnson et al., 2002a; Lee et al., 2002; Seidahmed et al., 2005; Claerhout et al., 2018; Chen et al., 2014; Rocha et al., 2014; Zaki et al., 2016; Brumaru et al., 2017; Lee et al., 2017; Mhanni et al., 2020; Sharawat et al., 2020; Du et al., 2021). Recently, four early-onset ISOD patients have been reported in Hong Kong and Taiwan, China (Lee et al., 2002, 2017; Chen et al., 2014), one early-onset patient in the Chinese mainland (Du et al., 2021), and one late-onset ISOD pedigree including three patients has been reported in the Chinese mainland (Tian et al., 2019).
Oxidation of sulfite to sulfate is catalyzed by sulfite oxidase (SO), which constitutes the terminal reaction in the oxidative degradation of the sulfur-containing amino acids methionine and cysteine. SO is a molybdohemoprotein comprising 545 amino acids. The gene encoding SO (SUOX, OMIM 606887) maps to chromosome 12q13.2-12q13.3, and the coding sequence contains three exons and two introns (Johnson et al., 2002a). To date, only 29 SUOX variants have been reported in the HGMD database, including missense, nonsense, and deletion or insertion mutations, which have been identified in unrelated individuals with ISOD worldwide. However, only 5/29 mutations were reported in Taiwanese patients (Chen et al., 2014). In this study, we present the clinical, imaging, and biochemical characteristics of an 18-day-old newborn boy with SO deficiency in the mainland Chinese cohort and two previously unreported pathogenic variants in the SUOX gene. The patient was diagnosed based on the clinical features and genetic analysis.
Clinical Features and Biochemical Findings of the Patient
The proband was a male child born to non-consanguineous Chinese parents after a full-term gestation and a vaginal delivery. He had a normal weight (3,020 g) and head circumference (34 cm) at birth. The family history was unremarkable. All members of his family participated in this study after providing written informed consent. The Ethics Committee of the Xi'an Children's Hospital reviewed and approved our study protocol, which was in compliance with the Helsinki declaration.
The proband had projectile vomiting at the age of 16 days, accompanied by irritable crying, fever, and diarrhea. After 2 days, he was admitted for further treatment, wherein cardiovascular, abdominal, genitourinary, electrolyte, hepatic, and renal assessments were found to be normal, except for abundant leukocytes detected on routine urinalysis. The results of blood tandem mass spectrometry analysis were normal. Urine organic acidemia screening showed slightly elevated 3-hydroxypropionic acid, 4-hydroxyphenylacetic acid, and 4-hydroxyphenyl-lactic acid. The day after admission, he presented enophthalmos in the crying or quiet state. Brain magnetic resonance imaging (MRI) did not show any significant abnormality (Figure 1Aa). The visual evoked potential showed decreased binocular amplitude and prolonged latency. The physicians suspected diarrhea and urinary tract infections, which were treated before discharge from the hospital.
At the age of 33 days, he was readmitted for fever and diarrhea, which rapidly progressed to encephalopathy, including tonic seizures, unconsciousness, dyspnea, and lethargy. Physical examination did not reveal dysmorphia. His birth weight had increased by only 230 g in 1 month. Moreover, the patient was irritable and hypertonic, and his coordination of sucking and swallowing was severely impaired (Table 1). Blood and cerebrospinal fluid cultures yielded negative results. The serum ammonia and lactic acid levels were significantly elevated. Electroencephalogram (EEG) showed moderately abnormal neonatal findings: multifocal sharp waves and frequent discharges. The seizures were partially controlled by phenobarbital. Craniocerebral ultrasound showed cerebral edema. Brain MRI showed diffuse signal abnormalities in the bilateral cerebral hemispheres, basal ganglia, and thalamus (Figure 1Ab), and hence, a neurometabolic disorder was suspected. Fundus examination showed ischemic changes in the optic nerve in both eyes. Plasma amino acid and urinary organic acid profiles did not reveal any obvious abnormality. The following treatment measures were adopted for the patient: (1) anti-infective treatment with ceftazidime, with fluid volume limited to 80-100 mL/kg/d; (2) mannitol and furosemide to reduce intracranial pressure and brain edema; (3) phenobarbital to control seizures in the early stages, after which levetiracetam was applied; (4) oxygen supplementation or passive inhalation of oxygen; (5) L-carnitine and sodium bicarbonate infusion to correct acidosis; (6) fasting, followed by low-protein milk powder fed through the gastrointestinal tract. After 18 days of treatment, despite the difficulty in feeding (tiny spoon feeding was necessary) and abnormal muscle tension, the infant showed the following: normal body temperature, steady breathing, a flat bregma, seizure reduction, correction of acidosis, and decreased blood ammonia and lactate. Subsequently, the family was instructed to continue feeding the patient with low-protein milk powder with oral administration of levetiracetam and levocarnitine.
At a follow-up examination at the age of 5 months, he presented with slow weight gain and progressive microcephaly. His weight was 5,500 g, and his head circumference was 40 cm, which is 2 SD below the mean. Sucking and swallowing were significantly improved, but he also presented intermittent convulsions and axial dystonia. A repeated MRI on the same day showed polycystic encephalomalacia and atrophy with bilateral subdural effusion (Figure 1Ac). Serum ammonia and lactic acid levels returned to normal. Moreover, based on the genetic test results, we measured S-sulfocysteine in the patient's urine and total homocysteine in the patient's dried blood spots by liquid chromatography-mass spectrometry (Shimadzu, Tokyo, Japan) (Fu et al., 2013; Sass et al., 2004). We also measured uric acid in the patient's serum by the uric acid method (Maccura, Chengdu, China) and homocysteine in the patient's serum by the cycling method (Gcell, Beijing, China) (Roberts and Roberts, 2004). The data showed that total homocysteine was low in the blood spot, while homocysteine and uric acid levels were normal in the plasma. S-sulfocysteine was abnormally elevated in urine (Table 2). The patient was treated with a low-methionine, low-cysteine, low-protein diet, and rehabilitation training (dysphagia and motor) was given. At 9 months, the patient died after his condition worsened with renewed fever and convulsions.
Genetic Analysis
Genomic DNA was extracted from 3 mL of peripheral blood leucocytes using the QIAamp Blood Midi Kit (Qiagen, Valencia, CA, United States), according to the manufacturer's instructions. Whole exomes were captured (MyGenostics Inc., Beijing, China) and sequenced on an Illumina HiSeq 2000 sequencer. Alignment and variant calling were performed using an in-house bioinformatics pipeline (MyGenostics). Variants with a minor allele frequency of < 0.05 in population databases such as 1000 Genomes, ESP6500, dbSNP, ExAC, and an in-house database (MyGenostics) that were expected to affect protein coding/splicing or were present in the Human Gene Mutation Database (HGMD) were included in the analysis.
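As an illustration only, and not the MyGenostics pipeline itself, the frequency-and-consequence filter described above could be expressed roughly as in the sketch below; the input file and column names (af_1000g, consequence, hgmd_id, and so on) are hypothetical.

# Sketch of the variant-prioritisation step: keep rare variants (MAF < 0.05 in all
# population databases) that alter protein coding/splicing or appear in HGMD.
import pandas as pd

variants = pd.read_csv("annotated_variants.tsv", sep="\t")  # hypothetical annotated table

freq_cols = ["af_1000g", "af_esp6500", "af_dbsnp", "af_exac", "af_inhouse"]
is_rare = (variants[freq_cols].fillna(0) < 0.05).all(axis=1)

coding_effects = {"missense", "nonsense", "frameshift", "splice_site", "inframe_indel"}
affects_protein = variants["consequence"].isin(coding_effects)
in_hgmd = variants["hgmd_id"].notna()

candidates = variants[is_rare & (affects_protein | in_hgmd)]
print(candidates[["gene", "hgvs_c", "hgvs_p", "consequence"]])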
The identified mutation was verified among the remaining family members by Sanger sequencing. The pathogenicity of candidate variants was deduced according to the American College of Medical Genetics and Genomics (ACMG) guidelines. The effect of missense variation on the three-dimensional (3D) structure of SUOX protein was analyzed by Swiss-PDB viewer (PDB: 1MJ4).
Structure-Function Correlations of SUOX Variants
Sulfite oxidase is a homodimeric protein located in the intermembrane space of mitochondria. It plays a vital role in the metabolic pathway of sulfur amino acids, catalyzing the last step in the oxidative degradation of the sulfur-containing amino acids cysteine and methionine (Macleod et al., 1961; Feng et al., 2007). SO deficiency prevents sulfites from being oxidized to sulfates. The natural enzyme is a homodimer with a molecular mass of approximately 110 kDa. Each monomer includes three different domains: a smaller N-terminal cytochrome b5 heme-binding domain, a central domain harboring the molybdenum cofactor (Moco), and a larger C-terminal dimerization domain with crucial residues at the dimer interface (Figure 1C) (Kisker et al., 1997). The nonsense variant p.Glu159* lies at the C-terminus of the cytochrome b5 heme-binding domain, near the beginning of the molybdopterin-binding domain of SO, and might produce a truncated protein of 159 amino acids lacking the crucial molybdopterin-binding domain. The missense variant p.Lys401Glu affects the last residue of the molybdopterin-binding domain, substituting glutamic acid for lysine at position 401 of the SO protein. Another missense variant in this domain (R160Q) has been reported to reduce enzyme activity (Garrett et al., 1998). Moreover, lysine 401 is conserved across the evolution of SO (Figure 1D). SWISS-MODEL simulation shows the prominent amino acid and conformational changes in the affected polypeptide (Figure 1E). Consequently, the length of the side chain was altered after the substitution of lysine by glutamic acid.
DISCUSSION
Moco is a core component of the sulfite oxidase maturation process. Its synthesis requires several steps, and the related enzymes are encoded by the genes MOCS1, MOCS2, MOCS3, and GEPH. Hence, a defect in Moco synthesis results in combined deficiencies of the enzymes SO, xanthine dehydrogenase, and aldehyde oxidase (Atwal and Scaglia, 2016). The two forms of SO deficiency are regarded as Moco deficiency (MoCD) and ISOD, respectively. Nonetheless, these deficiencies are difficult to distinguish based on clinical manifestations. Biochemically, affected individuals with ISOD and MoCD show accumulation of sulfite, thiosulfate, and S-sulfocysteine in the tissues and body fluids (Zaki et al., 2016). However, individuals with MoCD also display elevated urinary xanthine and hypoxanthine levels (Schwarz, 2005). In addition, urinary urothione, a breakdown product of the molybdenum cofactor, is absent in MoCD but present in ISOD (Sass et al., 2010). Therefore, genetic analysis is vital for the definite diagnosis of ISOD.
Most ISOD patients present in the neonatal period, and the clinical manifestations are usually severe, including a progressive course with spasticity, intellectual deficit, microcephaly, and possible development of lens dislocation. In addition, ISOD is an incurable disease without an effective long-term therapy. Late-onset and mild forms of the illness have also been described (Barbot et al., 1995; Touati et al., 2000; Del Rizzo et al., 2013; Rocha et al., 2014; Tian et al., 2019). The neuropathological characteristics of ISOD are significant but non-specific. Neuroimaging by computed tomography (CT) or MRI shows progressive neuropathological findings, including cerebellar and cerebral atrophy, white matter changes, ventriculomegaly, and cystic leukomalacia (Claerhout et al., 2018). The clinical phenotype of our patient with ISOD was similar to that reported in the literature except for the absence of lens dislocation. Moreover, the brain MRI findings showed progressive changes; the MRI at 5 months of age showed gradual polycystic encephalomalacia and atrophy with bilateral subdural effusion compared with that in the newborn period. The natural history of ectopia lentis is difficult to describe because not all patients present lens subluxation in the first year of life (Lueder and Steiner, 1995). Our patient did not display an ectopic lens but only ischemic changes in the optic nerve in both eyes, and this phenotype may or may not appear with age, thereby necessitating regular follow-up. Biochemically, the patient presented low total homocysteine in the blood spot, while homocysteine and uric acid in plasma were normal. S-sulfocysteine was abnormally elevated in urine. These clinical manifestations and laboratory results were in accordance with the diagnosis of ISOD.
Sulfite oxidase is a molybdohemoprotein with a homodimeric structure. Each monomer of SO contains three distinct domains. Presently, the potential functionality of SO is not fully clear, but dimerization of SO is crucial for a functional enzyme. Thus, mutations around the dimerization interface of SO result in inactivation of the enzyme (Karakas and Kisker, 2005).
In the central molybdenum domain, the pterin-based Moco forms the catalytic site of SO. Moreover, Moco is a vital constituent of the SO maturation process and a primary factor for heme integration and dimerization, and it is further required for the mitochondrial localization of SO (Atwal and Scaglia, 2016). The patient carried the heterozygous variant p.Lys401Glu, which is localized at the last residue of the molybdenum domain and adjacent to the dimer interface. Hence, we speculated that p.Lys401Glu affects the interaction between the molybdenum and dimerization domains, which might disturb the structural stability of the protein. It is also speculated that the replacement of the positively charged lysine by the negatively charged glutamic acid might affect binding at the enzyme's active site. In addition, the lysine side chain might attract the divalent sulfite anion. The second novel variant, p.Glu159*, in the first domain of SO introduces a stop codon and leads to premature termination of protein translation. Therefore, this variant led to a severe form of SO deficiency in our patient. Herein, we conducted genetic analysis on the family and identified that the variants c.475G > T (p.Glu159*) and c.1201A > G (p.Lys401Glu) were derived from the father and mother, respectively. Genetic counseling is indispensable for a family with an ISOD proband because the condition is often lethal in the neonatal period. Even if the patient survives beyond the neonatal period, severe sequelae are unavoidable. In view of this situation, amniocentesis should be carried out between 15 and 23 weeks of any subsequent pregnancy in this couple for prenatal diagnosis (Özcan et al., 2017). Analysis of SUOX exon 6 is recommended to determine whether the fetus carries either of the pathogenic variants from the parents.
The correlation between genotype and phenotype in ISOD has not yet been well elucidated. Reportedly, the clinical manifestations of patients with SUOX missense mutations are milder than those of patients with null mutations (Claerhout et al., 2018), because missense mutations of the SUOX gene only result in reduced enzyme synthesis, while null mutations abolish SUOX biosynthesis (Rocha et al., 2014). Herein, we reviewed 29 reported ISOD patients with complete genotypic and phenotypic information (Table 1); 20/29 patients were early-onset and 9/29 were late-onset or had a mild presentation. Interestingly, all the late-onset patients carried missense variants that were distributed across the three structural domains of the SUOX protein. Conversely, among the 40 alleles carried by early-onset patients, nonsense variants accounted for 26/40 (65%) and missense variants for 14/40 (35%) (Figure 1C). The age of onset ranged from 14 h to 40 days in patients with early-onset or severe phenotypes, while it ranged from 1 month to 2 years in patients with late-onset or mild phenotypes, including some patients who recovered spontaneously without treatment. Therefore, the age at onset of ISOD patients may be related to the type of genetic variation. This conclusion provides a reasonable explanation for the clinical severity of our case.
To date, the treatment for neonatal ISOD is not promising. Typically, symptomatic treatment is used primarily to control seizures, but with little success. However, dietary restriction of methionine, cysteine, and taurine intake has been found to be effective for mild ISOD patients (Barbot et al., 1995; Touati et al., 2000; Del Rizzo et al., 2013; Rocha et al., 2014; Tian et al., 2019). In some circumstances, spontaneous recovery of late-onset mild ISOD has been reported (Tian et al., 2019). Belaidi et al. (2015) reported that the oxygen reactivity of mammalian SO provides a novel therapeutic route for the treatment of ISOD and MoCD. According to a recent study, oxidative stress and mitochondrial dysfunction underlie the pathophysiology of the brain damage in ISOD, providing novel viewpoints on potential therapeutic strategies for this condition (Wyse et al., 2019). Thus, we tried a low sulfur amino acid diet and oral levetiracetam, which improved the feeding difficulties; however, the epilepsy did not improve significantly.
In conclusion, ISOD is a rare neurometabolic disorder that is difficult to diagnose by clinical symptoms alone. Two novel, potentially pathogenic variants in SUOX were found in a newborn patient with ISOD from the Chinese mainland, and the clinical features were described comprehensively. Patients with suspected ISOD may therefore be diagnosed more effectively by genetic analysis, which would further expand the mutation spectrum of SUOX. In addition, genetic counseling is crucial for preventing birth defects, because severe neurodegeneration develops, especially in the early neonatal period.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Ethical Committee of the Xi'an Children's Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
JZ: conceptualization, writing manuscript, and editing. YA: data curation and sample sequencing. HJ: data curation, software, and methodology. HW: funding acquisition. FC: writing manuscript, editing, and manuscript review. YY: funding acquisition, project administration, and manuscript review. All authors contributed to the article and approved the submitted version.
|
v3-fos-license
|
2023-01-20T14:24:25.846Z
|
2015-04-29T00:00:00.000
|
256004832
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-015-1147-3",
"pdf_hash": "03909dab76880bda39c4f144d0110864d28c1ce7",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:662",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "03909dab76880bda39c4f144d0110864d28c1ce7",
"year": 2015
}
|
pes2o/s2orc
|
Uptake of influenza vaccination in pregnancy amongst Australian Aboriginal and Torres Strait Islander women: a mixed-methods pilot study
Influenza infection during pregnancy causes significant morbidity and mortality. Immunisation against influenza is recommended during pregnancy in several countries however, there are limited data on vaccine uptake, and the determinants of vaccination, in pregnant Australian Aboriginal and/or Torres Islander women. This study aimed to collect pilot data on vaccine uptake and attitudes towards, and perceptions of, maternal influenza vaccination in this population in order to inform the development of larger studies. A mixed-methods study comprised of a cross-sectional survey and yarning circles (focus groups) amongst Aboriginal and Torres Strait Islander women attending two primary health care services. The women were between 28 weeks gestation and less than 16 weeks post-birth. These data were supplemented by data collected in an ongoing national Australian study of maternal influenza vaccination. Aboriginal research officers collected community data and data from the yarning circles which were based on a narrative enquiry framework. Descriptive statistics were used to analyse quantitative data and thematic analyses were applied to qualitative data. Quantitative data were available for 53 women and seven of these women participated in the yarning circles. The proportion of women who reported receipt of an influenza vaccine during their pregnancy was 9/53. Less than half of the participants (21/53) reported they had been offered the vaccine in pregnancy. Forty-three percent reported they would get a vaccine if they became pregnant again. Qualitative data suggested perceived benefits to themselves and their infants were important factors in the decision to be vaccinated but there was insufficient information available to women to make that choice. The rates of influenza immunisation may continue to remain low for Aboriginal and/or Torres Strait Islander women during pregnancy. Access to services and recommendations by a health care worker may be factors in the lower rates. Our findings support the need for larger studies directed at monitoring and understanding the determinants of maternal influenza vaccine uptake during pregnancy in Australian Aboriginal and Torres Strait Islander women. This research will best be achieved using methods that account for the social and cultural contexts of Aboriginal and Torres Strait Islander communities in Australia.
Background
Influenza infection during pregnancy is associated with increased morbidity, higher rates of hospitalization than the general population [1,2] and high mortality rates, particularly during pandemic periods [3]. Consequently, pregnant women are deemed a priority group for influenza vaccination both in Australia and overseas [4][5][6]. Furthermore maternal influenza vaccination during pregnancy appears to confer protection against influenza infection in young infants up to six months of age, a group for whom a licensed influenza vaccine is not available [7,8].
Aboriginal and/or Torres Strait Islander Australians are at increased risk of influenza and adverse influenza infection outcomes [9,10]. Given the increased risk, annual influenza vaccination is recommended in Australia, and the vaccine is free, for all Aboriginal and/or Torres Strait Islander Peoples aged 15 years and older [11]. There are limited data available on uptake during pregnancy in this population. A large national study is underway in Australia (the FluMum Study) [12]; however, those data will be predominantly derived from women giving birth in major cities and may not reflect the experience of women in regional and rural areas.
In order to achieve high uptake, an in-depth understanding of the determinants of vaccination is required through large studies that account for the heterogeneity of the Australian Aboriginal and Torres Strait Islander population [13,14]. To inform these studies, we conducted a mixed-methods pilot study of influenza vaccine uptake, and the factors influencing vaccine uptake, during pregnancy in Aboriginal and Torres Strait Islander women attending two Aboriginal and Torres Strait Islander specific primary health care centres in south-east Queensland, Australia.
Setting
The study was conducted from January to April 2014 in Caboolture and Toowoomba, Queensland. Both towns are within two hours drive of Brisbane, the capital city of Queensland, and are home to large Aboriginal and Torres Strait Islander communities. Both towns are serviced by Aboriginal and Torres Strait Islander specific primary health care services that include maternal and child health programs and have total clinic populations exceeding 3000 clients each.
Design
The study utilised a mixed methods approach comprised of three components: 1. An analysis of data collected from women enrolled in the national FluMum study [12] at the Brisbane site. 2. A cross-sectional survey of women presenting to the two primary health care services. 3. Yarning circles (focus groups) [15] of women who had completed the cross-sectional survey outlined in step 2.
The study was approved by the Human Research Ethics Committee of the Queensland University of Technology (#1300000839).
The FluMum study
FluMum is a national cohort study of influenza vaccine uptake during pregnancy and the effectiveness of prenatal influenza vaccination in preventing laboratory confirmed influenza in infants. The protocol for this study has been published elsewhere [12]. For this current study, we extracted data on women recruited to the FluMum study for the years 2012 and 2013 who identified as Aboriginal and/or Torres Strait Islander at enrolment.
Community-based cross-sectional surveys
We conducted surveys of women presenting to the two participating health services. Whether or not it was standard care at these specific health services to routinely offer and/or provide influenza vaccination to pregnant women was not considered in the choice of recruitment sites as women may present to numerous health care providers (eg midwives, obstetricians, general practitioners etc.) at several different locations over the course of their pregnancy. Women were approached for participation by Aboriginal researchers who explained the study using a plain language statement and written informed consent was obtained. Women who agreed to participate in the survey were asked if they would like to participate in the yarning groups described below and re-consented to that component of the research.
Women were eligible for inclusion in the study if they: identified as Aboriginal and/or Torres Strait Islander; were aged 17 years or more; were between 28 weeks gestation and less than 16 weeks post birth; were willing and able to adhere to all protocol requirements; and, had sufficient verbal English to permit questionnaire completion at study entry. There were no specific exclusion criteria.
At the time of recruitment, participants completed a structured questionnaire with the assistance of the Aboriginal researcher. This sought self-reported influenza vaccination status, information relating to the barriers to and influences on influenza vaccination, self-reported maternal medical/obstetric history and some socio-demographic indicators. The questionnaire did not specifically ask whether the participating clinics offered the vaccine or whether the offer of influenza vaccination during pregnancy was standard care at those clinics. Data were entered into a password protected, electronic database housed at the Queensland Children's Medical Research Institute.
Sample size and statistics
The cross-sectional survey was undertaken as a convenience sample. No formal sample size calculations were undertaken given this was a pilot study. Our primary outcome and analysis was the proportion of Aboriginal and/or Torres Strait Islander women who self-reported that they received an influenza vaccine during pregnancy. Secondary outcomes included perceptions of influenza and influenza vaccination in pregnancy. Multivariate logistic regression models were planned to identify potential independent predictors of vaccine uptake during pregnancy if sufficient women were recruited to allow meaningful analysis.
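For the primary outcome, the uptake proportion and an interval around it can be computed as in the sketch below; this is illustrative only, using the 9/53 figure reported in the results, and is not part of the study's own analysis code.

# Sketch: proportion of women reporting influenza vaccination in pregnancy,
# with a Wilson score confidence interval (counts taken from the text: 9 of 53).
from statsmodels.stats.proportion import proportion_confint

vaccinated, total = 9, 53
uptake = vaccinated / total
low, high = proportion_confint(vaccinated, total, alpha=0.05, method="wilson")
print(f"uptake = {uptake:.1%} (95% CI {low:.1%} to {high:.1%})")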
Yarning circles methods
The qualitative arm of the study recognized the social context of human behaviour and theorized how norms, routines and patterns of practice develop within those contexts. It suggested that an individual's knowledge or reality is a social product derived from relations with others through temporal and contextual interactions that influence and determine meanings and actions [16]. This approach is directly suited to studying and analyzing how Aboriginal and Torres Strait Islander women experience pregnancy and vaccination, as well as contextual accounts of decision making around vaccination. Narrative inquiry guided the process of this study [17,18]. We aimed to enroll a maximum of eight participants per group (four groups in total). If saturation point had not been reached following these four groups, a further two groups were to be conducted.
Data collection and participants
'Yarning circles' [15] were held in settings and at a time convenient to participants. Semi-structured sessions began with time for the facilitator to build relationships and rapport with the participants to provide a safe space to share and consider views in the context of the views of others [19,20]. Sessions were audiotaped and a research assistant was present to observe and take notes during the session. A semi-structured interview guide, based on the "Theory of Planned Behaviour" [21] was used to inform the conversations. This theory is an accepted approach to understanding behavioural choices of Aboriginal people [22]. The following trigger questions were employed:
Data analysis
All yarning circles were transcribed verbatim. The original per protocol analyses of the transcripts included detailed deductive and inductive processes and plans to identify and define underlying patterns across the stories with an emphasis on social, cultural and historical context. However, given the unanticipated small number of participants recruited, narrative summaries of the transcripts and major themes are provided here.
Quantitative data
A total of 741 women were recruited to the FluMum study in Brisbane in 2012 and 2013. Of these, 16 (2%) identified as Aboriginal and/or Torres Strait Islander. Sixty-five women were screened in the community-based survey and 37 women were enrolled between January and April 2014. Reasons for non-participation in the community study included time constraints, the partner not wanting involvement, ineligibility and inability to contact after initial screening. This provided data on a total of 53 women for inclusion in the analysis. The demographic and pregnancy status of these women are presented in Table 1.
Knowledge of influenza
All but two women had heard about influenza and 18 (34%) women reported they had received an influenza vaccine at some time in the past. Twenty-one (40%) women reported someone had spoken to them about influenza during their pregnancy and some of these women reported several sources. Fifteen of these women reported they had received that information from their general practitioner (GP), 14 from a midwife and seven from other sources (nurse, partner, not specified, obstetrician). Thirty-nine (39/53) women reported that they thought getting influenza during their pregnancy was a serious disease.
Influenza vaccination during pregnancy
Although nine (9/53) women reported receipt of an influenza vaccine during their pregnancy, 26/53 said they knew influenza vaccine was recommended during pregnancy. Most women (28/53, 53%) said they would accept the vaccine if it was offered during their pregnancy, seven indicated they would not, five did not know and the data were missing for the remaining seven women. Whereas nine women said they had received an influenza vaccine in their pregnancy, 23/53 reported they were offered the vaccine. Only five of the women who were offered a vaccine actually received a vaccine during their pregnancy (an uptake rate of 22% amongst those that were offered the vaccine). There were 25 women who reported that they were not offered the vaccine during their pregnancy and four women who did not answer the question. Only three of the 25 women who were not offered the vaccine in their pregnancy said they would not have accepted the offer. Two women reported someone had recommended they not get the influenza vaccine during their pregnancy, one by a midwife and one by a grandmother.
In contrast to the 17% uptake rate for influenza vaccination during the most recent pregnancy, 25/53 women reported they would get the influenza vaccine if they became pregnant again, nine said they would not, six did not know and the data were missing for 13 women.
Perceptions of influenza vaccine in pregnancy
Participants were asked a series of questions about their perceptions of influenza vaccine in pregnancy and immunisations in general ( Table 2). Few women, regardless of their pregnancy vaccination status, thought the vaccine would stop them from getting influenza in their pregnancy or that it would stop their newborn from getting influenza. From a safety perspective, only two of nine vaccinated women thought a vaccine during pregnancy could make them sick compared with 22/42 unvaccinated women. Very few women in either group thought vaccination in pregnancy could harm the unborn baby (Table 2).
Yarning circles results
Three yarning circles were held between January and April 2014. While at least 10 women were consented for each group, and planned to attend, only seven in total participated across all three groups. The majority of reasons for non-attendance were related to pregnancy factors (birth, complications, tiredness) or changes in personal circumstances. Participants ranged in age from 21 to 34 years and all were mothers of two or more children. One participant was a practising Aboriginal Health Worker.
Feelings about pregnancy
One participant expressed she "loved being pregnant" however others voiced feelings of isolation, heightened levels of stress, lack of support, concerns about the impact of their pregnancy on other family members and that they were struggling emotionally. Regular in-home support for women experiencing difficulties during their pregnancies was seen as a particular need that was not being met.
"I just need basically someone like to sit down to talk to every now and then and but I did have friends, they just changed in what they do….."
Experience with influenza and influenza vaccination
All participants were aware of influenza and had either been ill themselves or reported illness amongst their family members. Overall the participants were supportive of influenza vaccination during pregnancy, and in general, particularly if it was thought to benefit both the mother and the unborn child.
F33 "Yeah, I basically find it's very important to have it that is like, it's not mainly for your health, but if you're like your kids end up gettin sick too, it's good for them to have it too… ….like when you don't have it you're more sicker than when you do have it…like it calms it down a lot….." Participants also thought that influenza vaccination was important for family members and most thought that even if it didn't prevent influenza, the vaccine may lessen the severity of disease. They were interested in the safety of the vaccine, what products were used to make the vaccine and wanted to understand the risks of vaccination to self and the foetus, this information is being provided as part of study feedback. All women indicated they would recommend influenza vaccine during pregnancy to friends and family, and all but one indicated they would be willing to be vaccinated. The participant who said she would not be immunized ascribed this to complications in a past pregnancy. However, she was supportive of others getting the vaccine, a comment that was not explored further in the group discussion.
Five of the seven participants were not aware that influenza vaccination was recommended and available free for pregnant women. The majority reported their health service providers had not discussed influenza vaccination with them during their pregnancies. Participants identified a need for more health education, awareness and promotion around influenza vaccination for pregnant mothers, dedicated specifically to Aboriginal and Torres Strait Islander peoples, and felt there was not enough information in either the community or hospital setting. Two members discussed doctor-led education sessions, which would strengthen the relationship between doctors and patients, with patients obtaining information from the source. Other areas discussed included awareness of the ingredients of the influenza vaccine and its benefits. One participant indicated that, on top of just providing information, services should give the vaccine at the time the person was there, as the steps required to get vaccinated (i.e. go to the pharmacy, come back to the clinic, etc.) were difficult to complete given competing priorities. [Fragment of Table 2: "Doesn't think flu is serious" 1 (11%) vs. 0 (0%); "Can't afford the cost of flu vaccine" 0 (0%) vs. 1 (2%); *excludes data on 3 women for whom vaccination status was unknown.]
F21 "Ummm now when you go in there they have to give you all these descriptions and all that but they don't do nothing about it… they should just say if you wanted to get the needle, they should just pull out the needle …".
When thinking about what would influence them to be vaccinated during pregnancy, the majority considered the potential benefit to themselves and their unborn child as the primary factor.
Health services
The "ideal" setting for participants with respect to pregnancy care and getting vaccinated during pregnancy was one that was culturally specific, multi-disciplinary, provided external support services and, importantly, one in which there was more active involvement of doctors in discussing vaccination with them. The relationship with doctors was a recurring theme, with participants often discussing the limited interaction and communication they had with their doctors and how they wanted to hear more from the doctor, not others, about vaccination during pregnancy.
F32 "…somewhere, where you get treated like a person and not just a number I suppose, rather than a than being a cattle prod, so to speak. So get you in get you out… be done with it…. Someone that's gonna actually gonna look at it as a holistic not only just look at you for being there for a flu vax, ok, your pregnant lets tap into these services………… ok you've got umm diabetes, let's do this, let's do a health assessment, I want the full coverage for the best for myself, for my family for my baby…so I want a service that's actually going to provide a holistic point of care…….".
Discussion
This pilot study examined the uptake of influenza vaccine during pregnancy amongst Aboriginal and Torres Strait Islander women from two urban/inner regional communities in South East Queensland. We found that less than half of these women were offered influenza vaccine whilst they were pregnant and only a small proportion had actually been vaccinated. Forty-seven percent of this small sample of women indicated they would get vaccinated if they became pregnant again but this would still leave half of those women unvaccinated. Very few women reported that they thought influenza vaccination in pregnancy would prevent influenza during their pregnancy nor in their newborns. While our sample size was small our study confirms a need for larger studies.
Our findings with respect to low uptake of influenza vaccination in pregnancy are similar to studies conducted in other populations, albeit with coverage ranging from 10 to 60% [23][24][25][26][27]. Studies that have investigated differences in uptake within populations have reported lower uptake in minority groups (eg non-Hispanic African Americans compared to non-Hispanic Americans in the United States) and between socio-economic groups [24,[28][29][30][31]. Potential explanations for these disparities include access to health services, cost of vaccine and the logistics of being vaccinated and the lack of socio-culturally appropriate education. The qualitative data arising from the yarning circles seems to support the latter two factors as important determinants of vaccine uptake. A lack of adequate information about the vaccine, why it is needed and its safety in pregnancy have been identified as major factors behind pregnant women declining a vaccine offer [24,31,32]. The majority of both vaccinated and unvaccinated women in our study did not believe the vaccine would prevent influenza in pregnancy or that it would prevent influenza in their newborns. This suggests the information that is available to women may not adequately address these issues and that they may not be discussed in detail with providers at the time vaccine is recommended or offered.
How women feel about pregnancy may also be a factor in the conversations about health including immunisation. As one woman identified: "I just need basically someone like sit down to talk to every now and then"…. Further exploration is required about how the decision about influenza immunisation interplays with other life issues for women who are pregnant.
The importance of the recommendation for vaccination by a woman's practitioner, particularly the medical practitioner, has been documented in several studies [24,26,31,33]. The level of knowledge a physician has about influenza vaccination during pregnancy has also been reported to be associated with vaccine uptake [34]. Our focus group participants indicated the importance of the medical practitioner in discussing vaccination during pregnancy. Participants suggested they preferred to receive the information from their doctors than others and that doctors needed to be more involved in discussing vaccination with their patients.
Less than a quarter of the women in our study who were offered the vaccine (and would have accepted it) were actually vaccinated. While the survey did not ask why they were not vaccinated despite an offer of vaccine, our focus groups suggest that the vaccine not being immediately available at the time of the offer may play a role. The need to take a script to a pharmacy, collect the vaccine and return to a clinic for a second time to be vaccinated was a stated deterrent. Provision of vaccine at the time the recommendation or offer is made is likely to facilitate vaccine uptake, particularly in outpatient settings where accredited nurse immunisers are available to administer the vaccine [35].
The predominant strength of this study was the involvement of Aboriginal research staff within the two participating health services and the associated follow-up. This provided a culturally appropriate approach to data collection, capacity building for Aboriginal and Torres Strait Islander staff, students and health service providers, and identified factors that would need to be considered in future studies. The combination of both quantitative and qualitative data provided a richness of data that could not be achieved by one method alone.
The major limitation of this study is the small sample size. While larger numbers were planned for both the community survey and the yarning circles, this was not achieved within the available time frame. The predominant barriers to recruitment and to completion of the yarning circles were either ineligibility or competing priorities for potential participants. However the study was by design an exploratory pilot study that has identified key issues needing further investigation in larger studies.
A further weakness of this study is that we relied on self-report of vaccination status. There are limited studies that have evaluated the validity of self-reported antenatal immunisation amongst pregnant women [36]. The limited data available on the reliability of self-report, and the absence of any such data for Australian Aboriginal and Torres Strait Islander women, are a limitation to interpreting coverage data and monitoring the effectiveness of immunisation campaigns, particularly when population-based adult immunisation registers are unavailable. Future studies need to incorporate validation of self-report into study procedures.
Conclusion
While the findings of our small study cannot be considered representative of Australia's Aboriginal and Torres Strait Islander population, they suggest influenza vaccination uptake during pregnancy in Aboriginal and/or Torres Strait Islander women may be low. There is a lack of knowledge of the recommendations for vaccination, health service providers are not universally offering the vaccine, and, importantly, women are not being vaccinated even when the vaccine is offered. Women reported wanting to hear more from doctors in regard to influenza immunisation.
Our yarning circles suggest that a lack of information available to women, together with the logistics of being vaccinated even when the vaccine is offered or known to be recommended, are likely to play an important role in vaccine uptake. These findings point to an urgent need to repeat the study on a larger scale and in a broader cross-section of communities. In addition to exploring in more detail the reasons why women are not vaccinated even when offered the vaccine, an important question to address is the acceptance of influenza vaccination during pregnancy in Aboriginal and Torres Strait Islander women, given that approximately half of our study population indicated they would not be vaccinated in their next pregnancy.
|
v3-fos-license
|
2024-01-05T06:17:41.841Z
|
2024-01-03T00:00:00.000
|
266753512
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcnephrol.biomedcentral.com/counter/pdf/10.1186/s12882-023-03451-4",
"pdf_hash": "a3402a92df1b34ab38a60368620af7ad202dc9bf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:664",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0710462e59ae3e16e719a8736dfa315c2859faab",
"year": 2024
}
|
pes2o/s2orc
|
Association between systemic immune inflammation Index and all-cause mortality in incident peritoneal dialysis-treated CKD patients: a multi-center retrospective cohort study
Background Chronic inflammatory disorders in peritoneal dialysis (PD) contribute to adverse clinical outcomes. The systemic immune inflammation index (SII) is a novel and convenient measurement that is positively associated with various diseases. However, little is known regarding the association between the SII and all-cause mortality among PD patients. Methods In this multi-center retrospective cohort study, 1,677 incident patients with PD were enrolled. Eligible patients were stratified into groups based on SII level: tertile 1 (< 456.76), tertile 2 (456.76 to 819.03), and tertile 3 (> 819.03). The primary endpoint was all-cause mortality. Both Cox regression analysis and competing risk models were used to examine the association between the SII and all-cause mortality. Subgroup analysis was performed to assess the influence of the SII tertiles on all-cause mortality in different subgroups. Results During the follow-up period of 30.5 ± 20.0 months, 26.0% (437/1,677) of patients died, of whom the SII tertile 3 group accounted for 39.1% (171/437) of the deaths. Patients in the SII tertile 3 group had a higher all-cause mortality rate than patients in the SII tertile 1 and 2 groups (log-rank = 13.037, P < 0.001). The SII tertile 3 group was significantly associated with an 80% greater risk (95% confidence interval: 1.13 to 2.85; P = 0.013) compared with the SII tertile 1 group in multivariable Cox regression analysis. The competing risk model also indicated that the relationship between SII tertiles and all-cause mortality remained (subdistribution hazard ratio: 1.86; 95% confidence interval: 1.15 to 2.02, P = 0.011). Furthermore, the relationship between the log-transformed SII and all-cause mortality in patients with PD was nearly linear (P = 0.124). Conclusion A close relationship was observed between the SII and all-cause mortality in patients undergoing PD, suggesting that more attention should be paid to the SII, a convenient and effective measurement in clinical practice. Supplementary Information The online version contains supplementary material available at 10.1186/s12882-023-03451-4.
The pathogenic process of inflammation is typically complicated and multifactorial, involving both dialysis-related and dialysis-unrelated factors, which may lead to malnutrition, an adverse cardiovascular prognosis, and mortality in patients undergoing PD [20]. Certain inflammatory markers, such as C-reactive protein (CRP), the neutrophil-to-lymphocyte ratio (NLR), and the monocyte-to-lymphocyte ratio (MLR), which represent the systemic inflammatory response, are independent predictors of mortality in patients with PD [5,18,21]. Recently, a novel inflammatory marker, the systemic immune inflammation index (SII), which is inexpensive and easily accessible, has become a "hot issue" and has been associated with cognitive decline [22], cerebrovascular diseases [23], atrial fibrillation [24], and poor prognosis of cancer [25]. Additionally, in the general population [23] and in patients with cardiovascular [26] or cerebrovascular disease [27], a correlation between the SII and adverse overall survival has been found. A large, prospective, population-based study recently reported that, compared with the lowest quantile of the SII, the highest quantile was associated with an approximately 24.6% greater risk of all-cause mortality; thus, the SII may be a useful marker of all-cause mortality in the general population [23]. Moreover, there is a nearly linear relationship between the log-transformed SII and all-cause mortality in patients with acute ischemic stroke [27], but a non-linear relationship between the log-transformed SII and all-cause death in patients with CVD [26,28].
The SII may be better than other inflammation markers (such as CRP, NLR, and MLR) in clinical practice.First, as a new inflammatory biomarker, the SII can provide a more balanced and comprehensive assessment of immune and inflammatory responses [29].As the SII level increases, inflammatory activity in various diseases may increase, leading to poor clinical outcomes [30].Second, based on the counts of three types of circulating immune cells-neutrophils, platelets, and lymphocytes-the SII value reflects the inflammatory state and can present a better predictive value than the NLR and MLR [22,31].Third, compared with other inflammatory indicators such as CRP and interleukin-6 (IL-6), the SII can be easily obtained, routinely collected, and tested in hospitalized patients; thus, it might be useful in predicting poor prognosis.
Nevertheless, to the best of our knowledge, the association between the SII and all-cause mortality in patients undergoing PD has been elucidated only by a few papers.Therefore, we conducted this multi-center retrospective cohort study to test the hypothesis that the inflammatory status defined by the SII is associated with all-cause mortality in patients undergoing PD, ultimately laying the theoretical basis for its potential clinical assessment value.
Study population
A multi-center retrospective cohort study was conducted with 2,469 incident patients with PD between January 1, 2010, and December 31, 2018, who underwent PD treatment for at least 3 months at the PD centers of six tertiary hospitals in China. Distributed across several provinces such as Guangdong, Shanghai, and Henan, each hospital has a specialized data registrar, a follow-up management system, and an electronic medical record system, and cares for over 400 patients undergoing PD. The main exclusion criteria were as follows: 1) patients with a follow-up time of less than 3 months (n = 81); 2) patients younger than 18 years or older than 80 years (n = 70); and 3) patients with a history of hematological diseases (n = 6) or rheumatic diseases (n = 25), or with medication of corticosteroids (n = 40) or immunosuppressive agents (n = 11), as these could affect the lymphocyte and platelet counts. Patients without SII values were also excluded (n = 559). Finally, 1,677 patients were included in the study and followed up until the end of the study, or December 31, 2019. The flowchart is shown in Fig. 1. This study was approved by the investigational review board of Jiangmen Central Hospital in 2022 (approval number 2022101) and was conducted in accordance with the Declaration of Helsinki. Written informed consent was not required because this retrospective study collected pre-existing hospital data.
Data collection
All baseline data were collected 3 months after the initiation of PD therapy. Baseline demographic data included age; sex; current smoking and drinking habits; the presence of diabetes, hypertension, stroke, and cardiovascular diseases; and medication use, which was recorded based on the prescriptions. Clinical and biochemical data included blood pressure, white blood cells, hemoglobin, platelets, neutrophils, lymphocytes, serum creatinine, urea nitrogen, serum albumin, alkaline phosphatase, total cholesterol, triglycerides, sensitivity C-reactive protein (s-CRP), liver function, magnesium, calcium, phosphorus, and intact parathyroid hormone (iPTH). The SII was calculated as platelet count × neutrophil count/lymphocyte count. The baseline residual renal function was assessed using the Chronic Kidney Disease Epidemiology Collaboration creatinine equation. Total Kt/V was calculated using PD Adequest Software 2.0 (Baxter Healthcare Corp., Deerfield, Illinois, USA).
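A minimal sketch of the SII computation and tertile assignment described above, assuming a baseline table with hypothetical column names for the three cell counts; the cut points 456.76 and 819.03 are those used in this study.

# Sketch: compute SII = platelet count x neutrophil count / lymphocyte count and
# assign the tertile groups used in this study (cut points taken from the text).
import numpy as np
import pandas as pd

df = pd.read_csv("pd_baseline.csv")  # hypothetical file with counts in 10^9/L

df["sii"] = df["platelets"] * df["neutrophils"] / df["lymphocytes"]
df["log_sii"] = np.log(df["sii"])  # log transformation used because SII is skewed

bins = [-np.inf, 456.76, 819.03, np.inf]
df["sii_tertile"] = pd.cut(df["sii"], bins=bins, labels=["tertile 1", "tertile 2", "tertile 3"])
print(df["sii_tertile"].value_counts())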
Follow-up and study endpoint
The primary endpoint was the all-cause mortality rate.All patients were followed up until death, cessation of PD (switching to hemodialysis or renal transplantation), loss of follow-up, or the end of follow-up on December 31, 2019.
Statistical analysis
The eligible patients were divided into three groups based on the tertile values of the SII. The lowest (Tertile 1), middle (Tertile 2), and highest (Tertile 3) SII tertiles had SII values of < 456.76, 456.76 to 819.03, and > 819.03, respectively. Continuous variables with a normal distribution were expressed as mean ± standard deviation, and those with a skewed distribution were described as median (25%-75% interquartile range [IQR]). Categorical data are expressed as frequencies or percentages (%). The SII was converted into logSII by log transformation because of its non-normal distribution. Differences among the three groups were assessed using one-way analysis of variance, the Kruskal-Wallis test, or the χ² test, as appropriate. Cumulative survival curves were generated using the Kaplan-Meier method and compared using the log-rank test. Cumulative incidence curves were evaluated using the Fine and Gray method. The associations between the SII tertiles and all-cause mortality were evaluated using the Cox proportional hazards and subdistribution hazards models for competing risk analysis. To account for a possible non-linear correlation between the SII and all-cause mortality in patients with PD, we also employed a restricted cubic spline regression model to evaluate non-linearity.
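In outline, the group comparison and the Cox model described above could be carried out as in the sketch below; the input file and column names are hypothetical, this is not the authors' code, and the Fine and Gray model and restricted cubic splines are omitted.

# Sketch: Kaplan-Meier estimates with a log-rank test across SII tertiles, then a
# Cox proportional-hazards model for all-cause mortality (hypothetical columns).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("pd_cohort.csv")  # hypothetical: time_months, died, sii_tertile, age, albumin

# Log-rank comparison of the three tertile groups
lr = multivariate_logrank_test(df["time_months"], df["sii_tertile"], df["died"])
print("log-rank p =", lr.p_value)

# Kaplan-Meier fit per tertile (plotting omitted)
km = KaplanMeierFitter()
for name, grp in df.groupby("sii_tertile"):
    km.fit(grp["time_months"], grp["died"], label=str(name))
    print(name, "median survival:", km.median_survival_time_)

# Cox model with tertile indicators and a few adjustment covariates
cox_df = pd.get_dummies(df[["time_months", "died", "sii_tertile", "age", "albumin"]],
                        columns=["sii_tertile"], drop_first=True).astype(float)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time_months", event_col="died")
cph.print_summary()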
We applied the "computer simulation statistical efficiency" method to evaluate the sample size, using Empower(R) (www.empowerstats.com, X&Y Solutions, Inc., Boston, MA) [32]. The parameter settings were documented according to the results of our study. The resulting power curve shows the reliability of the study for different sample sizes and odds ratios: the horizontal axis represents the sample size, and the vertical axis represents the statistical power. When the odds ratio is 1.8 and the sample size is 1,700, the power exceeds 0.95, which is highly reliable. Therefore, the sample size used in our study can be considered appropriate.
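The authors ran this simulation in EmpowerStats; a rough, conceptually similar power check can be sketched in Python. Everything below, including the reference-group event rate (chosen near the 27.6% tertile 1 mortality reported in the Results), the two equal arms, and the chi-square test, is an assumption made for illustration only.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def simulated_power(n_total=1700, p_ref=0.28, odds_ratio=1.8, n_sim=2000, alpha=0.05):
    """Fraction of simulated two-arm trials in which a chi-square test detects the effect."""
    odds_ref = p_ref / (1 - p_ref)
    p_exp = odds_ref * odds_ratio / (1 + odds_ref * odds_ratio)
    n_arm = n_total // 2
    hits = 0
    for _ in range(n_sim):
        deaths_ref = rng.binomial(n_arm, p_ref)
        deaths_exp = rng.binomial(n_arm, p_exp)
        table = [[deaths_ref, n_arm - deaths_ref],
                 [deaths_exp, n_arm - deaths_exp]]
        _, p, _, _ = chi2_contingency(table)
        hits += p < alpha
    return hits / n_sim

print(simulated_power())   # lands well above 0.95 for these settings
```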
Subgroup analyses
To evaluate whether subgroups modified the relationship between SII stratification and all-cause mortality, we conducted subgroup analyses stratified by sex (male vs. female), age (< 65 vs. ≥ 65 years), diabetes status, hypertension status, and history of pre-existing CVD. The interaction between the SII tertiles and each subgroup with respect to all-cause mortality was examined using a formal interaction test.
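Continuing the synthetic data frame from the earlier survival sketch, a formal interaction test can be approximated by adding a product term to the Cox model. The binary diabetes indicator below is generated only for illustration; it stands in for the recorded comorbidity flag.

```python
import numpy as np
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
# Illustrative binary diabetes flag; in the real data this comes from the registry.
df["diabetes"] = rng.integers(0, 2, size=len(df))

# Indicator for the highest SII tertile and its product with the subgroup variable.
df["tertile3"] = (df["SII_tertile"] == 3).astype(int)
df["t3_x_diabetes"] = df["tertile3"] * df["diabetes"]

cph = CoxPHFitter()
cph.fit(df[["time", "event", "tertile3", "diabetes", "t3_x_diabetes"]],
        duration_col="time", event_col="event")
# A small p-value for the product term suggests effect modification by diabetes.
print(cph.summary.loc["t3_x_diabetes", "p"])
```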
Comparisons of clinical parameters between groups
The baseline demographic, clinical, and laboratory characteristics of the total study population, stratified by tertiles of SII values, are summarized in Table 1. Compared with the reference group (tertile 1), patients in the highest baseline SII group (tertile 3) tended to be male and older and had higher levels of white blood cells, hemoglobin, platelets, neutrophils, monocytes, total cholesterol, triglycerides, calcium, and serum sCRP, but lower levels of lymphocytes, serum albumin, and iPTH (all P < 0.05). They also had a higher prevalence of diabetes mellitus and more frequent use of angiotensin receptor blockers, statins, and aspirin (all P < 0.05; Table 1).
Associations between SII and all-cause mortality
To confirm the association of the SII with all-cause mortality in this population, Cox regression analyses and a competing risk model were used. Kaplan-Meier curves showed significant differences in all-cause mortality between the SII tertiles. Individuals in the SII tertile 3 group had a significantly higher death rate (tertile 3 vs. tertile 2 vs. tertile 1: 39.1% vs. 33.1% vs. 27.6%, respectively; P < 0.001, log-rank test; Fig. 2). In univariable Cox regression analyses, we found that, compared with the lowest SII tertile group, the highest SII tertile group was associated with an approximately 54% greater risk of all-cause mortality (HR: 1.54, 95% CI: 1.21 to 1.95, P < 0.001). After full adjustment for covariates, the association between SII tertiles and all-cause mortality in patients with PD remained (HR: 1.80, 95% CI: 1.13 to 2.85, P = 0.013; Table 2).
In the Fine and Gray competing risk models, a higher SII was associated with a higher risk of all-cause mortality (Gray's test statistic = 14.104, P < 0.001) (Fig. 3). After full adjustment, compared with the lowest SII tertile group, the highest SII tertile group was associated with an approximately 86% increased risk of all-cause mortality (SHR: 1.86, 95% CI: 1.15 to 2.02, P = 0.011; Table 2).
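lifelines does not ship a Fine and Gray subdistribution hazards model (that analysis is typically run with R's cmprsk package), but the descriptive competing-risk step, cumulative incidence curves per tertile, can be illustrated with the Aalen-Johansen estimator. The event coding below is synthetic and continues the earlier sketch; it is not the authors' pipeline.

```python
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(4)
# Recode outcomes: 0 = censored, 1 = death, 2 = competing event
# (e.g. switch to hemodialysis or transplantation). Purely illustrative.
df["event_code"] = np.where(df["event"] == 1, 1,
                            np.where(rng.random(len(df)) < 0.3, 2, 0))

ajf = AalenJohansenFitter()
for t, grp in df.groupby("SII_tertile"):
    ajf.fit(grp["time"], grp["event_code"], event_of_interest=1, label=f"tertile {t}")
    # Cumulative incidence of death at the end of follow-up for this tertile.
    print(t, ajf.cumulative_density_.iloc[-1, 0])
```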
Moreover, we used a Cox regression model to analyze the relationship between CRP and mortality. The results showed that CRP > 3 mg/L was also correlated with mortality. However, after adjusting for covariates, CRP > 3 mg/L had a nonsignificant effect on mortality, possibly because the influence of CRP on mortality is affected by other factors such as complications and nutritional status (Supplementary Table 1).
In addition, the restricted cubic splines showed that the risk of all-cause mortality increased gradually when the log-transformed SII was > 2.78, whereas the risk remained unchanged when the log-transformed SII was ≤ 2.78. The association between the log-transformed SII and all-cause mortality was nearly linear (P = 0.124, Fig. 4).
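A restricted cubic spline term for logSII can be built with patsy's natural cubic spline basis and fed to a Cox model. The degrees of freedom, knot placement, and data frame below are illustrative assumptions, since the original specification is not described here; the sketch again reuses the synthetic data frame from above.

```python
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

# Natural cubic spline basis for log-transformed SII (4 df chosen arbitrarily here).
spline = dmatrix("cr(logSII, df=4) - 1", df, return_type="dataframe")
spline.columns = [f"spline_{i}" for i in range(spline.shape[1])]

# Cox model on the spline basis; predicted log-hazards over a grid of logSII values
# would then be plotted to reproduce a figure like Fig. 4.
model_df = pd.concat([df[["time", "event"]], spline], axis=1)
cph = CoxPHFitter().fit(model_df, duration_col="time", event_col="event")
cph.print_summary()
```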
Relationship between SII stratification and all-cause mortality in different subgroups
We investigated the association between SII tertiles and all-cause mortality in different subgroups defined by sex, age (< 65 vs. ≥ 65 years), diabetes status, hypertension status, and history of pre-existing CVD. Compared with the SII tertile 1 group, the SII tertile 3 group was associated with a 1.06-fold greater risk of all-cause mortality in patients younger than 65 years, a 93% greater risk in those with hypertension, a 4.62-fold greater risk in those with diabetes, and a 1.54-fold greater risk in those without a history of pre-existing CVD (all P < 0.05, Table 3). However, these trends were not observed among patients aged 65 years or older, without hypertension, without diabetes, or with a history of pre-existing CVD. Moreover, no significant interaction was observed between the SII tertiles and sex, age, diabetes mellitus, hypertension, or pre-existing CVD with respect to all-cause mortality.
Discussion
It is estimated that over 272,000 patients receive PD as a treatment for end-stage renal disease globally [1], and the number is increasing, with a considerable risk of adverse clinical prognosis. This study showed that during a follow-up period of 30.5 ± 20.0 months, 437 (26.01%) patients died, which is consistent with previous studies [33,34], indicating that patients undergoing PD have an undesirable survival outcome. Thus, exploring prognostic factors for survival in PD populations is crucial. Calculated as platelet count × neutrophil count/lymphocyte count, the SII is based on peripheral blood parameters that reflect the balance between inflammation and immunity [35,36]. Growing evidence indicates that the SII, as a novel inflammatory indicator, is correlated with a variety of adverse clinical outcomes [22-25, 31]. Notably, only a few studies, focusing on the general population or on patients with vascular disease, have found that the SII is independently correlated with all-cause mortality [23, 26-28]. Consistent with these studies, our results showed an association between the SII and all-cause mortality in patients undergoing PD, with a higher SII corresponding to an increased risk of death. Patients in the highest SII tertile had a significantly higher mortality rate, accounting for 39.1%. Compared with the lowest SII tertile group, the highest SII tertile group was associated with an approximately 80% greater risk of all-cause mortality, suggesting that the SII may act as a potential predictor of all-cause mortality in patients undergoing PD. To our knowledge, this is among the first studies to apply a competing risk model in patients with PD to evaluate the value of the SII in predicting all-cause mortality. Unlike previous studies, our study applied competing risk models and restricted cubic spline curves to further characterize the relationship between the SII and all-cause mortality in patients undergoing peritoneal dialysis. Although several related studies exist, our study is therefore novel to a certain extent.
However, the underlying mechanisms by which the SII affects survival in these populations remain unknown and may be complicated. There are several possible explanations for these findings. One possible mechanism is systemic inflammation, whose prevalence ranges between 12% and 65% in PD populations [37] and which is a well-known risk factor for PD-related mortality and plays an important role in poor prognosis [38]. The SII is positively correlated with inflammatory indicators, including sCRP, NLR, and MLR, which have been regarded as prognostic biomarkers of poor survival in patients with PD [5, 6, 18-21]. Our study also found that patients in the groups with a higher baseline SII tended to have higher levels of serum CRP, which is a marker of inflammation and is significantly associated with a negative prognosis. Compared with composite indicators such as the NLR and MLR, a comprehensive indicator combining neutrophils, platelets, and lymphocytes may be more stable, thereby increasing its value in predicting the prognosis of patients with PD. A higher SII may reflect higher numbers of neutrophils and platelets, which stimulate the release of inflammatory mediators [39]. During inflammation, various pro-inflammatory cytokines and growth factors are released that stimulate the proliferation of macrophages, subsequently leading to the deterioration of body function [34]. Moreover, the higher the SII, the more severe the imbalance between inflammation and immunity and the more serious the inflammatory response [23].
Fig. 3 Cumulative incidence curves for all-cause mortality stratified by SII
Another potential mechanism might be the risk of malnutrition underlying the SII.Our study found that patients in groups with a higher baseline SII tended to have lower levels of serum albumin, which is a parameter representing nutritional status, indicating that SII may be related to nutritional status to a certain extent.Inflammation and malnutrition interact with and promote each other in dialysis patients [40], leading to poor overall survival.
In the subgroup analysis of patients over 65 years of age, the results showed no significance. We speculate that patients over 65 years of age are more susceptible to most comorbidities and that the cause of death may not be exclusively inflammatory. Furthermore, our study indicated that, compared with the SII tertile 1 group, the SII tertile 3 group was associated with a 4.62-fold greater risk of all-cause mortality in those with diabetes. Another study reported a relationship between the SII and diabetes [41]. The inflammatory response is considered a potential mechanism involved in glucometabolic processes [41]. However, the underlying mechanism remains unclear.
Fig. 4 HR and 95% CI for the risk of all-cause mortality in PD patients along with changes in logSII from the restricted cubic spline model
The SII may be a prognostic marker of survival due to the following potential reasons.First, compared to neutrophil, platelet, or lymphocyte counts alone, the SII is relatively more stable and less likely to be changed by different pathological circumstances.Second, the SII integrates inflammatory response and immune regulation.Finally, the SII is a low-cost, easily obtainable and practical parameter; thus, it might be useful in predicting mortality risk among patients with PD.
Several limitations of the current study should be noted. First, this was a retrospective study, so bias was inevitable. Second, we only collected the baseline value of the SII; dynamic monitoring of how the SII changes over time is highly necessary and worthy of further research. Third, we did not measure markers of oxidative stress or inflammation, such as superoxide dismutase, malondialdehyde, IL-6, or tumor necrosis factor α, at baseline, which may also be related to adverse prognosis in patients undergoing PD. Finally, owing to the limitations of retrospective research, we did not collect sufficient clinical data on comorbidity scores, nor did we collect the characteristics of the different dialysis centers.
Conclusions
In summary, this study showed that systemic inflammation, defined by the SII, an inexpensive and easily accessible marker, may be a predictive factor for all-cause mortality in patients with PD.Improving the inflammatory status in patients with PD may provide clinical benefits for the long-term survival of the population; however, the mechanisms underlying SII are intricate and require further exploration.
Table 3 Subgroup analysis for the association of SII stratification and all-cause mortality in PD patients
Abbreviations: SII, systemic immune-inflammation index; Total Kt/V: K, dialyzer clearance of urea; t, dialysis time; V, volume of distribution of urea; RRF, residual renal function; iPTH, intact parathyroid hormone; HR, hazard ratio; CI, confidence interval. Adjusted for age, sex, diabetes mellitus, hypertension, stroke, pre-existing cardiovascular disease, drug medication, total Kt/V, RRF, hemoglobin, urea nitrogen, serum creatinine, albumin, cholesterol, triglyceride, high-density lipoprotein, low-density lipoprotein, alkaline phosphatase, magnesium, calcium, phosphorus, iPTH, aspartate aminotransferase, alanine aminotransferase, and total bilirubin
Fig. 2
Fig. 2 Kaplan-Meier curves for all-cause mortality stratified by SII
Table 1
Baseline characteristics of individuals stratified by tertiles of SII. Abbreviations: CVD, cardiovascular disease; CCB, calcium channel blocker; ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; iPTH, intact parathyroid hormone; sCRP, sensitive C-reactive protein; Total Kt/V: K, dialyzer clearance of urea; t, dialysis time; V, volume of distribution of urea; RRF, residual renal function; SII, systemic immune-inflammation index
Table 2
The associations of SII stratification with all-cause mortality in PD patients
|
v3-fos-license
|
2024-02-06T14:09:39.560Z
|
2024-02-05T00:00:00.000
|
267411478
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "47acdd43514acb7e285a59377ef2044a842802cf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:668",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"sha1": "3ec85e4c209e9ea754114effc5ef5fc03a6f696a",
"year": 2024
}
|
pes2o/s2orc
|
Decreased expression of the NLRP6 inflammasome is associated with increased intestinal permeability and inflammation in obesity with type 2 diabetes
Background Obesity-associated dysfunctional intestinal permeability contributes to systemic chronic inflammation leading to the development of metabolic diseases. The inflammasomes constitute essential components in the regulation of intestinal homeostasis. We aimed to determine the impact of the inflammasomes in the regulation of gut barrier dysfunction and metabolic inflammation in the context of obesity and type 2 diabetes (T2D). Methods Blood samples obtained from 80 volunteers (n = 20 normal weight, n = 21 OB without T2D, n = 39 OB with T2D) and a subgroup of jejunum samples were used in a case–control study. Circulating levels of intestinal damage markers and expression levels of inflammasomes as well as their main effectors (IL-1β and IL-18) and key inflammation-related genes were analyzed. The impact of inflammation-related factors, different metabolites and Akkermansia muciniphila in the regulation of inflammasomes and intestinal integrity genes was evaluated. The effect of blocking NLRP6 by using siRNA in inflammation was also studied. Results Increased circulating levels (P < 0.01) of the intestinal damage markers endotoxin, LBP, and zonulin in patients with obesity decreased (P < 0.05) after weight loss. Patients with obesity and T2D exhibited decreased (P < 0.05) jejunum gene expression levels of NLRP6 and its main effector IL18 together with increased (P < 0.05) mRNA levels of inflammatory markers. We further showed that while NLRP6 was primarily localized in goblet cells, NLRP3 was localized in the intestinal epithelial cells. Additionally, decreased (P < 0.05) mRNA levels of Nlrp1, Nlrp3 and Nlrp6 in the small intestinal tract obtained from rats with diet-induced obesity were found. NLRP6 expression was regulated by taurine, parthenolide and A. muciniphila in the human enterocyte cell line CCL-241. Finally, a significant decrease (P < 0.01) in the expression and release of MUC2 after the knockdown of NLRP6 was observed. Conclusions The increased levels of intestinal damage markers together with the downregulation of NLRP6 and IL18 in the jejunum in obesity-associated T2D suggest a defective inflammasome sensing, driving to an impaired epithelial intestinal barrier that may regulate the progression of multiple obesity-associated comorbidities. Supplementary Information The online version contains supplementary material available at 10.1007/s00018-024-05124-3.
Background
Chronic and unresolved inflammation is a hallmark of obesity that leads to the dysfunction of adipose tissue (AT), promoting the development of its associated comorbidities [1,2].Low-grade systemic inflammation involves an intricate network of pathways affecting not only the AT but also interconnecting metabolic organs including the gastrointestinal tract.Consequently, maintaining an intestinal barrier homeostasis is crucial for controlling the state of chronic inflammation [3][4][5].The gastrointestinal tract hosts a myriad of microorganisms that release pathogen-associated molecular patterns (PAMPs) and other metabolites [6], establishing the gut microbiota as a critical regulator of the immune system that also contributes to the metabolic health [7].However, profound compositional and functional alterations in the gut microbiota together with a defective and impaired intestinal barrier accompanied by an increased intestinal permeability have been reported in obesity and its related metabolic disorders, including type 2 diabetes (T2D) [4,[8][9][10].As a result of the intestinal barrier dysfunction, the altered microbiota or its metabolites translocate into the circulation instigating the low-grade inflammatory state [11].In this regard, the presence of bacteria and commensal DNA have been detected in the blood and different AT depots in obesity and T2D [12,13].Metabolic endotoxemia also influences the lipid and glucose metabolism as well as vascular inflammation [7,14].In addition, research has also elucidated that the intestinal epithelium functions not only as a semi-permeable physical barrier but also exhibit immunological properties [15].
Mechanisms linking gut microbiota shifts and intestinal permeability with inflammation in obesity include the activation of innate immune cells by the capture of bacterial antigens.This process of bacterial translocation requires specific receptors including toll-like receptors (TLR) [16] and nod-like receptors (NLR) [17].Opposite to the increased expression levels of TLRs and NLRs in innate immune cells, these receptors are generally expressed at low levels in enteroendocrine, goblet, Paneth and intestinal epithelial cells with a temporal and spatial regulation in response to diverse danger signals [18,19].The NLRP family is a subset of NLR characterized by an N-terminal pyrin domain and constituting important components of the inflammasomes, intracellular complexes that mediate innate immunity [20].Diverse types of inflammasomes have been reported based on their main component, exhibiting different and complex roles depending on the tissue or the period they are activated [21].Assembly of the NLRP3 inflammasome promotes the release of interleukin (IL)-1β and IL-18 by the activation of caspase-1 [20,21].Due to its potent inflammatory activity, IL-1 β contributes to the development of T2D [22].Recently, our group has described that the blockade of the NLRP3 inflammasome reduces AT inflammation with significant fibrosis attenuation [23].However, the role of the NLRP3 inflammasome in gut barrier disruption during intestinal inflammation in the context of obesity remains unsettled.While some studies have shown that Nlrp3 knockout mice featured exacerbated colitis accompanied with decreased intestinal integrity and increased bacteria translocation from the gut to the systemic circulation [24,25], other reports have demonstrated opposite results in disease severity [26].Regarding the NLRP6 inflammasome, it is highly expressed in the small and large intestine and exhibits essential roles in the maintenance of intestinal homeostasis [27].In this line, Nlrp6-deficient mice showed dysbiosis conferring increased susceptibility to colitis, colitis-associated colorectal cancer and metabolic syndrome development [11,[27][28][29].Mechanistically, the activation of the NLRP6 inflammasome by metabolites derived from commensal bacteria maintains the homeostasis of the intestinal environment by controlling the release of mucus and antimicrobial peptides thereby regulating the microbiota composition [11,30].In addition, the NLRP6 inflammasome upregulation promotes the release of IL-18, that regulates intestinal inflammation as well as epithelial repair and defense against infections [11].Interestingly, Seregin et al. [31] demonstrated that NLRP6 via IL-18 restricted the colonization of the mucolytic bacterium Akkermansia muciniphila, which is able to induce colitis in specific pathogen free and germ free-Il10-deficient mice.Oppositely, a well-documented role of A. muciniphila in maintaining the intestinal barrier integrity by modulating the host immune response and by regulating inflammation have been reported [32].
Current evidence shows that obesity-associated intestinal dysbiosis and permeability are crucial contributors of systemic chronic inflammation and end-organ dysfunction, also leading to metabolic diseases.In this context, the inflammasomes orchestrate either immunological tolerance or the induction of inflammatory responses to changes in gut microbiota.We hypothesized that intestinal permeability in obesity and T2D can dysregulate the host immune system through different mechanisms including the modulation of the inflammasome signaling contributing to metabolic inflammation.We therefore aimed to (i) analyze the potential changes in circulating concentrations of markers related to gut barrier dysfunction and its associated metabolic inflammation in a series of patients with obesity and T2D as well as the impact of weight loss; (ii) characterize the expression levels of the different components of the inflammasome, markers of integrity and inflammation of the intestinal epithelium in human jejunum samples in T2D, (iii) explore the effect of inflammation-related factors and bacteria-derived metabolites in the expression of inflammasomes in a noncancerous cell line derived from the small intestine and (iv) to evaluate the effect of both, A. muciniphila and the A. muciniphila-conditioned medium on the intestinal integrity and inflammatory response in a small intestine cell line.
Study population
Circulating levels of inflammation- and intestinal integrity-related factors were evaluated in a case-control study including 80 samples obtained from 35 males and 45 females, comprising healthy normal weight (NW) volunteers (n = 20) and patients with obesity (OB) (n = 60), at the Clínica Universidad de Navarra. Body mass index (BMI) was calculated by dividing weight (kilograms) by the square of height (meters), and obesity was defined as a BMI ≥ 30 kg/m2. Body fat (BF) was assessed by air displacement plethysmography (Bod-Pod®, Life Measurements, COSMED, Italy). Following the guidelines of the American Diabetes Association [33], patients with OB were subclassified into two groups: with normoglycaemia (NG, n = 21) or with T2D (n = 39). Patients with T2D were newly diagnosed and were not under any type of pharmacotherapy that could modify their endogenous insulin levels. To our knowledge, patients were not on treatments that could potentially alter the integrity of the intestinal barrier [antibiotics, non-steroidal anti-inflammatory drugs (NSAIDs), oral contraceptive pills, chemotherapy drugs or proton pump inhibitors]. Intraoperative jejunum biopsies were collected from patients with severe obesity undergoing laparoscopic Roux-en-Y gastric bypass (RYGB) (NG, n = 5; T2D, n = 10) and were kept at − 80 °C. The collection of jejunum is clinically justified in this type of surgery since RYGB involves the creation of a small gastric pouch that is directly anastomosed to the jejunum.
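For reference, the BMI computation and the obesity cut-off used for group assignment amount to a one-line formula; a minimal sketch with a purely hypothetical subject:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Classification rule used above: obesity (OB) is BMI >= 30 kg/m2.
value = bmi(95.0, 1.70)   # hypothetical subject
print(round(value, 1), "OB" if value >= 30 else "NW")
```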
To compare the impact of weight and fat loss achieved by RYGB or caloric restriction on circulating concentrations of intestinal dysfunction markers, blood samples from volunteers submitted to either RYGB (a subgroup of the previously described cohort, n = 20) or a conventional dietary treatment (n = 20) (both evaluated after 12 months) were used.Conventional dietary treatment consisted of a personalized diet prescribed by a physician in collaboration with a dietitian with planned regular follow-up visits to ensure a daily caloric deficit of 500-1000 kcal.
The protocol of the research was conformed to the guidelines of the Declaration of Helsinki and was approved by the Universidad de Navarra's Ethical Committee (protocol 2020.054).All the participants signed the written informed consent.
Histological analysis
CD68, NLRP3 and NLRP6 immunodetection was performed in sections of formalin-fixed paraffin-embedded jejunum (4 µm) using the indirect peroxidase method as described before [34].Jejunum sections from patients with obesity with and without T2D were incubated overnight at 4 °C with rabbit monoclonal anti-CD68 (ab213363, abcam, Cambridge, UK), monoclonal anti-NLRP3 or polyclonal anti-NLRP6 diluted 1:100 in Tris buffered saline (TBS, Merck).After washing, slides were incubated with anti-rabbit secondary antibodies conjugated with DAKO Real EnVision™ horseradish peroxidase (DakoCytomation, Glostrup, Denmark) for 1 h at RT.The peroxidase reaction was visualized using a 0.5 mg/mL diaminobenzidine/0.03%H 2 O 2 solution diluted in 50 mmol/L Tris-HCl, pH 7.36, and Harris hematoxylin solution (Sigma) as counterstaining.Sections observed under a Zeiss Axiovert CFL light microscope (Zeiss, Göttingen, Germany) at 100X and 200X.A negative control slide was included in which the primary antibody was replaced by TBS to assess nonspecific staining.ImageJ analysis software (NIH, USA, https:// imagej.net/ ij/ downl oad.html) was used for quantitative evaluation.In addition, immunofluorescence staining was performed to provide a more specific localization of NLRP3, NLRP6 and CD68 in jejunum biopsies.To accomplish the immunofluorescence, after washing with TBS, slides were incubated with the goat anti-rabbit IgG secondary antibody Alexa Fluor ® 555 conjugate (abcam) (1:100 in TBS) during 30 min at RT. Alexa Fluor ® 555 conjugate was a generous gift from Dr. Marián Burrell and Dr. Marina Martín of the University of Navarra.Finally, sections were washed in TBS and mounted using DAPI (Invitrogen) as an aqueous mounting medium.To study the localization pattern of NLRP3, NLRP6 and CD68, an inverted microscope Nikon Eclipse-T300 was used.Images were recorded by the Digital sight DS-5MC camera with NIS-D software.Negative control slides without primary antibody were included to assess non-specific staining.
Adipocyte and CCL-241 cell cultures
Stroma-vascular fraction cells were isolated from visceral AT from patients with OB and differentiated to adipocytes as described before [23].The adipocyte conditioned media (ACM) was prepared by collecting the supernatant of differentiated adipocytes and further centrifuged and diluted (20% and 40%).ACM was used to evaluate the effects of the adipocyte secretome on the expression of inflammasome components in CCL-241, a non-cancerous small intestine cell line.CCL-241 cell line was obtained from the ATCC® (Middlesex, UK) and cultured at 37 °C in RPMI-1640 (Merck) supplemented with 10% fetal bovine serum and 30 ng/mL of the epidermal growth factor (Merck).Differentiated CCL-241 cells were serum starved for 24 h and then treated with increasing concentrations of TNF-α (1, 10 and 100 ng/mL; Merck), IL-1β (1, 10 and 100 ng/mL; R&D Systems), glucose (5, 10 and 25 nM; Merck), insulin (1, 10 and 100 ng/mL; Merck), rosiglitazone (10, 20 and 50 µM; Merck) and with taurine (70 mM, Merck) and histamine (25 mM, Merck), well-known activator and inhibitor of the NLRP6 inflammasome, respectively.Finally, CCL-241 cells were cultured in the presence of lipopolysaccharide (LPS) (1000 ng/mL) and parthenolide (PTL, a herbal NF-κB inhibitory compound that also inhibit the activity of multiple inflammasomes) (10 nM, Merck) for 4 h as well as with LPS for 3 h followed the addition of PTL for another 4 h.
Akkermansia muciniphila culture and cell treatment
Akkermansia muciniphila (ATCC ® BAA-835™) was aseptically and anaerobically cultured in 6 mL tubes of brain heart infusion (BHI) broth (Becton Dickinson, Franklin Lakes, NJ, USA) at 37 °C for 7 days.Cultures were washed and concentrated in anaerobic phosphate buffered saline (PBS) (Merck) and heat-inactivated for 30 min at 70 °C.The bacteria-conditioned medium (BCM) was obtained by collecting the supernatant.The BCM was centrifuged and diluted at 40% in RPMI-1640 (Merck) (for the treatment of CCL-241 cells) and in DMEM/F-12 (for adipocyte treatment).CCL-241 cells and visceral adipocytes were serum-starved for 2 h and 24 h, respectively, and incubated with pasteurized A. munciniphila at a multiplicity of infection (MOI) of 100 as well as with the BCM (40%) for 4 h.The BHI medium (diluted at 40% in RPMI-1640 or in DMEM/F-12) was used as control medium.The conditioned media in both cell cultures were collected, centrifuged at 200 g for 10 min and stored at − 80 °C.A commercially available ELISA kit (Mybiosource, San Diego, CA, USA) to assess the concentrations of mucin-2 in the media was used according to the manufacturer's instructions.
Animal model of diet-induced obesity
Four-week-old male Wistar rats (n = 40) were caged individually under controlled temperature, humidity, ventilation and light-dark cycles as previously described [37].Animals were fed ad libitum either a chow normal diet (ND) (n = 10) (2014S, Harlan, Teklad Global Diets, Harlan Laboratories Inc., Barcelona, Spain) or a high-fat diet (HFD) (n = 30) (F3282, Bio-Serv, Frenchtown, NJ, USA) [38].Rats were sacrificed and the small intestine (duodenum, jejunum and ileum) and blood samples were collected.The procedures followed the European Guidelines for the care and use of Laboratory Animals (directive 2010/63/EU) and were approved by the Ethical Committee for Animal Experimentation of the University of Navarra (026-19).
Data and statistical analysis
The sample size was calculated using the G*Power 3.1.9.4 program (Franz Faul, University of Kiel, Germany) with preliminary data from our own experience [23]. One-way ANOVA followed by Tukey's post hoc tests was used to analyze differences in the circulating levels of inflammation- and intestinal integrity-related factors, and one-way ANOVA followed by Dunnett's post hoc tests was used to study differences in the in vitro experiments. Two-tailed unpaired Student's t tests were applied to determine differences in the gene expression levels in jejunum samples. Correlations between two variables were computed by Pearson's correlation coefficients. Calculations were performed with SPSS/Windows version 23 (Chicago, IL, USA) and graphs were created with GraphPad Prism version 8.3 (GraphPad Software, Inc., San Diego, CA, USA). A P value < 0.05 was considered statistically significant.
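The statistical workflow described here (one-way ANOVA with post hoc tests, unpaired t tests, Pearson correlations) also maps onto standard Python routines, as in the sketch below. The values are synthetic stand-ins, Tukey's test is shown as the post hoc example (newer SciPy releases also provide a Dunnett test), and this is an illustration rather than a replication of the SPSS analysis.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Illustrative stand-ins for a circulating marker in the three study groups.
nw, ob_ng, ob_t2d = (rng.normal(m, 1.0, 20) for m in (3.0, 3.6, 4.4))

# One-way ANOVA followed by Tukey's post hoc comparisons.
print(f_oneway(nw, ob_ng, ob_t2d).pvalue)
values = np.concatenate([nw, ob_ng, ob_t2d])
labels = ["NW"] * 20 + ["OB-NG"] * 20 + ["OB-T2D"] * 20
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Two-tailed unpaired t test (e.g. jejunal mRNA levels, OB-NG vs OB-T2D).
print(ttest_ind(ob_ng, ob_t2d).pvalue)

# Pearson correlation between two paired variables (e.g. NLRP6 vs IL18 mRNA).
x = rng.normal(size=15)
y = 0.8 * x + rng.normal(scale=0.5, size=15)
print(pearsonr(x, y))
```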
Obesity and obesity-associated T2D drive an increase of intestinal damage markers
Clinical characteristics of the study population are summarized in Table 1.As expected, markers of adiposity were significantly higher (P < 0.001) in patients with OB compared to the volunteers with NW.In addition, OB was associated with adverse carbohydrate, lipid, inflammatory and hepatic profiles as well as systemic inflammation, being aggravated in patients with OB and T2D.
Regarding intestinal dysfunction markers (Fig. 1A-F), both groups of patients with obesity showed higher circulating levels of LBP (P < 0.01), lactoferrin (P < 0.05 for OB-NG and P < 0.01 for OB-T2D) and S100A8 (P < 0.01). Elevated concentrations of endotoxin (P < 0.01) and zonulin (P < 0.01) were found in patients with obesity-associated T2D. No effect of gender was observed for any of the analyzed molecules. Correlation analyses revealed significant associations among the circulating levels of the analyzed intestinal dysfunction factors (P < 0.01) as well as with anthropometric determinations (P < 0.01) (Table S2, Fig. 1M). Importantly, we also detected that circulating levels of endotoxin, LBP, zonulin and S100A8 were associated with insulin resistance, as shown by their positive correlation with insulin levels and the HOMA index and their negative association with the QUICKI index (Table S2, Fig. 1M). Strong negative associations (P < 0.01) between HDL-cholesterol and LBP, zonulin, lactoferrin and S100A8 levels were also found, suggesting a potential role of these molecules in the regulation of lipid metabolism (Table S2, Fig. 1M). The AST/ALT ratio was negatively associated with zonulin (P = 0.044), lactoferrin (P = 0.037) and S100A8 (P < 0.001) levels (Table S2, Fig. 1M). Circulating levels of the intestinal inflammatory factors CCL5 and IL-6 as well as the IL-18/IL-18BP ratio were increased (P < 0.05) in patients with obesity with and without T2D (Fig. 1G-L) and were also significantly associated with endotoxin levels.
To analyse the impact of therapeutic interventions aimed at achieving fat loss and metabolic improvement, the effect of conventional diet and RYGB on circulating concentrations of intestinal dysfunction markers was evaluated.After an average post-treatment period of 12 months, patients showed a significant decrease in all anthropometric measurements (P < 0.001) as well as a significant improvement in insulin resistance (P < 0.01) in patients submitted to bariatric surgery (Table S3).Noteworthy, a significant reduction in the circulating levels of endotoxin (P < 0.05), LBP (P < 0.01) and zonulin (P < 0.05) was observed after bariatric surgery (Fig. S2).Serum concentrations of LBP also decreased (P < 0.05) after dietary treatment.
Decreased jejunal levels of NLRP6 in obesity-associated T2D
To verify whether obesity-associated T2D is involved in the regulation of the inflammasome in the small intestine, gene expression levels of the main components of the inflammasome and its major effectors were determined in human jejunum samples.Results showed decreased gene expression levels of NLRP6 (P < 0.05) and IL18 (P < 0.05) together with an upregulation of IL1B (P < 0.05) in patients with T2D (Fig. 2A).Although mRNA levels of NLRP1 and NLRP3 were decreased in T2D, differences were not statistically significant.Importantly, mRNA levels of NLRP6 were highly correlated with the expression of its main effector, IL18 (r = 0.77; P = 0.002) (Table S2, Fig. 1M).Gene expression levels of NLRP6 in the jejunum were negatively associated with circulating levels of endotoxin (r = − 0.68; P = 0.044) and LBP (r = − 0.77; P = 0.015).(Table S2, Fig. 1M), reinforcing the role of NLRP6 in the maintenance of the intestinal barrier integrity.We also found a negative association between the expression of NLRP6 and IL18 with insulin levels (P = 0.003 and P = 0.011, respectively) and HOMA index (P = 0.046 and P = 0.018, respectively).In addition, a positive correlation between IL18 mRNA levels and QUICKI index was found (P = 0.021) (Table S2, Fig. 1M).The expression of NLRP6 and IL18 was also associated with circulating levels of CCL5 (P = 0.014 and P = 0.033, respectively) (Table S2, Fig. 1M).Decreased expression of NLRP6 in the jejunum from patients with T2D was corroborated at the protein level by Western-blot (Fig. 2B) and by immunohistochemistry (Fig. 2C).Remarkably, a different pattern of staining was observed between NLRP3 and NLRP6, with NLRP3 being primarily localized in the intestinal epithelial cells and NLRP6 being expressed in goblet cells (Fig. 2C).To gain further insight, the presence of NLRP3 and NLRP6 in sections of human jejunum was confirmed by immunofluorescence (Fig. S3A and S3B).OB-NG patients were immunopositive for NLRP3 and NLRP6.Although NLRP3 and NLRP6 levels were readily evident in the epithelial cells, an increased immunostaining was detected in the apical region of cells.No severe intestinal histological damage was observed in the jejunum from patients with obesity and with obesity-associated T2D.However, a higher number of inflammatory infiltrating cells together with a slight mucosal sloughing were observed in patients with T2D.
Dysregulation of inflammatory and intestinal integrity markers in jejunum in T2D
Since mounting evidence relates intestinal inflammation with the occurrence and progression of T2D, we analyzed the expression of inflammation-related factors in the jejunum from patients with obesity with or without T2D (Fig. 3).An upregulation of TLR4 (P < 0.05), CD68 (P < 0.05), SPP1 (P < 0.05) and CCL2 (P < 0.05) together with a downregulation (P < 0.05) of ADIPOQ levels in the jejunum from patients with obesity-associated T2D were found (Fig. 3A).Increased expression levels of CD68 in patients with T2D also were confirmed by immunohistochemistry (Fig. 3C) and immunofluorescence (Fig. S3C), with a higher number of inflammatory cells being detected in patients with T2D.
In addition, mRNA levels of STEAP4, a metalloreductase involved in cellular copper and iron uptake in response to chronic inflammation and therefore, in maintaining gut homeostasis [39], was increased (P < 0.05) in the jejunum from patients with T2D.Gene expression levels of the calprotectin subunit S100A9 were also higher (P < 0.05) in T2D.No differences were detected in the expression of either MUC2 or the junction proteins OCLN1 and TJP1 but, unexpectedly, increased expression (P < 0.05) of CLDN1 was found in the jejunum from T2D volunteers (Fig. 3B).Circulating zonulin levels were associated with the expression of TLR4 (r = 0.82; P < 0.001), CLDN1 (r = 0.69; P = 0.013) and OCLN1 (r = 0.67; P = 0.017).
NLRP6 is regulated by glucose-related metabolic factors as well as by taurine and parthenolide in small intestine cells
We recreated an aspect of the intestinal inflammatory pathophysiology of the intestinal epithelium stimulating human CCL-241 enterocytes with the pro-inflammatory factors TNF-α and IL-1β as well as with the ACM obtained from patients with obesity, studying also the potential crosstalk between adipocytes and intestinal cells.As shown in Fig. 4A, TNF-α (P < 0.01) significantly increased the mRNA levels of NLRP3 without changes in NLRP6 expression.After TNF-α treatment, we also detected a strong increase (P < 0.0001) in the expression of IL1B together with a decrease (P < 0.05) in the mRNA levels of IL18 (Fig. 4A).IL-1β significantly upregulated (P < 0.0001) its own expression as well as MUC2 mRNA (P < 0.01) (Fig. 4B).After the treatment with the ACM, a slight decrease in the NLRP3 and NLRP6 inflammasome was observed but differences were not statistically significant (Fig. 4C).However, MUC2 and TJP1 expression levels were significantly downregulated (Fig. 4C).CCL-241 cells showed a significant increase (P < 0.01) in the mRNA levels of NLRP6 in response to glucose and oppositely, after the stimulation with insulin, gene expression levels of NLRP6 decreased (P < 0.05) (Fig. S5A, S5B).We also found that after the treatment with glucose and insulin, transcript levels of IL1B increased (P < 0.01 and P < 0.05, respectively) and the mRNA of MUC2 was downregulated (P < 0.05 and P < 0.01, respectively).In addition, the stimulation of the human intestinal cell line with rosiglitazone significantly increased (P < 0.01) the expression of NLRP6 (Fig. S5C).
In addition, we stimulated CCL-241 cells with an activator (taurine) and specific inhibitors (histamine and PTL) of the NLRPs.After the treatment with taurine, we found an increase (P < 0.05) in the expression of NLRP6 with also a strong upregulation (P < 0.001) of IL1B (Fig. 4D).No effect of histamine was found in the regulation of NLRP6 (Fig. 4D).Lastly, the treatment with PTL significantly inhibited (P < 0.05) the expression of NLRP3 and NLRP6 in the intestinal cell line and blocked LPS effects (Fig. 4E).mRNA levels of IL1B increased (P < 0.05) after LPS treatment and decreased (P < 0.05) with PTL but no differences were found in cells previously stimulated with LPS (Fig. 4E).TJP1 expression levels were decreased (P < 0.05) after PTL treatment (Fig. 4E).
NLRP6 knockdown resulted in decreased expression and release of MUC2
Since gene expression levels of NLRP6 were downregulated in the jejunum from patients with T2D, we reduced the constitutive expression levels of NLRP6 in CCL-241 cells using a specific siRNA to get insight into its mechanism of action (Fig. 5).Although no differences regarding the expression level of IL18 were found, a significant decrease (P < 0.01) in the expression of MUC2 was observed (Fig. 5A).No differences were found in the mRNA levels of ADIPOQ and TJP1, while the expression of IL1B was significantly increased (P < 0.05).Importantly, we measured the secretion levels of mucin-2 into the culture medium finding a significant reduction after siNLRP6 treatment (Fig. 5B).
Akkermansia muciniphila increased the release of MUC2
Akkermansia muciniphila has been associated with improvements in local and systemic inflammation as well as in intestinal barrier integrity [40,41].Therefore, both CCL-241 cells and adipocytes were treated with the heat-inactivated bacteria and with its conditioned media (Fig. 6).An increased (P < 0.01) gene expression level of NLRP6 after the incubation of CCL-241 cells with heat-inactivated A. muciniphila was observed, whereas no differences in NLRP3 or MUC2 expression levels were detected (Fig. 6A).However, higher release of mucin-2 into the culture media after the treatment with both the bacteria and the BCM was found (Fig. 6B).Similar results were obtained in visceral adipocytes with upregulated expression of NLRP3 and NLRP6 after the treatments and increased levels of mucin-2 into the culture media (Fig. 6C and D).After the incubation of CCL-241 cells with both the heat-inactivated A. muciniphila and the BCM, a strong upregulation (P < 0.01) in the gene expression levels of CLDN1 and OCLN1, two key molecules in the maintenance of the intestinal integrity, was observed (Fig. 6E).
Discussion
Obesity and T2D are well-recognized conditions involved in the dysregulation of gut inflammation, intestinal epithelial cells integrity and adhesion and, therefore, intestinal barrier function, favoring the translocation of bacteria or harmful exogenous factors into circulation and constituting an important hit of metabolic inflammation [8,42].The main findings of our study are as follows: (i) obesity and T2D increased circulating levels of markers of intestinal integrity damage and inflammation being associated with insulin resistance and lipid metabolism, (ii) expression levels of NLRP6 and IL18 were decreased in the jejunum from patients with obesity and T2D together with an upregulation of inflammatory markers and (iii) we further revealed that NLRP6 regulation is likely context-dependent, with taurine increasing and parthenolide decreasing its expression levels in human CCL-241 enterocytes.Additionally, we also demonstrated decreased expression levels of Nlrp6 in the small intestinal tract (duodenum, jejunum and ileum) from rats with DIO.
The determination of circulating bacteria-related components (such as LPS or flagellin) or direct intestinal barrier damage markers (zonulin, calprotectin or FABPs) constitutes an indirect measurement of increased intestinal permeability.Different studies have also corroborated the association between dysregulated intestinal permeability, increased visceral obesity and insulin resistance [9,10,43].In this sense, we found higher levels of the bacterial component LPS in patients with obesity-associated T2D, with LPB, that recognizes and binds LPS, being also increased in obesity with and without T2D.Results from large cohorts have described augmented levels of LPS and LBP in subjects with metabolic syndrome or T2D [44][45][46], proposing their role as triggering factors of the early development of metabolic diseases.Moreover, an influx of bacteria-derived LPS into the systemic circulation has been demonstrated in mice fed a HFD [47].In addition to LPS, zonulin is considered a biomarker of impaired gut barrier function being associated with chronic inflammation, insulin resistance and bacterial leakage due to its function as intercellular tight junctions regulator [48,49].In this regard, patients with obesity and T2D included in our study exhibited higher serum levels of zonulin.Other factors including calprotectin, a sensitive marker for mucosal inflammation of the intestine [50] or lactoferrin, a first-line defense protein for protection against microbial infections [51] were increased in our patients with obesity and T2D.Reportedly, pro-inflammatory chemokines exhibit important roles in the development of intestinal diseases even in colon carcinogenesis [52,53].An aberrant microbiota has been shown to induce the epithelial expression of CCL5, promoting a spontaneous and exaggerated autoinflammatory response [28].We found increased levels of CCL5 in patients with obesity and obesity-associated T2D, with also higher levels of IL-6 and the ratio IL-18/IL-18BP.Taken together, these results suggest that the altered intestinal barrier demonstrated by the increased levels of LPS and key intestinal damage markers (LBP, zonulin, calprotectin, lactoferrin) in patients with a compromised metabolic state may cumulatively worsen their condition due to the greater exposure to endotoxins and pro-inflammatory factors.In accordance with previous reports [3,10,12,47], the positive association found between the endotoxin LPS, LBP, zonulin and S100A8 with fasting insulin and the HOMA index as well as the negative correlation with the QUICKI underline the link between intestinal permeability and insulin resistance and, consequently, the development of T2D.Bariatric surgery directly alters the structure of the gastrointestinal tract from patients with obesity affecting the gut microbiome and the intestinal immune system [54].In this sense, a reduction in serum concentrations of intestinal damage markers has been reported after bariatric surgery [55,56].The decrease in the circulating levels of endotoxin, LBP and zonulin found in our patients after RYGB suggests an improvement of the impaired intestinal permeability probably related to the alleviation of the comorbid conditions.The opposite functions of the inflammasomes in the development of intestinal diseases are related with their roles in the protection/repair or damage of the intestinal mucosa, being the consequence of their opposed functions in different contexts and cellular types [24][25][26]57].While the proposed function of inflammasomes in intestinal 
epithelial cells consists in the regulation of the secretion of IL-18 by promoting the regeneration of the epithelial barrier, in hematopoietic cells, their activation may have a proinflammatory effect [57].Moreover, depending on the level of the intestinal tissue damage, a shift in the balance between protective and detrimental effects has been proposed.Specifically, NLRP6, highly expressed in the intestine, executes essential roles for intestinal mucosal self-renewal and proliferation also protecting the host against bacterial and viral infection [58].Elinav et al. [28] elegantly proposed that Nlrp6 deficiency prompts an impairment of the intestinal barrier function mainly due to changes in the microbiota partly regulated by the secretion of IL-18.Moreover, Il18 knockout mice are susceptible to the development of hyperphagia, obesity, and insulin resistance [59].We found decreased levels of NLRP6 and its main effector IL-18 in the jejunum from patients with T2D, suggesting a defective sensor system to detect PAMPS and DAMPS that drives to a damaged epithelial barrier that increases intestinal permeability and allows leakage of bacteria or bacterial products.In addition, the expression levels of NLRP6 and IL18 were highly associated between them and also with insulin resistance, strengthening their involvement in glucose metabolism.NLRP6 expression in adipose tissue and circulating levels of IL-18 were significantly upregulated in patients with NASH and portal fibrosis compared with patients without portal fibrosis [23,60].However, similar to NLRP6, the precise contributions of IL-18 to intestinal homeostasis and inflammation still remain controversial and unresolved.On one hand, complete loss of IL-18 predisposes mice to increased intestinal epithelial damage [28,61] and on the other hand, IL-18 is a potent pro-inflammatory cytokine able to induce inflammation-related mediators [62].Supporting the notion that CCL5 is upregulated in response to the altered microbiota in Nlrp6 −/− mice [28], we found an association between CCL5 circulating levels and gene expression levels of NLRP6 and IL18.The involvement of NLRP6 in epithelial integrity has been linked to the regulation of goblet cell function by controlling mucus secretion [30] and interestingly, we found that NLRP6 was mainly localized in goblet cells, whereas NLRP3 was located in the epithelial cells.Reportedly, Nlrp6-deficient mice exhibited a defective mucus layer accompanied with the subsequent failure to remove pathogens and, thus, increasing the susceptibility to infections [30].In line with the results found in human samples, a downregulation of Nlrp6 in the jejunum from rats with DIO was found, which is in agreement with previous results showing that the reduced gene expression levels of intestinal Nlrp6 in obesity has been efficiently reversed by RYGB but not by caloric restriction [63].
Parallel to obesity and insulin resistance is the low-grade inflammatory status [64].Recent evidence supports the concept that alterations in the microbiota due to obesity promote early inflammatory changes in the small intestine that also gives susceptibility to insulin resistance [65].Mice deficient in Nlrp6 are also more prone for intestinal inflammation and features of the metabolic syndrome mainly due to dysbiosis [28,29,66].We showed that gene expression levels of crucial inflammatory mediators including CCL2, CD68, IL1B, SPP1 and TLR4 were increased in the jejunum from patients with T2D with a downregulation of the anti-inflammatory marker ADIPOQ.Homeostasis of iron metabolism is of great importance for intestinal inflammation and an overexpression of the metalloreductase STEAP4 has been associated with aggravated inflammatory bowel disease [39].Accordingly, we detected higher levels of STEAP4 in the jejunum in T2D.Additionally, impaired expression and distribution of tight junctions in the epithelium of jejunum has been described as an early event in prediabetes development, occurring even without endotoxemia [67].We did not find changes in the expression of OCLN1 and TJP1 but upregulated levels of CLDN1 were observed in patients with T2D, probably as a compensatory response to prevent intestinal damage.Another essential target to avoid intestinal diseases is the mucus layer, with goblet cells being the specialized intestinal cells involved in the production and release of mucins, mostly MUC2 [30,68].However, no differences were observed in MUC2 levels in patients with T2D.
Host-and microbiota-derived metabolites participate in the control of NLRP6 inflammasome signaling [66].Indeed, potential metabolites that activate the inflammasomes included the bile acid derivate taurine, carbohydrates, and long-chain fatty acids, whereas histamine and spermine are considered robust inhibitors of IL-18 release [66].Neither the pro-inflammatory factors TNF-α and IL-1β nor the secretome from patients with obesity (that is enriched in inflammatory mediators) exhibited an effect on the modulation of the NLRP6 inflammasome in the human enterocyte cell line CCL-241.Importantly, the ACM obtained from patients with obesity downregulated the expression levels of TJP1 and MUC2, suggesting a crosstalk between adipocytes and intestinal cells in which the pro-inflammatory profile of adipocytes may impair the integrity of intestinal barrier.The increased expression of NLRP3 and its direct mediator IL1B together with the downregulation of IL18 mediated by TNF-α point to the effect of this cytokine in promoting intestinal inflammation.Peroxisome proliferator-activated receptor (PPAR)-γ is involved in intestinal homeostasis by preventing inflammation [69].Reportedly, the administration of rosiglitazone, a PPAR-γ agonist commonly used as an insulin-sensitizer in the management and treatment of T2D, to rodents exerted protective effects in chronic experimental colitis [70].According to our results, Caco2 cells showed a significant increase in NLRP6 mRNA that was dose dependent in response to rosiglitazone [71] suggesting that protective effects of PPAR-γ may be mediated by the expression of NLRP6.Taurine, previously proposed for increased NLRP6 activity [66], significantly enhanced the expression levels of NLRP6.The amino acid histamine had no effect on NLRP6 mRNA levels in CCL-241 cell line.Parthenolide, a strong inflammasome inhibitor independent of its inhibitory effect on the NF-κB pathway [72] induced a decrease in the expression levels of NLRP3 and NLRP6 in CCL-241 cells even being stimulated with LPS.
Although A. muciniphila has been associated with improvements in local and systemic inflammation, Nlrp6-deficient mice are more susceptible to colitis due to an increase in A. muciniphila in the gut. We found an upregulation of NLRP6 after the incubation of CCL-241 cells with heat-inactivated A. muciniphila, whereas no differences in NLRP3 or MUC2 expression levels were detected. However, a higher release of mucin-2 into the culture media was found after treatment with both the bacteria and the BCM. These results confirm the complex regulatory pathways of the inflammasomes and highlight the importance of the relative contribution of each metabolite in determining the overall activation of the inflammasomes and the production of their downstream targets.
Nlrp6 knockout mice exhibited a defect in the exocytosis of mucin granules due to reduced autophagy and hyperplasia of goblet cells, resulting in a thin mucus layer and higher susceptibility to infections [30,73].In this sense, we observed a decrease in the expression and release of MUC2 after the treatment with siNLRP6, strengthening the role of NLRP6 in mucin release and, therefore, in the maintenance of gut homeostasis.The important regional differences found along the proximal-distal axis in the gut regarding cellular composition and gene expression levels, highlight the importance of regional selection when studying the gut [74].In this sense, a specific part of the jejunum has been collected in the patients included in our study.However, to further clarify the exact intestinal region and cellular type responsible for the activation of the inflammasomes will help to understand how inflammation affects the intestinal epithelium and will be a guidance for future precision medicine approaches.In addition, transepithelial electrical resistance functional assays to
Conclusions
Clinical and translational studies have provided evidence that the dysregulation of the inflammasomes in the gastrointestinal tract may play an important role in obesity and metabolic disorders.Collectively, the increased circulating levels of intestinal damage markers together with the downregulated expression of NLRP6 and IL18 and increased levels of pro-inflammatory factors in the jejunum from patients with obesity-associated T2D suggest a defective inflammasome sensing, driving to an impaired epithelial intestinal barrier and uncontrolled inflammation that may regulate the progression of multiple obesity-associated comorbidities (Fig. 7).Further research is warranted to understand the cell-, tissue-and time-specific functions of NLRPs and to apply our knowledge of inflammasome biology to diminish the inflammation-induced tissue injury in different conditions including obesity, inflammatory bowel disease or even inflammation-associated cancers.Since conflicting results of NLRP6 in the control of the intestinal integrity, in the responses against microbial pathogens and in the inner colonic inner mucus layer formation have been proposed, studies to understand the dichotomic roles in inflammation mediated by NLRP6 are also essential [75,76].Our data suggest that to increase the reduced intestinal expression of NLRP6 in patients with obesity-associated T2D may be a potential therapeutic intervention.
Acknowledgements
The authors gratefully acknowledge the valuable collaboration of all the members of the Multidisciplinary Obesity Team, Clínica Universidad de Navarra, Pamplona, Spain and Dr. Marián Burrell and Dr. Marina Martín of the University of Navarra, Spain.
Authors contributions G.F. designed the study, collected and analyzed data, contributed to discussion, and reviewed the manuscript.J.G.-A., B. R., S.B., A.R., A.M., M.C., and G. R. collected and analyzed data, contributed to discussion, and reviewed the manuscript.V.V., C. S., J.B., R.M and J.E. enrolled patients, collected data, contributed to discussion, and reviewed the manuscript.V.C. designed the study, collected and analyzed data, wrote the first draft of the manuscript, contributed to discussion, and reviewed the manuscript.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.This study was funded by Plan Estatal I + D + I from the Spanish Instituto de Salud Carlos III-Subdirección General de Evaluación y Fomento de la Investigación-FEDER (grants number PI20/00080, PI20/00927 and PI22/00745) and by CIBEROBN, ISCIII, Spain.Funding sources had no role in manuscript writing or the decision to submit the manuscript for publication.
Fig. 3
Fig. 3 Bar graphs show the mRNA levels of A key intestinal inflammation-related genes and B markers associated with the integrity of the epithelial intestinal barrier in jejunum samples from patients with obesity with normoglycemia (OB-NG) (n = 5) and with obesity-associated type 2 diabetes (OB-T2D) (n = 10). C Representative immunostaining (n = 3) and quantification (n = 5 per group) for CD68 in jejunum samples from patients with OB-NG and with OB-T2D [scale bar (100×: 50 µm; 200×: 25 µm)]. Values are the mean ± SEM.
Fig. 6
Fig. 6 Gene expression levels of NLRP3, NLRP6 and MUC2 in A CCL-241 cells and C visceral adipocytes after incubation with heat-inactivated Akkermansia muciniphila and with the bacteria-conditioned medium (BCM) (40%) for 24 h. Mucin-2 concentrations in the culture media of B CCL-241 cells and D visceral adipocytes incubated in the presence of heat-inactivated A. muciniphila and with the bacteria-conditioned medium (BCM) (40%) for 24 h. E Gene expression levels of CLDN1 and OCLN in CCL-241 cells after the incubation with heat-inactivated A. muciniphila and with the BCM (40%) for 24 h.
Fig. 7
Fig. 7 The downregulated expression of NLRP6 and IL18 and increased levels of pro-inflammatory factors (TLR4, CCL2, SPP, and CD68) in the jejunum from patients with obesity-associated T2D suggest a defective inflammasome sensing, leading to an impaired epithelial intestinal barrier, also evidenced by the increased circulating levels of intestinal damage markers (endotoxin, LBP, zonulin, lactoferrin, S100A8) and inflammatory factors (IL-6, CCL5), favoring the development of multiple obesity-associated comorbidities. In addition, the secretome of adipocytes from patients with obesity-associated T2D…
|
v3-fos-license
|
2018-12-08T00:42:27.540Z
|
2016-01-01T00:00:00.000
|
54777436
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2016/1604824.pdf",
"pdf_hash": "dcec3988cf03dab5da30b5c442405e441768a62c",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:672",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"sha1": "dcec3988cf03dab5da30b5c442405e441768a62c",
"year": 2016
}
|
pes2o/s2orc
|
A New Algorithm for Distributed Control Problem with Shortest-Distance Constraints
This paper investigates the distributed shortest-distance problem of multiagent systems in which all agents share the same continuous-time dynamics. The objective of the multiagent system is to find a common point for all agents that minimizes the sum of the distances from each agent to its corresponding convex region. A distributed consensus algorithm is proposed based on local information, and a sufficient condition is given to guarantee consensus. A simulation example shows that the distributed shortest-distance consensus algorithm is effective and consistent with our theoretical results.
Introduction
In recent years, distributed control of multiagent systems has attracted considerable attention within the control community because of its important applications, including distributed task allocation, distributed motion planning, and distributed alignment problems [1-8]. For example, in [1], Nedić et al. introduced a distributed projected consensus algorithm for discrete-time multiagent systems where each agent lies in a closed convex set and gave a corresponding convergence analysis on dynamically changing balanced graphs. Building on the work of [1], [5,6] considered networks with fixed and switching topologies. In [7], Matei and Baras proposed a consensus-based multiagent distributed subgradient method to solve the collaborative optimization of an objective function. In [8], Lin and Ren studied the constrained consensus problem in unbalanced networks with communication delays.
In this paper, we study the distributed control problem with shortest-distance constraints. The distributed shortest-distance consensus problem is an important problem in the distributed control of multiagent systems. The objective of the multiagent system is to find a common point for all agents that minimizes the sum of squared distances from each agent to its corresponding convex region. For example, [9] investigated consensus and optimization problems for directed networks of agents with external disturbances. Currently, most of the existing related works concentrate on the case where the intersection of all convex regions is nonempty [1,10-12]. A projected consensus algorithm was proposed to solve the constrained consensus problem where each agent is restricted to its own convex set [1]. Reference [10] proposed a class of subgradient-based methods, where an estimate of the optimal solution can be delivered over the network through randomized iteration. In [11], Johansson et al. introduced a subgradient method based on consensus steps to solve coupled optimization problems with a fixed undirected topology. In [12], Lou et al. proposed an approximately projected consensus algorithm to reach the intersection of convex sets. In [13], Wang and Elia proposed a distributed continuous-time algorithm to achieve optimization by controlling the sum of subgradients of convex functions. However, the case where the intersection of all convex regions is empty is rarely considered. In [14], the case of no intersection was studied, but sign functions were used, which make the system nonsmooth. Reference [15] investigated a distributed optimization problem and proposed a subgradient projection algorithm for multiagent systems subject to nonidentical constraints and communication delays under local communication. Compared with [1,12], this paper focuses on the constrained problem where all convex regions have no intersection and the undirected graph is connected. Following the work of [14], we investigate the distributed control problem with shortest-distance constraints and propose a new distributed shortest-distance consensus algorithm. Using a Lyapunov approach, a sufficient condition is given under which all agents converge to the optimal set of the shortest-distance problem. Finally, we provide a simulation example to show that the distributed shortest-distance consensus algorithm is effective and consistent with our theoretical results. Different from [14], we calculate the difference between two agents and its Euclidean norm, and use their ratio in place of the sign function, which makes the system smoother than that of [14].
Preliminaries
2.1. Graph Theory. Let $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{A})$ be an undirected graph with $n$ nodes, where $\mathcal{V} = \{v_1, \ldots, v_n\}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The node indexes belong to a finite index set $\mathcal{I} = \{1, 2, 3, \ldots, n\}$. An edge of $\mathcal{G}$ is denoted by $e_{ij} = (v_i, v_j)$, where node $v_i$ can obtain information from node $v_j$. The weighted adjacency matrix is denoted by $\mathcal{A} = [a_{ij}]$, where $a_{ii} = 0$ and $a_{ij} > 0$ if and only if $(v_i, v_j) \in \mathcal{E}$ and $i \neq j$. Since the graph is undirected, the adjacency matrix is a symmetric nonnegative matrix. The set of neighbors of node $v_i$ is denoted by $N_i = \{v_j \in \mathcal{V} : (v_i, v_j) \in \mathcal{E}\}$. The in-degree and out-degree of node $v_i$ are defined as $d_{\mathrm{in}}(v_i) = \sum_{j=1}^{n} a_{ji}$ and $d_{\mathrm{out}}(v_i) = \sum_{j=1}^{n} a_{ij}$. The Laplacian corresponding to the undirected graph is defined as $L = [l_{ij}]$, where $l_{ii} = d_{\mathrm{out}}(v_i)$ and $l_{ij} = -a_{ij}$ for $i \neq j$. Obviously, the Laplacian of any undirected graph is symmetric. A path is a sequence of ordered edges of the form $(v_{i_1}, v_{i_2}), (v_{i_2}, v_{i_3}), \ldots$, where $i_k \in \mathcal{I}$ and $v_{i_k} \in \mathcal{V}$. If there is a path from every node to every other node, the graph is said to be connected [16].
Lemma 1 (see [16]). If the undirected graph $\mathcal{G}$ is connected, then the Laplacian $L$ of $\mathcal{G}$ has the following properties: (1) zero is an eigenvalue of $L$, and $\mathbf{1}$ is the corresponding eigenvector, that is, $L\mathbf{1} = 0$;
(2) the remaining $n-1$ eigenvalues are all positive and real.
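As a concrete illustration of these definitions, the short Python sketch below (added for illustration only; the example graph is an assumption, not taken from the paper) builds the adjacency matrix and Laplacian of a small undirected graph and numerically checks the two properties of Lemma 1.

import numpy as np

# Weighted adjacency matrix of a connected undirected 4-cycle:
# a_ij > 0 iff (v_i, v_j) is an edge, a_ii = 0, and A is symmetric.
A = np.array([
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
])

# Laplacian: l_ii = out-degree of node i, l_ij = -a_ij for i != j.
D = np.diag(A.sum(axis=1))
L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))   # symmetric matrix -> real eigenvalues
print("L @ 1 =", L @ np.ones(4))           # property (1): the zero vector
print("eigenvalues:", eigvals)             # property (2): 0 followed by positive values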
Convex Theory.
Let $\operatorname{dist}(x, X)$ be the standard Euclidean distance of a vector $x$ from a set $X$; that is,
$$\operatorname{dist}(x, X) = \inf_{\bar{x} \in X} \|x - \bar{x}\|.$$
The projection of a vector $x$ onto a closed convex set $X$ is denoted by the projection term $P_X(x)$; that is,
$$P_X(x) = \arg\min_{\bar{x} \in X} \|x - \bar{x}\|,$$
where $\|x\|$ denotes the standard Euclidean norm, $\|x\| = \sqrt{x^\top x}$ [17].
Lemma 2 (see [1]). Suppose that $X$ is a nonempty closed convex set in $\mathbb{R}^m$. The squared distance function $\frac{1}{2}\operatorname{dist}^2(x, X)$ is continuously differentiable, with gradient
$$\nabla_x \tfrac{1}{2}\operatorname{dist}^2(x, X) = x - P_X(x),$$
where $x \in \mathbb{R}^m$, $P_X(x) \in X$, and $\nabla(\cdot)$ is the differential operator.
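To make the projection and the gradient of Lemma 2 concrete, the following sketch (illustrative only; the box-shaped set is an assumption, not taken from the paper) projects a point onto an axis-aligned box and checks that $x - P_X(x)$ matches a finite-difference gradient of $\frac{1}{2}\operatorname{dist}^2(x, X)$.

import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection of x onto the box {y : lo <= y <= hi}."""
    return np.clip(x, lo, hi)

def half_sq_dist(x, lo, hi):
    """0.5 * dist(x, X)^2 for the box X."""
    p = project_box(x, lo, hi)
    return 0.5 * np.dot(x - p, x - p)

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
x = np.array([2.0, -0.5])

grad_lemma = x - project_box(x, lo, hi)      # gradient predicted by Lemma 2

eps = 1e-6                                   # central finite-difference check
grad_fd = np.array([
    (half_sq_dist(x + eps * e, lo, hi) - half_sq_dist(x - eps * e, lo, hi)) / (2 * eps)
    for e in np.eye(2)
])
print(grad_lemma, grad_fd)                   # both approximately [1.0, -0.5]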
Lemma 3 (see [18]) (LaSalle's invariance principle). Consider an autonomous system of the form $\dot{x} = f(x)$, with $f(x)$ continuous, and let $V(x): \mathbb{R}^n \to \mathbb{R}$ be a scalar function with continuous first partial derivatives on a region $\Omega$. If $\Omega$ extends to the whole space $\mathbb{R}^n$, then global asymptotic stability can be established. Define $S = \{x \in \mathbb{R}^n : \dot{V}(x) = 0\}$. If $S$ contains no trajectories other than $x = 0$, then the origin $0$ is asymptotically stable. In summary, (1) if $\dot{V}(x)$ is negative semidefinite in a region $\Omega$ where $\dot{V}(x) \leq 0$, then a solution starting in the interior of $\Omega$ remains there; (2) if, in addition, no solutions (except the equilibrium $x = 0$) remain in $S$ (the subset of $\Omega$ where $\dot{V}(x) = 0$), then all solutions starting in the interior of $\Omega$ will converge to the equilibrium.
Problem Description and Results
3.1. Problem Description. The multiagent system under consideration contains $n$ agents, where each agent $i$ corresponds to a certain bounded convex set, denoted by $X_i$. Our objective is to design a distributed consensus algorithm for the system so that all agents reach consensus and minimize the sum of squared distances between the common point and the convex sets.
We assume that each closed set $X_i \subset \mathbb{R}^m$ is nonempty, where $\mathbb{R}^m$ is the set of all $m$-dimensional real column vectors, and that $\bigcap_{i=1}^{n} X_i = \emptyset$. In other words, we need to find a global optimal point that minimizes the sum of squared distances from this point to all the closed convex sets. Each agent is assumed to have the following continuous-time dynamics:
$$\dot{x}_i(t) = u_i(t), \quad i \in \mathcal{I},$$
where $x_i \in \mathbb{R}^m$ is the state of the $i$th agent, $\mathcal{I}$ is the index set $\{1, 2, \ldots, n\}$, and $u_i(t)$ is the control input of the $i$th agent.
A New Distributed Shortest-Distance Consensus Algorithm
According to $\frac{\mathrm{d}}{\mathrm{d}t}\,\tfrac{1}{2}\|x_i - x_j\|^2 = (x_i - x_j)^\top(\dot{x}_i - \dot{x}_j)$ and (5)-(6), we obtain the derivative of the Lyapunov function $V(t)$ along the trajectories of the closed-loop system. Since the undirected graph $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{A})$ is connected, the distance between node $v_i$ and node $v_j$ is less than the sum of the distances between each agent on a connecting path and its neighbors. Combining (13)-(15), we obtain $\dot{V}(t)/\sqrt{V(t)} \leq -\sqrt{2}/c$ for some constant $c > 0$. Integrating both sides of this inequality from $0$ to $t$ and noting that $V(t) \geq 0$, it is clear that $V(t)$ vanishes in finite time and hence all agents reach a consensus in finite time. Thus, there is a constant $T > 0$ such that $V(t) = 0$ and $x_i(t) = x_j(t)$ for all $i, j \in \mathcal{I}$ and all $t > T$. From Lemma 4 and (25), the convex function (4) is then minimized as $t \to +\infty$.
Remark 8. In Theorem 7, we only discuss undirected connected graphs; our future work will be directed to general jointly connected graphs.
Simulation
In this section, we provide a numerical example to show the effectiveness of Theorem 7. We use the fixed topology shown in Figure 1, with initial states $x_1(0)$, $x_2(0)$, $x_3(0)$, and $x_4(0)$. Figures 3 and 4 show the position trajectories. It is obvious that all agents reach a consensus point that is the optimal point of the function (4). This is consistent with Theorem 7.
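Readers who want to reproduce a qualitatively similar experiment can start from the sketch below. It simulates one plausible form of the algorithm described above: a normalized consensus term (the difference between neighboring agents divided by its Euclidean norm, replacing the sign function) plus a projection term pulling each agent toward its own convex set. The disc-shaped sets, the gains, and the Euler step size are assumptions for illustration, since equations (5)-(6) of the paper are not reproduced in this excerpt.

import numpy as np

rng = np.random.default_rng(0)

# Four agents on a ring graph (0-1 adjacency, symmetric).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Agent i is associated with a disc X_i (center c_i, radius r); the discs do not intersect.
centers = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0], [0.0, -4.0]])
radius = 1.0

def project_disc(x, c, r):
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

x = rng.standard_normal((4, 2)) * 5.0   # random initial states
dt, k_c = 0.01, 4.0                     # Euler step and consensus gain (assumed values)

for _ in range(20000):
    u = np.zeros_like(x)
    for i in range(4):
        for j in range(4):
            if A[i, j] > 0:
                diff = x[i] - x[j]
                n = np.linalg.norm(diff)
                if n > 1e-9:
                    u[i] -= k_c * A[i, j] * diff / n           # normalized consensus term
        u[i] -= x[i] - project_disc(x[i], centers[i], radius)  # shortest-distance (projection) term
    x = x + dt * u

# Agents should end up clustered near the origin, the minimizer of the
# summed squared distances for this symmetric arrangement of discs.
print(x)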
Conclusion
In this paper, we propose a new distributed shortest-distance algorithm for multiagent systems. The objective of the multiagent system is to find a common point for all agents that minimizes the sum of squared distances from each agent to its corresponding convex region. A sufficient condition is given to guarantee consensus, and a simulation example shows that the distributed shortest-distance consensus algorithm is effective and consistent with our theoretical results.
|
v3-fos-license
|
2018-11-10T14:01:42.357Z
|
2018-10-23T00:00:00.000
|
53241661
|
{
"extfieldsofstudy": [
"Computer Science",
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinejour.journals.publicknowledgeproject.org/index.php/i-jep/article/download/8980/5201",
"pdf_hash": "d99febaa2eafdffa7e0f7098045ff300cb61ff80",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:673",
"s2fieldsofstudy": [
"Education"
],
"sha1": "d99febaa2eafdffa7e0f7098045ff300cb61ff80",
"year": 2018
}
|
pes2o/s2orc
|
Managing the Senior Capstone Design Project for Undergraduate Students at King Abdulaziz University
Graduates from engineering colleges lack the practical experience and leadership skills that are needed in the labor market. Throughout the engineering academic curriculum, it is rare to find courses that fulfill these needs. The senior capstone design project (SCDP) and internship training can give students a lot of practical experience and skills if they are well managed and assessed. Therefore, the mechanical engineering department at King Abdulaziz University (KAU) led the effort in organizing the SCDP course to equip students with the practical engineering experience and leadership skills that prepare them well for the labor market. This paper presents an overview of an SCDP experience that has been practiced for many years at two KAU engineering colleges, in Rabigh and Jeddah, Saudi Arabia. The paper delineates the process, leadership skills, ABET involvement and evaluation. The SCDP presented here has provided overall satisfaction for faculty members, students and the labor market. Working opportunities for students have increased. Moreover, course management improvements led to a more cost-efficient program. Keywords— Capstone; Senior Project; Engineering
Introduction
A senior capstone design project (SCDP), sometimes referred to as the capstone design project or simply "Capstone", is the pinnacle achievement of any graduating college of engineering senior who has completed the course. Figure 1 symbolizes the final design project by featuring it as the crowning capstone of a pyramid built on the foundation of other academic courses. In the Capstone, students utilize all the knowledge and practice gained during their previous or current studies to create a project worthy of the Capstone designation. The SCDP is usually a mandatory course that students take before they finish their bachelor's degree in most engineering colleges. While some engineering colleges offer it for one semester, others set it up for two semesters. Usually the SCDP course counts as four credits. It is not always mandatory; some institutes present the course as an option for both students and faculty members. Most students contact a faculty member they like and register the course with him or her. Then, all tasks, assignments and evaluations come from their senior capstone design project advisor (SCDPA), who is also the professor who teaches the SCDP class. The department may or may not assign an examining committee (ExC) to evaluate the work of its students at the end of the year. This group's role is to give feedback to the SCDP program students, advisors, sponsors, and other associated faculty. However, some issues can arise. Students may not be able to find a faculty member available to register with. There may not be enough in the budget to support the project. Moreover, unfairness may occur in evaluating a student's work.
The standard practices and current state of capstone design education at many universities have been described [1], and several surveys have been conducted [2-4] to evaluate Capstone teaching. Engineering courses should incorporate project-based learning [5-7], and industry should be involved in Capstone projects and have input on course design [8]. However, Hoole [9] stated that engineering colleges should dispense with senior projects and "should concentrate on teaching the theory, leaving the completion of the engineer's education to industry" [9]. He justified this statement by saying: "In providing in-house senior design programs, universities have imposed the fiction that they provide true industrial experience and have encroached into what industry can do better." —S.R.H. Hoole. Multidisciplinary engineering capstone design courses were discussed in [10,11], and all agree that leadership and other skills should be provided to students in the designed courses to satisfy industry needs [12-14]. Nevertheless, the issue of managing the Capstone is seldom mentioned in most engineering colleges and needs to be addressed [15]. Weak management of the Capstone can lead to an imbalance of the skills needed by graduate engineers. The management of the Capstone must include timing, selection, advertising the projects, one-on-one supervising, coordinating, assessment, and materials submission such as the project portfolio, meeting minutes, presentation and final report. This paper combines the academic requirements with leadership features into a well-organized plan to establish best practice in the SCDP.
In the Jeddah and Rabigh engineering colleges at King Abdulaziz University (KAU), some of the issues mentioned here still need to be addressed. Moreover, the SCDP should follow one of the worldwide accreditation bodies, such as the National Commission for Academic Accreditation and Assessment (NCAAA), the Institution of Chemical Engineers (IChemE) or the Accreditation Board for Engineering and Technology (ABET). Since the engineering colleges at KAU are all accredited by ABET, the SCDP was required to follow the ABET criteria. The main objective of this paper is to delineate the process, leadership skills, ABET involvement and evaluation that are needed to manage the SCDP course successfully and achieve the required outcomes. The following methodology is adopted to achieve the objective of the paper.
• Delineate the SCDP's requirements.
• Develop a flow chart for the SCDP life cycle.
• Define the skills that students need during the course.
Project Requirements
To fulfill industry's demand criteria, the university must prepare its students to achieve these criteria. In the SCDP, students and professors begin with the requirements set by the Accreditation Board for Engineering and Technology (ABET).
Selection of the project
The project should reflect a real-life design problem related to industry or society. A situation should be clearly described by the advisor(s) or customer. The design problem should be defined by the students with coaching from the advisors. The project should involve a problem that has no single solution; more than one solution should be discussed by the students for a given situation. A comparison should be performed between the alternatives in a methodical way (e.g., quality function deployment (QFD)). The roadmap of thinking and the rationale of the selected design project should be clarified (high-level plan). Students and advisor(s) should summarize, on one sheet of paper, the curriculum sources that contributed to the accumulated knowledge used to address the design project problem. Each project should include a section to assess the impact of the project on the environment including, but not limited to, air, water, soil, etc. The final product in some projects might have a direct or indirect short-, medium- or long-term impact on some sector(s) of the local, national and/or international society. In this case, the project report should assess the acceptability of the proposed design by the neighboring and/or end-user society. Each project should include a cost estimate of the design and its implementation, including time and material. Each project should address the marketability of the product, which could be a manufactured product or a service.
2.2 Project Supervising
Each project should have at least one advisor from the KAU faculty and, preferably, one advisor from industry. Adopted design specifications, regulations and standards should be clarified and documented in each design project. Professor(s) should emphasize teamwork among students.
Managing the project
The simple one-page project management (OPPM) sheet or mind map (MM) that can be used for a capstone design project is shown in Figure 2. Either one can help to draw a road map for the project life cycle and should contain all the required activities.
Project Evaluation
The project is evaluated by three partners: 1) the senior capstone design project advisor (SCDPA), 2) the senior capstone design project coordinator (SCDPC), and 3) the department-appointed examining committee (ExC). The ExC consists of a consultant appointed by the department chair (who could be a faculty member), an industry representative willing to participate, the advisor, and perhaps a couple of students who act as team captains or a graduate student who is working with the professor. The final evaluation committee has the authority to approve the project.
Professional ethics:
General requirements for all kinds of projects can be summarized as follows:
• All work should be original and not copied from others.
• Within the project team's work scope, the work should be divided equally among all members.
• Grades should be given on an individual basis, based on the effort and performance of each student, as well as at the team level.
• All reference materials should be documented.
• Professional ethics should be implemented and enforced by the professor(s) and students.
Final Product:
• Either a full-scale prototype or a scaled model of the product must be manufactured and tested.
• A technical report should be written in clear English.
• A multimedia presentation should be prepared.
• A poster should be prepared including an executive summary, the problem statement, design approach, and important findings with illustrations.
Process and Procedure
The project usually starts by outlining the method the project will follow, based on the project team formulation. The process of the senior design project, shown in Figure 3, starts when the students register for the course. A kick-off meeting is conducted for the registered students and the advisory committee. The high-level requirements and detailed constraints are explained in the meeting. The students are asked to prepare a business problem, which is evaluated by the advisory committee. The project can be an industrial project or a non-industrial project; either one is acceptable. However, an industrial project requires a further project process. The senior project department committee will examine the team and evaluate the project. The senior capstone design project coordinator (SCDPC) will conduct an almost weekly meeting with students to enhance their administrative and leadership capabilities. They will be instructed on how to manage the project and achieve the team's business and technical goals. During the project, students must conduct several meetings to achieve the required milestones of the project. The team must meet with their advisor weekly, as well as before and after the advisor meeting to prepare and follow up, respectively. Students are expected to use the form in Table 1 for minutes and meeting preparation. To have fruitful meetings, the team should prepare the points that must be discussed, such as technical issues related to design, operation, and inventory (items that need to be purchased or fabricated). The place and time should be carefully selected to suit all team members and the project advisor. The team leader has to assign a timekeeper and a recorder for the meeting. The team meeting should preferably be conducted in the place of the experiment, or in the software lab if it is a simulation project. All meeting minutes should be kept in the project's portfolio after they are signed by the SCDPA and SCDPC. The project advisor should prepare project progress reports and keep them in his course file. The weekly project status report is expected to meet the preset objectives of the final report.
Project Student Portfolio
Each team should prepare a project portfolio as shown in Tables 2 and 3 to organize their projects. The project portfolio should be organized as follows:
1.
Cover page should contain the following data:
Leadership Skills
One of the most important elements of any project is adopting leadership skills. Those skills start from the initiation of the project team and end with submitting the final requirements of the project and receiving the grade. During the work flow, important issues related to team formation are highlighted below.
Team Formation
Team formation is the first important step in determining the future success of the project. Being able to move the team toward its goals is not an easy task, and many elements must be considered in the team-building process. The team leader and members are encouraged to participate in a short training course about forming an effective team, which is available through many websites [16-18] or through some university programs.
Project Management: Three constraints that might slow down or even halt the project are cost, time, and work scope. More importantly, a few core pillars need to be raised for any project to succeed.
Pillar #1: Stakeholders - The first pillar is the stakeholders of the project, who include the mentor(s), team members, the department chairman, the college's vice dean and dean, engineers, technicians, etc. In addition, anyone who can affect the project, in either a positive or negative manner, is also considered a stakeholder. Dealing with stakeholders in the correct manner will determine the success of the project. The owners of the project (the students) must know how to approach and deal with these stakeholders based on their authority and contribution to the project. These are skills that students can acquire from this project; they are very important and become even more valuable after the students graduate and go to work in a company environment. Figure 7 shows the stakeholder approach, which is based on each person's influence and interest.
Pillar #2: Benefits - The second pillar is the benefits of the project. Each member of the team has to be convinced of the importance of the project and its expected benefits. With this understanding, each member of the team will work hard and contribute toward achieving the objective of the project.
Pillar #3: Work Scope - The third pillar is the work scope, which must be realistic and clear, with frequent review. It is better to have work scope statements that include all elements of the project than a few broad goals that leave too much to the imagination.
Pillar #4: Risk Management - The fourth pillar is risk management. Risks are a part of any project and should be expected; how participants deal with them will determine the success of the project. Defining a risk from the beginning can help the team avoid it, and a SWOT analysis is a helpful tool for preparing suitable responses. Here, you have to define strengths (S), weaknesses (W), opportunities (O) and threats (T) before the start of the project and deal with them accordingly. Figure 7 illustrates the commonly used SWOT analysis template.
Pillar #5: Schedule Adherence - The fifth pillar is to build and stick with the schedule your team develops. A Gantt chart (Fig. 8) is a very helpful tool, which can clearly show whether the project is on track or not. With an updated Gantt chart, immediate action can be taken to keep the project on track or to identify an immediate solution to potential problems that might drag the project off schedule. A Gantt chart can be built using Microsoft Project; a minimal scripted alternative is sketched after this list. Figure 9 shows a sample Gantt chart built with a plan and activity monitoring.
Pillar #6: Team Performance - The final pillar is team performance. The leader of the group should watch all team members and evaluate their performance. Tasks should be clearly defined to ensure success in achieving the team's objective. Figure 9 shows the project management flow from start to end.
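For teams without access to Microsoft Project, a simple Gantt chart can also be generated with a short script. The sketch below uses matplotlib's broken_barh; the task names and week numbers are hypothetical placeholders, not the schedule used at KAU.

import matplotlib.pyplot as plt

# Hypothetical capstone tasks as (start_week, duration_weeks)
tasks = {
    "Problem definition":  (1, 2),
    "Concept selection":   (3, 3),
    "Detailed design":     (6, 4),
    "Prototype & testing": (10, 4),
    "Final report":        (13, 2),
}

fig, ax = plt.subplots(figsize=(8, 3))
for row, (name, (start, dur)) in enumerate(tasks.items()):
    ax.broken_barh([(start, dur)], (row - 0.4, 0.8))   # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels(tasks.keys())
ax.set_xlabel("Week of semester")
ax.set_title("Sample capstone Gantt chart")
plt.tight_layout()
plt.savefig("gantt.png")   # or plt.show()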
ABET Involvement
ABET has constraints that students should be aware of during their project: the product intent and workplace protocol must meet industry standards, which pertain to ethics and to health and safety. ABET has also set outcomes for the engineering program, which are shown in Table 4. When students graduate, they are expected to fulfill most of the ABET outcomes. Most engineering courses cover certain technical outcomes, namely those shown in Table 5, lines a, b and e. However, lessons in ethics and leadership are not a required component of regular engineering courses.
In the Capstone, students often spend several hours perfecting the design of the project and interacting regularly with other team members and the advisor. This is when the other ABET outcomes, such as those shown in Table 4, lines d, f and g, are fulfilled. The evaluators of each outcome are also shown in Table 4. The SCDPA is the senior capstone design project advisor, the SCDPC is the senior capstone design project coordinator, and the ExC is the department-appointed examining committee. Each outcome has Key Performance Indicators (KPIs), which are the essential elements shown in Table 5 (e.g., "Perform iterative analysis until all potential improvements are achieved").
Project Evaluation
Presentation of the Projects
The students should prepare a report and present their work to the examining committee assigned to them by the department. Their report should follow the university's standard format [20]. The presentation should be clear, organized and delivered in English. The draft report and the presentation are weighted equally in the grade (50% from the report and 50% from the presentation). The score breakdown is shown in Table 6.
Score Distribution
A student's final grade depends on his or her overall activity during the course. The senior capstone design project coordinator (SCDPC) evaluates the teams on their discipline in submitting the papers related to the project portfolio, such as the minutes of meetings, the proposal, the Gantt chart and the overall appearance of the portfolio. The department-appointed examining committee (ExC) evaluates the team's performance during the presentation and the draft report. The senior capstone design project advisor (SCDPA) evaluates the students' work during the whole course and also evaluates the final report. The percentages of the score values, based on their weighted distribution in the evaluation, are shown in Figure 10.
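A short script can make the weighted grading transparent to students. The weights below are hypothetical placeholders standing in for the distribution of Figure 10, which is not reproduced here; the structure simply mirrors the three evaluators described above.

# Hypothetical weights (must sum to 1.0); replace with the actual Figure 10 distribution.
WEIGHTS = {
    "portfolio_SCDPC": 0.20,   # coordinator: portfolio discipline
    "exam_ExC":        0.40,   # examining committee: presentation + draft report
    "advisor_SCDPA":   0.40,   # advisor: course-long work + final report
}

def final_grade(scores: dict) -> float:
    """Combine per-evaluator scores (0-100) into a weighted final grade."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(final_grade({"portfolio_SCDPC": 85, "exam_ExC": 78, "advisor_SCDPA": 90}))  # 84.2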
Achievement
The SCDP allows senior-level students to gain professional engineering design experience through an opportunity to practice teamwork, quality principles, communication skills, life-long learning skills, realistic constraints and awareness of current domestic and global challenges. Through the successive weekly design reports, while following the Gantt chart, the students are required by the end of the course to communicate, clearly and concisely, the details of their design both orally and in writing through a functional artifact/prototype (if any), a design notebook (if any), an A0 project poster, a final oral presentation, and a final report [21]. After two years of Capstone study and activities, implementing these features had positive impacts on students, faculty and the department. Graduates from the engineering colleges were hired by well-known companies in the region and have interacted well with employers and fellow employees. In addition, many of them were able to continue their studies and practice in the academic field. In 2012, 15 students produced acceptable journal papers that showed an awareness of both contemporary issues and ethical responsibility [22]. In addition, for the engineering faculty at KAU-Rabigh, the Capstone effect was more noticeable, since the time gap between learning the aspects of the program and practicing them was very short. Figure 11 shows some advantages of involving students in the Capstone program. A notable increase in faculty involvement occurred in each department. It also shows that the number of students per team was reduced to between 2 and 3 members. Moreover, the students' scores, honors and awards increased during the first years of implementing the new aspects of the SCDP.
Conclusions
The capstone design project at KAU focuses on preparing students for their future careers. This paper provides a complete instruction guide for managing the Senior Capstone Design Project (SCDP) course. The process for all the steps required for the SCDP is provided, and the leadership skills that an SCDP team needs are emphasized. The team formation process was defined. The importance of frequent meetings and well-kept records, as well as the constraints and outcomes set by ABET, were delineated in the narrative and supported by several figures and tables designed to capture the essential components needed to achieve successful SCDP outcomes. The program is continually updated to comply with the latest requirements, especially from industry.
Fig. 1 .
Fig. 1. The capstone design project acts as a cap for the engineering curriculum courses.
Figs. 2-4. Miscellaneous.
c) Names, IDs and signatures of team members; d) advisor(s) name(s); e) semester and year.
Divide the portfolio into the following sections, using separators, and organize them as follows: a) cover page; b) project proposal; c) weekly project status report; d) project plan on a Gantt chart (current week expanded, other sections suppressed); e) advisor meeting minutes; f) team member meeting minutes; g) client meeting minutes (if the project is sponsored by a client); h) technical report (business problem, project charter, final design, manufacturing and testing, conclusion and recommendations, technical drawings); i) assembly drawings; j) sub-assembly drawings; k) working drawings; l) presentations, six slides per page; m) draft work of the team.
Update the work in each section weekly and arrange the updated materials in each section such that the most recent work is always on top. The project portfolio should be submitted weekly to the course coordinator in his office right after the Sunday lecture and picked up from his office on Wednesday.
Fig. 5 .
Fig. 5. Sample of the cover for the Portfolio
Table 1 .
Time Table for the Weekly coordinator meeting
Table 2 .
Meeting minutes form
Table 3 .
Weekly Project Status Report
Table 5 .
Sample of KPI for ABET outcomes (rubrics)
|
v3-fos-license
|
2024-05-11T15:32:26.375Z
|
2024-05-07T00:00:00.000
|
269708214
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aisy.202300761",
"pdf_hash": "361372cdb162d123d428012074d695e622be397b",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:674",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"sha1": "1023b2ca2dc977a043ae072e131237e2d4c468e5",
"year": 2024
}
|
pes2o/s2orc
|
Toward Collaborative Multitarget Search and Navigation with Attention‐Enhanced Local Observation
Collaborative multitarget search and navigation (CMTSN) is highly demanded in complex missions such as rescue and warehouse management. Traditional centralized and decentralized approaches fall short in terms of scalability and adaptability to real‐world complexities such as unknown targets and large‐scale missions. This article addresses this challenging CMTSN problem in three‐dimensional spaces, specifically for agents with local visual observation operating in obstacle‐rich environments. To overcome these challenges, this work presents the POsthumous Mix‐credit assignment with Attention (POMA) framework. POMA integrates adaptive curriculum learning and mixed individual‐group credit assignments to efficiently balance individual and group contributions in a sparse reward environment. It also leverages an attention mechanism to manage variable local observations, enhancing the framework's scalability. Extensive simulations demonstrate that POMA outperforms a variety of baseline methods. Furthermore, the trained model is deployed over a physical visual drone swarm, demonstrating the effectiveness and generalization of our approach in real‐world autonomous flight.
Introduction
Multitarget search (MTS) and navigation with intelligent systems like autonomous drones [1] are highly demanded in critical applications, [2] such as disaster rescue in earthquakes, surveillance and reconnaissance in hazardous areas, and parcel management in cluttered warehouses (see Figure 1). The utilization of multiagent systems (MAS) has been considered a more efficient way, with more information exchange, to conduct such missions while enhancing the resilience and robustness of the search operations. [3,4] However, ensuring effective collaboration among these agents is a challenging task, especially in obstacle-rich environments with multiple targets. This involves not only information exchange but also synchronization of actions and strategies among all participating agents. Moreover, coordinated interaction is fundamentally required, where agents work together toward a common goal, sharing information and making decisions that benefit the overall mission.
Traditional approaches mainly address multitarget search (MTS) problems via heuristic algorithms [5] or subtask optimization [6] in centralized or decentralized manners. Centralized approaches, such as task assignment from a central node, rely on a central controller to allocate tasks to individual nodes and execute their actions based on global information. Gábor et al. [6] proposed a centralized optimization algorithm that considers the spatial constraints of the environment and the capabilities of drones to control a swarm of drones to achieve coordinated flocking in constrained environments. While these methods are easy to deploy, they are vulnerable to single-node failures and lack scalability. Once the central node is attacked or disturbed, [7] the entire system ceases to function. Meanwhile, the complexity of communication and computation increases rapidly with the number of nodes in the system, which hinders the extension to large-scale scenarios. Decentralized approaches, such as distributed predictive control [8] and decentralized trajectory planners, [9] have been proposed to overcome the limitations of centralized approaches. These methods rely on local observation and communication among agents to achieve the desired performance but suffer from state consensus problems. [10] Besides, these approaches often struggle to adjust to uncertainties and complexities when operating in real-world environments, since simplified environments and simulated sensor data are assumed. Furthermore, they do not always fully leverage the potential for collaboration when multiple agents and targets are involved, as each agent is computed to take its optimal strategy individually.
Multiagent reinforcement learning (MARL) [11-16] has recently emerged as a powerful tool for tackling uncertainties and generating effective collaboration among agents in sophisticated tasks such as path planning and navigation in cluttered environments. These methods leverage reinforcement learning (RL) algorithms to find the optimal policy for agents with intentionally designed reward functions that aim to encourage collaboration among agents and benefit the entire team. However, these methods either require global observations from the environment [11] or rely on bird's-eye-view grid states and stable communication, [12,15] which makes fully decentralized planning infeasible in an uncertain real-world environment such as MTS. In this article, we aim to solve the collaborative multitarget search and navigation (CMTSN) problem in 3D space with local visual perception. There are several challenges to achieving fully decentralized decision-making for this task: (1) balancing exploration and exploitation in a 3D sparse reward space; (2) addressing efficient credit assignment within a team; and (3) improving the scalability of the trained model. An adaptive curriculum embedded multistage learning framework [17] was proposed to address the first challenge, yet it covered only single-target search. Multiagent posthumous credit assignment (MA-POCA) [18] addressed posthumous credit assignment by considering the group contribution among multiple agents and introducing an attention mechanism into the centralized critic network. However, MA-POCA ignores the individual's motivation and does not consider variable observations during execution, which is critical for local perception and scalability since the detected signal may be lost. Besides, existing MARL and MTS approaches rarely consider physical implementations for CMTSN, which poses more challenges in communication and observation synchronization when the number of found targets changes.
In this work, we propose a POsthumous Mix-credit assignment with Attention (POMA) framework to address the remaining challenges, namely efficient credit assignment and variable observation. POMA first combines adaptive curriculum learning (ACL) and mixed individual-group credit assignment based on MA-POCA, and then introduces the attention mechanism into observations to improve the scalability of the framework when variable local observations are involved. We validate our framework through extensive simulation and real-world flight experiments.
Our main contributions include: 1) we address the complex CMTSN task by formulating it as a partially observable MARL problem and proposing a novel MARL approach, POMA, to tackle the critical challenges of credit assignment and model scalability in a 3D sparse reward environment, and we demonstrate its significant performance improvement over well-established MARL approaches such as MADDPG, [19] MA-POCA, and IPPO; [20] 2) we develop a real drone swarm equipped with local visual perception capabilities for CMTSN in indoor, obstacle-rich environments (see Figure 2), utilizing an attention mechanism to enhance collaboration and manage variable observations, and target status synchronization to update the global information; 3) to the best of our knowledge, this is the first work to deploy a deep reinforcement learning (DRL) model on a physical drone swarm for CMTSN tasks, proving its effectiveness in real-time flight scenarios and highlighting its practical applicability, thereby establishing a new benchmark in robot swarm intelligence.
MTS
MTS has attracted significant attention due to its critical importance in various missions such as search and rescue (SAR).
Figure 1. A drone swarm inspecting parcels in a warehouse where efficient collaboration is required. [47]
MTS involves exploring the environment and finding multiple, initially unknown targets. Distinct from mere environment exploration, [21,22] which aims at mapping explored areas into a unified representation, MTS focuses on finding all targets efficiently, prioritizing high success rates and minimal search time. Probabilistic search (PS) methods, [23] which leverage probabilistic models such as probability hypothesis density (PHD) [24] or belief [5] to estimate target locations and derive optimal search paths, are prevalent in MTS. Commonly, these PS challenges are reduced to particle swarm problems and tackled using heuristic strategies like particle swarm optimization (PSO), [25,26] ant colony optimization (ACO), [5] artificial bee colony (ABC), [27] or brain storm optimization (BSO). [28] However, these studies typically rely on simplified 2D maps and simulated sensor data, limiting their applicability in real-world scenarios. For instance, ref. [5] utilized an enhanced ACO technique for MTS within a 2D, obstacle-free grid-cell environment, employing simulated radar sensors, which simplifies the real-world complexity.
To overcome these limitations, some researchers have framed MTS as a partially observable Markov decision process (POMDP), solved using search algorithms [29,30] or RL. [31] For instance, ref. [30] modeled MTS in 3D space using a POMDP framework and addressed it with the adaptive belief tree (ABT) algorithm. In an attempt to maintain multi-UAV connectivity, ref. [31] adopted an RL model, utilizing omnidirectional perception in a 2D grid environment, and applied a convolutional neural network (CNN) to train policies based on image representations of trajectory histories and connectivity states, using DQN. [32] These approaches aim to more accurately reflect MTS challenges but still fall short of real-world deployment due to their reliance on grid maps and simulated sensors.
To better address the model uncertainties in real-world applications and facilitate collaboration, we propose a novel MARL approach to solve the CMTSN problem, which involves multiple agents and obstacles, using a real visual drone swarm. As demonstrated in Table 1, which summarizes recent MTS works, our work is unique in utilizing a real visual drone swarm for MTS in a high-fidelity 3D continuous environment, distinguishing it from the predominantly 2D, simulation-based approaches. Our work bridges this gap by leveraging a novel MARL framework validated through physical experiments, marking a significant advancement in real-world CMTSN applications.
Our approach, POMA, is specifically designed to address inherent challenges in applying MARL to CMTSN, such as learning in sparse reward spaces, resolving individual-group credit conflicts, and managing variable observations with inactive agents during task execution.
MARL
Recently, MARL [33-35] has been widely applied in path planning, navigation, and target search for intelligent systems. From the perspective of the training framework, MARL can be categorized mainly into three structures, namely, centralized training and execution (CTE), decentralized training and execution (DTE), and centralized training with decentralized execution (CTDE). CTE [36] contains a central controller that collects all the state observations and actions from the agents and shares them with the group; the controller not only trains the policy but also executes it with perfect information. However, CTE is not scalable due to the curse of dimensionality and is limited in environments where perfect information is not accessible. DTE provides a scalable way to search for the optimal policy for each agent via local observations, without communicating with other agents. PPO [37] is a common algorithm for generating individual policies and can be extended to MARL within the DTE framework. [14] However, the agents then simply coexist without significant collaboration. Hence, CTDE is the preferred framework in most research works, such as MADDPG [19] with centralized critics, COMA [38] with centralized joint policy search, and QMIX [39] with value function factorization. CTDE allows the involved agents to collaborate during training while maintaining scalability during execution, as it combines the advantages of CTE and DTE.
However, to effectively apply the CTDE framework to CMTSN tasks, we need to address the sparse reward space and credit assignment issues during training, especially for agents with only local visual perception. PRIMAL [12] and PRIMAL2 [40] combined imitation learning and RL to train fully decentralized policies for multiagent path finding in sparse reward worlds and highly structured warehouses, respectively. To better utilize information from neighbors, Li et al. [15] developed the GNNHIM framework based on graph neural networks to extract information from multihop neighbors. However, all these methods assume 2D grid environments and known target information, which is not practical for CMTSN with visual drones in our case.
Credit Assignment
Credit assignment is a crucial issue in cooperative MARL problems as it determines the contribution of each agent to the group's success or failure.In our CMTSN problem, where agents work together to find all targets, it is essential to identify which action by which agent led to a higher search success rate.
Various works have addressed the credit assignment issue in MARL. To estimate the contribution of each agent, Devlin et al. [41] calculated the difference between the global reward and the estimated reward when ignoring that agent. However, this method suffers from high computational cost, since the combinations of each agent's absence are simulated at every step. COMA [38] used a centralized critic to estimate the action-value function and, for each agent, computes a counterfactual advantage function to represent the value difference, which can determine the contribution of each agent. However, COMA still does not fully address the model complexity issue. QMIX [39] addresses the scalability issue by decomposing the global action-value function into individual agents' value functions. However, QMIX assumes that the global action-value function is a monotonic function of the individual value functions, which is hard to meet in practice [18] and may not directly handle scenarios with continuous action spaces.
Another issue regarding credit assignment is the variable number of agents, which induces the posthumous credit assignment problem. For instance, in the CMTSN task, drones may crash into an obstacle or other drones during the flight, but the remaining drones are required to continue the search task. MA-POCA [18] provides valuable insights for addressing posthumous credit assignment by introducing a self-attention mechanism across the value function estimation and the counterfactual baseline, [42] but it lacks consideration of the individual's motivation. This can degrade the performance of MA-POCA, especially in environments with sparse reward spaces and multiple targets such as CMTSN.
To address these challenges, in this article we combine the advantages of MA-POCA and PPO to balance group contribution and individual motivation through well-designed group and individual reward functions. We further introduce the attention mechanism to encode the local observations, which significantly improves collaboration and the scalability of the policy in various scenarios.
MARL
The CMTSN problem can be characterized as a decentralized POMDP defined by $\mathcal{M}: \langle \mathcal{N}, \mathcal{S}, \{\mathcal{O}^i\}_{i\in\mathcal{N}}, \{\mathcal{A}^i\}_{i\in\mathcal{N}}, P, R, \rho_0 \rangle$, where $\mathcal{N} = \{1, \ldots, N\}$ represents the set of finite $N \geq 1$ agents. The state of the environment at time $t$ is denoted by $s_t \in \mathcal{S}$. The local observation and action of agent $i$ related to $s_t$ are defined by $o^i_t \in \mathcal{O}^i$ and $a^i_t \in \mathcal{A}^i$, respectively. Let us define the joint action space $\mathcal{A} = \mathcal{A}^1 \times \cdots \times \mathcal{A}^N$ and the transition function from state $s_t \in \mathcal{S}$ to the subsequent state $s_{t+1} \in \mathcal{S}$ given the joint action $a_t \in \mathcal{A}$ with probability $P(s_{t+1} \mid s_t, a_t)$. The reward function shared by all agents is given by $R: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$, with $r(s_t, a_t)$ representing the reward received by all agents; for simplicity, $r_t$ is used to denote $r(s_t, a_t)$. $\rho_0$ is the distribution of initial states, and $\gamma \in [0, 1)$ is the discount factor for calculating the cumulative rewards. Let $\pi = (\pi^1, \ldots, \pi^N)$ be the joint policy, with $\pi^i(a^i_t \mid o^i_t)$ representing the policy of each independent agent $i \in \mathcal{N}$, and let $T$ be the maximum allowable number of steps in an episode. In the CTDE framework, the centralized state-value function for state $s_t$ is described as
$$V^{\pi}(s_t) = \mathbb{E}_{\pi}\Big[\sum_{k=0}^{T-t} \gamma^{k} r_{t+k} \,\Big|\, s_t\Big],$$
and the centralized state-action value function is
$$Q^{\pi}(s_t, a_t) = \mathbb{E}_{\pi}\Big[\sum_{k=0}^{T-t} \gamma^{k} r_{t+k} \,\Big|\, s_t, a_t\Big].$$
The objective of MADRL is to find an optimal joint policy $\pi$ maximizing the state value of the team starting from the initial state $s_0$. To evaluate the contribution of each agent, the counterfactual baseline [38] $b^i(s, a)$ is computed by marginalizing the action of agent $i$ from the team:
$$b^i(s, a) = \mathbb{E}_{a' \sim \pi^i}\big[Q^{\pi}(s, (a', a^{-i}))\big].$$
Here, $a'$ is the action of agent $i$ and $a^{-i}$ is the joint action without agent $i$. With the baseline, the advantage of agent $i$ can be defined by
$$A^i(s, a) = Q^{\pi}(s, a) - b^i(s, a).$$
The optimal policy for each agent $i$ is obtained by iteratively updating the policy with the gradient
$$\nabla_{\theta^i} J = \mathbb{E}\big[\nabla_{\theta^i} \log \pi^i(a^i_t \mid o^i_t)\, A^i(s_t, a_t)\big].$$
The advantage function considers the contributions of individual agents to the shared team reward by measuring the value difference with or without the agent's involvement.
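The following Python sketch illustrates the counterfactual baseline and advantage for a toy case with a discrete candidate-action set; it is an illustration of the general idea above, not the continuous-action implementation used in POMA, and the stand-in critic is arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def q_value(state, joint_action):
    """Stand-in critic: any function of (state, joint action) works for the illustration."""
    return float(np.sum(state) - np.sum((joint_action - 0.5) ** 2))

state = rng.normal(size=4)
joint_action = np.array([0.2, 0.8, 0.4])        # actions of agents 0, 1, 2
candidate_actions = np.linspace(0.0, 1.0, 11)   # discretized action set of agent i
policy_i = np.full(len(candidate_actions), 1.0 / len(candidate_actions))  # uniform pi^i

i = 0
# b_i(s, a): expectation over agent i's own actions, other agents' actions held fixed.
baseline = 0.0
for prob, a_prime in zip(policy_i, candidate_actions):
    counterfactual = joint_action.copy()
    counterfactual[i] = a_prime
    baseline += prob * q_value(state, counterfactual)

advantage = q_value(state, joint_action) - baseline
print("baseline:", baseline, "advantage:", advantage)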
To address the posthumous credit assignment issue, MA-POCA [18] utilized observation entity encoders and self-attention modules to handle the varying number of agents at every time step. First, to obtain the Q value, the attention-masked centralized state value function parameterized with $\phi$ in MA-POCA is
$$V_{\phi}\big(\mathrm{RSA}(g^1(o^1_t), \ldots, g^{k_t}(o^{k_t}_t))\big),$$
where $k_t$ ($1 \leq k_t \leq N$) is the number of active agents at time step $t$, $g^i: \mathcal{O}^i \mapsto \mathcal{E}$ is the observation entity encoder for agent $i$, and RSA is the self-attention module used. Similarly, the counterfactual baseline parameterized with $\psi$ for agent $i$, used to calculate $b^i(s, a)$, is
$$Q_{\psi}\big(\mathrm{RSA}(g^i(o^i_t), f^{-i}(o^{-i}_t, a^{-i}_t))\big),$$
where $f^{-i}: \mathcal{O}^{-i} \times \mathcal{A}^{-i} \mapsto \mathcal{E}$ is the encoding network for observation-action pairs from the agents other than $i$.
Attention Mechanism
The attention mechanism [42] has been widely applied in various deep learning architectures. In DRL, attention has been used to create a weighted sum of observations, where the weights reflect the relevance of each observation to the task. [43] Let us consider a partially observable task where we have an uncertain observation sequence $o^i_t := \{s^j_t \mid j \in \mathcal{N}^{-i}\}$ for agent $i$ and we want to encode the observation into a fixed-length context vector $c^i_t$ with attention weights $w_{ij}$:
$$c^i_t = \sum_{j \in \mathcal{N}^{-i}} w_{ij}\, W_v s^j_t.$$
The attention weights can be calculated by
$$w_{ij} = \frac{\exp\big(\mathrm{score}(W_q s^i_t, W_k s^j_t)\big)}{\sum_{j' \in \mathcal{N}^{-i}} \exp\big(\mathrm{score}(W_q s^i_t, W_k s^{j'}_t)\big)},$$
where $\mathrm{score}(\cdot)$ is the scoring function that calculates the relevance of the neighbor states $s^j_t$ to the ego state $s^i_t$. The pairwise dot product is widely adopted as the score function. $W_v$, $W_q$, and $W_k$ are the parameters to be trained to encode the states.
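A minimal numpy version of this neighbor-attention encoding is sketched below; the dimensions and the scaled dot-product score are illustrative choices consistent with the equations above, not the exact layer used in POMA.

import numpy as np

rng = np.random.default_rng(1)
d_s, d_k = 8, 16                      # state and key/query dimensions (assumed)

W_q = rng.normal(scale=0.1, size=(d_k, d_s))
W_k = rng.normal(scale=0.1, size=(d_k, d_s))
W_v = rng.normal(scale=0.1, size=(d_k, d_s))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(ego_state, neighbor_states):
    """Weighted sum of encoded neighbor states; weights score relevance to the ego state."""
    q = W_q @ ego_state
    scores = np.array([(q @ (W_k @ s)) / np.sqrt(d_k) for s in neighbor_states])
    w = softmax(scores)
    return sum(w_j * (W_v @ s_j) for w_j, s_j in zip(w, neighbor_states)), w

ego = rng.normal(size=d_s)
neighbors = [rng.normal(size=d_s) for _ in range(3)]   # variable-length neighbor set
context, weights = attend(ego, neighbors)
print(context.shape, weights)   # fixed-length (16,) context regardless of neighbor count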
Problem Formulation
CMTSN with autonomous agents is highly demanded and has been investigated in various applications such as disaster SAR and warehouse management in complicated environments. Unlike the two-dimensional (2D) discrete grid scenario considered in other works, [15,40] in such real-life environments QR code distribution and prior goal information are infeasible in most cases.
In this article, we consider a bounded, obstacle-rich three-dimensional (3D) continuous space $D \subset \mathbb{R}^3$ with size $L \times W \times H$, where $L$, $W$, and $H$ are the length, width, and height of the environment space, respectively. Assume that we have $N$ agents $\mathcal{N} = \{1, \ldots, N\}$ that can move in 3D with continuous actions, and $M$ blocks $\mathcal{B} = \{B_1, \ldots, B_M\}$ with sizes $l_j \times w_j \times h_j$, $B_j \in \mathcal{B}$. Each agent starts from a preset initial position $p^i_{\mathrm{ini}}$ and attitude $q^i_{\mathrm{ini}}$ with random position perturbation $p^i_w$ and rotation perturbation $q^i_w$ for $i \in \mathcal{N}$. The CMTSN task is defined as follows: the agents need to find and approach all $G$ targets $\mathcal{T} = \{T_1, \ldots, T_G\}$ in the shortest time. All targets are marked as found (F) or not found (NF), and the status of each target is denoted by $ST_k \in \{F, NF\}$, $k \in \mathcal{T}$. Note that, during this task, an agent can always continue finding other targets unless it crashes or all targets are found.
Observation and Action Space
In the partially observable RL task, each agent draws actions from its individual policy conditioned on the current local observation, i.e., $a^i_t \sim \pi^i(o^i_t \mid \theta^i_t)$ with $\theta^i_t$ the trainable parameters. In this article, for each agent $i$, the local observation includes the visual observation $o^i_{v,t}$, i.e., the RGB image $I^i_t \in \mathbb{R}^{3 \times 224 \times 224}$; the ego-state observation $o^i_{e,t}$, i.e., the position $p^i_t \in \mathbb{R}^3$, the rotational quaternion $q^i_t \in \mathbb{R}^4$, the normalized forward direction $n^i_W$ and the previous action $a^i_{t-1} \in \mathbb{R}^4$; and the ego-centric observation $o^i_{c,t}$, i.e., the relative positions toward neighbors $p^{ij}_t = p^j_t - p^i_t$ for $j \in \mathcal{N}^{-i}$. Since the number of agents is not fixed in our task, a maximum number of observable neighbors $N_{\max}$ is preset, and hence $p^{ij}_t \in \mathbb{R}^{N_{\max} \times 3}$. The attention mechanism is adopted to calculate the scores of observable neighbors, which is discussed in the proposed POMA framework below. The visual observation $I^i_t$ is encoded into a vector $o^i_{v,t} \in \mathbb{R}^{256}$ with a simple CNN. To align with the real operation of drones, the action space of each agent consists of four continuous velocity commands, namely, the translational velocities $v^B_x$, $v^B_y$, $v^B_z$ and the yaw rate $\omega_z$ in the body-fixed frame. The action space is constrained by $\|v^B\| \leq v_{\max}$ and $-\omega_{\max} \leq \omega_z \leq \omega_{\max}$.
State Transition Function
Assuming the low-level proportional-integral-derivative actuator controller can track the generated actions with acceptable delay, we consider the high-level policy for the drone. The 4-degree-of-freedom dynamics with the aforementioned velocity commands can be described as
$$\dot{s} = g(s)\, a + \omega, \qquad (10)$$
where $s = [x, y, z, \psi]^{\top}$ with $x, y, z$ the world position coordinates and $\psi$ the heading angle, $g(s)$ is the transition matrix from the body-fixed frame $\Omega_B$ to the world frame $\Omega_W$, and $a := [v^B_x, v^B_y, v^B_z, \omega_z]^{\top}$ is the vector of velocity commands (generated actions) sent to each drone. $\omega$ represents the process noise and uncertainties, which follow a Gaussian distribution. Assuming no significant tilting perturbation, $g(s)$ is
$$g(s) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 & 0 \\ \sin\psi & \cos\psi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Note that the transition function (10) applies to each agent and determines the next state $s_{t+1}$ in discrete time given the current state $s_t$ and the joint action $\{a^i_t\} \in \mathcal{A}$ with stochastic noise $\omega$. Hence, the policy is not deterministic in our task.
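A discrete-time version of this yaw-only kinematic model can be written in a few lines; the Euler integration step and the noise scale below are assumptions for illustration.

import numpy as np

def step(state, action, dt=0.05, noise_std=0.01, rng=np.random.default_rng()):
    """One Euler step of s = [x, y, z, psi] under body-frame velocity commands a = [vx, vy, vz, wz]."""
    x, y, z, psi = state
    g = np.array([[np.cos(psi), -np.sin(psi), 0.0, 0.0],
                  [np.sin(psi),  np.cos(psi), 0.0, 0.0],
                  [0.0,          0.0,         1.0, 0.0],
                  [0.0,          0.0,         0.0, 1.0]])
    s_dot = g @ np.asarray(action, dtype=float) + rng.normal(scale=noise_std, size=4)
    return np.asarray(state, dtype=float) + dt * s_dot

s = np.array([0.0, 0.0, 1.0, 0.0])
for _ in range(20):                      # fly forward while slowly yawing
    s = step(s, action=[0.5, 0.0, 0.0, 0.1])
print(s)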
POMA
In this section, we describe our proposed POMA framework (see Figure 3), which extends MA-POCA (retaining the same posthumous credit assignment mechanism) and leverages ACL, mixed credit assignment (MCA), and attention-enhanced observation (AEO) to improve the performance and generalization capability of MA-POCA on CMTSN tasks.
ACL
Similar to automatic curriculum learning, [44] the motivation of our ACL module is to gradually guide the agents to learn to search for targets from near locations (simple tasks) to far locations (difficult tasks), given the sparse rewards in our scenario. Our ACL adjusts the task difficulty level (TDL) during on-policy training but only controls the initial state distribution ρ_0. Hence, it can be regarded as a specific instance of automatic curriculum learning.
At the beginning of each episode, the targets are randomly spawned in three space sets based on the search distance (along the L direction), namely, the near set C_N, the middle set C_M, and the far set C_F, with probabilities Pr(C_N), Pr(C_M), and Pr(C_F), respectively. Note that Pr(C_N) + Pr(C_M) + Pr(C_F) = 1. Since far targets are expected to be harder to find, the TDL is denoted by ε = 1 − Pr(C_N) ∈ (0, 1), which represents the difficulty of finding all targets. The main performance metric for our task is the average success rate (ASR) of the team over a moving time window T_w:

ASR = (1 / N_w) Σ_i 1(Out(i)),

where the sum runs over the N_w episodes finished within the window T_w, 1(1) = 1, otherwise 1(0) = 0, and Out(i) denotes whether episode i is successful. Our ACL adjusts the TDL against the variation of the team's success rate with the clipped control law

ε_{t+1} = clip( ε_t + η (ASR_t − ASR_d), 0.5 − ξ, 0.5 + ξ ),   (13)

where η is the rate that controls the update speed and ξ is the coefficient that controls the update range of ε; ε_0 and ASR_d are the initial TDL value and the desired ASR for the task. The basic idea is that when the ASR is low at the beginning, the TDL drops to the lower boundary 0.5 − ξ, and when the ASR exceeds the desired ASR, the TDL increases to give the team more challenge, which in turn brings the ASR down. This feedback from the ASR adjusts the TDL in real time, and ε converges to ε* for any ASR.
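A compact sketch of the ACL update is given below, assuming the clipped proportional form of rule (13) reconstructed above and an illustrative even split of the remaining spawn probability between the middle and far sets; η, ξ, and ASR_d are taken from the Settings section.

```python
import numpy as np

def update_tdl(eps: float, asr: float, asr_d: float = 0.7,
               eta: float = 0.001, xi: float = 0.4) -> float:
    """One ACL step: raise the task difficulty level when the team's average success
    rate exceeds the desired value, clipped to [0.5 - xi, 0.5 + xi]."""
    eps = eps + eta * (asr - asr_d)
    return float(np.clip(eps, 0.5 - xi, 0.5 + xi))

def spawn_probabilities(eps: float) -> dict:
    """Map the TDL eps = 1 - Pr(C_N) to spawn probabilities for the three target sets.
    The even split between the middle and far sets is an assumption for illustration."""
    p_near = 1.0 - eps
    return {"near": p_near, "middle": 0.5 * eps, "far": 0.5 * eps}
```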
MCA
Different from existing baselines (IPPO [20] uses only individual critics to evaluate each agent's behavior, while MA-POCA [18] considers only a group critic to evaluate each agent's contribution to the group objective), the proposed MCA module combines individual critics and a group critic to optimize the policy with both individual and group rewards, as illustrated in Figure 4. Within the MCA module, the individual critic encourages the agent to search for as many targets as possible through individual effort, while the group critic encourages collaboration with other agents to find all targets efficiently. Compared to MA-POCA, an additional individual critic is added in POMA to provide more information and criticism so that agents can make better-informed decisions. The individual critic is defined by the state-action value function Q^i and the state value function V^i kept for agent i and trained with the individual reward r^I_t(s_t, a^i_t). The generalized advantage estimation for this individual critic is

A^{i,I}_t = Σ_{j≥0} (γλ)^j δ_{t+j},

where δ_{t+j} = Q^i(s_{t+j}, a^i_{t+j}) − V^i(s_{t+j}) denotes the temporal-difference error, γ is the discount factor, and λ is a hyperparameter. Since each actor now has two separate critics (an individual critic and a group critic) providing guidance, the actor faces the dilemma of pursuing individual or group rewards. Compared to IRAT in ref. [45], which clips the difference between the individual policy and the group policy for each agent i and therefore introduces many trainable parameters, especially for large-scale MAS, our MCA module shapes the advantage function from the individual and group rewards directly. We combine the advantage functions for agent i as a λ_c-weighted combination of the group advantage Â^i_t, calculated from (2), (4), and (7), and the individual advantage A^{i,I}_t, where λ_c is the hyperparameter that balances the group critic and the individual critic. Note that the reward in (2) is the group reward r^G_t(s_t, a_t). The combined advantage is then used in the policy-gradient update (17) to iterate the policy for agent i. To better utilize the MCA module, the reward functions need to be well designed to balance individual effort and team contribution in different tasks. In this article, individual rewards r^I are given only when an agent finds a target or crashes. This encourages competitive behaviors within the team and ensures that the entire group is not punished for a particular agent's mistake. Group rewards are given when all targets are found or when all drones crash, so they provide feedback on collaborative search behavior while discouraging crashes. Beyond these task rewards, an existential reward r_e = −1.0/T and a collision reward r_c are added for all methods. Here, we define p^{ig}_T as the displacement from the collision position to the target position for agent i, and p^{ig}_ini as the initial displacement value. The norm of a vector is denoted by ||·|| and the inner product of two vectors by ⟨·,·⟩. We introduce two weights, α ∈ (0, 1] and β ∈ (0, 1], which adjust the penalty associated with the navigation distance and the penalty arising from the forward direction, respectively. The individual and group rewards are listed in Table 2, where G denotes a group reward and I an individual reward.
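The following sketch illustrates the individual-critic advantage estimate and one simple way of blending it with the group advantage; the weighted-sum combination is an assumption, since the text only states that λ_c balances the two critics.

```python
import numpy as np

def individual_gae(q_values: np.ndarray, v_values: np.ndarray,
                   gamma: float = 0.99, lam: float = 0.95) -> np.ndarray:
    """Generalized advantage estimate for the individual critic, using the TD error
    delta_t = Q_i(s_t, a_t) - V_i(s_t) as defined in the text."""
    deltas = q_values - v_values
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def mixed_advantage(group_adv: np.ndarray, indiv_adv: np.ndarray,
                    lambda_c: float = 0.5) -> np.ndarray:
    """Blend the group-critic advantage with the individual-critic advantage.
    The simple weighted sum is an illustrative assumption."""
    return group_adv + lambda_c * indiv_adv
```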
AEO
To address the generalization issue of the policy under variable ego-centric observations, ref. [17] uses the relative position toward the nearest agent as the ego-centric observation. However, such a design ignores the information from the other agents. To better utilize all observable neighbors, the attention mechanism is adopted. The ego-centric observations are first zero-padded to I^i with a fixed size of N_max × 3. The padded observation is then embedded into query Q, key K, and value V vectors with linear layers LN(I^i | Θ^i) of embedding size d_k/m. The output of the AEO is the standard scaled dot-product attention, softmax(QK^T / √d_k) V, computed over m heads, where m is the number of heads and d_k is the dimension of the padded observation. The output is further processed with residual connections and batch normalization, as in the Transformer architecture. [42]
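A minimal PyTorch sketch of such an attention-enhanced encoder is shown below; the embedding size, number of heads, and use of nn.MultiheadAttention are illustrative choices, not the exact layers of POMA.

```python
import torch
import torch.nn as nn

class AEOEncoder(nn.Module):
    """Multi-head self-attention over the zero-padded (N_max, 3) neighbor observations,
    followed by a residual connection and batch normalization."""
    def __init__(self, n_max: int = 5, d_k: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(3, d_k)
        self.attn = nn.MultiheadAttention(embed_dim=d_k, num_heads=heads, batch_first=True)
        self.norm = nn.BatchNorm1d(n_max)

    def forward(self, rel_pos: torch.Tensor) -> torch.Tensor:
        # rel_pos: (batch, N_max, 3) relative positions, zero-padded
        x = self.embed(rel_pos)                 # (batch, N_max, d_k)
        attn_out, _ = self.attn(x, x, x)        # self-attention over neighbors
        x = self.norm(x + attn_out)             # residual + batch norm
        return x.flatten(start_dim=1)           # (batch, N_max * d_k)

# usage sketch: a batch of 8 agents' ego-centric observations
enc = AEOEncoder()
out = enc(torch.zeros(8, 5, 3))
```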
Network Architecture
The actor network comprises a visual encoder and a policy network. The visual encoder is a simple CNN with two convolutional layers (16 filters of 8 × 8, then 32 filters of 4 × 4) with Leaky ReLU activations; it encodes the raw image input into a compact 256-dimensional vector. The encoded image is concatenated with the ego-state observation o_e to form C_1 ∈ R^270, which is then cross-encoded with the ego-centric observation (o_c ∈ R^{N_max×3}) through a multi-head attention network and concatenated with C_1, resulting in a vector C_2 ∈ R^398. The policy network consists of two fully connected (FC) layers of 256 nodes each, and its final layer has 4 nodes that generate the four continuous action values controlling the agent. Both the individual critic and the group critic use the same architecture: two FC layers of 256 nodes followed by a final value layer.
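The following PyTorch sketch reproduces the stated actor dimensions, which are consistent if the ego-state is 14-dimensional (position 3 + quaternion 4 + forward direction 3 + previous action 4, so 256 + 14 = 270) and the attention output is 128-dimensional (270 + 128 = 398); these splits, the convolution strides, and the Tanh output squashing are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Sketch of the described actor: a two-layer CNN visual encoder (Leaky ReLU),
    concatenation with the ego-state vector, and a two-layer 256-node policy head
    emitting four continuous actions."""
    def __init__(self, ego_state_dim: int = 14, ego_centric_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.LeakyReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.LeakyReLU(),
            nn.Flatten(),
            nn.LazyLinear(256), nn.LeakyReLU(),
        )
        self.policy = nn.Sequential(
            nn.Linear(256 + ego_state_dim + ego_centric_dim, 256), nn.LeakyReLU(),
            nn.Linear(256, 256), nn.LeakyReLU(),
            nn.Linear(256, 4), nn.Tanh(),   # four continuous velocity commands
        )

    def forward(self, image, ego_state, ego_centric_encoded):
        c1 = torch.cat([self.cnn(image), ego_state], dim=-1)     # ~R^270
        c2 = torch.cat([c1, ego_centric_encoded], dim=-1)        # ~R^398
        return self.policy(c2)

actor = Actor()
a = actor(torch.zeros(1, 3, 224, 224), torch.zeros(1, 14), torch.zeros(1, 128))
```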
CTDE with Behavior Manager
The proposed framework is trained with CTDE, where the critic network collects all agents' trajectories and each agent executes the same actor network. A centric behavior manager is designed to manage the group behavior of all objects of interest (OOIs) (targets, agents, and obstacles), such as group registration and the number of OOIs, while each agent's own behavior, such as its dynamics, is controlled by its individual behavior script. The group rewards are also defined in the behavior manager and triggered in the agent behavior script. The detailed training process for POMA is listed in Algorithm 1.
The behavior manager controls the initialization and updates of the environment with the TDL rule (13). Each agent is updated with an individual MonoBehaviour to obtain its observations and transition to the next time step with the dynamics and the actions from the current policy. The policy is updated every episode with the policy gradient (17).

Theorem 4.1. Under the TDL rule (13), the TDL ε converges to ε* ∈ [0.5 − ξ, 0.5 + ξ] for any given desired ASR, ASR_d, and initial TDL ε_0.
Proof. Assume that an ε_0 ∈ (0, 1) is given such that ASR < ASR_d over finitely many episodes [E]; then ε_t decreases to 0.5 − ξ within E × T time steps, and the ASR converges to some value ASR′ at this lower boundary. Since the tasks are easiest at ε = 0.5 − ξ, the resulting ASR″ satisfies ASR″ > ASR_d, so ε increases again and is driven toward an equilibrium ε* inside [0.5 − ξ, 0.5 + ξ]. If instead ε_0 is given such that ASR > ASR_d over finitely many episodes [E], let ASR′ = ASR; the argument follows symmetrically from the conditions discussed above. This completes the proof.
Theorem 4.1 provides theoretical support to ensure the convergence of training with ACL if the basic policy improvement is convergent.
Settings
Our simulation environments are developed with the Unity game engine. The proposed POMA framework is coded on top of the Unity ML-Agents Toolkit Release 20. [46] During training, two visual drones with a field of view (FOV) of 82.5 deg are randomly spawned to search for two targets (G = 2) in a room of size 5 m × 5 m × 2 m with 2 obstacles. The initial TDL is set to ε_0 = 0.1. All OOIs are randomly spawned and pre-checked to avoid collisions at the beginning of each episode. The ASR is calculated over 500 time steps (T_w = 500) after each episode, and the TDL is adaptively adjusted according to rule (13) with ξ = 0.4, η = 0.001, and ASR_d = 0.7. Each episode lasts at most 5000 time steps, and we train the policy for 10 million episodes. We set N_max = 5 since the room size limits the number of agents in the team. Training is accelerated with 2 parallel environment instances, each containing 6 scenarios.
Considering that IPPO has shown similar or superior performance to both MADDPG and QMIX in complex tasks [14] and that MA-POCA outperforms COMA, we use IPPO [20] and MA-POCA [18] as the baselines and test their performance on different missions, such as 1 drone versus 2 targets (1D2T), 3D3T, etc., in the 3-obstacle room with the TDL set to ε = 0.2. Note that MADDPG [19] is also compared, based on the project https://github.com/4kasha/Multi_Agent_DDPG, to further support our baseline selection. An ablation study is further conducted to examine the effectiveness of the designed modules.
Environment Setup
In this article, the policy is first trained in simulated environments and then transferred to the physical world. The training scenario is developed with the Unity simulation engine, as illustrated in Figure 5, where the drones are trained to search for several target boxes covered with AprilTags while avoiding obstacles (blocks and walls). To improve the generalization capability of the policy, the size of the blocks is randomized at the beginning of each training episode. Each block is randomly placed in the space D and cannot overlap with other blocks. All OOIs are meshed with colliders (for collision detection) and labeled with agent, target, and obstacle tags. An agent disappears, and the number of agents N decreases by one, if it collides with other agents or obstacles. Similarly, a target disappears, and the number of targets G decreases by one, if it is approached by any agent. The training episode terminates if N = 0, G = 0, or the maximum allowable number of steps T is reached.
Hyperparameters for RL
Since all of our implementations are extended from the PPO and MA-POCA, they share the same policy update mechanism and hyperparameters for RL.The critics and actor networks share the same network size.Table 3 lists the hyperparameters used for all training.
Hyperparameters for POMA
The parameters used in this work were tuned through trial runs of the individual modules. For instance, the changing rate η is chosen to reduce oscillations during training. Table 4 summarizes the other hyperparameters specific to POMA in the training environment.
All training and testing are carried out on a workstation running Ubuntu 20.04 with an AMD Zen 3 Ryzen 9 5900X CPU and an RTX 3090 Ti GPU. Unity 2021.3.11, mlagents 0.27.0, PyTorch 1.8.2, and Python 3.8 are used to develop the simulation and the algorithm.
Domain Randomization
To improve the generalization of the policies during training, several domain randomization techniques are adopted. Table 5 lists the parameters for the different types of domain randomization. Note that U(x_1, x_2) denotes the uniform distribution over [x_1, x_2], and B(c, r) denotes the uniform distribution over a sphere with center c and radius r. x_l, x_r, z_l, and z_u are the left, right, lower, and upper sides of the room, respectively. The position of the targets depends on the TDL: when ε is larger, the mean distance between the drones and the targets is larger in the L direction. The light intensity randomization provides generalization for visual perception.
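A small sketch of how such a randomized configuration could be sampled per episode is given below; the numeric ranges are illustrative stand-ins, while the actual distributions are those listed in Table 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(tdl: float, room=(5.0, 5.0, 2.0)) -> dict:
    """Sample one randomized episode configuration."""
    L, W, H = room
    target_lo = 0.2 + tdl * (L - 1.0)   # larger TDL pushes targets farther along L
    return {
        "block_size": rng.uniform([0.2, 0.2, 0.2], [1.0, 1.0, 1.0]),
        "agent_start": rng.uniform([0.2, 0.2, 0.2], [1.0, W - 0.2, H - 0.2]),
        "target_pos": rng.uniform([target_lo, 0.2, 0.2], [L - 0.2, W - 0.2, H - 0.2]),
        "light_intensity": float(rng.uniform(0.5, 1.5)),
    }
```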
Main Results
Table 6 lists the performance of our trained model in comparison with baseline methods and ablation methods over different testing missions with 3 obstacles.Their performance is evaluated with metrics including ASR and average time steps (ATS) over 500 episodes.More results are provided in Figure 7 and Table 7.
Baseline Comparison
In all multidrone missions, POMA consistently exhibits the highest ASR and lowest ATS, outperforming MA-POCA and IPPO.This can be greatly attributed to its superiority in addressing the sparse rewards and in capturing the effects of variable neighbors.IPPO is better in single-agent missions but with the highest ATS since only individual contributions are considered.However, IPPO does not encourage collaboration as their success rate decreases in 3-drone missions compared to 2-drone missions.This is because of the limited room space when more drones are involved.On the contrary, MA-POCA and POMA can handle collaborations in constrained spaces since their success rate increases when more drones are involved.Besides, as expected, MADDPG shows the worst performance in all missions due to its inefficiency in handling posthumous credit assignments.
Ablation Study
From the ablation study, we can observe that the AEO module improves the POMA framework in multiagent missions as the performance of POMA decreases a lot without the AEO module.
The attention mechanism scores the neighbors and hence improves collaboration. However, the AEO module degrades performance in single-agent missions, since the model performs best in single-agent missions without the AEO module; this is a side effect of introducing an extra Transformer network. Without the MCA module, POMA shows worse performance in single-agent missions, which verifies the effectiveness of the designed MCA module. In the No ACL training, we fix ε = 0.2. From the results, the ACL module significantly alleviates the sparse reward issue and improves training efficiency, as illustrated in Figure 7.
Qualitative Analysis
The snapshots from the simulation are illustrated in Figure 6.
From (a) to (b), Agent 2 sees the two targets and navigates toward them, while Agent 3 is searching for other targets in front, and Agent 1 stays behind.From (c) to (d), Agent 2 finds the last target when Agent 1 is moving forward to search.To avoid collisions, Agent 1 moves back to provide space for Agent 2 to navigate to the final target.These maneuvers show significant collaborations and the advantages of using a visual drone swarm to search targets when prior information is not available.
Quantitative Analysis
The training curves regarding the ASR are illustrated in Figure 7.
From the curves, MADDPG shows the worst training performance and IPPO shows higher learning efficiency, while POMA obtains the highest success rate even under a higher TDL. Without the ACL module, it is hard to find the optimal policy in the sparse-reward space, as observed in the POMA w/o ACL curve. Compared to the baseline MA-POCA, all designed modules contribute to the improvement of POMA.

In the physical experiments, the drones are controlled from decentralized edge computers via WiFi communication. To label whether a target is found, the status of the targets is broadcast from a centric computer. The positions of the OOIs are captured with a motion capture system and streamed from the centric computer. Implementation details are provided in the following. We tested our model in 1D1T, 1D2T, 2D2T, and 3D3T missions.
Physical Implementation
The policy model is trained in simulation (2D2T mission) and deployed over several edge computers. The control framework is illustrated in Figure 9. The Robot Operating System (ROS) Melodic is deployed on all computers, with the centric computer as the master node. Each edge computer subscribes to the pose topic from the centric computer and to the video stream from its connected drone. The obtained information is processed and passed into the loaded policy model with the ONNX Runtime package on the edge computer. The output actions drive the Tello Edu drones to search for and navigate to targets through the Tello Edu SDK 2.0. To label the status of targets, a target-status topic (vector) is created and maintained on the centric computer and updated by all drones. If a target is found (its distance to a drone is within 5 cm), its status is labeled 1 and its position is set to infinity. When all targets are found, the drones stop flying. This avoids repetitive searching.
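A minimal sketch of the per-drone inference step on an edge computer is shown below, assuming the trained policy has been exported to ONNX; the model path, tensor names, shapes, and output scaling are assumptions rather than the exact deployed interface.

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("poma_policy.onnx")  # hypothetical exported model

def infer_action(image: np.ndarray, ego_state: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """Run one forward pass and return [vx, vy, vz, yaw_rate] for the drone."""
    inputs = {
        "visual_obs": image.astype(np.float32)[None],       # (1, 3, 224, 224)
        "ego_obs": ego_state.astype(np.float32)[None],      # (1, ego-state dim)
        "neighbor_obs": neighbors.astype(np.float32)[None]  # (1, N_max, 3)
    }
    action = session.run(None, inputs)[0][0]                # first output, first batch item
    return np.clip(action, -1.0, 1.0)                       # normalized velocity commands
```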
Main Results and Discussion
Figure 2 illustrates the performance of our trained model in a typical 3D3T mission.Three drones fly to search for targets while avoiding obstacles with real-time video stream (see Figure 2a).Their trajectories show significant collaborative behaviors as Tello 3 attempts to navigate to Target 1 when Tello 2 has already found Target 3 and Target 2 (see Figure 2b).Meanwhile, Tello 1 tries to find other hidden targets.These collaborations improve search and navigation efficiency, especially in time-critical missions such as SAR. Figure 10 illustrates the trajectories of drones in different missions (1D1T, 1D2T, and 2D2T), demonstrating the effectiveness of Sim2Real with the trained model in different missions.Compared to traditional approaches, the DRL-based policy can be seamlessly applied to various missions without fine-tuning.The trained policy has showcased outstanding adaptiveness and scalability in different scenarios when a variable number of targets and agents are involved.
Note that in the simulation, the target disappears when touched by drones, while the target is still observable in physical experiments when it is labeled "found".This definitely affects the search performance of drones (the drone may hover some time in front of the target) even though we designed the mechanism to avoid repetitive searching (setting the found target's position to infinity).To address this issue, we can add some rules in the loop to guide the drone.In addition, the unstable position stream from the motion capture system also affects the search behavior as we can observe that some drones may hesitate to move at the corners of the room.However, the search policy still shows the resilience to finish the task and can be extended to various missions.More physical experiments are provided in Video, Supporting Information.
Conclusion
This article presents the POMA framework, an advanced solution that leverages MADRL for the complex problem of CMTSN.Our proposed method incorporates novel ACL and mixed individual-group credit assignment mechanisms that balance individual and group contributions in sparse reward environments.Simultaneously, the attention mechanism in POMA refines variable local observations, leading to significant enhancements in collaboration and scalability.Experimental results conducted on drone simulations demonstrate that our model attains superior performance over baseline methods.
We further deploy the trained model over a visual drone swarm and conduct physical tests in different missions. Real-world flight experiments in various missions demonstrate the effectiveness and generalization capability of our approach. The framework's ability to balance individual and group contributions in sparse-reward environments, coupled with its demonstrated effectiveness in drone swarm management for real-world applications, underscores its potential to enhance efficiency, decision-making, and adaptability across diverse and critical fields where collaborative behaviors are required. In future work, robustness to communication loss and human-in-the-loop guidance will be investigated to improve the performance of our policy in more complex scenarios.
Figure 2 .
Figure 2. a) Physical experiments on 3 drones versus 3 targets (3D3T) mission.The initial positions are labeled with directions.b) Trajectories of 3 Tello drones from motion capture system.Their trajectories show significant collaborative behaviors as Tello 3 attempts to navigate to Target 1 considering Tello 2 already found Target 3 and Target 2. Meanwhile, Tello 1 tries to find other hidden targets.
Figure 3 .Figure 4 .
Figure 3. Overview of our POMA framework. A centric behavior manager controls the TDL of the environment with an ACL feedback module. The MCA module integrates the individual critic with the group critic to address individual-group conflicts. The AEO encoder with a self-attention mechanism improves the generalization capability of the policy under variable neighbors.
Figure 5 .
Figure 5.The simulation environment for the CMTSN with 3 drones, 3 targets (3D3T), and 3 obstacles (3O) of different sizes.The task is successful only if all targets are found and approached.The left 3 views are drones' visual perception.
Figure 9 .
Each edge computer (Jetson Orin NX) is connected to the WiFi hotspot of its Tello Edu drone (AP mode with different names) and is also wire-connected to the TP-Link router with a different fixed IP address (192.168.0.121-192.168.0.123). The centric computer is wire-connected to the OptiTrack system and streams the pose information via WiFi through the TP-Link router (192.168.0.100).
Table 1 .
Comparative analysis of different methods for MTS.
Table 2 .
Reward function used for various methods.
Table 3 .
Hyperparameters for all RL algorithms.
Table 4 .
Parameters for the POMA and environment.
Table 6 .
Comparison with baselines and ablation models over different testing missions with 3 obstacles (ASR ↑ / ATS ↓; the best performance is highlighted in bold).
Table 7 .
Performance comparison in large-scale missions (ASR ↑ / ATS ↓; the best performance is highlighted in bold).
|
v3-fos-license
|
2024-04-10T15:15:22.812Z
|
2024-04-01T00:00:00.000
|
269023929
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1944/17/7/1701/pdf?version=1712570033",
"pdf_hash": "9c94c929e3adcb19e447095124743036f055c846",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:675",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"sha1": "7797c4a26d720024c27b91bc188aecce8d75993e",
"year": 2024
}
|
pes2o/s2orc
|
Effect of Rare Earth Elements on Microstructure and Tensile Behavior of Nb-Containing Microalloyed Steels
The present investigation endeavors to explore the influence of rare earth elements on the strength and plasticity characteristics of low-carbon microalloyed steel under tensile loading conditions. The findings from the conducted tensile tests indicate that the incorporation of rare earths leads to a notable enhancement in the yield strength, ultimate tensile strength, and ductility properties of the steel. A comparative analysis of the microstructures reveals that the presence of rare earths significantly refines and optimizes the microstructure of the microalloyed steel. This optimization is manifested through a reduction in grain size, diminution of inclusion sizes, and a concomitant rise in their number density. Moreover, the addition of rare earths is observed to foster an increase in the volumetric fraction of carbides within the steel matrix. These multifaceted microstructural alterations collectively contribute to a substantial strengthening of the microalloyed steel. Furthermore, it is elucidated that the synergistic interaction between rare earth elements and both carbon (C) and niobium (Nb) in the steel matrix augments the extent of the Lüders strain region during the tensile deformation of specimens. This phenomenon is accompanied by the effective modification of inclusions by the rare earths, which serves to mitigate stress concentrations at the interfaces between the inclusions and the surrounding matrix. This article systematically evaluates the modification mechanism of rare earth microalloying, which provides a basis for broadening the application of rare earth microalloying in microalloyed steel.
Introduction
High-strength microalloyed steel (HSLA) is widely utilized as the main material for offshore platforms, offshore wind power generation, and other marine engineering equipment owing to its high strength, high toughness, and excellent fatigue properties [1][2][3]. Conventional HSLA steels are mainly alloyed with elements such as Nb, Ti, and Cr to improve the strength and plasticity of the material. The formation of dispersed nanosized MC carbides through microalloying contributes to grain refinement, inhibition of dynamic recrystallization, and strengthening of the matrix [4][5][6]. The addition of strong carbide-forming elements (Nb, Ti, Mo, etc.) affects the diffusion and distribution of carbon (C) atoms, which in turn affects carbide formation. To further improve the strength of microalloyed steels, modern industry has improved material properties by adding rare earth elements to modify the inclusions in the steel [7,8]. The oxygen and sulfur content of steel can be successfully reduced by rare earth (RE) metals owing to their great affinity for these elements [9]. Controlling oxygen and sulfur with RE elements produces small, spherical RE-O-S or RE-S inclusions, which reduces both the inclusion content and the stress concentration between the inclusions and the matrix [10,11]. The impact of inclusions modified with rare earth elements on the tensile characteristics of steel has been the subject of numerous investigations [12][13][14].
In addition, small amounts of rare earth elements dissolve in microalloyed steels by vacancy diffusion and occupy lattice sites. In manganese steels, rare earth elements interact with carbon atoms and influence the phase transformation and the precipitation of carbides [14][15][16]. Rare earth atoms, with their large diameters and high distortion energies, tend to segregate at the ferrite-carbide interface [16,17]. The microalloying effect of rare earths in steel is becoming increasingly significant as the cleanliness of steel continues to improve. By adding rare earths to AISI D2 steels, M7C3 carbides were refined and uniformly distributed, resulting in a 75% increase in impact toughness [18,19]. Although rare earths have been extensively studied in the field of microalloyed steels, fewer studies have been devoted to explaining the mechanism by which rare earths affect plasticity. Moreover, previous studies have mainly focused on explaining the increase in yield strength due to rare earths and have reported that rare earths have less of an effect on tensile strength [20][21][22].
Rare earth microalloyed steel is essential for the improvement of material properties.This study focuses on comparing the effects of rare earth additions on material properties.The microstructures of steels with and without rare earths were systematically studied with characterization tools such as scanning electron microscopy and electron microscopy.Microstructural analysis provides a complete explanation of the increase in the strong plasticity (yield strength, tensile strength, and plasticity) as well as the tensile behavior (the Lüders effect/yield-point phenomenon) exhibited by the materials during the tensile process.
Experimental Details

Experimental Materials and Processes
The steels designed for the experiment were 0.05C and RE-0.05C low-carbon steels, and their chemical compositions are given in Table 1.The research steels were prepared using normal metallurgical processes, including iron pre-treatment, basic oxygen furnace (BOF) treatment, ladle arc refining furnace (LF) treatment, RH vacuum degassing, and continuous casting processes.To minimize the different impurity elements in the raw materials, the pure iron operation was selected to manufacture high-quality low-carbon steels.Initially, the hot-rolled steel plates were alloyed with ferrosilicon after being deoxidized, mostly to prevent the addition of carbon.The basic chemical composition of the hot-rolled steel plates was measured by the optical emission spectroscopy (OES; ARL 3460, Kronach Zahner, Germany) technique.When rare earth elements are added to steel in a vacuum atmosphere, the oxygen content of the steel is reduced to a low level (20-40 ppm) using a rapid oxygen determination probe.To assure uniformity of the material, rare earths were added to the steel using secondary refining and an argon blowing process at the bottom of the ladle.The final thickness of the steel sheet material was 10 mm.Inductively coupled plasma (ICP-MS; ICAPQ, Kronach Zahner, Germany) was used to measure the amounts of rare earths still present in the castings [23,24]; the results are shown in Table 1.
Microstructural Characterization
The steel samples were prepared according to standard metallographic procedures prior to microstructural examination. The scanning electron microscopy samples were prepared by grinding with successively finer abrasives and polishing with diamond paste. Specimens for microstructural characterization were etched with a 4% nital (nitric acid alcohol) solution, and EBSD sample surfaces were electropolished in a 5% perchloric acid-ethanol solution. In order to observe the three-dimensional morphology of the inclusions, 10 × 10 × 2 mm³ samples were electrolyzed in a non-aqueous solution. The specimens were electrolyzed for 10 min in the non-aqueous solution (pH ≈ 8) at 0 to 5 °C, dissolving part of the matrix to expose the inclusions. After electrolysis, the specimens were rinsed several times with alcohol to remove residual electrolyte from their surfaces. The inclusions in the polished samples were observed with a scanning electron microscope (SEM; QUANTA 450, Lincoln, NE, USA) in secondary electron (SE) and backscattered electron (BSE) modes to obtain better contrast of the inclusions. The microstructure and the niobium-containing second-phase precipitates were examined using an HRTEM JEM-2100 (Akishima, Japan) equipped with an energy dispersive spectrometer (EDS) at an accelerating voltage of 200 kV. The TEM samples were 3 mm diameter discs ground to a thickness of about 80 µm and electrolytically polished at room temperature at a voltage of 40 V. The dislocation density was quantified by X-ray diffraction (XRD) using a Cu-Kα radiation source (λ = 1.5405 Å). The scan range was 40° to 100°, with a step size of 0.02°.
Tensile Tests
The mechanical properties of the tensile samples were tested using an MTS E45.305 (Meters Industrial Systems, Shenzhen, China) tensile testing machine with a strain rate of 2 mm/min at room temperature.Three tensile tests were performed on each sample to ensure the accuracy of the tests.Small tensile specimens were prepared in accordance with ASTM E8-04 [25] and were carefully ground prior to testing to remove any scratches remaining from the specimen preparation process.During the tensile tests, load-displacement curves were continuously recorded and converted to engineering stress-strain curves.The recorded displacements corresponded to the displacements determined by the movement of the machine beam.
Microstructure Characteristics of RE-Containing and RE-Free Steels
The microstructure of microalloyed steel directly affects the mechanical properties of the steel, and the refinement of ferrite grains and pearlite is beneficial with respect to the tensile and toughness properties [26,27]. Figure 1 shows the microstructure consisting of a two-phase organization of ferrite + pearlite. A finer distribution of pearlite can be observed in the microstructure of the rare earth microalloyed steel (Figure 1c) compared to the base steel (Figure 1a). Based on the statistical data from SEM images, the area fraction of pearlite in the base steel is about 21.6%, while that in the rare earth microalloyed steel is about 21.1%. The characterization data of the microscopic volume fraction for microstructure statistics are presented in Table 2. This indicates that the addition of rare earths can effectively optimize the ferrite and pearlite. It has been observed that the refinement mechanism of rare earths is primarily explained by rare earth inclusions acting as heterogeneous nucleation sites during solidification, which can effectively refine the prior austenite grain size (PAGS) [28,29]. To verify the organization of the materials further, the EBSD data with and without rare earth additions were compared (Figure 2). Figure 2a,c show a slightly changed microstructure distribution with the addition of rare earths. According to the EBSD grain-size statistics (Figure 2b,d), the average grain sizes of the base steel and the rare earth microalloyed steel are about 5.01 µm and 4.47 µm, respectively, which indicates the refinement behavior in the RE microalloyed steel; this is similar to other reports in the literature [30]. In addition, the shape, size, and distribution of inclusions in microalloyed steels have a significant effect on the material properties [31,32]. Figure 1b,d illustrate the presence of inclusions with sizes around 3 µm in the two microalloyed steels. Since the organization was obtained by etching with an acidic solution, the full shape of the inclusions could not be observed. To further investigate the effect of rare earth microalloying on the organization, the inclusions were partially exposed by electrolysis of the substrate with a non-aqueous solution, and the EDS technique was used to characterize the inclusions in the microalloyed steels, which will be discussed in the next section.
Characteristics of Inclusions
Inclusions have a significant impact on the performance of microalloyed steel, and rare earth elements can effectively modify them [14,33]. Comparing Figure 3a,b shows that the inclusions in the steel gradually become spherical after the addition of rare earths. EDS analysis of the inclusion compositions shows that the inclusions in the base steel consist of (Al,Ca)O oxides as the core, with CaS and TiN attached to the oxides to form the nucleus; the addition of rare earths changes the composition of the inclusions. Features such as the number density and particle size of the inclusions in the two microalloyed steels were characterized statistically using an SEM-based inclusion statistics program [30]. The statistical results, including the number densities and particle sizes of inclusions in the base and rare earth microalloyed steels obtained by SEM, are shown in Figure 4. The overall scanned area was 27.46 mm², and the particle-size statistics include only inclusions above 1 µm. Figure 4 reveals that the inclusions in both microalloyed steels are mainly in the range of 1-2.5 µm. The inclusions in the rare earth microalloyed steel have the smaller average particle size of 2.07 µm, while those in the base steel average 2.4 µm. In addition, the rare earth microalloyed steel has a higher number density of inclusions (13.5/mm²) compared to 10.5/mm² for the base steel. This indicates that the inclusions in the rare earth microalloyed steel are finer and more densely distributed, so they can effectively pin the austenite grain boundaries and refine the organization [34].
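The number density and mean size quoted above follow from simple counting over the scanned area; a short sketch (with placeholder diameters, since the raw detections are not listed here) is:

```python
import numpy as np

scanned_area_mm2 = 27.46
diameters_um = np.array([1.2, 1.8, 2.4, 3.1, 1.5])   # placeholder detections

valid = diameters_um[diameters_um > 1.0]              # statistics restricted to > 1 um
number_density = len(valid) / scanned_area_mm2        # inclusions per mm^2
mean_size = valid.mean()                              # average particle size in um
print(f"{number_density:.2f} /mm^2, mean size {mean_size:.2f} um")
```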
Calculation of Dislocation Density
The interaction of nanocarbides with dislocations is crucial for the strengthening of microalloyed steels. To investigate the effect of rare earths on this strengthening, the dislocation densities of the base and rare earth microalloyed steels were analyzed. The dislocation density of the ferrite grains was measured by X-ray diffraction (XRD) according to the method originally developed by Williamson and Hall [35,36]. Assuming that the strain within the grain is generated by dislocations only, the dislocation density ρ was evaluated by the modified Williamson-Hall (MWH) method [35,36]:

ΔK = 0.9/L + (π A b²/2)^(1/2) ρ^(1/2) K,

where K = 2 sin θ/λ is the reciprocal of the lattice spacing, ΔK = 2 cos θ (Δθ)/λ is the half-height width of the corresponding peaks plotted against K, θ is the Bragg diffraction angle, b is the Burgers vector, Δθ is the half-height width of the peaks plotted against θ, and λ is the X-ray wavelength. L is the average size of the coherent scattering domains, which can be considered the average size of the dislocation sub-structure. A is a constant determined by the outer cutoff radius of the dislocations [30].
The representative XRD profile (Figure 5a) reveals that the characteristic peaks are mainly the four bcc α-Fe peaks ({110}, {200}, {211}, and {220}) of the matrix. Detailed XRD peak profiling was used to determine the dislocation density in Figure 5b according to the modified Williamson-Hall (MWH) method (Equation (2)) [35,37]. The results indicate that the dislocation density is 3.15 × 10^14 m^-2 in the base steel and 3.31 × 10^14 m^-2 in the rare earth microalloyed steel, suggesting that the addition of rare earths slightly increases the dislocation density in the steel.
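A short sketch of this procedure, fitting the reconstructed MWH relation to peak positions and widths, is given below; the peak widths, the constant A, and the Burgers vector value are illustrative inputs rather than the measured data.

```python
import numpy as np

lam = 1.5405e-10   # Cu K-alpha wavelength [m]
b = 2.48e-10       # Burgers vector of bcc Fe [m]
A = 10.0           # constant set by the outer cutoff radius (assumed value)

two_theta = np.radians([44.7, 65.0, 82.3, 98.9])    # {110},{200},{211},{220} peaks (illustrative)
delta_theta = np.radians([0.15, 0.25, 0.33, 0.40])  # peak half-height widths (illustrative)

theta = two_theta / 2.0
K = 2.0 * np.sin(theta) / lam
dK = 2.0 * np.cos(theta) * delta_theta / lam

# Linear fit dK = 0.9/L + sqrt(pi*A*b^2/2 * rho) * K
slope, intercept = np.polyfit(K, dK, 1)
rho = slope**2 / (np.pi * A * b**2 / 2.0)
L_coh = 0.9 / intercept
print(f"rho ~ {rho:.2e} m^-2, coherent domain size ~ {L_coh * 1e9:.1f} nm")
```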
Nanoprecipitation Characterization
The microstructures of the matrix steel (Figure 6a,c-e) and the rare earth microalloyed steel (Figure 6b,f-h) were characterized by transmission electron microscopy. Figure 6a,b show the interaction of nanoprecipitates with dislocations in the steel, which plays a vital role in strengthening. The analysis revealed that the carbides in both steels are MC carbides. Figure 6c-e show the morphology of the nanocarbides in the base steel and the corresponding selected-area diffraction patterns (Figure 6d,e); the nanocarbides exhibit an orientation relationship with the matrix of [11̄0]MC // [1̄01]α-Fe. Figure 6f-h show the morphology and diffraction patterns of the nanocarbides in the rare earth microalloyed steel, which exhibit an orientation relationship with the matrix of [110]MC // [111]α-Fe. The volume fraction of precipitates in the rare earth microalloyed steel was determined to be 0.17% by counting several sets of TEM images. However, the volume fraction of MC precipitates in the matrix steel was only 0.11%, and the size of the precipitates increased slightly (Figure 6c,f). This result demonstrates that RE promotes the precipitation of nanocarbides within the grains, which effectively hinders dislocation movement. Chun et al. [38] suggested that nanoscale precipitation can strengthen the matrix in two ways, i.e., by preventing dislocation migration through the pinning effect and by dispersion strengthening.
Effect of RE on Tensile Properties of Microalloyed Steel
Figure 7a presents the engineering stress-strain curves of the matrix steel and the rare earth microalloyed steel at room temperature.From Figure 7a, it can be seen that the tensile curves of both steels start from the elastic region and then reach the yield point.After these different upper and lower yield points, the tensile curves show complex rheological behavior.The shape of this region (the Lüders region [36]) varies depending on the steel under study.The phenomenon of the yield point in tensile testing is often referred to as the Lüders effect, through which the plastic deformation zone is regarded as the Lüders zone [39].After traversing the Lüders band, the curve persists in its typical behavior, the stress level progressively increasing until it attains the pinnacle referred to as the ultimate tensile strength (UTS).After that, the stress level decreases during the necking process until it finally fractures.
Figure 7b indicates that the yield strength, tensile strength, and elongation of the matrix steel were 425.1 ± 6.2 MPa, 610.4 ± 11.3 MPa, and 34.6 ± 0.8%, respectively. In contrast, the yield strength, tensile strength, and elongation of the rare earth microalloyed steel reached 450 ± 6.0 MPa, 679.6 ± 5.9 MPa, and 39.1 ± 1.0%, respectively, indicating that the addition of rare earth elements increased the yield strength, tensile strength, and elongation of the steel [36].

Discussion

Figure 7 displays the mechanical properties of the two microalloyed steels, indicating that the addition of rare earths increased the total elongation of the steels and raised the yield and tensile strengths. In addition, both steels exhibited discontinuous yielding and the Lüders effect, as shown in Figure 8a. In line with earlier research, the rare earth microalloyed steel exhibits a broader Lüders zone. It has been demonstrated that the Lüders strain tends to increase as the ferrite grain size decreases at a given temperature [40,41]. Furthermore, Varin et al. [42] demonstrated that the presence and distribution of fine precipitates widens the Lüders zone. Both results can explain the wider region of Lüders deformation obtained for the rare earth microalloyed steel. In addition, the yield strength of the rare earth microalloyed steel was about 50 MPa higher than that of the base steel. It is well known that the yield strength of a material is influenced by the solid-solution element content, the size of the phases, and the dislocation density, as well as the carbide size and density. Based on the previously described differences between the two microalloyed steels, the main factor responsible for the increase in strength of the rare earth microalloyed steel is grain refinement together with an enhancement of carbide and dislocation interactions [43][44][45]. Similarly, it has been noted that the addition of rare earths increases the precipitation of NbC nanoprecipitates and contributes to carbide precipitation in microalloyed steels [46]. Pinning of dislocations by niobium nanoprecipitates has been reported to increase the stress in the Lüders zone, contributing to the formation of Lüders bands at higher stress levels and leading to higher yield points [47,48].
In addition, rare earths have an important effect on the phase transition point [49], while the addition of rare earth elements narrows the range of the pearlitic transition, which makes it easy to form finer pearlites (Figure 1).The cleaning of grain boundaries by rare earths increases the strength of grain boundaries and changes the fracture mode of a material [50].All these factors favor the increase in the strength of the material.Thus, the increase in the yield strength of steel with added rare earths is due to the synergistic effect of multifaceted factors.
Figure 8b reveals an even more pronounced enhancement in the UTS of the rare earth microalloyed steel, a notable increase of 70 MPa. To evaluate the difference in tensile strength between the two steels, their work-hardening rates were calculated using Equation (2) [51], Θ = dσ/dε,
where σ and ε are the true stress and the true strain, respectively. Figure 9 presents the work-hardening rate versus true-strain curves for the two specimen steels. The work-hardening rate curves are divided into three main stages. In the first stage, the hardening rate of both steels decreases rapidly, which is related to the tendency of dislocations to slip in a single system. The decrease in the work-hardening rate is greater in the RE-added steel than in the matrix steel. This is mainly due to the difference in precipitate content between the two steels, the volume fraction of precipitates being higher in the RE-added steel. When dislocations break away from the blockage of the hard MC carbides and slide (i.e., yield) in the soft ferrite, the stress decreases; as a result, the drop in the work-hardening rate is greater in the RE-added steel.
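A brief sketch of this calculation, converting an engineering stress-strain curve to true values and differentiating numerically, is shown below with a placeholder curve in place of the measured data.

```python
import numpy as np

eng_strain = np.linspace(0.0, 0.30, 301)                              # engineering strain
eng_stress = 450.0 + 600.0 * eng_strain * np.exp(-3.0 * eng_strain)   # placeholder curve [MPa]

true_strain = np.log(1.0 + eng_strain)                  # valid up to the onset of necking
true_stress = eng_stress * (1.0 + eng_strain)
hardening_rate = np.gradient(true_stress, true_strain)  # Theta = d(sigma)/d(epsilon)
```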
In the second stage (S2), the work-hardening rate fluctuates considerably for both steels due to the Lüders strain effect, with the difference that it fluctuates even more for the matrix steel due to the uneven distribution of nanoprecipitates (Figure 6) and interstitial atoms (e.g., C atoms) in the matrix steel.
In the third stage (S3) of the work-hardening curve, dislocations and precipitated particles induce a pinning effect, which causes dislocations to continuously slide from the intracrystalline space to the grain boundaries, resulting in dislocations aggregating and entangling at the grain boundaries [52].However, rare earth atoms also interact with carbon atoms and dislocations, and free electrons move from the lattice compression region to the stretching region, forming localized electric dipoles [8].Unlike the valence electrons of iron atoms, these free electrons form positive ions that interact with dislocations via short-range electrostatic forces [12].In addition, interactions between REs, carbon atoms, and dislocations lead to localized aggregation in the extended dislocation-forming layer, hindering dislocation migration [15].As a result, the work-hardening index of the rare earth microalloyed steel is higher than that of the base steel, which increases the tensile strength of the rare earth microalloyed steel [53].
Figure 7b demonstrates that the addition of rare earths can increase the total elongation of microalloyed steel from 34.6% to 39.1%, where grain size has a significant effect on the elongation of microalloyed steel.It has been suggested that the uniform elongation decreases with decreasing grain size [54,55].However, according to a previous study [54], the grain size of microalloyed steel decreased after the addition of rare earths.Therefore, besides considering the effect of grain size on the elongation of microalloyed steels, it has been suggested that the morphology and size of inclusions in microalloyed steels also affect the elongation of steel [56].The significant stress concentration occurring between irregular inclusions and the matrix renders the material susceptible to premature failure under external stresses, consequently diminishing its elongation.Spherical inclusions that keep good contact with the matrix favor steel elongation.
Figure 10a,b show the microscopic morphology of the base steel after stretching, where the deformation behavior of ferrite and pearlite provides a favorable deformation mechanism for higher plasticity. Figure 10b shows the crack initiation between inclusions and the matrix after stretching, which leads to the early fracture behavior of the material. Figure 10c,d show the fracture morphology of the base and rare earth steels after tensile fracture, respectively. Both steels show deep dimples, demonstrating the good plasticity of the material. However, the broken inclusions deep in the dimples of the base steel indicate that there is a large stress concentration between the inclusions and the matrix, which reduces the plasticity of the material. Rare earth inclusions have better plasticity [9], and there is a gap between the dimples and the inclusions, which provides plastic space in the tensile process. The modification of inclusions by rare earth elements effectively reduces the size of the inclusions and reduces the stress concentration between the inclusions and the matrix, which helps to increase the plasticity of the material. In addition, the Cottrell atmosphere formed by the aggregation of carbon atoms facilitates the expansion of the Lüders strain region, increasing the ductility of the material [28].
Conclusions
In this study, the influence of rare earth elements on the properties of microalloyed steel was systematically analyzed by comparing the microstructure and properties of microalloyed steel with and without rare earths. The correlation of the rare earth elements with the microstructure and properties of the steel was explored through microstructural analysis of grain size, inclusions, and carbides in the microalloyed steel combined with an analysis of the mechanical properties. The main conclusions are as follows:
1. The addition of rare earths effectively improves the mechanical properties (yield strength, tensile strength, and plasticity) of microalloyed steel.
2. The addition of rare earth elements contributes to the refinement of the microstructure, the modification of inclusions, and the increase in the carbide volume fraction in microalloyed steel. The combined effect of these factors increases the yield strength of the material.
3. The interaction of rare earth elements with atoms (Nb, C, etc.) in microalloyed steel hinders dislocation slip, which in turn increases the work-hardening rate of the material and improves its tensile strength.
4. The addition of rare earths increases the volume fraction of carbides in microalloyed steel, which strengthens the pinning effect on dislocations and widens the Lüders zone, affecting the plasticity of the material. In addition, the decrease in the size of inclusions also increases the plasticity of the material.
Figure 2. Inverse pole diagrams and grain size distributions of base steel and rare earth microalloyed steel: (a,b) base steel microstructure; (c,d) rare earth microalloyed steel microstructure.
Figure 4. Statistical particle size distribution of inclusions in microalloyed steel.
Figure 5. (a) Measured X-ray diffraction patterns. (b) Williamson-Hall plots of the diffraction patterns for the test steel.
Figure 10. The fractographical microstructure after stretching: (a,b) cross-section morphology of the basic steel; (c,d) fracture morphology of the basic steel and the rare earth steel, respectively.
Table 1. Chemical composition of the base steel and RE steel (Fe to balance).
Table 2. The microstructural characteristics of the normalized samples.
|
v3-fos-license
|
2018-04-03T04:30:20.099Z
|
2007-04-01T00:00:00.000
|
35545056
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc2989131",
"pdf_hash": "477fb9f8a18890605f7435d6d8b2b3c7e6d011c9",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:676",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6fcb1817f21125c728e490610fd56c207a1e1fe1",
"year": 2007
}
|
pes2o/s2orc
|
Treatment of giant cell tumor of bone: Current concepts
Giant cell tumor (GCT) of bone, though one of the commonest bone tumors encountered by an orthopedic surgeon, continues to intrigue treating surgeons. Usually benign, these tumors are locally aggressive and may occasionally undergo malignant transformation. The surgeon needs to strike a balance during treatment between reducing the incidence of local recurrence and preserving maximal function. Differing opinions pertaining to the use of adjuvants for extension of curettage, the relative role of bone graft or cement to pack the defect and the management of recurrent lesions are some of the issues that offer topics for eternal debate. Current literature suggests that intralesional curettage strikes the best balance between controlling disease and preserving optimum function in the majority of the cases though there may be occasions where the extent of the disease mandates resection to ensure adequate disease clearance. An accompanying treatment algorithm helps outline the management strategy in GCT.
Giant cell tumor (GCT) of bone is one of the commonest benign bone tumors encountered by an orthopedic surgeon. The reported incidence of GCT in the Oriental and Asian population is higher than that in the Caucasian population and may account for 20% of all skeletal neoplasms. 1,2 It has a well-known propensity for local recurrence after surgical treatment.

Current recurrence rates between 10-20% with meticulous curettage and extension of tumor removal using mechanized burrs and adjuvant therapy are a vast improvement on the historically reported recurrence rates of 50-60% with curettage alone.

Certain controversies in the treatment of GCT continue to intrigue treating surgeons. Do adjuvants like phenol or cryotherapy for extension of curettage have any benefit; is it better to pack the defect with bone graft or cement; should a recurrent lesion be curetted again or widely excised; does one contemplate joint salvage or resection especially in large GCTs? These are some of the issues that offer topics for eternal debate. This article endeavors to outline the principles of management of giant cell tumor of bone and addresses current opinion regarding some of these dilemmas.

TREATMENT

The treatment of GCT is directed towards local control without sacrificing joint function. This has traditionally been achieved by intralesional curettage with autograft reconstruction by packing the cavity of the excised tumor with morsellised iliac cortico-cancellous bone. Regardless of how thoroughly performed, intralesional excision leaves microscopic disease in the bone and hence has a reported recurrence rate as high as 60%. 3 Although a marginal or wide excision of the involved bone is curative if contamination is avoided, it is associated with reconstruction and disability problems. In order to counter the above problems, a great deal of effort has been expended on attempting to "extend" the curettage or intralesional excision by chemical or physical means.

Intralesional curettage

The key to ensuring an adequate curettage with complete removal of tumor is obtaining adequate exposure of the lesion. This is achieved by making a large cortical window to access the tumor so as to avoid having to curette under overhanging shelves or ridges of bone. Use of a head lamp and dental mirror combined with multiple angled curettes helps to identify and access small pockets of residual disease which may otherwise result in recurrence. A high power burr to break the bony ridges helps extend the curettage and is recommended. A pulsatile jet lavage system used at the end of the curettage helps to bare raw cancellous bone and physically wash out tumor cells.

Even pathological fractures through a giant cell tumor are not a contraindication to treatment by curettage and cementation. 10,11 Cryosurgery using liquid nitrogen, first propagated by Marcove, though used in some centers, is associated with a high incidence of local wound and bone complications. 12,13

Do These Adjuvants Help?

Some recent studies though, have questioned the role of adjuvants and filling agents in reducing the recurrence rate of giant cell tumors. Adequate removal of the tumor seems to be a more important predictive factor for the outcome of surgery than the use of adjuvants. The study by Trieb et al demonstrated that the local recurrence rate of giant cell tumors located in long bones treated with or without phenol is similar. [14] Prosser et al retrospectively reviewed 193 patients treated during a 27-year period and compared their results with historic controls. One hundred and thirty-seven patients had curettage as a primary treatment and of these, 26 (19%) had local recurrences. The local recurrence rate of giant cell tumors confined to bone (Campanacci Grades I and II) was only 7% compared with 29% in tumors with extraosseous extension (Campanacci Grade III). They recommended primary curettage for intraosseous giant cell tumors without adjuvant treatment or filling agents, but tumors with soft tissue extension or with local recurrence may require more aggressive treatment. 15

Reconstructing the residual defect

Reconstructing the defect after curettage can be quite challenging. In case the gap left behind after the curettage is small and does not jeopardize the structural integrity of the bone it can be left alone and the cavities fill up with blood clot which then gets ossified to form bone. 15 For larger defects … Cementing provides immediate structural support and allows rapid weight-bearing ambulation.

Drawbacks of cementing
• Not a biological material. Cement though strong in compression is relatively weak when subjected to shear and torsional forces. Hence its use in lesions involving the head and neck of the femur may result in an increased chance of fractures through cement.
• Fear about long-term degeneration of articular cartilage in subchondral lesions in weight-bearing areas

Recent studies have demonstrated the efficacy of bone substitutes like calcium phosphate as a filling agent. 16 If the patient is treated by cementation, there is a belief that it is necessary to remove the cement after an appropriate passage of time (to be reasonably certain that local relapse is not going to develop). The defect is subsequently reconstructed with autograft on the subchondral portion of the repair supplemented with allograft to prevent late articular degeneration. However, studies have shown that joint function is not compromised in time even after the use of subchondral cement. 17,18 There is an interesting report on two cases by Tejwani et al, both with symptomatic full-thickness tibial articular cartilage loss and one with a meniscal tear, after curettage, phenol cautery and PMMA reconstruction of giant cell tumor of the proximal tibia. Arthroscopic chondroplasty and planing of the exposed cement was performed in both cases, theoretically reducing focal areas of stress concentration that could lead to further meniscal damage and injury to the femoral condyle articular surface in weight-bearing. 19

To try and forestall this potential problem of late articular degeneration in subarticular lesions where the amount of residual subchondral bone after an extended curettage is less than 5 mm, a multilayer reconstruction technique is recommended. A mixture of morsellized auto and allograft (about 5-8 mm thick) is packed adjacent to the subarticular surface. A layer of gelfoam is layered over this and the remaining cavity is packed with cement [Figure 2]. This helps reduce heat damage from the curing cement, and the subarticular bone graft after consolidation should theoretically prevent articular degeneration. 20 Another perceived advantage is that should recurrence occur, the danger of damage to articular cartilage during removal of cement is reduced [Figure 3].
Occasionally, Steinmann pins have been used to reinforce the bone cement used to fill the large subchondral defects following intralesional curettage. However, whether this is of real benefit in improving the stability of the defect is controversial. 21 Large lesions can cause weakening of the structural stability of bone. Depending on the residual structural integrity of the host bone it may be necessary to augment the construct with internal fixation.
Wide resection and subsequent reconstruction
In a study of 38 patients with giant cell tumor in the knee region Chen et al measured the area of affected subchondral bone radiographically using plain radiographs, CT and MRI and correlated it with the mean Enneking functional score at follow-up. In patients initially treated with curettage and bone grafting, the mean area of initially affected subchondral bone was 18.6%, with a linear trend showing that the larger the area of affected subchondral bone, the worse the functional score. Among patients initially treated with wide resection, the mean area of affected subchondral bone was 68.2%. 20 Thus occasionally, even in benign tumors, resection may be the preferred option when bone salvageability by intralesional methods would result in such severe mechanical compromise that skeletal integrity is unlikely to be maintained or unlikely to be restored after healing, leading to a compromise in ultimate function 21 [Figure 4]. In certain bones like the lower end ulna, upper end fibula etc. excision may be attempted as the treatment of the lesion. If marginal / wide local excision is elected as the treatment, either primarily or in recurrence, then reconstruction necessarily implies reconstruction of the joint surface, since GCT invariably involves the end of a long bone and causes significant dysfunction of the joint surface. 22,23

The options include

Megaprosthetic joint replacement: These afford stability and mobility, however, are prone to ultimate loosening, wear or breakage and require revisions.
Biologic reconstruction: These are technically demanding, but durable procedures affording stability at the cost of mobility. They include:
• autograft arthrodesis (knee, wrist, shoulder) with internal / external fixation 24
• live microvascular fibula reconstructions (e.g., around knee and shoulder, distal radius reconstruction, distal fibula GCT with ankle reconstruction) 25-27
• Ilizarov method of bone regeneration 28,29

Lower end radius lesions

There is some debate regarding the management of GCT in the lower end radius. Some authors have reported a high rate of local recurrence in GCT of the distal radius and recommend that they should be treated more aggressively. Today the consensus of opinion would state that curettage should be attempted for the majority of patients with GCT of the distal radius [Figure 5] but some form of stabilization may be required in the presence of extensive bone destruction. 32 Cheng et al. state that intralesional excision should not be excluded as a possible treatment of even Grade III lesions. They recommend Grade III lesions be treated with curettage when the tumor does not invade the wrist, destroy more than 50% of the cortex or break through the cortex with an extraosseous mass in more than one plane. 31
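The criteria attributed to Cheng et al. above amount to a simple rule: curettage remains reasonable for a Grade III distal radius lesion unless one of three findings is present. The sketch below merely encodes those three conditions as stated in the text; the field names are hypothetical and it is an illustration of the published rule, not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class DistalRadiusGCT:
    """Hypothetical summary of a Campanacci Grade III distal radius lesion."""
    invades_wrist: bool
    cortex_destroyed_fraction: float   # 0.0 to 1.0
    extraosseous_breach_planes: int    # planes in which an extraosseous mass breaks the cortex

def curettage_reasonable(lesion: DistalRadiusGCT) -> bool:
    """Encodes the three exclusion criteria quoted from Cheng et al. [31]."""
    if lesion.invades_wrist:
        return False
    if lesion.cortex_destroyed_fraction > 0.50:
        return False
    if lesion.extraosseous_breach_planes > 1:
        return False
    return True

# Example: no wrist invasion, 40% cortical destruction, cortical breach in one plane.
print(curettage_reasonable(DistalRadiusGCT(False, 0.40, 1)))  # True
```

Any real decision of this kind rests on imaging assessment and surgical judgment; the function only restates the rule quoted from reference 31.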
LOCAL RECURRENCE IN GCT
Local recurrences appear to be related to the surgical margin and are clinically characterized by pain and radiologically by progressive lysis of the bone graft or the adjacent cancellous bone. Following curettage and cementation an osteolytic zone caused by thermal injury measuring 2 mm surrounds the cement. This radiolucent zone is bordered by a thin outer sclerotic rim for about six months. Lysis or failed development of the sclerotic rim between the cement and cancellous bone may suggest recurrence. 33 Total acid phosphatase (TACP) could be used as a tumor marker for monitoring response to the treatment of GCT. Total serum acid phosphatase level in GCT patients correlated with tumor size. The high preoperative TACP values in GCT patients became normalized after surgery but reappeared in three of five patients with local recurrence. 34

Though the majority of recurrences usually occur within the first two years, late recurrences are known and long-term surveillance is recommended in these patients. 35,36 Even though the increasing grade from I to III is not a reflection of the biologic aggressiveness of the tumor, various authors have documented an increased rate of recurrence in Grade III lesions. 15,18 This could be due to the difficulty in achieving complete clearance once the tumor has breached its normal anatomic boundaries and extended into soft tissue.

The principles of management remain the same even in recurrent tumors. Steyern et al. retrospectively studied (n=137) local recurrence of GCT in long bones following treatment with curettage and cementing. They concluded that local recurrence after curettage and cementing in long bones can generally be successfully treated with further curettage and cementing, with only a minor risk of increased morbidity. 37 This suggests that more extensive surgery for the primary tumor in an attempt to obtain wide margins is not the method of choice, since it leaves the patient with higher morbidity with no significant gain with respect to cure of the disease. A recent study has shown that rinsing of morcellized bone grafts with bisphosphonates prevents resorption and is likely to reduce the risk of mechanical failure. Though this was studied during revision total hip replacement using morcellized compacted bone allograft, the same principle may possibly be applicable to bone grafts used to fill defects after curettage. 47

CHEMOTHERAPY AND RADIOTHERAPY

Occasional GCT of bone demonstrate profound responses to chemotherapy but these cases are anecdotal and their incidence is disappointing. At the present time there are no recognized effective chemotherapeutic agents available for the management of these tumors. The literature documents a close association of secondary sarcomatous transformation in the region of GCTs treated by radiation therapy. Though surgery remains the treatment of choice, radiotherapy is recommended when complete excision or curettage is impractical for medical or functional reasons (generally for lesions of the spine and sacrum) or for aggressive, multiply recurrent tumors. [39][40][41][42] In lesions involving the axial skeleton, with the exception of the sacrum, excision with stabilization of the spine and biologic reconstruction of the anterior column, 43 with reduced levels of irradiation (45 Gy in 4.5 weeks) on the assumption that you are dealing with microscopic residual tumor only, would offer the patient the best chance of long-term local control. The use of modern-day techniques and megavoltage radiation may help to reduce the rate of malignant transformation that was seen during the earlier era of orthovoltage radiation. 39

EMBOLIZATION

Unresectable GCTs (e.g., certain sacral and pelvic tumors) can be managed with transcatheter embolization of their blood supply. Since flow reconstitution invariably occurs, embolization is performed at monthly intervals until significant pain palliation is achieved. Subsequent embolizations are performed when there is symptomatic or radiographic relapse of the tumor. 3,44 Tumors in areas amenable to surgical resection also benefit by preoperative embolization in an attempt to reduce the amount of intraoperative blood loss.
METASTASIS IN GCTS
The incidence of metastases is estimated to be from 1-6%. The metastatic lesions are histologically identical to the primary lesions, showing no tendency to dedifferentiate. The majority of metastatic lesions are to the lung. Solitary metastasis to regional lymph nodes, the mediastinum and the pelvis have been reported, as has involvement of the scalp, bone and paraaortic nodes. [48][49][50][51][52] The mean interval between the onset of the tumor and the detection of lung metastases
MALIGNANT GCT
There are mainly two kinds of malignant GCT. The primary a high-grade osteosarcoma, MFH or fibrosarcoma. These have a poor prognosis, particularly the radiation-induced sarcomas.
GCT though benign is locally aggressive, and the surgeon needs to strike a balance during treatment between reducing the incidence of local recurrence and preserving maximal function [Figure 6]. In 1912 Joseph Bloodgood was the first to refer to this lesion as "giant cell tumor". His suggestions that this tumor was preferably treated by curettage with chemical cauterization and bone grafting are still widely followed. 54 Current literature too suggests that intralesional curettage strikes the best balance between controlling disease and preserving optimum function in the majority of the cases though there may be occasions where the extent of the disease mandates resection to ensure adequate disease clearance.
Between 10-20% of tumors would still recur in spite of our best efforts. The principles governing the management of recurrent tumors remain the same as it is believed that more extensive surgery in an attempt to obtain wider margins leaves the patient with higher morbidity with no significant gain with respect to cure of the disease.
|
v3-fos-license
|
2018-12-16T01:53:09.155Z
|
2013-01-01T00:00:00.000
|
54682499
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2013/20/epjconf_ifsa2011_17012.pdf",
"pdf_hash": "f5537d832fcaf62d99bd2e10e06866a05401c3e8",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:677",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "f5537d832fcaf62d99bd2e10e06866a05401c3e8",
"year": 2013
}
|
pes2o/s2orc
|
Efficient ion generation in laser-foil interaction
A remarkable improvement in the energy conversion efficiency from laser to protons in a laser-foil interaction is presented by particle simulations. The total laser-proton energy conversion efficiency reaches 16.7%, whereas a conventional plane foil target yields a rather low efficiency. In our 2.5-dimensional particle-in-cell simulations an Al multihole structure is employed, and the laser absorption ratio reaches 71.2%. The main physical reason for the enhancement of the conversion efficiency is a reduction of the laser reflection at the target surface area.
When an intense short-pulse laser illuminates a thin foil target, electrons are first accelerated by the laser, and oscillate or move around the thin foil.The electrons form a high current and generate a magnetic field.In the laser-foil interaction, the ion dynamics is affected directly by the behavior of the electrons [12,13].The electrons form a strong electric field, and the ions are accelerated by the electric field.When a foil target has backside multiholes at the target rear side (see Fig. 1(c)) and the hole size is the order of the laser spot size, the holes help to produce a collimated ion beam [13].
On the other hand, the foil target surface reflects a significant part of the laser energy. The energy conversion efficiency from laser to protons tends to be low. The subwavelength multiholes transpiercing the planar target enhance the laser-proton energy conversion efficiency [21][22][23]. The subwavelength microstructured targets [21][22][23][24][25] are propitious to enhance the laser energy absorption. In this paper we employ the subwavelength-multihole microstructure to increase the laser-proton energy transfer efficiency at the target rear side.
Throughout the paper wider holes at the rear side are employed to ensure collimated proton beam generation [13] (see Figs. 1(a)-(c)). The Al width on the right-hand side is 1.0 λ for the proton beam collimation.
EFFICIENT LASER ENERGY CONVERSION TO IONS IN LASER HOLE-TARGET INTERACTION
We perform 2.5-dimensional (x, y, vx, vy, vz) particle-in-cell simulations. Figure 1(a) shows a conceptual diagram of the multihole target. The laser intensity is I = 1.0 × 10^20 W/cm^2, the laser spot diameter is 4.0 λ, and the pulse duration is 20 fs. The laser transverse profile is Gaussian, and the laser temporal profile is also Gaussian. The laser wavelength is λ = 1.053 µm. The ionization degree of the Al layer substrate is 11, and the Al layer density is the solid density.
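For orientation, the quoted laser parameters can be turned into the dimensionless laser amplitude a0 using the standard practical formula a0 ≈ 0.85 λ[µm] (I / 10^18 W/cm^2)^(1/2) for linear polarization. The snippet below is only a back-of-the-envelope check of the interaction regime, not part of the PIC setup used in the paper.

```python
import math

INTENSITY_W_CM2 = 1.0e20   # laser intensity from the text
WAVELENGTH_UM = 1.053      # laser wavelength from the text
PULSE_FS = 20.0            # pulse duration from the text

# Normalized laser amplitude for linear polarization (standard practical formula).
a0 = 0.85 * WAVELENGTH_UM * math.sqrt(INTENSITY_W_CM2 / 1.0e18)

# Pulse length in vacuum, for comparison with the target and spot dimensions.
pulse_length_um = 2.998e8 * PULSE_FS * 1e-15 * 1e6

print(f"a0 ~ {a0:.1f}  (a0 >> 1 means strongly relativistic electron motion)")
print(f"pulse length ~ {pulse_length_um:.1f} um")
```

With these numbers a0 is roughly 9, i.e., the electron quiver motion is strongly relativistic, and the pulse is about 6 µm long, comparable to the laser spot diameter of 4.0 λ.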
We study the role of the Al width and the Al wing length in the multihole target shown in Fig. 1(a).At the target rear side the additional wider holes are installed to ensure the proton beam collimation based on our previous study [13] throughout the paper.Thin foil tailored targets, which have a subwavelength structure, enhance the laser energy absorption [21][22][23][24].
Figure 1(a) shows the tailored multihole target. Firstly the Al width of the multiholes at the laser side is changed from 0.1 λ to 0.5 λ. For a comparison, we also simulate the target which has no multihole Al (see Fig. 1(b)). At the subwavelength-structured Al surface, seen by the illuminating laser, the laser is partly reflected. When a laser interacts with a wire target, a large number of electrons are expelled out from the target and accelerated efficiently. Therefore, when the Al is thinner in Fig. 1(a), the laser enters into the target Al microstructure and generates hot electrons effectively. So the thinner Al-layer target produces the higher acceleration electric field. Figure 2 shows the total energy histories of the protons for each case. When the Al width is 0.1 λ, the total energy of the protons reaches about 4.5 × 10^3 J/m in the multihole target at 300 fs. The maximum proton kinetic energy is 12.2 MeV. For the other cases it is 9.51 MeV for 0.3 λ and 7.48 MeV for 0.5 λ. When the Al width becomes thin, the Al surface area seen by the laser is reduced and the laser reflection is also reduced. However, if the target has no Al microstructure at the laser side, it just becomes a plain target. The laser-proton energy conversion efficiency is 13.3% for 0.1 λ, 8.67% for 0.3 λ and 6.28% for 0.5 λ, though it is 5.74% for the target in Fig. 1(b). Moreover, the laser Al-ion energy conversion efficiency is 38.0% for 0.1 λ. The optimal width of the Al microstructure in Fig. 1(a) must be of the order of the skin depth to reduce the laser reflection as much as possible. The simulation results show that the total proton number for the target with 0.1 λ is about 2.0 times larger than that for 0.5 λ. In the conventional target shown in Fig. 1(c) the laser-proton energy conversion efficiency was 1.81%. The results in this paper demonstrate a significant effect of the Al microstructure width on the ion generation efficiency. As discussed above, the physics of the increase in the proton generation efficiency comes mainly from the reduction of the laser reflection at the left surface area [21,22]. The reduction of the Al reflection area, that is, the reduction of the Al width in Fig. 1(a), results in a significant increase in the laser energy conversion to the protons. In addition, the microstructure holes contribute to an increase of the laser interaction surface area, and the laser also reflects multiple times inside the holes.
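The conversion efficiencies quoted here are, in post-processing terms, the kinetic energy carried by the proton macroparticles divided by the laser energy delivered into the box (per unit length in a 2.5D run). The sketch below shows that bookkeeping generically; the array names, weights, and the toy momentum data are assumptions, not the authors' actual diagnostics.

```python
import numpy as np

M_P_MEV = 938.272   # proton rest energy, MeV
MEV_TO_J = 1.602e-13

def proton_energy_per_metre(weights, px, py, pz):
    """Kinetic energy (J/m) carried by proton macroparticles.

    weights   : real protons represented per metre of the ignored dimension
    px, py, pz: momenta normalised to m_p * c
    """
    gamma = np.sqrt(1.0 + px**2 + py**2 + pz**2)
    return float(np.sum(weights * M_P_MEV * (gamma - 1.0)) * MEV_TO_J)

def conversion_efficiency(proton_j_per_m, laser_j_per_m):
    return proton_j_per_m / laser_j_per_m

# Toy data standing in for PIC output.
rng = np.random.default_rng(0)
w = np.full(100_000, 1.0e12)                       # protons per metre per macroparticle
px, py, pz = rng.normal(0.0, 0.05, size=(3, 100_000))
print(f"proton energy ~ {proton_energy_per_metre(w, px, py, pz):.2e} J/m")
```

The corresponding laser energy per metre would be obtained by integrating the injected laser power over the pulse at the simulation boundary.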
We also examine the role of the Al wing length in the laser-ion energy conversion. We change the Al wing length of the multihole target from 0.1 λ to 0.5 λ (see Fig. 1(a)). At 0.2 λ the protons obtain the largest energy from the laser. At the Al wing length of 0.2 λ, the total energy of the protons reaches the maximum of about 5.5 × 10^3 J/m in the multihole target. The energy conversion efficiency from the laser to the protons becomes 16.7%. The maximum proton kinetic energy is 12.4 MeV for the case of 0.2 λ, 10.4 MeV for 0.1 λ, 13.0 MeV for 0.3 λ and 12.2 MeV for 0.5 λ at 300 fs. The energy conversion efficiency is 38.7% from laser to Al ions and 39.0% from laser to electrons. The laser absorption ratio is 71.2% for 0.2 λ. Figure 3 shows (a) the maximal acceleration electric field Ex for the multihole target, and the distributions of the high-energy electron density at (b) 0.2 λ and (c) 0.5 λ. The hot-electron high-density cloud is well located to the right of the proton layer in the case of 0.2 λ, which enables effective proton acceleration.
When the Al wing length is too short, the total electron number in the Al layer is too small to create a strong proton acceleration field at the target rear side. The proton acceleration field at the rear side is produced by the hot electrons; when the total number of electrons, which mainly originate from the microstructured Al at the laser side, becomes small, the acceleration field at the rear side becomes weak. On the other hand, when the Al wing length is too long, the hot electron cloud cannot fully reach the target rear side (see Fig. 3(c)). In this case with the long Al wing the Al layer originally contains a sufficient number of electrons, and a sufficient number of hot electrons appears. However, the hot electron cloud interacts with the target plasma over a longer distance than in the optimal case, and does not reach the correct position at the target rear side to create a strong acceleration electric field there, as presented in Fig. 3(c). For the optimal case, that is, the case with the optimal Al wing length of 0.2 λ, the hot-electron cloud location is appropriate to create a high acceleration field Ex for effective proton acceleration, as shown in Fig. 3(b).
CONCLUSIONS
In this paper, we presented an efficient energy conversion method from laser to ions in laser-foil interaction. We investigated and clarified the role of the Al width and the Al wing length of the tailored multihole target at the laser side in the laser-proton energy conversion. The conversion efficiency was enhanced significantly, to 16.7% in the optimal microstructured target from a few percent in a planar target without the microstructure. For practical applications of the laser-ion accelerator, the remaining issues include efficient ion generation as studied in this paper, ion beam quality improvement and energy spectrum control including mono-energetic ion beam generation, neutralized or unneutralized ion beam transportation over a long distance, laser multi-stage post acceleration, etc. The multihole target presented in this paper may serve as a new way to create proton beams efficiently in a future laser proton accelerator.
Figure 1. Thin-foil targets: (a) a multihole target, (b) a thin foil target without an Al hole layer and (c) a planar target (a conventional target with a plain Al layer).
Figure 2. Total energy histories of the protons accelerated for the structured target with the Al width of 0.5 λ, 0.3 λ and 0.1 λ and for the target without the microstructure in Fig. 1(b).
Figure 3. (a) The maximal acceleration electric field Ex for the multihole target, and distributions of the high-energy electron density at (b) 0.2 λ and (c) 0.5 λ.
|
v3-fos-license
|
2024-03-15T16:27:28.879Z
|
2024-03-01T00:00:00.000
|
268394151
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/24/6/1764/pdf?version=1709911521",
"pdf_hash": "5cd2441596d0f7007ff6bb74253350680108c949",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:679",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"sha1": "39de79d1003b052aaa5e92786940d02ea9266b4b",
"year": 2024
}
|
pes2o/s2orc
|
A Multi-Agent System for Service Provisioning in an Internet-of-Things Smart Space Based on User Preferences
The integration of the Internet of Things (IoT) and artificial intelligence (AI) is critical to the advancement of ambient intelligence (AmI), as it enables systems to understand contextual information and react accordingly. While many solutions focus on user-centric services that provide enhanced comfort and support, few expand on scenarios in which multiple users are present simultaneously, leaving a significant gap in service provisioning. To address this problem, this paper presents a multi-agent system in which software agents, aware of context, advocate for their users’ preferences and negotiate service settings to achieve solutions that satisfy everyone, taking into account users’ flexibility. The proposed negotiation algorithm is illustrated through a smart lighting use case, and the results are analyzed in terms of the concrete preferences defined by the user and the selected settings resulting from the negotiation in regard to user flexibility.
Introduction
The term "smart" or "intelligent" when referring to a home, building, space, or environment is often used interchangeably with ambient intelligence (AmI), a concept that has been explored for more than two decades and was first mentioned in the late 1990s [1].The size of the global ambient intelligence market speaks for itself, as it is estimated to be worth USD 21.61 billion in 2023, while the forecasted revenue for 2030 is USD 99.43 billion [2].Ducatel et al. [3] emphasize the focus of ambient intelligence on user-friendliness and empowerment, while Dunne et al. (2021) highlight the role of artificial intelligence (AI) and the Internet of Things (IoT) in the realization of ambient intelligence [4].The IoT, with its connected sensors and actuators, is essential to achieve context awareness, a key aspect of AmI.Adding cognitive capabilities to IoT devices is the focus of the Cognitive Internet of Things (CIoT) paradigm presented by Wu et al. [5].The development of the CIoT incorporates numerous techniques from various fields, such as machine learning, artificial intelligence, context-aware computing, cybernetic-physical systems, pattern recognition, and speech recognition [6,7].AmI and CIoT are intertwined, with overlapping application scenarios [8][9][10][11].Although the distinction between the terms may be subtle, Jamnal and Liu [12] argue that CIoT extends the capabilities of AmI in smart environments, going beyond monitoring and supporting people's tasks to proactively influencing users' plans and intentions.
Numerous scientific works have explored smart homes, emphasizing the importance of recognizing and accommodating the needs and preferences of users.However, practical implementation still encounters several challenges, especially in multi-user scenarios.While the optimization of comfort for the individual user has received notable attention, the management of devices and services for multiple users simultaneously is still largely unexplored.Augusto et al. [13] point out the difficulty of satisfying a single user, let alone multiple users in the same environment.This paper addresses the challenge of managing and satisfying individual user preferences in multi-user scenarios in which all users are inevitably affected by service device settings.For example, consider the case of managing the ambient conditions in a room based on the preferences of multiple users, in which case it is unlikely that all users have the same preferences.
The research results presented in this paper show that our multi-agent system effectively addresses multi-user scenarios by accommodating diverse preferences and resolving conflicting requirements, benefiting all individuals involved.Multi-agent systems are a valuable solution for complex systems in which the division of tasks makes a big difference in effectiveness.Moreover, context-aware systems require intelligence and autonomy, which are attributes of software agents [14].In this context, our system employs specialized agents that represent users and services.Agents representing users, predict the preferences for an unseen context based on detected and established user preferences via artificial neural networks, and they represent users in the negotiation process with other user agents.In this paper, a novel negotiation process that utilizes agent grouping based on flexibility factors in conjunction with user preferences is proposed.It also enables effective negotiation in which user agents are equal without neglecting any user.At the same time, criteria are maintained that do not allow any user agent to obstruct the negotiation process by forcing the decision to favor their own preferences over those of other agents.To ensure interoperability between different devices of the same type, since not all devices have the same ranges, options, etc., we introduce a space agent tasked with aligning preferences with the devices' capabilities.We chose to focus on lighting not only because it is obviously one of the ambient conditions that are the sources of arguments in common areas at home and at work but also because it has a great impact on humans, as pointed out by Tomassoni et al. [15], who emphasize the influence of light intensity and color on people's emotions and psyches.A statement by Chew et al. [16] underlines the complexity and importance of smart lighting: "it can be concluded that the future of smart lighting development is a multi-disciplinary research area; smart lighting has the potential to provide the platform to bring advancements in key research areas pertaining to energy efficient buildings, human health, photobiology, telecommunications and human physiology to our living rooms and offices".Another service that is prone to similar problems in a multi-user setting, i.e., conflicts between users with different preferences, is smart heating and cooling.
In summary, the key contributions presented in this work are as follows:
• The definition of a service provisioning solution for multi-user scenarios in an Internet-of-Things smart space based on the optimization of service parameters, depending on the context, available devices, users, and users' preferences.
• The solution architecture equipped with intelligent software agents representing users and spaces for smart space service provisioning.
• A negotiation organization and negotiation process for software agents in the dynamic adaptation of ambient conditions based on context, presenting users' preferences and flexibility factors.
In addition to these main contributions, the challenges associated with the simultaneous provision of services to users with different desires and needs are also discussed, highlighting the lack of solutions for this type of scenario.Furthermore, the intricacies of defining preferences are explored, and the extent to which contextual factors are considered in this process is examined.
The paper is organized as follows.Section 2 presents the related work.Section 3 describes the methodology behind the development of the proposed system and the negotiation algorithm.Section 4 describes the selected use case and displays the results of the negotiation algorithm in the conducted experiments for the selected use case.The paper ends with Section 5, in which a discussion and conclusions concerning the results of this research, as well as plans and ideas for future work, are presented.
Related Work
Smart homes are not new, but they have gone through a significant evolution to reach their present state.In the beginning, the burden of automation fell on users, resulting in homes that were programmable but not yet aligned with the modern understanding and expectations of smart homes.As a case in point, consider that, in 1991, Stauffer [17] described a smart house as an "enabling system that provides the common resources needed for home automation".For many users, this was just an added pressure element and obligation because, for example, they would forget to turn on some of the various modes of the system.As an illustration, consider a "not at home" mode that would turn off all the lights, lock the door, turn off the heating, turn off the security alarm, etc.Also, initially, the settings of such systems were quite complicated and unintuitive.It is obvious that, to achieve truly smart homes, they must be adapted to the user and not the other way around, when the "brains" of the smart environment are smart users [18].The system must listen to its users and adapt to their needs and desires by programming itself; users must not spend more time adjusting the system than it would take to perform an action on their own.
It is evident that contextual awareness, a prerequisite for AmI, is significantly important.Contextual awareness is the foundation without which it is impossible to start talking about ambient intelligence or CIoT and, therefore, also about providing ambient services to the user.Lovrek [19] points out that context awareness had become a subject of research thirty years ago and noted its promising and challenging impact on the functionality, efficiency, and complexity of systems.It is also recognized in the field of the Internet of Things by Perera et al. [20], which further confirms the connection between IoT, ambient intelligence, and context awareness.According to Dunne et al. [4] the research community agrees that context awareness is critical in the development of ambient intelligence systems, as the awareness of context enables the accomplishment of complex tasks through an understanding of the states of the user and the environment, allowing for greater precision in the predictability, assistance, and handling of multiple users in the same space.In the system proposed in this paper, the preferences for a selected service are defined by users based on the context recognized in a smart environment.This fundamental aspect of the solution proposed in this paper ensures adaptability and emphasizes the importance of context awareness in navigating changing ambient conditions.
The idea of smart homes enriched with cognitive technologies has long been discussed, as they take the cognitive burden off users.This can be seen, for example, in a large amount of research, much of which focuses on energy savings with the help of machine learning and artificial intelligence [21,22].User comfort is put at the center of the focus of numerous works, alongside other factors, such as energy saving.Rasheed et al. [23] use mathematical optimization models to minimize power consumption while taking into consideration user comfort, which is modeled based on user preferences, weather conditions, and appliance classes.Different comfort constraints apply to different appliances; for example, air conditioner usage is essential in the summer to ensure user comfort, but it leads to higher electricity bills.
Artificial neural networks (ANNs) have established their place in the development of smart environments since they are used for automation and the control of appliances, activity recognition, classification, etc., with various types of learning used for ANNs.In 1998, Mozer [24] used a feed-forward neural network with reinforcement learning to develop ACHE, a smart home system that aims to predict user action in order to save energy and satisfy users.A deep attentive tabular neural network (TabNet) was used by Ani et al. [25] in 2023 for prediction and control tasks through imitation learning to automate a control, and they pointed out that it is more suitable for the task than reinforcement learning.Health and well-being monitoring has been improved and raised to a different level with the introduction of IoT, as is evident in a large number of papers published on the subject [26][27][28].It has proven to be very helpful in providing older people with the opportunity to stay independent and relieving burdened health systems.These application ideas and concepts have received more attention, given the COVID-19 pandemic [29,30].
Despite the positive aspects, there is resistance towards some forms used in ambient assisted living (AAL) system proposals, such as cameras and microphones, as was found in a survey carried out by Igarashi et al. [31].
Various approaches to the development of smart environments have been explored, one of which is the integration of software agents.Among their three primary attributes-autonomy, the ability to learn, and cooperativeness [32]-lies the foundation for their further growth and development.These attributes are the key to developing additional abilities such as reactivity, proactivity, coordination, negotiability, and sociability that contribute to the development of personalized user experiences in smart homes.They have been integrated into smart environments in many various ways, for example, in energy management [33] and home automation.Park et al. [34] describe a reinforcement learning-based system for lighting that balances two factors, human comfort and energy savings, with the goal of achieving energy savings without neglecting and sacrificing user comfort.Their system learns users' behavior patterns and the environment, but it is focused on one user.Belief-desire-intention models are often used in the design of software agents, like in the work by Sun et al. [35].Their multi-agent system is described in an illumination control use case.The system does not differentiate between different users, but the user inputs require illumination control through the user interface.The BDI model guides the decision-making process in order to achieve energy savings and set the illumination conditions to complement the current activity of users.In their study, Amadeo et al. [36] introduce the COGITO platform for cognitive buildings, focusing on energy efficiency, security, and user comfort.Within their cognitive office framework, they describe four agents responsible for position detection, activity recognition, curtain management, and lighting management.The environmental settings are adjusted according to the detected activity, which includes possible states of no activity, meetings, desk work, or free time.
Undoubtedly, the research and development of smart spaces have come a long way, but they continue to progress since there is much to be yet covered.While there are many advanced solutions with respectable and significant results, works focus predominantly on single-user scenarios in smart environments, as pointed out by Oguego et al. [37].This highlights the complexity of multi-user environments.Consequently, there is only a limited number of solutions that address this significant challenge.This is particularly evident in human activity recognition, which requires more attention since accurately detecting the actions of multiple individuals in shared spaces can unlock new application possibilities [38].Cook [39] observes that only a limited number of researchers have endeavored to address the complexities arising from multiple users within the same smart environment.Even though this observation was made in 2009, the scenario with multiple users is still a problem and a challenge, especially when taking into consideration their preferences.
However, there are works that deal with this challenging issue in different forms, focusing on different AmI scenarios not necessarily in the smart home.Valero et al. [40] present their MAS platform, Magentix2, which improves the adaptability and efficiency of smart environments by defining user profiles, controlling access to services, and optimizing the information flow for inhabitants by tracing their behavior.Their platform provides an effective solution to the challenge of controlling access to smart home functions and devices through the definition of organizational roles and organizational units in living spaces.However, it does not address the issue that arises when two agents, carrying the same weight, i.e., role, may have different preferences.If users have the same role, conflicts may arise when multiple users attempt to set device settings according to their individual preferences.
Negotiations can be mediated or non-mediated, like the non-mediated bilateral multiissue negotiation model proposed by Sanchez-Anguix et al. [41], which is illustrated through the example of a product fair with separate agents representing buyers and sellers.These agents incorporate the negotiation attributes of their client, i.e., buyer or seller, and then search for and attract potential deals, aiming to minimize the negotiation process and save valuable time.An approach with a home agent (mediator) for the negotiation process between user agents and device agents is proposed by Loseto et al. [42].The goal is to optimally fulfill the user agent's function requests.While it is based on user needs and preferences for service functions, conflicting situations in which users have different preferences for the same functionality are not covered.Muñoz et al. [43] use argumentation in their proposal for a television service that acts as a mediator, taking into account the preferences of users and television programs to make recommendations aimed at satisfying all users present.Argumentation is also used by Oguego et al. [44] to manage conflict situations, focusing not on conflicts between multiple user preferences but, rather, on reconciling user preferences with device settings, especially with regard to safety concerns.AmI systems, particularly AAL systems, are expected to adjust and customize their services in order to achieve maximum user comfort.This goal becomes unattainable if the system falls short in gathering information and generating knowledge about the user and comprehending their preferences [45].Preferences are not always clear-cut and easily presented; for instance, consider lighting.While it is easy to define a preference when the options are limited to "on" and "off", things get more complicated when introducing lighting fixtures that offer multiple colors and intensity levels.With more devices and options, there are also more ways to combine preferences, especially when more context is included in the definition of preferences.The domain of context recognition presents a complex challenge, providing numerous research possibilities, considering the complexity of obtaining and defining context attributes despite context awareness having been a research topic for more than three decades.Moulouel et al. [46] address the challenge of context abnormalities within partially observable uncertain AAL environments with an ontologybased framework.Their approach integrates a probabilistic context reasoning formalization, aiming to effectively address the incompleteness of knowledge in such environments.
While it is important to acknowledge users' concerns about their data security and privacy, those issues are outside the scope of this paper. The focus here is on the negotiation process in a multi-agent system in which agents represent users' preferences, with the aim of reaching a decision that suits all users.
Agent Types, Organization, and Architecture
The envisioned multi-agent system was designed with the goal of achieving users' satisfaction at its core, representing a user-centric approach.But this goes beyond one user; the focus is on all present users, each with their own unique preferences that can change at any time.Therefore, it naturally follows that each user is assigned a dedicated agent responsible for advocating for their preferences in the negotiation process.A more comprehensive explanation of preferences, including how user agents observe and learn users' preferences and subsequently manage them, is presented in Section 3.2.3.The number of user agents in the negotiation process equals the number of users present in the specific smart environment.
During the design process of the negotiation algorithm, several options were considered for reaching agreements between agents representing users and reducing their preferences to a single selection. Conitzer [47] discusses numerous decision rules with various complexity levels, ranging from the simple and straightforward plurality rule to the more complex Kemeny rule. The plurality rule selects the most preferred alternative and, complementarily, the antiplurality rule selects the least disliked choice. He also describes more complex approaches such as the Borda rule, which utilizes a ranking system and integrates a scoring system based on where each negotiator places an option, whether first or last. In Kemeny's rule, which has been previously tested for preference aggregation [48], alternative rankings are aggregated, ultimately proposing the ranking that minimizes the distance from the entered rankings.
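As a rough illustration of how such decision rules aggregate ranked preferences, the following sketch implements the plurality and Borda rules in Python; the ballot format and the lighting-related option names are invented for this example and are not part of the proposed system. Kemeny's rule is omitted here because it aggregates full rankings and is considerably more expensive to compute.

# Illustrative sketch (not from the paper): plurality and Borda scoring over
# ranked preferences. Each ballot lists alternatives from most to least preferred.
from collections import Counter

def plurality_winner(ballots):
    # The alternative ranked first by the most agents wins.
    return Counter(ballot[0] for ballot in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    # Each alternative scores points according to its position on every ballot;
    # first place earns the most points.
    scores = Counter()
    for ballot in ballots:
        top = len(ballot) - 1
        for position, alternative in enumerate(ballot):
            scores[alternative] += top - position
    return scores.most_common(1)[0][0]

ballots = [["warm", "neutral", "cool"],
           ["warm", "neutral", "cool"],
           ["neutral", "cool", "warm"],
           ["cool", "neutral", "warm"]]
print(plurality_winner(ballots))  # "warm": most first-place votes
print(borda_winner(ballots))      # "neutral": best overall placement

The example also shows that the two rules can disagree on the same set of ballots, which is one reason the choice of decision rule matters.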
The goal of our proposed solution is to create a decentralized system without a centralized "brain" that makes a decision after receiving the preferences of all users present.This opened several questions that had to be carefully approached, one of which was the course of negotiations.For example, when "chain" negotiation is taken into account, the first agent submits their preference to the second, the second proposes a joint decision that they agree with the first agent and then sends it to the third agent, who, with the second agent, checks the joint decision and sends it on to the fourth, who does the same, etc. From the first to the last agent, i.e., after one round, the proposal for a decision is possibly very different from the preference of the first agent.Thus, a certain course must be set in place that does not "forget" or neglect any participant, no matter their position in the negotiation process.
As expected, the negotiation process becomes increasingly complex when users exhibit a wider range of diverse preferences.Also, restarting a negotiation process from the start every time a user enters or exits the smart environment would be inefficient.In both of these scenarios, another independent entity should handle these problems.This will be handled by the smart space agent.For the first scenario, when user agents cannot reach an agreement in the allowed number of negotiation rounds, the smart space agent collects all of their preferences and calculates a setting that will attempt to satisfy all the users to the maximum.This centralized solution is only resorted to when no agreement has been reached in the permitted number of negotiation rounds, which is defined according to the number of users and the scope of the negotiation; i.e., it is not the same when they are negotiating one setting as when they are negotiating more settings.In the second scenario, the smart space agent will check with the user agent that just registered whether the current setting satisfies them and try to reach an agreement without involving all other agents and triggering a new negotiation process.As expected, this approach is not always successful, depending on the difference between the current setting and the preference of the new user, so the negotiation process is restarted.
Another role of the smart space agent is to act as a middleware element.For instance, when considering lighting preferences, it is important to note that not all lighting fixtures in different smart environments share identical settings.These include variations in maximum intensity values and color options.Furthermore, the smart space agent monitors the ongoing context and updates the user agents.This eliminates the need for user agents to gather sensor readings from multiple sensors on their own, which means less traffic on the network and enables interoperability because the smart space agent passes sensor readings to the user agents in an agreed notation understood by all agents.This means that the smart space agent is like a specialist dedicated to the specific smart environment.It can be considered in future works that one smart space agent is in control of more spaces, like a whole house or building floor.
The described multi-agent system (MAS) has two agent types per smart environment: n user agents, u_agent_1 through u_agent_n, and one smart space agent, s_agent. The number of user agents, n, equals the number of users present in the smart environment; i.e., each user agent u_agent_i is associated with one user.
The proposed system architecture is shown in Figure 1.It illustrates the relationship between users, agents, devices, and sensors placed in three different environments-the physical environment, the IoT platform, and the agent platform.
The physical environment, i.e., the smart environment, is equipped with devices and sensors, the selection of which depends on the selected service to be provided to the currently present users in the environment.The sensors are needed to assess the state of the environment, i.e., to achieve context awareness so that the devices can be controlled based on their possible settings and options to manipulate the ambient conditions to match the users' preferences.
As explained earlier, it is evident that the user agents do not access devices and sensors on their own.The responsibility for monitoring device states and sensor readings, as well as forwarding and processing them for the user agents, lies with the smart space agent, which connects to them via an IoT platform.The IoT platform hosts virtual representations of devices and sensors within the smart space.The current status of each setting of a device (setting_status), the possible range (setting_range) and/or options (setting_options[]), as well as the sensor readings (sensor_reading) are stored there.
An agent platform hosts two types of agents: the smart space agent and user agents.The number of agents varies based on the number of present users, as explained previously.
The smart space agent gathers information from the IoT platform about devices and sensors, as well as information about present users, the current time, etc. The obligation of the smart space agent is to provide the user agents with the information relevant to them according to the implemented use case. Each user agent holds information about the user, such as their preferred settings for devices (dev_setting_pref) based on the selected use case, i.e., service, and their flexibility factor for the specific service, i.e., the device that provides the specific service (dev_flex_factor). The prediction of preferences and the role of flexibility factors are described in detail in Sections 3.2.3 and 3.3. Also, users do not access the IoT platform on their own, as the intention is to provide a truly intelligent service. When users want to make any changes to the environment, they do so through their user agent, for example, to enter new preferences and modify or delete existing ones (dev_setting_pref), as well as to redefine their flexibility factor for a certain device (dev_flex_factor). A minimal sketch of this per-device and per-agent state is given below.
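The following sketch illustrates, under our own assumptions about concrete types, the state referenced above; the field names (setting_status, setting_range, setting_options, dev_setting_pref, dev_flex_factor) follow the paper's notation, while the surrounding structures are purely illustrative.

# Minimal, assumed data layout; not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class DeviceRepresentation:          # virtual device hosted on the IoT platform
    setting_status: float            # current value of the setting
    setting_range: tuple             # (min, max) supported by this device
    setting_options: list = field(default_factory=list)  # discrete options, if any

@dataclass
class UserAgentState:                # information held by each u_agent
    dev_setting_pref: dict           # e.g. {"light_intensity": 80}
    dev_flex_factor: dict            # e.g. {"light_intensity": 9}

lamp = DeviceRepresentation(setting_status=50.0, setting_range=(0, 100))
agent_state = UserAgentState(dev_setting_pref={"light_intensity": 80},
                             dev_flex_factor={"light_intensity": 9})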
In Section 3.5, an in-depth explanation of the behavior of each agent type in the negotiation process is provided.An overview of the implementation of the proposed system for a selected use case, including an analysis of the results of the tested scenarios, can be found in Section 4.
Context
It can be said that the effectiveness and precision of the proposed multi-agent system are proportional to the richness of the collected context. A richer context enables a more detailed definition of user preferences, which is causally connected with the ability to satisfy user preferences to the maximum. However, it should be emphasized that a richer context also increases the level of complexity and, in certain cases, the number of interactions between agents when making a decision.
The relevant context, C, incorporates the following three items:

C = (SR, DS, P)

where the following applies:
• SR represents the sensor readings that are used to sense the ambient conditions;
• DS represents the status of the devices (actuators) that can alter the ambient conditions;
• P represents the preferences of the currently present users.
Sensor Readings
Relevant sensor readings can come from the same sensor type but different locations. For example, when deciding on the optimal temperature for their home, users require information regarding both the outdoor and indoor temperatures. The sensor readings are marked as follows:

SR = (sr_1, sr_2, . . ., sr_j, . . ., sr_m)

The total number of sensor readings, m, indicates the number of sensor readings relevant to the service in focus. For different services, different sensor types are needed to assess the context. For example, in the case of an AC control system, temperature sensors are essential.
User Preferences
User preferences can be detected or observed and then analyzed and stored using various methodologies. The user preferences that are part of the negotiation process at a given moment in time are marked as follows:

P = (p_1, p_2, . . ., p_n)

where the total number of preferences, n, indicates the number of present users in a particular smart environment. Users determine their preferences based on the relevant context for each preference. For instance, when configuring an AC system, considerations may include outdoor and indoor temperatures, the time of day, the season, and ongoing activities. In this proposed system, users define their preferences solely based on sensor readings SR and device settings DS, as seen in (6). This means there is no need for them to factor in others' presence and their preferences, as this aspect is managed during the negotiation process between agents. Device settings are important since different smart environments, and the devices that are part of them, do not necessarily offer the same service options; for example, consider the various modes of an AC system.
These preferences can be realized either as precise, exact specifications or as a range of values corresponding to the conditions under which the user would be satisfied. Users can specify a preference with an exact value x for a particular device setting e (ds_e) in a particular context C. For example, a user may define their temperature preference to be 22 °C in their home during the day in winter and 17 °C at night. Alternatively, they could express a broader range of acceptable values, finding satisfaction as long as the value for, e.g., heating remains within a specific range or meets a certain threshold of intensity. This means that satisfaction is achieved and upheld as long as the temperature value is greater than x, denoted as ">x". In scenarios where a single device setting is in focus within a smart environment, as seen for preference a (7), the preferred numerical value x is attributed to the device setting e (ds_e). If, for instance, the preference encompasses multiple device settings, such as color and intensity for lighting, then the preference consists of two values, as demonstrated in preference b (8).
Flexibility Factor
The maneuvering of preference modifications in the negotiation process is directed through the implementation of the flexibility factor.Through this approach, users retain the ability to articulate their precise preferences while incorporating supplementary information regarding the extent to which modifications can be applied-a notion referred to as "stretching" the preference.
The flexibility factor is modeled after the Gaussian function (9). The Gaussian function is applied in the following way: along the x-axis, the numerical values corresponding to an adjustable device setting within an environment are plotted, while the y-axis represents the associated user utility values. The peak of the utility, i.e., the mean of the Gaussian function µ, corresponds to the user's preferred value. As a result, the utility function is always centered around the user's exact preference value. The introduced flexibility factor essentially represents the standard deviation σ of the utility function.
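A minimal sketch of this utility function follows, assuming the standard Gaussian form with the exact preference as the mean µ and the flexibility factor as the standard deviation σ; the normalization to a peak value of 1 is our assumption for illustration.

# Gaussian-shaped utility: peaks at the exact preference, widens with flexibility.
import math

def utility(setting_value, exact_preference, flexibility_factor):
    if flexibility_factor == 0:
        # a fully inflexible user only accepts the exact preference
        return 1.0 if setting_value == exact_preference else 0.0
    return math.exp(-((setting_value - exact_preference) ** 2)
                    / (2 * flexibility_factor ** 2))

print(utility(50, 50, 10))   # 1.0 at the preferred value
print(utility(60, 50, 10))   # ~0.61 ten units away for a less flexible user
print(utility(60, 50, 20))   # ~0.88 for a more flexible user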
As the flexibility factor increases, the Gaussian utility function becomes wider and wider, as shown in Figure 2. In this figure, four users share an identical exact preference, denoted by the numerical value 50, which corresponds to the peak value of their utility functions. However, these users have different flexibility factors, with the most adaptable user shown in blue. This results in a slower decline in utility as the setting moves away from the original user preference, meaning that users with higher flexibility factors are more adaptable. During the negotiation process, it is easier to negotiate with them since they are more willing to accept propositions that deviate from their exact preferences. Consider an illustrative point: the wider utility function of a user with greater flexibility might encompass the narrower utility function of a less flexible user. This scenario is portrayed in Figure 3, where users, despite not sharing identical preferences, experience a smoother negotiation process due to the flexibility demonstrated by the user depicted in green. An in-depth analysis of this and other scenarios is given in the subsequent sections of the paper. When a user sets their flexibility factor to 0, it signifies that only their precise preference is deemed acceptable, leading to an immediate decline in their utility function if the setting deviates from their exact preference. On the other hand, if a user opts for a very high flexibility factor, their utility function consistently maintains a high value for any given device setting.

As is evident, the number of preferences that need to be assigned to all the variations of the context grows exponentially with the number of elements that make up the context. The more context information is available, the more precisely users can define their preferences, which makes it easier for the system to adapt the services to the users' needs. Expecting users to enter all of these preferences manually is not realistic, and it contradicts the concept of a truly smart service, which aims to minimize user effort. Therefore, users are not required to enter all of their preferences. Instead, the system takes on the task of learning their preferences through observation. In this way, the system can predict a user's preferences for a given context without the user having to explicitly enter anything. The task of predicting preferences is performed by the user's agent, which is equipped with an artificial neural network (ANN). Since preferences are defined based on context, the input to the ANN consists of context information, and the output is the corresponding predicted user preference. The training of the network is based either on preferences entered by the user or on observations of the preferences selected by the user over a certain period of time, which should cover different contextual situations in order to create a reliable foundation for preference prediction.
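The paper does not specify the network architecture or feature encoding, so the following is only a hedged sketch of the context-to-preference mapping using a small multilayer perceptron; the feature set, the training values, and the use of scikit-learn are assumptions made purely for illustration.

# Assumed context encoding: [outdoor_lux, hour_of_day, activity_code];
# the target is the observed preferred light intensity for that context.
from sklearn.neural_network import MLPRegressor

X_train = [[200, 20, 0], [10000, 12, 1], [50, 23, 0], [8000, 15, 1]]
y_train = [80, 30, 90, 40]

# In practice the context features would be normalized and far more
# observations would be collected before training.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

predicted_pref = model.predict([[150, 21, 0]])[0]   # preference for an unseen context
print(predicted_pref)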
Negotiation Group Organization
When designing the algorithm, the constant thought was also to minimize the number of messages exchanged by the agents.Therefore, the grouping of users was immediately considered.Grouping users, i.e., user agents, is a logical option to reduce the number of preferences that "confront" each other by connecting agents according to a certain criterion.It is intuitive to group agents according to their preferences, i.e., to group all agents who have closely related preferences into groups for which a joint decision will not require any of them to deviate more from their exact preference.Closely related, i.e., similar, preferences can be defined more strictly or more loosely.For example, in smart lighting, user groups can be defined using different colors (red, green, blue, etc.), or the colors can be divided into cool and warm colors.If the number of users with similar preferences was decisive for a decision, their reduction must definitely have an impact on the definition of a uniform decision.The number of users in groups is clearly visible here, making the scenario in which a change occurs easier to handle.Even though this is an option worth exploring, there are some drawbacks.The context is constantly changing, so when the context changes, it is necessary to reconstruct the groups, as one or more users may have changed their preferences for a new context.In addition, the described flexibility factor and the scenarios in which users enter and leave a smart environment must be taken into account.Since agents are "hidden" in groups, it is more difficult to detect and implement a change in device settings when a user with a low flexibility factor enters or leaves the room.For example, in the smart environment, there are several groups with relatively high flexibility factors and the final decision between the groups was made based on the smallest group, which contains a user with a low flexibility factor in addition to users with higher flexibility factors.If this user is eliminated, the previous decision is no longer correct.This is a simple example, but in scenarios where only a small variation in flexibility factors and the number of users present have influenced the decision, any change in presence in the smart environment requires renegotiation, as the user agents are looking to maximize their utility function and could use the change to their advantage.
The second idea, which seems more complex at first, facilitates the negotiation process in many ways.The agents must be grouped according to their flexibility factors.This is better than the previous approach because the flexibility factor is constant, i.e., it only changes when the user changes it.The proposed negotiation process ensures that no user is "left behind", and the decision is constantly subjected to review by all participants through negotiation rounds.As can be seen in Figure 4, users are placed in concentric circles according to their flexibility factors, with the inner circles accommodating the users with the lower flexibility factors.As the diameter of the circle increases, so does the flexibility factor of the users in that circle.The organization of these circles, i.e., groups, is in the hands of the smart space agent.The question arises as to how large the individual groups should be, i.e., how wide the value range of the flexibility factor is that defines a group.Initially, a fixed definition of the minimum and maximum flexibility of each group was considered, but this approach turned out not to work well in the vast majority of cases.For example, two users with a similar flexibility factor in a space could be assigned to different groups, which would constitute the mistake shown in Figure 5.Here is the most extreme example: agents representing users with flexibility factors 19 and 21 are assigned to different groups despite the slight difference in their flexibility factors, which is only 2.
The solution chosen for this problem was the K-means clustering algorithm [49], an unsupervised machine learning algorithm for determining clusters, in this case groups of agents with similar flexibility factors. The algorithm fulfilled the required characteristics and performed the task well, but with one drawback: the number of clusters must be determined in advance. For this reason, the smart space agent has the task of calculating the optimal grouping for different numbers of clusters using the silhouette method. The selected number of clusters is the one with the highest silhouette coefficient, which is calculated using two distances. The first distance is the distance between the data point (in this case, an agent with a flexibility factor) and the center of the cluster to which it is currently assigned. The second distance is the distance to the nearest center of a cluster that it is not assigned to [50]. For the presented example of agents, i.e., users with different flexibility factors, the analysis of the number of clusters is shown in Figures 6-8. The initial centroids were set at the midpoints of the range of the original division. As can be seen in Figure 6, the initial values for the two centroids were thus set to 25 and 75. Considering the agents present, the centroids were shifted to 28 and 77 at the end of the method, with a silhouette coefficient (SC) of 62. The division into these two groups was obviously inappropriate, as agents with flexibility factors of 47 and 53 were separated; i.e., each was grouped with agents whose flexibility factors differed from its own by more than 20. In this clear example, the division into three groups is a simple visual solution, as can be seen in Figure 7. This approach outperforms the division into two groups, as indicated by a higher silhouette coefficient (SC = 91), which underlines the better grouping accuracy. Figure 8 shows the grouping into four categories. Despite a (slightly) higher silhouette coefficient (SC = 94) compared to the division into three groups, it is important to note that this division into four groups entails "overfitting". It isolates agents with minor differences in flexibility factors (in this example, agents with factors of 47 and 53) into separate groups, even if the differences are minimal. To avoid this, the maximum number of clusters is determined by rounding up the division of the current number of user agents by two.
Based on this example, the choice is to divide the user agents into three groups.This choice ensures an optimal flow of the negotiation process, as the division into two or four groups leads to an increased exchange of messages.This is due to the fact that agents with significantly different flexibility factors are grouped together, or agents are separated despite minor differences in their factors.
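The grouping step described above can be sketched as follows; the flexibility factor values are loosely modeled on the example in the text, and the use of scikit-learn, the random seed, and the exact loop bounds are our assumptions rather than the authors' implementation.

# K-means over flexibility factors with the cluster count chosen by the
# silhouette coefficient, capped at ceil(number of agents / 2) as in the text.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

flex_factors = np.array([4, 9, 19, 21, 47, 53, 77, 81]).reshape(-1, 1)
max_clusters = -(-len(flex_factors) // 2)          # ceiling division

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, max_clusters + 1):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flex_factors)
    score = silhouette_score(flex_factors, model.labels_)
    if score > best_score:
        best_k, best_score, best_labels = k, score, model.labels_

# prints the chosen number of groups, its silhouette score and the assignment
print(best_k, round(best_score, 2), best_labels)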
Negotiation Process
Once the formation of the groups has been determined, everything is ready for the negotiation process to begin.As shown in Figure 9, the groups, which are represented as agents in concentric circles, engage in the negotiation process, mimicking a pulsating rhythm.The negotiation process starts in the innermost circle, where the users of the lowest flexibility factors are grouped; then, their proposals are passed on to the second innermost circle, and this sequence continues outwards to the outermost circle.In the first part of the first round of negotiations, all circles pass on their preferred proposals to the outer circles, together with the proposals they have received from the inner circle or circles, regarding the position of the group circle and the number of groups.It is not effective to negotiate in the first part of the first round because, for example, the least flexible agents with very narrow views regarding deviations from the exact preference are very likely to fail to reach an agreement.Ideally, agents in the outer circles will easily adapt to the preferences of agents with lower flexibility factors due to their high flexibility, emphasizing a cooperative approach.However, it is important to avoid any form of exploitation and ensure that agents with higher flexibility factors are not forced into endless concessions.After reaching the outermost circle, the first part of the negotiation round is over.The negotiation descends again towards the innermost circle, which is the second part of the negotiation round, as shown in Figure 9.When the process returns to the innermost circle, this marks a complete negotiation round, which can be compared to a heartbeat.In the second part of the first negotiation round, after reaching the outermost circle with the agents with the highest flexibility, the negotiation process only really begins when the agents analyze their circumstances and make a decision about their proposals.
To enable a more efficient and effective negotiation process, the agents' proposals should be defined as a range around an agent's exact preference and not just as the exact preference itself. This approach is inspired by observations of human negotiations in which ranges are used [51], albeit with a different, i.e., psychological, focus; the aim here is to accelerate the negotiation process and reach constructive proposals. The range is logically defined around the exact preference, i.e., the anchor point.
In non-conflicting cases, the initial range proposals submitted by the agents overlap, facilitating a relatively straightforward negotiation process. However, in situations where these ranges do not overlap, a problematic situation arises. The purpose of the ranges is to arrive at a set of settings that facilitates potential agreement. For example, in a scenario with multiple agents in which the ranges of two or more agents overlap but there is also a separate agent (or several agents) without any overlap, the agent without overlapping ranges will clearly be forced to adjust its proposed range to reach a mutually acceptable outcome. In scenarios where multiple agents are in this state, i.e., have no overlap, the agent whose range is farthest from the other agents' ranges is most likely to be pressured to adjust its proposal. This is because such an agent has the least likelihood of finding a mutually acceptable outcome with its current range, which makes it more susceptible to concessions, i.e., it is forced to expand its proposed range. If this situation applies to more than one agent with the same distance to the other agents' ranges, the agent with the higher flexibility factor is expected to broaden its proposal range. In certain scenarios, an additional criterion comes into play: the size of the overlapping range. If two agents share an equal number of agents within their overlapping ranges, it is anticipated that the agent with the smaller overlapping range should be the one to expand its preferences. Once an agent deems an exact final proposal ready to be made, it is forwarded through all circles, i.e., to all participants, for approval. If a consensus on the proposed setting is not achieved, the negotiation process continues.
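As a small illustration of this overlap reasoning, the following sketch builds each agent's initial range [preference - σ, preference + σ] and counts how many other ranges it overlaps (its "backing"); the three (preference, flexibility) pairs are taken from Experiment 1a below, and everything else is an assumption made for illustration.

# Initial bargaining ranges and pairwise overlap ("backing") counts.
def initial_range(preference, sigma):
    return (preference - sigma, preference + sigma)

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def backing(ranges):
    # for each agent, the number of other agents whose ranges overlap its own
    return [sum(overlaps(r, other) for j, other in enumerate(ranges) if j != i)
            for i, r in enumerate(ranges)]

agents = {"u_agent 1": (95, 4), "u_agent 2": (44, 20), "u_agent 3": (80, 9)}
ranges = [initial_range(p, s) for p, s in agents.values()]
print(ranges)            # [(91, 99), (24, 64), (71, 89)]
print(backing(ranges))   # [0, 0, 0] -> no overlaps, so someone must broaden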
As described, several criteria are taken into account when determining which agent will have to expand its range. These include the following: evaluating whether the agent's proposed range overlaps with any other agent's range, considering both the number of agent ranges it overlaps with and the width of the overlap; assessing the agent's group based on its flexibility factor; analyzing the distance of the agent's proposed range from the ranges of other, non-overlapping agents; and, in the case of ties, considering the number of concessions made in the current round. The pseudocode outlining the decision-making process for the user agents is presented in Algorithm 1, detailing the steps that the agents take upon receiving a query regarding their proposal. The computational complexity of the proposed algorithm is O(n), where n represents the number of user agents in the environment. The termination of the algorithm is guaranteed by its finite iterations and the bounded nature of the functions it calls. The algorithm is also complete, as it contains no segments that could cause incompleteness. For a more in-depth illustration of the factors and the development of the negotiation process, detailed, step-by-step examples are provided in Section 4.
It may appear intuitive to assess the agreed-upon outcome and, with that, the entire negotiation process by relying on the comparison with the sum of the agents' utility functions.However, this approach is not applicable in the observed use case, e.g., in a scenario in which two agents are present and have non-overlapping preference ranges.In such cases, optimizing the sum of their utility functions leads to the highest value if the chosen preferences exactly match the preference of one agent, resulting in a sum of 1.Unfortunately, the other agent is then completely dissatisfied (function at 0), which is an undesirable and unacceptable result.
Algorithm 1: Agent logic to determine an answer to a proposition query
Input: Information about other agents (proposals, group organization)
Output: Proposal determined via the algorithm (unchanged or adapted range)
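The body of Algorithm 1 is not reproduced above, so the following is only a rough, non-authoritative sketch of the concession criteria listed in the text (overlap count, overlap width, flexibility group, distance to other ranges, concessions made); the ordering of the tie-breakers and the data layout are our assumptions, and this is not the authors' pseudocode.

# Pick the agent that the stated criteria push hardest to broaden its range.
def pick_agent_to_broaden(agents):
    # agents: list of dicts with "range" (lo, hi), "flex" (flexibility factor)
    # and "concessions" (concessions made in the current round).
    def overlap_width(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    def concession_key(agent):
        others = [o for o in agents if o is not agent]
        n_overlaps = sum(overlap_width(agent["range"], o["range"]) > 0 for o in others)
        total_overlap = sum(overlap_width(agent["range"], o["range"]) for o in others)
        gap = min(max(o["range"][0] - agent["range"][1],
                      agent["range"][0] - o["range"][1], 0.0) for o in others)
        # Assumed ordering: fewer overlapping ranges, a narrower total overlap,
        # a higher flexibility factor, a larger gap to the others and fewer
        # concessions so far all make an agent the next one to concede.
        return (n_overlaps, total_overlap, -agent["flex"], -gap, agent["concessions"])

    return min(agents, key=concession_key)

agents = [{"range": (91, 99), "flex": 4, "concessions": 0},
          {"range": (24, 64), "flex": 20, "concessions": 0},
          {"range": (71, 89), "flex": 9, "concessions": 0}]
print(pick_agent_to_broaden(agents)["range"])   # (24, 64)

Applied to the initial ranges of Experiment 1a below, the sketch selects the agent with the range (24, 64), which matches the first concession described there, although the exact tie-breaking order inside the real algorithm may differ.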
Results and Discussion of the Negotiation Algorithm in the Smart Lighting Use Case
The results obtained with the described system were compared with our previous work, in which a centralized system approach was explored. In that study, a neural network was responsible for predicting the preferences of all users. The final decision on device settings was made by a calculation model that took into account the number of users and their preferences. In that approach, users had no influence on the model's decisions, either through agents or by other means. It is important to note that the original approach did not take the flexibility factor into account; it was added later for comparison purposes. The comparison was based on the difference between each user's exact preference and the selected device setting, as determined by the calculation model in the centralized system on the one hand and by the negotiation process in the multi-agent system on the other. The results of the experiments showed that the multi-agent system provided better results: the overall deviation from the exact preferences, taking the flexibility factor into account, was lower than in the centralized system with the calculation model.
In the remainder of this section, a comprehensive examination of the negotiation algorithm is presented using concrete cases. The showcased scenarios were carefully selected to illustrate the negotiation process in specific situations with different numbers of agents and to highlight the differences in preferences and flexibility factors. To assess the algorithm's efficacy, the segment involving the neural network's prediction for each user via their agent was skipped; i.e., exactly defined preferences were used so that only the negotiation process was observed, without the potential error of the neural network in preference prediction. In this analysis of the negotiation process using smart lighting as an example, the focus is on light intensity.
Experiment 1a
This experiment included agent profiles with the following preference and flexibility factor combinations (95, 4), (44,20), and (80, 9), as depicted in Figure 10.In this example, according to their user preferences, two agents were in group 1, and one agent, with a flexibility factor of 20, was in group 2. The user agents initially defined their bargaining ranges, i.e., first propositions, as [91, 99] for u_agent 1 , [24,64] for u_agent 2 , and [71, 89] for u_agent 3 , with no overlapping ranges observed.A concise partial overview of the negotiation process is presented in Table 1.The table is organized by rounds, outlining the actions of each agent within each group across those rounds.Each row specifies which agent is presenting a proposition in a given group, accompanied by their "backed ranges".The first value indicates the number of agents within their own group that share the same range, while the second value represents backing outside their group.This information is needed to determine which agent is required to broaden their range based on all agents' proposals.In the last column of the table, the action of each agent is described, i.e., whether they estimate that they are not responsible for adjusting their proposed preference range to accommodate others or not.This type of table is used for the other examples in this paper as well.
Following the exchange of initial propositions in the first part of the first round (not shown in the table), in the second part, u_agent 2 in group 2 broadens its bargaining range from [24,64] to [24,74], as its initial range proposition did not overlap with any other.In this step, two negotiation occurrences come into view.Firstly, agents broaden their ranges only towards those of their counterparts, not in both directions.Secondly, the important decision of determining the extent by which the range expands must be made.Initially, all agents establish their ranges as [exact preference − σ, exact preference + σ].As negotiations progress, the agents are compelled to expand their ranges sufficiently to either achieve an overlap or induce an opposing agent to expand its own range, i.e., removing that responsibility from itself.The governing principle specifies that the expansion required for this purpose is determined as the minimum between its σ value and the result of dividing σ by a natural number, denoted as n ∈ N.This strategic choice holds significance due to its adaptability to negotiation longevity.As negotiations progress into later rounds, the necessity for substantial expansions of the range gradually diminishes.This aligns seamlessly with agents' intention to stay as close as possible to their initial exact preferences throughout the negotiation process.With acquired backing now in their bargaining range, u_agent 2 forwards the proposition.In group 1, u_agent 3 determines overlapping with its range, which means it will stick to its proposition, while u_agent 1 has to broaden its range due to a lack of backing.With this, u_agent 1 secures backing, i.e., overlapping with u_agent 3 .In this situation, it is clear that the absence of agreement is exclusive to u_agent 1 and u_agent 2 , while u_agent 3 aligns with both of them.Since it is determined that all agents in group 1 have backing, the amount of which is not individually lower than the amount of backing for agents in other groups, the negotiation process enters round 2, reaching group 2 again.Requesting u_agent 2 to align solely with u_agent 1 , given its high flexibility factor, would be unfair.However, as the negotiation process approaches its finalization, it is imperative to consider the difference in flexibility factors.Taking into account the frequency of concessions made by each agent is another factor under consideration.Given this, u_agent 3 is ready to make a proposal to prevent a continual back-and-forth widening of ranges between u_agent 1 and u_agent 2 .In this instance, the proposal is accepted by all agents.The same principle would apply if only u_agent 1 and u_agent 2 were present.It would be unjust to require u_agent 2 to cover the entire distance between its ranges solely due to its higher flexibility factor.An agreement is reached regarding the agents' flexibility factors, but this does not imply that the responsibility lies solely with the agent possessing higher flexibility.Both agents are mandated to expand their ranges accordingly.
Experiment 1b
This experiment built on the previous example by adding another agent, u_agent 4 , with the preference and flexibility factor combination of (60, 4), as shown in Figure 11.Following the low flexibility factor of u_agent 4 , it is added to group 1, which means group 2 is unchanged; i.e., only u_agent 2 remains in group 2. Right at the beginning of the negotiation process, in comparison to Experiment 1a, there is a difference visible in Table 2.In the previous experiment, u_agent 2 , the most flexible agent, was forced to broaden its range at the beginning because it had no overlap.Since that is not the case now, because of the overlap with the range of u_agent 4 , u_agent 2 now forwards its proposal range to group 1.In group 1, u_agent 4 is the only one with overlapping, leaving agents u_agent 1 and u_agent 3 as candidates to broaden their ranges.As u_agent 1 is furthest from other agents' ranges, it is the one to broaden its range.After widening, u_agent 1 's gain is not exclusive; u_agent 3 now benefits from an overlap supported by u_agent 1 's expansion.At this point, all user agents of group 1 determine that they have the same backing quantity among all agents, and, as there is a more outer circle, they forward their current ranges to the more outer circle that houses agents with a higher flexibility factor.Now, u_agent 2 is forced to broaden its range.Following this, it now overlaps with u_agent 3 and forwards its proposition back to group 1.Here, u_agent 3 has backing with two other agents, so the decision is up to u_agent 1 or u_agent 4 , which both have the same backing quantity; i.e., they are both backed by one other agent.The decision to broaden its range is made by u_agent 1 , as its backing range is smaller than that of u_agent 4 .With the introduction of the newly proposed range, u_agent 1 is now backed by two agents, whereas u_agent 4 remains backed by only one.As a result, u_agent 4 needs to expand its range, allowing for a final proposal of 72, which, in this case, is agreed upon by all agents.
Experiment 2
In this experiment, a scenario was examined involving three groups, each composed of two agents, totaling six agents.The most flexible agents, u_agent 5 and u_agent 6 , were in group 3. Group 2 consisted of u_agent 3 and u_agent 4 , representing moderate flexibility in this scenario.Meanwhile, group 1 included the least flexible agents, u_agent 1 and u_agent 2 .The exact preferences and width of agents' flexibility functions are shown in Figure 12.This example illustrates a scenario in which the most flexible agents were not making substantial changes to their ranges since they already encompassed a significant portion of other agents' ranges.Following the initial phase of the first round, the agents in group 3 assess their backings.Agent u_agent 6 exhibits an overlap with all other agents (one within the same group and four in other groups), while u_agent 5 has an overlap with three of the five agents in total.After their analysis, they decide that no adjustments to their ranges are needed presently to reach an agreement.They proceed to pass on their current proposed ranges to group 2.
Agent u_agent 3 of group 2 identifies the necessity to expand its range.This decision is influenced by having the least support and being situated in a circle further out.Even though u_agent 1 in group 1 has the same quantity of backing as other agents, u_agent 3 broadens its range due to higher flexibility.Now, u_agent 3 (and u_agent 4 ) is in a position to forward its proposition since there are agents that have to broaden their ranges based on the set principles.In group 1, agent u_agent 1 overlaps with only two agents in terms of ranges; it is the lowest among all agents in the current negotiation process.Consequently, there is a need for u_agent 1 to expand its bargaining range.As a result, both the agents in group 1 overlap with the ranges of three other agents.Although this is the lowest quantity, the agents in outer circles also share the same quantity.Consequently, they move forward to submit their proposals to group 2, as there are more eligible agents that should expand their bargaining ranges.
Both agents in group 2 have overlaps with the ranges of three other agents. u_agent 4 is the one to broaden its range, given that its range is the farthest from the ranges of the other agents. This is another criterion taken into account when making decisions. After u_agent 4 's move, u_agent 3 now acknowledges the necessity of broadening its range, since it is the agent with the range furthest away from the other agents' ranges and the least overlap with backed ranges. With this broadening, u_agent 3 achieves an overlap with u_agent 4 , providing both agents in group 2 with enough backing to advance their proposals. Given that all agents in group 3 already cover the ranges of every other agent, there is no pressure for them to broaden their ranges. As a result, the negotiation process returns to group 1. Now, with all other agents outside of group 1 achieving an overlap of their bargaining ranges, the responsibility falls on u_agent 1 and u_agent 2 despite their being the less flexible agents. In this context, as u_agent 1 is the farthest from everyone else, it is the one to broaden its range. Taking the opportunity, u_agent 1 puts forward a final proposal that leans more toward its initial preference and bargaining range. Ultimately, this proposal is accepted by u_agent 2 , as it has not made any changes to its ranges until now, and the proposed range is not significantly distant from its preferred range. All the other agents also agree, as the proposal falls within their bargaining ranges. A summarized overview of the negotiation process is given in Table 3.
Conclusions
This paper has presented a context-aware, multi-agent system designed to improve service provisioning in smart spaces. It enables the simultaneous consideration of the different preferences of multiple users via negotiations among user agents without user intervention. The proposed system is a viable solution to the gap in the provision of ambient services caused by the lack of solutions for multiple users at the same time, i.e., when users are affected by the same device settings because they are in the same environment at the same time. First, the current state of ambient service provisioning was analyzed, considering different approaches that incorporate intelligent services in smart spaces. Then, the multi-agent system was presented in detail with a description of the different agents and their tasks and obligations, as well as a context-dependent preference definition and the prediction of preferences for a previously unseen context using an ANN. The main contribution is the negotiation algorithm and the organization of the negotiation participants, which support multiple users and incorporate all user preferences into the negotiation process without neglecting any of them. In addition, the system incorporates user flexibility factors, which users determine and use to express their willingness and openness to settings that differ from their preferences. The negotiation process was illustrated using the smart lighting use case, offering a comprehensive description of the entire system within this selected application. The results of the selected scenarios have been presented, followed by a discussion of the proposed negotiation algorithm and its organization. As has been mentioned in the paper, many researchers point out the problem of user satisfaction when there are multiple users with different preferences. The presented system offers a new way to effectively provide services to multiple users without user intervention. It attempts to satisfy all users by employing software agents that negotiate and advocate for the preferences of their users during agent negotiations. It is planned to further develop this system in order to achieve the best-quality user experience with the shortest possible negotiation, i.e., with as few message transactions in the negotiation process as possible. An ongoing challenge is to further optimize the responsiveness of the system to context changes so that they appear to go unnoticed by users, with a focus on user changes, i.e., when a user enters or leaves the smart space. Another planned optimization in future work is to manage more than one service simultaneously with the same multi-agent system. The aim is to improve the ability of the multi-agent system to handle multiple services simultaneously. This goes beyond simultaneous negotiations; it includes upgrading the negotiation system to manage interdependent negotiation processes, for example, negotiations concerning light color and light intensity.
Figure 2. Representation of the exact same preference with different flexibility factors.
Figure 3. Users with different exact preferences and different flexibility factors.
Figure 4. Grouping users by flexibility factor in circles.
Figure 5. Grouping approach with fixed groups.
Table 1. Experiment 1a-rough partial overview of the negotiation process.
Table 3. Experiment 2-rough partial overview of the negotiation process.
|
v3-fos-license
|
2018-04-03T05:56:16.078Z
|
2003-09-26T00:00:00.000
|
25081463
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/278/39/37632.full.pdf",
"pdf_hash": "014ac9cce46f11aaf7fb6d6985ebe56dd9b700fe",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:682",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "90671ab4e0da59d71dd37b15b02494af0f72e886",
"year": 2003
}
|
pes2o/s2orc
|
Canstatin Inhibits Akt Activation and Induces Fas-dependent Apoptosis in Endothelial Cells*
Canstatin, a 24-kDa peptide derived from the C-terminal globular non-collagenous (NC1) domain of the α2 chain of type IV collagen, was previously shown to induce apoptosis in cultured endothelial cells and to inhibit angiogenesis in vitro and in vivo. In this report, we demonstrate that canstatin inhibits the phosphorylation of Akt, focal adhesion kinase, mammalian target of rapamycin, eukaryotic initiation factor-4E-binding protein-1, and ribosomal S6 kinase in cultured human umbilical vein endothelial cells. It also induces Fas ligand expression, activates procaspases 8 and 9 cleavage, reduces mitochondrial membrane potential, and increases cell death (as determined by propidium iodide staining). Canstatin-induced activation of procaspases 8 and 9 as well as the induced reduction in mitochondrial membrane potential and cell viability were attenuated by the forced expression of FLICE-inhibitory protein. Canstatin-induced procaspase 8 activation and cell death were also inhibited by a neutralizing anti-Fas antibody. Collectively, these data indicate that canstatin-induced apoptosis is associated with phosphatidylinositol 3-kinase/Akt inhibition and is dependent upon signaling events transduced through membrane death receptors.
Type IV collagen, a complex vascular basement membrane protein consisting of six distinct gene products (α1-α6), is thought to play a crucial role in endothelial cell (EC) 1 adhesion, migration, and differentiation (1-3). The isolated C-terminal globular non-collagenous (NC1) domains of several type IV collagen α chains, however, function as potent angiogenesis inhibitors. For example, canstatin, an endogenously produced fragment of the NC1 domain of the α2 chain (4), was recently shown to inhibit EC proliferation and tube formation in vitro and to suppress the growth of implanted PC-3 human prostate carcinoma and 786-0 renal cell carcinoma cells in severe combined immunodeficiency and athymic nude mice, respectively (4). In these xenograft models, tumors excised from canstatin-treated mice were less extensively infiltrated with CD31-staining cells than were control tumors. This reduced tumor vascularity and the fact that canstatin had no effect on tumor cell growth in vitro indicate that angiogenesis suppression is the dominant mechanism by which canstatin retards tumor growth.
The 26-kDa NC1 domain of the type IV collagen α1 chain (known as arresten) also inhibits endothelial tube formation and migration in vitro (5) and, like canstatin, blocks tumor growth in the prostate and renal cell carcinoma xenograft models cited above. The NC1 domain of the α3 chain (known as tumstatin) is similarly angiostatic (6) but, in addition, possesses intrinsic antitumor activity against melanoma cells, an effect attributable to a peptide sequence (amino acids 185-203) distinct from that which mediates its angiostatic properties (amino acids 54-132) (7,8). Although this angiostatic peptide binds to both melanoma and endothelial cells, its antiproliferative effects are restricted to EC. As with canstatin and arresten, tumstatin also has potent antitumor activity in vivo. Thus, the NC1 domains of at least three distinct type IV collagen α chains have documented angiostatic activity and, in this respect, seem to resemble the previously characterized angiogenesis inhibitor endostatin, a peptide derived from the NC1 domain of the α1 chain of type XVIII collagen (9-11).
In addition to its suppressive effects on EC proliferation and tube formation, canstatin induces apoptosis in EC (4). We showed previously that this process was associated with the down-modulation of the anti-apoptotic protein FLIP (4). FLIP is a structural homologue of procaspases 8 and 10, but unlike these proteases, FLIP lacks a critical cysteine at what would otherwise be the active site of a functional protease (12-14). FLIP is recruited in competition with procaspase 8 to the plasma membrane through an interaction between its death effector domain and that of the adaptor protein FADD. This recruitment is triggered by the multimerization of FADD, an event usually initiated by an interaction between the death domain of FADD and one of the TNF receptor family members (e.g., Fas, TRAIL death receptors 4 and 5) induced by the binding of Fas ligand or TRAIL to its respective receptor (15-18). Once complexed with FADD, FLIP interacts with TRAFs 1 and 2, receptor-interacting protein, and Raf to facilitate the activation of the NF-κB and mitogen-activated protein kinase pathways (19), both of which are associated with apoptosis resistance. It also interferes with the autoprocessing of procaspase 8 in the multimolecular complex generated in response to FADD clustering (12-18). Therefore, one would expect that the down-modulation of FLIP induced by exposure to canstatin would sensitize endothelial cells to apoptosis mediated through procaspase 8 activation.
In our previous study, we provided no information on the mechanism by which canstatin down-modulates FLIP or data about whether its disappearance from endothelial cells contributed to the anti-angiogenic effects of canstatin. In this report, we demonstrate that canstatin inhibits the activation of Akt, a kinase previously shown to regulate FLIP levels in several cell types including EC (20,21). We also establish an unambiguous causal link between the loss of FLIP and the induction of apoptosis by canstatin.
EXPERIMENTAL PROCEDURES
Cell Lines and Reagents-Human umbilical vein endothelial cells (HUVEC) were purchased from American Type Culture Collection and maintained in McCoy's 5A media supplemented with 20% fetal bovine serum, 2 mM L-glutamine, 50 μg/ml gentamycin, 5 μg/ml polymyxin B (to inhibit the effects of residual lipopolysaccharide in the canstatin preparations), and 100 μg/ml endothelial cell growth factor (ECGF, Biomedical Technologies, Stoughton, MA). The vinculin antibody used in our studies was purchased from Sigma. The pFAK-Tyr397 antibody and the anti-Fas antibody used in Western blots were purchased from Calbiochem. The rabbit anti-FasL, pp70s6k, caspase-8, and FLIP antibodies were obtained from Santa Cruz Biotechnology. The anti-Fas antibody (ZB4) used in the neutralization assays was obtained from Upstate Biotechnology (Lake Placid, NY). The pAkt (Ser473) antibody was purchased from BD Biosciences, and the procaspase 9, phospho-mTOR, and phospho-4E-BP1 antibodies were obtained from Cell Signaling Technology (Beverly, MA). The canstatin cDNA was provided by Raghu Kalluri (4). Recombinant canstatin was produced and purified as reported previously (4).
Western Analyses-HUVEC were lysed in 62.5 mM Tris-HCl buffer, pH 6.8, containing 1% SDS, protease (phenylmethylsulfonyl fluoride and leupeptin), and phosphatase (NaF, Na3VO4, glycerophosphate, and Na4P2O7) inhibitors. Lysates were separated on 12% SDS-PAGE gels, and the fractionated proteins were transferred to nitrocellulose. The blots were probed first with rabbit or murine antibodies specific for the protein of interest and then with either a goat anti-rabbit or anti-mouse antibody conjugated to horseradish peroxidase. The blots were then treated with SuperSignal chemiluminescent substrate (Pierce) and exposed to Kodak X-Omat Blue XB-1 film. The films were analyzed by densitometry with a Bio-Rad densitometer.
Cytotoxicity and Mitochondrial Membrane Potential Assays-In these assays, adherent cells were detached by gentle trypsinization and combined with detached, floating cells. Propidium iodide (5 ng/ml), annexin V-FITC (5 μl), or 5 μg/ml BD MitoSensor reagent (BD Biosciences) were added to the cell pool, and the cells were then analyzed by flow cytometry with a BD Biosciences FACScan.
Infection of HUVEC with FLIP-Tet Adenovirus-Cells were co-infected with the rTA adenovirus and an adenovirus into which the cDNA of the long isoform human FLIP had been inserted (obtained from Ken Walsh, Boston University Medical Center) (21), each at a multiplicity of infection of 10. After overnight incubation, the medium was replaced. The FLIP gene is regulated by doxycycline and readily induced by exposing infected cells to an antibiotic concentration of 300 ng/ml.
Canstatin Inhibits the Phosphorylation of Akt and FAK and Induces Fas Ligand Expression in HUVEC-We demonstrated previously that cultured bovine pulmonary arterial endothelial cells undergo apoptosis when exposed to canstatin (4). Although the biochemical mechanism underlying this effect was not explored in our earlier report, we did show that canstatin exposure was associated with the down-modulation of the anti-apoptotic protein FLIP. We and others have since shown that FLIP expression is determined primarily by the activity of the phosphatidylinositol 3-kinase (PI3K)/Akt signaling pathway in a variety of cell types including tumor and endothelial cells (20,21). These observations suggested that canstatin-induced FLIP down-modulation might be caused by PI3K/Akt inhibition. To test this hypothesis, HUVEC were first incubated for 4 h in culture medium containing no exogenous growth factor and only 5% serum to reduce background signaling. Canstatin (20 μg/ml) was then added to some of the cells, and 1 h later, the medium was replaced by fresh serum-supplemented (20%) medium containing ECGF (100 μg/ml) with or without canstatin (20 μg/ml). The cells were lysed at various time points, and the lysates were analyzed by Western blot for the phosphorylation of Akt and focal adhesion kinase (FAK), an integrin-associated kinase upstream of PI3K/Akt (22). As shown in Fig. 1, the addition of serum/ECGF resulted in a prompt increase in Akt phosphorylation. This effect was totally blocked by canstatin. The initial exposure to canstatin slightly increased FAK phosphorylation over background but completely blunted the subsequent response to ECGF. As reported previously, FLIP expression was suppressed in the canstatin-treated cells (4). Exposure to canstatin modestly increased Fas expression in HUVEC and markedly enhanced that of Fas ligand, especially at the later time points tested. These data confirm that canstatin does indeed suppress Akt phosphorylation in HUVEC as proposed, and they suggest that the disruption of this signaling pathway may be responsible for the disappearance of FLIP, the induction of Fas ligand, and other events triggered by canstatin.
Canstatin Inhibits the Phosphorylation of mTOR, 4E-BP1, and p70 s6k -Many of the effects of Akt on cell growth, protein synthesis, and cell cycle progression are mediated by the kinase mTOR and its effectors 4E-BP1 and p70 s6k (23,24). To determine whether canstatin-induced inhibition of Akt affected the activity of these downstream targets, HUVEC were exposed first to canstatin as described above, and 1 h later, ECGF was added to the culture. The cells were then lysed and analyzed by Western blot. As shown in Fig. 2A, even the 1 h of exposure to canstatin that preceded the addition of ECGF (the zero time point) was sufficient to completely inhibit mTOR phosphorylation. The phosphorylation of 4E-BP1 was also markedly attenuated with similar kinetics. Canstatin also completely blocked the phosphorylation of p70 s6k (Fig. 2B), although this effect was not evident until 24 h.
Role of FLIP, Fas, and Fas Ligand in Canstatin-induced Apoptosis-The observation that exposure to canstatin reduces FLIP levels and induces Fas ligand expression in HUVEC suggests that canstatin-induced apoptosis might be mediated through a death receptor (i.e. Fas)-dependent pathway (12)(13)(14). To test this hypothesis, we sought to determine whether exposure to canstatin induced the cleavage of procaspase 8 and if this effect was blocked by forced expression of FLIP. In this experiment, HUVEC were first co-infected with an rTA adenovirus and one containing a doxycycline-inducible FLIP construct. Doxycycline (300 ng/ml) was then added to some of the cell cultures to induce FLIP expression (21). Twenty-four hours later, the cells were placed into culture in ECGF-containing medium in the presence or absence of canstatin (20 μg/ml), and after 48 h of incubation the cells were lysed and analyzed by Western blot for procaspase 8 activation. As shown in Fig. 3A, the antibiotic augmented FLIP expression in the infected HUVEC. Exposure to canstatin resulted in procaspase 8 cleavage, suggesting that canstatin-induced apoptosis in HUVEC might indeed be mediated through Fas. Furthermore, procaspase 8 cleavage was evident in the absence of doxycycline but not in its presence, indicating that the process could be inhibited by FLIP.
To determine whether the forced expression of FLIP was able to maintain the viability of canstatin-treated HUVEC, the adenovirus-infected HUVEC were cultured with or without doxycycline (300 ng/ml) for 24 h and then placed in medium containing ECGF (100 μg/ml) with or without canstatin (20 μg/ml) for 48 h, as described above. Spontaneously detached cells were combined with those removed by trypsinization and analyzed by flow cytometry after staining with annexin V and propidium iodide (PI). As shown in Fig. 3B, exposure of the antibiotic-untreated HUVEC to canstatin doubled the number of PI-staining cells, whereas no increase in the number of PI-staining cells was observed in the antibiotic-pretreated HUVEC.
As shown in Fig. 1, canstatin reduces FLIP expression and induces that of Fas ligand. To determine whether Fas-Fas ligand interactions might play a role in canstatin-induced activation of procaspase 8, HUVEC were placed in serum-supplemented medium containing canstatin with or without ECGF or a blocking anti-Fas antibody (ZB4, 1 μg/ml) for 24 h. Procaspase 8 activation was then assessed by Western blot. As shown in Fig. 4A, canstatin induced procaspase 8 cleavage independently of the presence of ECGF, and this cleavage was substantially blocked by the anti-Fas antibody. This anti-Fas antibody also reduced the number of PI-staining cells in both canstatin-treated and untreated cultures (Fig. 4B), implicating membrane Fas-Fas ligand interactions in both the spontaneous (background) apoptosis observed in HUVEC cultures and the augmented apoptosis induced by canstatin.
Role of the Mitochondria in Canstatin-induced Apoptosis in HUVEC-To determine whether the mitochondria play a role in canstatin-induced apoptosis, HUVEC were first infected with the rTA and FLIP-tet adenoviruses as described above and cultured for 24 h in the presence or absence of doxycycline (300 ng/ml). The cells were then placed in medium containing ECGF (100 μg/ml) with or without canstatin (20 μg/ml) for 48 h. Spontaneously detached cells were combined with adherent cells removed by trypsinization, stained with the MitoSensor reagent, and analyzed by dual color flow cytometry for the presence of intramitochondrial dye aggregates (FL2-H) and intracytoplasmic monomers (FL1-H). Healthy cells with mitochondria that maintained a normal transmembrane potential (i.e. those with a high aggregate/monomer ratio) are depicted in the upper left quadrant of each panel in Fig. 5A, whereas those with lower ratios fall along a diagonal to the right. Cells with a high ratio appear more numerous in the lower two panels, which were generated with antibiotic-treated (FLIP-expressing) HUVEC, than in the upper panels, which were generated with untreated HUVEC. In the absence of FLIP, canstatin exposure resulted in a thinning of this population, with an accumulation of the cells along a diagonal (cells with a lower aggregate/monomer ratio), whereas no such shift was observed in the doxycycline-treated cells. Exposure to canstatin reduced the mean aggregate/monomer ratio from 43 to 36 in HUVEC not treated with doxycycline but only from 43 to 42 in cells pretreated with the antibiotic. These data indicate that exposure of HUVEC to canstatin does reduce mitochondrial membrane potential and that this effect is blocked by forced expression of FLIP.
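As a rough illustration of the aggregate/monomer quantification described above (the mean FL2-H signal relative to the mean FL1-H signal, reported here as ratios such as 43 versus 36), the sketch below derives that statistic from invented per-cell intensities. The data, the per-cell cutoff, and the variable names are hypothetical and are not drawn from the original analysis.

```python
import numpy as np

# Hypothetical per-cell fluorescence intensities exported from the cytometer:
# fl2 = intramitochondrial dye aggregates (FL2-H); fl1 = cytoplasmic monomers (FL1-H).
rng = np.random.default_rng(seed=0)
fl2 = rng.lognormal(mean=3.5, sigma=0.4, size=10_000)  # placeholder values, arbitrary units
fl1 = rng.lognormal(mean=0.2, sigma=0.4, size=10_000)  # placeholder values, arbitrary units

# Ratio of mean fluorescence intensities, the statistic reported in the text.
mfi_ratio = fl2.mean() / fl1.mean()

# Fraction of cells retaining a high aggregate/monomer ratio (healthy mitochondria).
# The per-cell cutoff of 10 is an arbitrary illustration, not the gate used in the study.
high_ratio_fraction = np.mean(fl2 / fl1 > 10)

print(f"mean FL2-H/FL1-H ratio: {mfi_ratio:.1f}")
print(f"cells with high aggregate/monomer ratio: {high_ratio_fraction:.1%}")
```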
A reduction in mitochondrial membrane potential is associated with the release of several mitochondrial proteins including cytochrome c, second mitochondria-derived activator, and apoptosis-inducing factor, all of which participate in apoptosis (25)(26)(27). Cytochrome c released from the mitochondria binds to the adaptor protein Apaf-1, which activates procaspase 9. To determine whether the reduction in mitochondrial membrane potential induced by canstatin results in procaspase 9 activation, adenovirus-infected HUVEC, some of which had been pretreated with doxycycline to induce FLIP expression, were exposed to canstatin, and procaspase 9 cleavage was then assessed by Western blot. As shown in Fig. 5B, exposure to canstatin results in increased procaspase 9 cleavage, an effect nearly completely inhibited by forced FLIP expression. These data show that the Fas-dependent apoptotic signaling events triggered by canstatin are amplified in the mitochondria.

FIG. 3. Canstatin induces procaspase 8 cleavage and cell death in HUVEC, and these effects are blocked by the forced expression of FLIP. A, pretreatment of FLIP-tet adenovirus-infected cells with doxycycline induces FLIP expression (top). No procaspase 8 cleavage was apparent in HUVEC not exposed to canstatin or in the canstatin-treated cells previously exposed to doxycycline. The p44 and p20 fragments of procaspase 8 were, however, readily detectable in the lysates of cells exposed to canstatin without doxycycline pretreatment. B, without prior doxycycline treatment (to induce FLIP expression), exposure to canstatin increased the number of PI/annexin V-staining HUVEC from 12.5 to 25.9%. Canstatin, however, had no effect on the viability of antibiotic-treated cells.
DISCUSSION
Canstatin is a recently discovered, endogenously produced angiostatic peptide derived from the NC1 domain of the α2 chain of type IV collagen (4). In our initial description of this novel inhibitor, we demonstrated that recombinant canstatin inhibited endothelial cell proliferation, migration, and tube formation in vitro and suppressed the growth and vascularization of human tumor xenografts (4). We also showed that canstatin induced apoptosis in endothelial cells and reduced the levels of the anti-apoptotic protein FLIP but provided no additional information on the mechanism of FLIP down-modulation or its biological significance. In this report, we show that the disappearance of FLIP induced by canstatin is associated with reduced activity of the phosphatidylinositol 3-kinase/Akt signaling pathway. We also demonstrate that FLIP down-modulation plays an essential permissive role in canstatin-induced apoptosis.
The mechanism by which canstatin inhibits Akt activation in EC is unclear, but it may involve the disruption of signaling downstream of FAK, as has been proposed for endostatin (28) and tumstatin (29). As shown in Fig. 1, canstatin modestly increases basal FAK phosphorylation but completely blunts the inductive effect of ECGF. Similar results have been reported with endostatin, which increases basal FAK phosphorylation but inhibits the increase that would otherwise be induced by fibroblast growth factor (28). When recruited to the cytoplasmic domains of integrins clustered at focal adhesions, FAK directly interacts with the SH2 and SH3 domains of the p85 regulatory subunit of phosphatidylinositol 3-kinase and activates Akt (22). It is possible that canstatin interferes with some aspect of either integrin-dependent adhesion or downstream signaling. Tumstatin inhibits FAK activation by directly binding to the endothelial integrin αvβ3 (9,29,30). It is possible that canstatin functions similarly, although the binding of canstatin to an integrin has not yet been demonstrated.
In addition to its transcriptional regulation by Akt, FLIP is also subject to post-transcriptional regulation. FLIP is rapidly degraded in the proteasome in response to such diverse stimuli as p53 activation (31) or exposure to peroxisome proliferator-activated receptor-γ ligands (32). Because of its rapid turnover, protein synthesis inhibitors such as cycloheximide markedly reduce FLIP levels (33). The rapamycin analogue CCI-779, which inhibits protein synthesis by specifically blocking the Akt target mTOR (24), has similar effects.² Tumstatin has likewise been shown to inhibit Akt and mTOR, resulting in a decline in overall protein synthesis (29). In fact, this phenomenon has been invoked as an explanation for its angiostatic effects. Our data demonstrating that canstatin completely inhibits mTOR and its downstream targets 4E-BP1 and p70 s6k suggest that canstatin may function in a similar manner. Therefore, it is likely that the nearly complete disappearance of FLIP from endothelial cells exposed to canstatin is caused by both reduced gene expression as a result of Akt inhibition and reduced FLIP protein synthesis due to mTOR inhibition.

FIG. 5. Effects of canstatin on HUVEC mitochondrial membrane potential and procaspase 9 activity. A, HUVEC were infected with the rTA and FLIP-tet adenoviruses as described above. The infected cells were then incubated in medium with or without doxycycline (to induce FLIP expression), and subsequently, some were exposed to canstatin. Dye oligomers were detected in FL2-H and monomers in FL1-H. Data were reported as the ratio of the mean fluorescence intensity in FL2-H to that in FL1-H. Without prior doxycycline treatment, exposure to canstatin reduced the number of cells in the upper left region of the panel (high aggregate/monomer ratio) and increased the fraction of cells with data points along an apparent diagonal (lower ratio). The mean fluorescence intensity ratio decreased from 43 to 36 in response to canstatin. Cells pretreated with doxycycline, however, were essentially unaffected by canstatin (mean fluorescence intensity ratio decreased from 43 to 42). B, canstatin activates procaspase 9. This effect, however, was markedly attenuated by the induction of FLIP.
The activation of procaspase 8 induced in HUVEC by canstatin is associated with changes in mitochondrial membrane potential and the activation of procaspase 9. This amplification is usually mediated through the cleavage of the proapoptotic Bcl-2 homology 3-domain-only Bcl-2 family member Bid, which, once cleaved, triggers the oligomerization of Bak or Bax and the release of mitochondrial proteins (34). Although we have been unable to detect Bid in HUVEC (data not shown), recent studies have indicated that this protein is not absolutely essential for apoptosis in response to procaspase 8 activation. Kandasamy et al. (35) recently showed that TRAIL induces apoptosis and the release from the mitochondria of cytochrome c (but not second mitochondria-derived activator) in Bid−/− murine embryonic fibroblasts as long as either Bak or Bax were present. Thus, canstatin seems to be able to reduce mitochondrial membrane potential and activate procaspase 9 in HUVEC despite limited Bid expression.
The mechanism by which canstatin activates procaspase 8 and initiates an apoptotic signaling cascade is unclear. Stupack et al. (36) demonstrated that the proenzyme is directly recruited to the cytoplasmic domains of unoccupied integrins and activated there in a manner similar to that induced by the multimerization of FADD. It is conceivable that canstatin functions by uncoupling endothelial integrins from the underlying matrix, as has been proposed for tumstatin and endostatin (9), and that procaspase 8 is subsequently recruited to these unoccupied integrins and activated. Such a model, however, would not explain the ability of FLIP to block canstatin-induced procaspase 8 activation without invoking the prospect that FLIP might be similarly recruited to integrins. This model would also fail to explain the Fas dependence of canstatin- or disadhesion-induced apoptosis (i.e. anoikis) (37) unless one proposes that the initial procaspase 8 activation triggered by unoccupied integrins is later amplified by membrane Fas-Fas ligand interactions. Although our data do not exclude a contribution of integrin-mediated procaspase 8 activation, they indicate that Akt inhibition with the induction of Fas ligand, the down-modulation of FLIP, and the resulting interactions between Fas ligand and Fas at the cell membrane are the primary determinants of canstatin-induced apoptosis.
|
v3-fos-license
|
2022-10-30T15:17:51.296Z
|
2022-10-28T00:00:00.000
|
253214114
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CC0",
"oa_status": "GOLD",
"oa_url": "https://www.cdc.gov/mmwr/volumes/71/wr/pdfs/mm7144e2-H.pdf",
"pdf_hash": "fab903684045606ecf9c4632822ae91793e54cb4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:684",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c674f140a48a344c6d115c4c2a76514175b6ff18",
"year": 2022
}
|
pes2o/s2orc
|
Wastewater Testing and Detection of Poliovirus Type 2 Genetically Linked to Virus Isolated from a Paralytic Polio Case — New York, March 9–October 11, 2022
In July 2022, a case of paralytic poliomyelitis resulting from infection with vaccine-derived poliovirus (VDPV) type 2 (VDPV2)§ was confirmed in an unvaccinated adult resident of Rockland County, New York (1). As of August 10, 2022, poliovirus type 2 (PV2)¶ genetically linked to this VDPV2 had been detected in wastewater** in Rockland County and neighboring Orange County (1). This report describes the results of additional poliovirus testing of wastewater samples collected during March 9-October 11, 2022, and tested as of October 20, 2022, from 48 sewersheds (the community area served by a wastewater collection system) serving parts of Rockland County and 12 surrounding counties. Among 1,076 wastewater samples collected, 89 (8.3%) from 10 sewersheds tested positive for PV2. As part of a broad epidemiologic investigation, wastewater testing can provide information about where poliovirus might be circulating in a community in which a paralytic case has been identified; however, the most important public health actions for preventing paralytic poliomyelitis in the United States remain ongoing case detection through national acute flaccid myelitis (AFM) surveillance†† and improving vaccination coverage in undervaccinated communities. Although most persons in the United States are sufficiently immunized, unvaccinated or undervaccinated persons living or working in Kings, Orange, Queens, Rockland, or Sullivan counties, New York should complete the polio vaccination series as soon as possible.
On October 28, 2022, this report was posted as an MMWR Early Release on the MMWR website (https://www.cdc.gov/mmwr).
* These authors contributed equally to this report.
† These senior authors contributed equally to this report.
§ A VDPV is a strain related to the attenuated live poliovirus contained in OPV. VDPV2s are OPV virus strains that are >0.6% divergent (or at least six nucleotide changes) from the OPV2 strain in the complete VP1 genomic region. https://polioeradication.org/wp-content/uploads/2016/09/Reporting-and-Classification-of-VDPVs_Aug2016_EN.pdf
¶ The term PV2, referring to all serotype 2 polioviruses, is used throughout the report to indicate either a confirmed VDPV2 or a type 2 Sabin-like virus that is genetically related to the Rockland County patient. A Sabin-like poliovirus is a poliovirus that is related to one of the Sabin vaccine strains and whose nucleotide sequence in the genome region encoding the VP1 capsid protein differs from the related Sabin strain by 0-5 nucleotides for type 2 or by 0-9 nucleotides for types 1 and 3.
** Wastewater, also referred to as sewage, includes water from household or building use (e.g., toilets, showers, and sinks) that can contain human fecal waste and water from nonhousehold sources (e.g., rain and industrial use); it does not include open drains or potable water. https://www.cdc.gov/healthywater/surveillance/wastewater-surveillance/wastewater-surveillance.html#how-wastewater-surveillance-works
†† https://www.cdc.gov/acute-flaccid-myelitis/index.html
High rates of poliovirus vaccination coverage (2) resulted in the elimination of paralytic polio caused by wild-type poliovirus in the United States in 1979. § § Only inactivated polio vaccine (IPV) has been used in the United States since 2000; 3 doses of IPV confer 99%-100% protection from paralytic poliomyelitis (3). Some countries still use oral poliovirus vaccine (OPV); advantages to this approach include low cost, ease of use, and high efficacy in stopping outbreaks. However, in rare cases, the live attenuated virus in OPV can regain neurovirulence, circulate in underimmunized populations, and cause paralytic disease. A previous report confirmed that paralysis of the Rockland County patient resulted from infection with VDPV2, and that related viruses had been detected in wastewater collected from Orange and Rockland counties (1). Since then, the New York State Department of Health (NYSDOH); Nassau, Orange, Putnam, Rockland, Suffolk, Sullivan, Ulster, and Westchester counties' health departments; New York City Department of Health and Mental Hygiene (NYC DOHMH); New York City Department of Environmental Protection; and CDC have expanded poliovirus wastewater testing as part of an emergency response. This report summarizes findings from the more extensive wastewater testing conducted in the New York metropolitan area as part of investigations to understand the extent of poliovirus circulation and to direct polio vaccination efforts.
Wastewater samples, including some originally collected for SARS-CoV-2 surveillance, were collected from a subset of sewersheds during March 9-October 11, 2022. Samples were collected approximately once or twice weekly from each site. Wastewater samples were processed using either ultracentrifugation or polyethylene glycol precipitation followed by nucleic acid extraction. The extracts were forwarded to the Wadsworth Center (part of NYSDOH) or the New York City Public Health Laboratory (part of NYC DOHMH) where they were packaged and shipped to CDC. At CDC, total nucleic acids were screened for the presence of PV2 using the pan-poliovirus real-time reverse transcription-polymerase chain reaction (RT-PCR) assay, and positive samples were sequenced (4,5).
To investigate the number of indeterminate¶¶ results from some of the New York City samples from large sewersheds (those servicing more than 700,000 residents), NYC DOHMH collected additional larger-volume (500 mL) wastewater samples from two sewersheds on August 11, one receiving wastewater from parts of New York County, and another with combined wastewater from parts of Kings, New York, and Queens counties (two distinct upstream sub-sewersheds*** were sampled, one feeding only from the New York County area and another feeding from Kings and Queens counties combined). CDC then concentrated virus from the samples using the filtration and elution method, followed by inoculation of concentrates onto susceptible cell lines to isolate polioviruses (6). Cultures exhibiting viral cytopathic effect were screened by real-time RT-PCR to identify polioviruses (4) and sequenced as described. Data presented are from samples collected during March 9-October 11, 2022, and testing conducted through October 20, 2022.

¶¶ Indeterminate results include those from samples that tested positive using real-time RT-PCR, but not enough viral material was available to complete sequencing.
*** Sub-sewersheds are upstream sampling locations within a larger sewershed.
The 48 sewersheds tested serve parts of 13 counties in New York, with a total population of approximately 11,413,000 persons (7). A total of 1,076 wastewater samples were collected during March 9-October 11, 2022. Among these, 89 (8.3%) samples from 10 sewersheds tested positive for PV2. Of the 82 PV2-positive samples in the state of New York (outside of New York City), 81 (98.8%) sequences from six sewersheds in Nassau, Orange, Rockland, and Sullivan counties were linked to the virus isolated from the Rockland County patient, and the sequencing results for one sample were not adequate to determine whether it was linked to the virus isolated from the patient (Table; indeterminate results, i.e., samples that tested positive using real-time RT-PCR but for which not enough viral material was available to complete sequencing, and specimens pending sequencing results are excluded). Of the seven PV2-positive samples collected in New York City, one was genetically linked to the virus isolated from the patient; this sample was from one of the larger-volume samples. The other six PV2-positive New York City samples included one from Kings County that was not genetically linked to the virus isolated from the patient, and five from three different sewersheds serving parts of Kings, New York, and Richmond counties that were inadequate for sequencing. PV2-positive samples genetically linked to the virus isolated from the patient were collected on more than one occasion in Orange (June 13-October 6), Rockland (May 23-October 4), and Sullivan (July 21-October 5) counties. Only a single sample each from Nassau County on August 18 and the sub-sewershed serving parts of Kings and Queens counties on August 11 tested positive for a PV2 linked to virus isolated from the patient. In addition to wastewater testing for poliovirus in New York, a multifaceted public health response is underway that includes efforts to enhance case detection and increase vaccination access and demand. Efforts to improve case detection include testing of persons with nonparalytic, nonspecific viral symptoms consistent with poliovirus infection††† and review of syndromic surveillance databases. Strategies to increase vaccination include communication campaigns, community engagement, vaccination clinics, and outreach to providers and patients, focused on communities with the lowest IPV coverage. On August 12, NYSDOH and NYC DOHMH issued a press release and health alert to guide the public and the health care community about the importance of polio vaccination, emphasizing the imperative to protect unvaccinated and undervaccinated children through vaccination.§§§ On September 9, New York declared a state of emergency,¶¶¶ which allowed additional health professionals (including certain emergency medical service providers, midwives, and pharmacists) to administer poliovirus vaccine in the state.

††† https://health.ny.gov/diseases/communicable/polio/docs/2022-09-28_health_advisory.pdf
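The headline proportions in this paragraph follow from simple arithmetic on the reported totals; a minimal sketch of that calculation, using only the counts stated above, is shown below.

```python
# Overall PV2 positivity among the wastewater samples, using the totals reported above.
total_samples = 1076
pv2_positive = 89
print(f"PV2-positive: {pv2_positive}/{total_samples} = {pv2_positive / total_samples:.1%}")  # ~8.3%

# 82 of the positives came from New York State outside New York City (81 genetically linked
# to the Rockland County patient); the remaining positives were New York City samples.
outside_nyc_positive = 82
nyc_positive = pv2_positive - outside_nyc_positive
print(f"PV2-positive samples from New York City sewersheds: {nyc_positive}")  # 7
```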
Discussion
Wastewater testing during March 9-October 11 has detected PV2 genetically linked to the virus isolated from the Rockland County patient in six of 13 New York counties where wastewater was tested. One county (Nassau) had only a single detection, and therefore was not considered to have evidence of a transmission event. Three counties (Orange, Rockland, and Sullivan) had repeated detections over the course of months in one or more sewersheds, suggesting some level of community transmission in these areas. Only a single large-volume wastewater sample collected on August 11 from Kings and Queens counties in New York City tested positive for a PV2 genetically linked to virus isolated from the patient. However, this finding, coupled with the repeated PV2-positive results from the lower-volume samples collected from the broader sewershed catchment areas serving parts of Kings, New York, and Queens counties during June 5-September 6 for which sequencing was not possible, suggests that PV2 could be circulating in Kings and Queens counties as well.

§§§ https://www1.nyc.gov/site/doh/about/press/pr2022/nysdoh-and-nycdohmwastewater-monitoring-finds-polio-urge-to-get-vaccinated.page
¶¶¶ https://health.ny.gov/press/releases/2022/2022-09-09_polio_immunization.htm
FIGURE 2. Sewersheds* with detections of poliovirus type 2 genetically linked to the virus isolated from a paralytic polio patient † -Sullivan (A), Orange (B), Rockland (C), Kings and Queens (D), § and Nassau (E) counties, New York
Wastewater testing, in conjunction with high-quality AFM surveillance, has helped clarify the scope of the polio outbreak in New York, which indicates community transmission in a five-county area near the only identified symptomatic patient.
Summary
What is already known about this topic?
In July 2022, a case of paralytic poliomyelitis was confirmed in an unvaccinated adult resident of Rockland County, New York; environmental sampling found evidence of poliovirus transmission.
What is added by this report?
Wastewater testing has identified circulating polioviruses genetically related to virus isolated from the Rockland County patient in at least five New York counties.
What are the implications for public health practice?
Public health efforts to prevent polio should focus on improving coverage with inactivated polio vaccine. Although most persons in the United States are sufficiently immunized, unvaccinated or undervaccinated persons living or working in Kings, Orange, Queens, Rockland, or Sullivan counties, New York should complete the polio vaccination series to prevent additional paralytic cases and curtail transmission.
Some researchers and public health agencies have had interest in expanding wastewater testing for poliovirus beyond the current outbreak area; however, additional effort is needed to understand the limitations and implications of wastewater testing for poliovirus outside the context of a localized emergency response and epidemiologic investigation of a confirmed polio case. The impact of sewershed system design and size on result interpretation needs further characterization. According to the World Health Organization's guidelines for environmental surveillance of poliovirus circulation,**** sampling sites chosen for testing should represent selected populations at high risk with a source population of 300,000 or fewer persons. Many sewersheds in the United States, including many in New York and New York City, have catchments that exceed this number by a factor of five, which could affect reliability or interpretability of results and limit the ability to effectively target interventions. Although sampling upstream sub-sewersheds can sometimes be possible, this activity might not always be feasible to do regularly because of resource and logistical constraints.

In addition, monitoring the progress of polio eradication in a population with high IPV coverage is complicated by use of OPV for routine vaccination and outbreak response in other international settings. The live OPV strain can persist in stool for several weeks after vaccination, and detection of these viruses in wastewater does not have the same public health implication as does detection of a VDPV. In addition, standardized methods of testing and virus characterization need to be established if wastewater testing is to become more widespread, because reliable sequencing and careful interpretation are needed to characterize a finding in wastewater as either an OPV strain or a VDPV. Lastly, and most importantly, the public health objectives for wastewater testing for poliovirus should be defined before its application and before the public health response is scaled up beyond the currently implicated communities at risk in New York. Identifying geographies with connections to the patient's community and persistently low polio vaccination coverage can, even in the absence of wastewater testing, help target vaccination efforts. However, these areas at risk for paralytic polio and poliovirus circulation might be considered for wastewater testing to prioritize or enhance vaccination efforts in the event of poliovirus detections.

**** https://polioeradication.org/wp-content/uploads/2016/07/WHO_V-B_03.03_eng.pdf

The findings in this report are subject to at least five limitations. First, even if only a small number of persons are excreting poliovirus into a given sewershed, virus mixtures in a sample can be difficult to resolve. High-quality sequences are needed to characterize the virus and confirm linkages between viruses. Because the total number of nucleotide differences is small, a single nucleotide change can be critical in confirming a linkage between viruses. Second, defecation by infected persons in counties other than their home county in New York (e.g., where they work or visit, or through which they travel) could result in wastewater detection; hence, isolated detections do not confirm community circulation. Third, wastewater testing does not provide information about communities and facilities that are not served by municipal sewer systems; neither was every sewershed in each county sampled.
Fourth, test results indicate detection or nondetection of poliovirus but cannot provide quantitative estimates of the number of persons infected. Finally, negative test results cannot guarantee that a community is free from poliovirus but can be assessed in conjunction with other surveillance approaches.
At least five New York counties had evidence of a sustained period of community transmission of poliovirus in 2022. Unvaccinated and undervaccinated persons in these areas are at risk for infection and paralytic disease. A robust national AFM surveillance system must be maintained with reporting of any suspected case of AFM to the appropriate public health authorities and collection of stool samples from any person with a suspected case. All U.S. children should receive IPV in accordance with the routine childhood immunization schedule (8). Most adults in the United States were vaccinated as children and are therefore likely to be protected from paralytic polio; however, any unvaccinated or undervaccinated adult or child living or working in Kings, Orange, Queens, Rockland, or Sullivan counties, New York should complete the IPV series now (9).
|
v3-fos-license
|
2018-10-25T16:06:25.138Z
|
1997-01-01T00:00:00.000
|
53412403
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ambp.centre-mersenne.org/item/AMBP_1997__4_2_83_0.pdf",
"pdf_hash": "10bf36328f2150b9fa66271ab33a6b4f4d2e0fc2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:685",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "f564112036d7cf9e99f26344f90025b876ef31d0",
"year": 1997
}
|
pes2o/s2orc
|
A note on boundedness properties of Wright’s generalized hypergeometric functions
In this paper we obtain some inequalities giving the boundedness properties of Wright's generalized hypergeometric function for the classes P(A,B) and R(A,B). The results, besides yielding the inequalities obtained recently in [3] and [7], are also applicable to special functions such as the Bessel-Maitland and Mittag-Leffler functions.
The symbol $\prec$ denotes the usual subordination: for $f(z), g(z) \in S$, we write $f(z) \prec g(z)$ if there exists a Schwarz function $w(z)$ (with $w(0)=0$ and $|w(z)|<1$ in $U$) such that $f(z)=g(w(z))$.

We denote by $P(A,B)$ the set of functions $h(z)$ such that $h(z) \prec \dfrac{1+Az}{1+Bz}$, where $A$ and $B$ are real numbers with $-1 \le B < A \le 1$. Also, $R(A,B)$ denotes the corresponding class of functions. If $A = 1-2\alpha$ and $B = -1$, the resulting subclass of functions of $S$ is denoted by $P^{*}(\alpha)$.
Our purpose in this paper is to obtain some inequalities for the function defined by (1.1) which belongs to the classes $P(A,B)$ and $R(A,B)$. The boundedness properties for the generalized hypergeometric function, as well as similar properties for the Bessel-Maitland and Mittag-Leffler functions, follow as worthwhile consequences of our main inequalities.
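For reference, the series presumably intended by (1.1) is the standard Wright generalized hypergeometric function, written here in the usual notation (which is assumed rather than transcribed from the source):

$$ {}_{p}\Psi_{q}\!\left[\begin{matrix}(a_{i},A_{i})_{1,p}\\(b_{i},B_{i})_{1,q}\end{matrix};\,z\right] = \sum_{n=0}^{\infty}\frac{\prod_{i=1}^{p}\Gamma(a_{i}+A_{i}n)}{\prod_{i=1}^{q}\Gamma(b_{i}+B_{i}n)}\,\frac{z^{n}}{n!}, \qquad A_{i}>0,\; B_{i}>0, $$

under the usual convergence conditions on the parameters.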
3. Further Inequalities

Theorem 2. Let $A \ge 0$, let $\Delta\,{}_{p}\Psi_{q}\big[(a_{i},A_{i})_{1,p};(b_{i},B_{i})_{1,q};z\big] \in P(A,B)$, and let $|z| \le 1/|B|$; then the inequality (3.1) holds, where $\Delta$ is given by (2.1).

Proof. From the series representation it follows that $\Delta\,\dfrac{d}{dz}\,{}_{p}\Psi_{q}\big[(a_{i},A_{i})_{1,p};(b_{i},B_{i})_{1,q};z\big]$ equals the corresponding Wright function with the numerator parameters shifted to $(a_{i}+A_{i},A_{i})_{1,p}$, which is relation (3.2). We recall the result of [8] stated in (3.3) for functions $h(z)$ belonging to $P(A,B)$. Setting $h(z) = \Delta\,{}_{p}\Psi_{q}\big[(a_{i},A_{i})_{1,p};(b_{i},B_{i})_{1,q};z\big]$ in (3.3) and using (3.2) in the process, we are led to the result (3.1).
4. Some Consequences of Theorems 1-3

By specializing the parameters, we observe that for $A_j = 1$ ($j=1,\dots,p$) and $B_j = 1$ ($j=1,\dots,q$), Wright's generalized hypergeometric function reduces to the generalized hypergeometric function, as indicated in (4.1). Setting the parameters of the Wright generalized hypergeometric function occurring in Theorems 1-2 in accordance with (4.1), we get the results obtained recently in [3] (Ths. 1-2, pp. 67-70). In addition to the choice of parameters indicated above for (4.1), if we also put $x=1$ in Theorem 3, we are then led to the other known result of [3] (Th. 3, p. 71); see also [7]. Further, Theorems 1-2 can be applied to special functions like the Bessel-Maitland and Mittag-Leffler functions, and boundedness properties for these functions can be obtained. Indeed, by noting the relationships in [7], where $J_{\nu}(z)$ and $E_{\alpha,\beta}(z)$ denote the Bessel-Maitland and Mittag-Leffler functions, respectively, the corresponding relations can easily be deduced from the main results.
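In standard notation, the special cases presumably being invoked here are the following; the first identity is what the reduction (4.1) is generally taken to express, and the symbols $J_{\nu}^{\mu}$ and $E_{\alpha,\beta}$ for the Bessel-Maitland and Mittag-Leffler functions are the conventional ones rather than a transcription of the source.

$$ {}_{p}\Psi_{q}\!\left[\begin{matrix}(a_{j},1)_{1,p}\\(b_{j},1)_{1,q}\end{matrix};\,z\right] = \frac{\prod_{j=1}^{p}\Gamma(a_{j})}{\prod_{j=1}^{q}\Gamma(b_{j})}\;{}_{p}F_{q}(a_{1},\dots,a_{p};\,b_{1},\dots,b_{q};\,z), $$

$$ J_{\nu}^{\mu}(z)=\sum_{n=0}^{\infty}\frac{(-z)^{n}}{n!\,\Gamma(\mu n+\nu+1)}={}_{0}\Psi_{1}\!\left[\begin{matrix}-\\(\nu+1,\mu)\end{matrix};\,-z\right], \qquad E_{\alpha,\beta}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\alpha n+\beta)}={}_{1}\Psi_{1}\!\left[\begin{matrix}(1,1)\\(\beta,\alpha)\end{matrix};\,z\right]. $$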
|
v3-fos-license
|
2021-09-01T15:03:21.863Z
|
2021-06-30T00:00:00.000
|
237833639
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-652611/v1.pdf?c=1631899839000",
"pdf_hash": "1dc8f0ae6b90f9ca57c64f087b4c5e198c3efce2",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:688",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "458addbb394b1d1add535da29808a16cffdc69e2",
"year": 2021
}
|
pes2o/s2orc
|
Environmental factors influencing the composition of phyllosphere bacterial communities in bamboo: A staple food source of giant pandas
The giant panda has developed a series of evolutionary strategies to adapt to a bamboo diet. The abundance and diversity of the phyllosphere microbiome change dramatically depending on the season, host species, location, etc., which may, in turn, affect the growth and health of host plants. However, few studies have investigated the factors that influence phyllosphere bacteria in bamboo, a staple food source of the giant panda. Amplicon sequencing of the 16S rRNA gene was used to explore the abundance and diversity of phyllosphere bacteria in three bamboo species (Arundinaria spanostachya, Yushania lineolate and Fargesia ferax) over different seasons (spring vs. autumn), elevations, distances from water, etc. in Liziping National Nature Reserve (Liziping NR), China. The results show that a total of 2,562 operational taxonomic units (OTUs) were obtained from all 101 samples, which belonged to 24 phyla and 608 genera. Proteobacteria was the dominant phylum, followed by Acidobacteria and Actinobacteria. The Sobs index and Shannon index of F. ferax phyllosphere bacteria were greater than those of the other two bamboo species in both seasons. The Sobs index and Shannon index of phyllosphere bacteria in all three bamboo species were significantly higher in autumn than in spring. Season was a stronger driver of the community structure of phyllosphere bacteria than host bamboo species based on the (un)weighted UniFrac distance matrices. Many bacterial phyla were negatively correlated with elevation and distance from water, but positively related to mean height of bamboo and mean base diameter of bamboo. Function prediction with PICRUSt revealed that the relative abundance of the transporters function was highest in all three bamboo species, followed by ABC transporters. Nine pathways showed significant differences in relative abundance at level 3 of the KEGG hierarchy. The genes related to membrane transport, signal transduction and porphyrin transport in the phyllosphere bacteria of F. ferax were significantly lower than in the other two species. These findings could provide a reference for the restoration and management of giant panda habitat and food resources in this area, especially for the small, isolated populations of giant pandas in the Xiaoxiangling mountains.
Background
Large numbers of different kinds of microorganisms within the phyllosphere constitute a stable and complex micro-ecosystem (Mei et al. 1991; Li et al. 1998; Shi et al. 2007; Vorholt et al. 2012). Different hosts harbour different microbial communities (Yadav et al. 2008; Leveau et al. 2015), which may include bacteria, archaea, fungi and protists (Newton et al., 2010). Bacteria are the most abundant microorganisms in the phyllosphere (Lindow et al., 2003), and Proteobacteria is the dominant phylum (Delmotte et al. 2009).
Numerous environmental factors determine phyllosphere community composition, such as plant species, temperature, humidity, nutrient availability (on the plant surface), sun/UV exposure levels, and even the underlying soil geochemistry (Lindow et al. 2003; Leveau et al. 2015). Ecological factors and plant attributes, e.g. wood density, leaf mass per unit area, and leaf nitrogen and phosphorus concentration, may also affect the microbial community structure in the phyllosphere (Steven et al. 2014).
The phyllosphere microbial community represents a wide range of primitive symbiotic relationships (Fedorov et al.). Bamboo is a perennial evergreen plant belonging to the Gramineae family and Bambusoideae subfamily and is an important forest resource all over the world. It is widely distributed, fast-growing and high-yielding, and has a strong regeneration ability. Therefore, it provides considerable economic, ecological and social benefits (Shanmughavel et al. 1997; Anu et al. 2008; Chang et al. 2015; Zhang and Xue, 2018). Bamboo leaves are an important habitat for many microorganisms, with microbial abundance and diversity acting as an indicator of forest health. However, few studies have investigated the environmental factors that influence phyllosphere bacterial composition in bamboo. Using the traditional cultivation method, Zhang and colleagues (2014) found that the composition of the phyllosphere microbial community differed between bamboo species and seasons. The species composition and frequency of leaf endophytic bacteria and fungi have also been found to differ between bamboo species (Helander et al. 2013), while endophytic fungi from fish-scale bamboo (Phyllostachys heteroclada) differ between branch and leaf tissue (Zhou et al. 2017). Significant differences in bacterial richness and diversity were also observed between different bamboo species using high-throughput amplicon sequencing (Jin et al. 2020). However, there is a lack of research on what factors influence the composition and diversity of the bamboo phyllosphere microbiome.
The giant panda (Ailuropoda melanoleuca) belongs to a carnivorous clade (Wei et al. 1999), yet has an exclusively herbivorous diet and specialises in the consumption of bamboo throughout the year (Hu et al. 1985; Zhao et al. 2013). Over the long evolutionary process, the giant panda has developed a series of foraging strategies to adapt to a bamboo diet, such as seasonal vertical migration and selection of habitat and feeding points (Hong et al. 2015, 2016). However, in different seasons and different mountain systems, the diet of giant pandas does vary. For example, in the Qinling mountains, giant pandas mainly feed on bamboo leaves during the non-bamboo-shoot seasons (Pan, 2001; Wu et al. 2017), while in the Xiaoxiangling mountains bamboo leaves account for more than half of the content of panda faeces in summer and autumn (Wei et al. 1999). Although bamboo is an important food source, its leaves also contribute to intestinal diseases of giant pandas in captivity. Previous research shows that Escherichia coli and Klebsiella pneumoniae can cause diarrhoea and septicaemia in giant pandas (Zhang et al. 1997; Xiong et al. 1999). However, there are few studies into the phyllosphere microbiome of the bamboo species foraged by wild giant pandas.
In this study, we investigate the phyllosphere bacterial community of bamboo species frequently used as a food source by giant pandas, using next-generation sequencing (NGS) in Liziping National Nature Reserve (Liziping NR), Sichuan, China. The specific goals include: (i) comparing differences in microbial composition and diversity between the three bamboo species and two seasons; (ii) exploring the ecological factors that influence changes in the phyllosphere bacterial community; and (iii) predicting the functional differences of phyllosphere bacterial communities in different bamboo species.
Study area
Liziping NR is located in Shimian County, Sichuan Province, China, in the middle and upper reaches of the Dadu River, on the southwestern edge of the Sichuan Basin and southeast of Gongga (Hong et al., 2015). The reserve covers an area of 47,940 ha and ranges from 1330 m to 4550 m above sea level, with uneven ridges and narrow valleys. The annual average temperature and rainfall are 11.7-14.4°C and 800-1250 mm, respectively (Xie et al., 2020). As the altitude increases, the vegetation in the reserve transitions from evergreen broad-leaved forest to deciduous broad-leaved forest and then coniferous and broad-leaved mixed forest. The bamboo species with the largest distribution is Arundinaria spanostachya, which accounts for 38.08% of the total area of giant panda feeding bamboos in these mountains. Next are Yushania lineolate and Fargesia ferax, which account for 28.02% and 12.48% of the giant panda's feeding bamboo, respectively (Sichuan Forestry Bureau, 2015). A. spanostachya mainly grows above 2500 m a.s.l., while Y. lineolate and F. ferax occur below 2800 m a.s.l. Giant pandas in this nature reserve prefer to eat A. spanostachya throughout the whole year, some Y. lineolate in winter, and occasionally F. ferax (Hong et al., 2015, 2016; Xie et al., 2020).
Experimental design
We conducted surveys and sampling during May (Spring) and October (Autumn) in 2020. First, we set up four transects in A. spanostachya, Y. lineolate and F. ferax bamboo forests respectively. The transects were set from low altitude to high altitude, and the distance between them was no less than 200 m.
Secondly, we set up 3-5 survey plots (20 × 20 m²) in each transect, with the altitude difference between adjacent survey plots on the same transect no less than 50 m. We then recorded and measured bamboo species, latitude and longitude, altitude, and other related variables in the tree and shrub layers (Table S1).

Moreover, one bamboo plot (1 × 1 m²) was set up in the centre of each survey plot, and another two bamboo plots (1 × 1 m²) were set up to the east and south, 5 m from the centre point of each survey plot. Finally, the related variables within the bamboo layer were also measured and recorded (Table S1).
Sample collection and DNA extraction
In each survey plot, one mixed bamboo leaf sample (not less than 200 g) was collected with sterile gloves, immediately transported to the laboratory (less than 2 hours) and stored at -20°C for further processing within 48 h. Each 200 g sample was aseptically transferred into a Ziplock bag (24 cm × 35 cm) containing 200 ml sterile precooled TE-buffer (10 mM Tris, 1 mM EDTA, pH 7.5) supplemented with 0.05% Tween-80 (Hong et al. 2017). Leaf surfaces were washed to collect the microbial population by 5 min of shaking, vortexing and sonication of each sample in the TE-buffer, with the Ziplock bag kept in ice water (~ 4°C) for each processing step (Hong et al. 2017). The cell suspension was separated from the leaf material by filtration through three-layer sterile nylon mesh. Sonication was performed at a frequency of 40 kHz in an ultrasonic cleaning bath (Shanghai Kudos Instrument Co., Shanghai, China) to dislodge the microbes from the leaf surface. Following filtration, cell suspensions were placed in four 50 ml tubes per sample, and cells were pelleted using centrifugation at 2000×g for 15 min at 4°C. Cell pellets from multiple tubes were pooled into 2.0-ml reaction tubes and washed twice with TE-buffer with Tween-80. Cell pellets were immediately frozen at −80°C until DNA extraction.
DNA extraction was performed using the E.Z.N.A.™ Soil DNA Kit (Omega, Norcross, GA) as described, with slight modifications. Frozen cell pellets were resuspended in 1 ml of kit-supplied SLX Mlus buffer with 500 mg of glass beads, and cell lysis was performed at 65 Hz for 90 s. The cell debris suspension was immediately processed following the instructions in the kit manual. Finally, total DNA was obtained from the column by two sequential elutions with 50 µl elution buffer. Raw sequencing reads were then quality-controlled as follows: (i) reads were truncated at any site receiving an average quality score below 20 over a 50 bp sliding window, and the truncated reads shorter than 50 bp were discarded. Reads containing ambiguous characters were also discarded; (ii) only overlapping sequences longer than 10 bp were assembled according to their overlapped sequence. The maximum mismatch ratio of the overlap region was 0.2. Reads that could not be assembled were discarded; (iii) samples were distinguished according to the barcode and primers. The sequence direction was adjusted using exact barcode matching, with two nucleotide mismatches allowed in primer matching.
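The read-trimming rule in step (i) above (truncate at the first 50 bp window whose mean quality falls below 20, then discard reads left shorter than 50 bp) can be sketched as follows; the function and the toy reads are illustrative assumptions, not the pipeline actually used by the authors.

```python
from typing import Optional

def sliding_window_trim(seq: str, quals: list, window: int = 50,
                        min_mean_q: float = 20.0, min_len: int = 50) -> Optional[str]:
    """Truncate a read at the start of the first window whose mean quality drops
    below the threshold; return None if the trimmed read is shorter than min_len."""
    cut = len(seq)
    for start in range(len(seq) - window + 1):
        if sum(quals[start:start + window]) / window < min_mean_q:
            cut = start  # truncate at the start of the failing window
            break
    trimmed = seq[:cut]
    return trimmed if len(trimmed) >= min_len else None

# Two invented reads: one with uniformly good quality, one whose quality collapses midway.
good = sliding_window_trim("ACGT" * 15, [36] * 60)            # kept intact (60 bp)
bad = sliding_window_trim("ACGT" * 15, [38] * 30 + [2] * 30)  # trimmed below 50 bp -> None
print(good, bad)
```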
Operational taxonomic units (OTUs) were clustered at a 97% similarity cut-off (Edgar, 2013; Stackebrandt et al., 1994) using UPARSE version 7.1, and chimeric sequences were identified and removed. The taxonomy of each OTU representative sequence was analysed by RDP Classifier version 2.2 (Wang et al., 2007) against the 16S rRNA database (e.g., Silva v138) using a confidence threshold of 0.7. Non-target sequences, including mitochondrial and chloroplast sequences, were also removed by QIIME from the final OTU data set. To better convey the biological information in these samples, the average relative abundance of the bacterial community was visualized with bar plots at the phylum and genus levels.
Statistical analysis
First, Kruskal-Wallis H tests and Wilcoxon rank-sum tests were used to evaluate the differences in dominant bacteria abundance at both the phylum and genus levels, for each combination of bamboo species and season. We used mothur software (version v.1.30.1) to calculate the Sobs and Shannon indices for each sample, to estimate the species richness and diversity of phyllosphere bacteria between the different bamboo species and seasons, and used a Student's t-test to identify any significant differences. We then used PCoA based on weighted and unweighted UniFrac distance matrices to evaluate the differences in the microbial community structure between the bamboo species and seasons. We also used permutational MANOVA (PERMANOVA) to analyse the effect of different bamboo species and seasons on the phyllosphere bacterial community, based on Bray-Curtis distance matrices, with a permutation test to assess the statistical significance of the groupings.
Second, Spearman's correlation heatmap analysis was performed to examine the relationship between the relative abundance of bacterial taxa and the environmental factors described in Table S1. Linear regression was used to evaluate the relationship between environmental factors and the results of either Alpha (Sobs and Shannon indices) or Beta diversity analysis (Bray-Curtis distance). The Mantel test was used to test the correlation between the UniFrac distance matrix and the environmental variable distance matrix.
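As a rough illustration of the alpha- and beta-diversity quantities named above, the sketch below computes an observed-richness (Sobs) count, a Shannon index and a pairwise Bray-Curtis distance from a toy OTU table; it stands in for the mothur/QIIME calculations rather than reproducing the authors' code, and all values are invented.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# Toy OTU table: rows = samples, columns = OTUs (read counts); all values invented.
otu = np.array([
    [120,  30,  0,  5, 45],   # sample 1
    [ 80,  60, 10,  0, 50],   # sample 2
    [  0, 150, 20, 30, 10],   # sample 3
])

def sobs(counts):
    """Observed richness: number of OTUs with at least one read."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over the observed OTUs."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

for i, sample in enumerate(otu, start=1):
    print(f"sample {i}: Sobs = {sobs(sample)}, Shannon = {shannon(sample):.3f}")

# Bray-Curtis dissimilarity between samples 1 and 2, the distance fed into PCoA/PERMANOVA.
print(f"Bray-Curtis(sample 1, sample 2) = {braycurtis(otu[0], otu[1]):.3f}")
```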
Finally, PICRUSt prediction was used to estimate the functional composition of the phyllosphere bacteria in the different bamboo species. The Greengenes ID corresponding to each OTU was used to annotate the COG and KEGG functions of that OTU, yielding the COG and KEGG function levels and the abundance of each function in the different samples.
Samples, sequences and OTUs between different seasons and bamboo species
Up to 19 samples of each bamboo species were collected in each study season (Table 1). After removing the mitochondrial and chloroplast sequences, a total of 1,233,917 effective target 16S rRNA reads were obtained. The average number of sequences per sample was 12,217 ± 3,037. Clustered at 97% similarity, all samples had a total of 2,564 OTUs. 1,378 OTUs were shared between the three bamboo species, and the number of bacterial OTUs of each species in autumn was significantly higher than in spring (Fig. 1 and S1). Fargesia ferax had a higher number of OTUs than the other two species in both seasons (Fig. 1a).

The influence of season and bamboo species on phyllosphere bacterial composition

After the clustered OTU representative sequences were annotated, all bacterial OTUs belonged to 24 phyla and 614 genera. The most dominant phylum was Proteobacteria, comprising ~70% of the bacterial diversity observed across all samples (Fig. 2a). The remaining bacteria were distributed among the phyla Acidobacteria, Bacteroidota, Actinobacteria, Planctomycetota, Myxococcota and others. In spring, the relative abundances of the phyla Bacteroidota, Actinobacteriota and Myxococcota in the F. ferax phyllosphere were significantly higher than those of A. spanostachya and Y. lineolate. However, the relative abundance of the phylum Acidobacteriota in the F. ferax phyllosphere was significantly lower than that of A. spanostachya and Y. lineolate (Fig. 2a; Table S2). In autumn, the relative abundances of the phyla Actinobacteriota and Myxococcota in the F. ferax phyllosphere were significantly higher than those of A. spanostachya and Y. lineolate. The relative abundances of the phylum Bacteroidota in Y. lineolate and Planctomycetota in A. spanostachya phyllospheres were significantly lower than those of the other two bamboo species (Fig. 2a; Table S2). No difference was found in the relative abundance of dominant phyla in the F. ferax phyllosphere between seasons, except for Planctomycetota (Table S2). The relative abundance of the phylum Acidobacteriota in the A. spanostachya and Y. lineolate phyllospheres was lower in autumn than in spring, and the relative abundances of the phyla Bacteroidota, Actinobacteriota and Myxococcota all increased from spring to autumn (Table S2).
The relative abundances of the genera 1174-901-12, Acidiphilium, Sphingomonas, unclassified_f__Acetobacteraceae, Terriglobus, Granulicella, Pseudomonas and Hymenobacter revealed them to be the dominant bacteria in the phyllosphere of all three bamboo species, and the differences among the top eight genera by total abundance were calculated. In spring, the relative abundances of 1174-901-12, Acidiphilium and Terriglobus were highest in the Y. lineolate phyllosphere. The relative abundances of Hymenobacter and Sphingomonas were highest in the F. ferax phyllosphere, but the opposite pattern was observed for Granulicella (Fig. 2b; Table S2). In autumn, the lowest relative abundances of Acidiphilium and Granulicella were found in the F. ferax phyllosphere, and those of Hymenobacter and Sphingomonas in the Y. lineolate phyllosphere (Fig. 2b; Table S2). For all three bamboo species, similar relative abundance patterns were found for the seven most dominant bacterial genera in each phyllosphere. The relative abundances of 1174-901-12, Acidiphilium, Hymenobacter, Granulicella and Terriglobus decreased from spring to autumn but increased for Sphingomonas and Pseudomonas (Fig. 2b; Table S2).
The influence of season and bamboo species on phyllosphere bacterial diversity

The Sobs and Shannon indices for the bacterial community in the F. ferax phyllosphere were higher than those of A. spanostachya and Y. lineolate in both seasons (Fig. 3; Table S3). However, in spring the Sobs and Shannon indices in Y. lineolate were lower than those of F. ferax and A. spanostachya. The phyllosphere bacterial community of A. spanostachya showed the lowest Sobs and Shannon indices in autumn (Fig. 3; Table S3). These indices were also found to increase significantly from spring to autumn in all bamboo species (Fig. 3).
The results of the PCoA revealed that samples clustered by species and season. Samples of A. spanostachya and Y. lineolate also clustered together, away from those of F. ferax in the same season (Fig. 4). Principal coordinate analyses based on unweighted UniFrac differentiated the samples better than those based on weighted UniFrac (Fig. 4). The PERMANOVA revealed significant differences in the phyllosphere bacterial community, based on Bray-Curtis distance matrices, between the three bamboo species in spring and autumn (Table S4).
The influence of ecological factors on the bamboo phyllosphere bacterial community
The results of the Mantel analysis revealed that elevation had the greatest impact on the phyllosphere bacterial community (R = 0.313, P = 0.001). Distance from water, tree height, tree diameter at breast height, shrub coverage, shrub number, mean height of bamboo and mean base diameter of bamboo also had significant impacts on the bacterial community (Table S5).
Spearman's correlation heatmap analysis revealed that the abundances of almost all bacterial phyla were positively correlated with the base diameter of bamboo except Proteobacteria and Actinobacteria, and the abundances of almost all bacterial phyla were negatively correlated with elevation except Proteobacteria, Acidobacteria and WPS-2 (Fig. 5a). Moreover, distance from water was negatively correlated with the abundance of Abditibacteriota, Actinobacteria, Bacteroidota and Deinococcota, but positively correlated with Acidobacteria and WPS-2. The abundance of Bdellovibrionota was positively correlated with tree height, tree diameter at breast height, shrub coverage and the number of shrubs. Mean height of bamboo was positively correlated with Chloroflexi, Fusobacteriota, Myxococcota, SAR324_cladeMarine_group_B, unclassified_k_norank_d_Bacteria and Verrucomicrobiota, but negatively correlated with WPS-2. Tree diameter at breast height was negatively correlated with unclassified_k_norank_d_Bacteria and positively correlated with WPS-2. At the genus level, 1174-901-12 and Acidiphilium were both negatively correlated with tree height, tree diameter at breast height, shrub coverage and shrub number (Fig. 5b). A positive correlation was observed between Sphingomonas and shrub number, tree height and mean base diameter of bamboo. Granulicella showed significant correlations with elevation and distance from water, but negative correlations with mean base diameter of bamboo (Fig. 5b).
Further linear regression showed that elevation, the number of dead bamboo, mean bamboo base diameter, bamboo coverage and tree height had significant impacts on the Sobs and Shannon indices (Table S6). Elevation had the highest explanatory power for the Sobs index, whereas the ecological factor with the highest explanatory power for the Shannon index and the Bray-Curtis distance was the number of dead bamboo (Table S6). Elevation, the number of dead bamboo, mean bamboo base diameter, bamboo coverage, tree height, shrub coverage and the number of shrubs all had significant relationships with the bacterial community structure in the bamboo phyllosphere based on Bray-Curtis distance matrices (Table S6).
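A minimal sketch of a single-factor regression of a diversity index on an ecological factor, of the kind used here to gauge explanatory power, assuming Python with SciPy; the elevation and Shannon values are hypothetical.

```python
# Linear regression of the Shannon index on elevation (toy data).
import numpy as np
from scipy.stats import linregress

elevation = np.array([2500, 2650, 2800, 2950, 3100, 3250, 3400])
shannon_index = np.array([4.8, 4.6, 4.5, 4.2, 4.1, 3.9, 3.7])

fit = linregress(elevation, shannon_index)
print(f"slope = {fit.slope:.4f}, R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```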
PICRUSt gene function estimation
All three bamboo phyllosphere bacterial communities had similar COG function classification patterns, as generated by PICRUSt, in both spring and autumn (Fig. S2). Sequences related to cell wall/membrane/envelope biogenesis and to amino acid transport and metabolism showed higher relative abundances.
Interestingly, the relative abundance of the transporter function (4.17%) was the highest in all three bamboo species, followed by ABC transporters (2.55%), DNA repair and recombination proteins (2.37%) and two-component systems (2.23%). PICRUSt identified 22 categorizable dominant pathways (relative abundance > 1%) at level 3 of the KEGG hierarchy (Fig. 6; Table S7). Among them, nine pathways showed significant differences between bamboo species (P < 0.05). It is worth noting that there were significant differences in the relative abundances of the bacterial secretion system, the secretion system and the two-component system. Oxidative phosphorylation in energy metabolism and porphyrin and chlorophyll metabolism also differed, with the relative gene abundances for these functions being significantly lower in F. ferax than in the other two species (Fig. 6; Table S7).
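The original text does not state which test produced these between-species P values; as one plausible choice, a Kruskal-Wallis test on per-sample pathway abundances is sketched below (Python with SciPy; the abundances are hypothetical).

```python
# Test whether one predicted level-3 KEGG pathway differs among the three species.
from scipy.stats import kruskal

f_ferax        = [2.1, 2.0, 2.2, 1.9]   # toy relative abundances (%)
a_spanostachya = [2.6, 2.5, 2.7, 2.4]
y_lineolate    = [2.5, 2.6, 2.4, 2.7]

stat, p = kruskal(f_ferax, a_spanostachya, y_lineolate)
print(f"H = {stat:.2f}, p = {p:.4f}")
```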
Discussion
This study revealed that the bacterial OTU richness of the three bamboo species in autumn was significantly higher than that in spring. The phyllosphere of F. ferax harboured a greater diversity of bacterial OTUs than that of A. spanostachya and Y. lineolate. The dominant phyllosphere bacteria of the three bamboo species were Proteobacteria, Acidobacteria, Bacteroidota and Actinobacteria. Under the warmer and more humid climate of autumn, the diversity and richness of phyllosphere bacteria in the three bamboo species were significantly higher than in spring. The overall importance of seasonality to the structure and composition of the phyllosphere microbial community has been confirmed by many studies (Thompson et al., 1993; Copeland et al., 2015). Proteobacteria, Acidobacteria, Bacteroidota and Actinobacteria are often detected in a variety of forests, indicating that these organisms have a wide ecological range and an ability to adapt to many environments (Isabelle et al., 2016; Feng et al., 2019). In this study, the relative abundance of Proteobacteria in all three bamboo species was above 60%, and the differences between the bamboo species were not significant, indicating that Proteobacteria played a dominant role in the phyllosphere microbial community. Changes in this community may in turn affect the health of the staple-food bamboo foraged by giant pandas around the Xiaoxiangling mountains. In spring, the relative abundance of Acidobacteria in F. ferax (15.29%) was significantly lower than that in A. spanostachya (21.16%) and Y. lineolate (25.67%). Actinobacteria (actinomycetes) are gram-positive bacteria that can decompose cellulose and lignin (Taibi et al., 2012).
The bacterial diversity and richness of all three bamboo species in autumn were significantly higher than in spring, similar to the result of Zheng and colleagues (2011), who found that the phyllosphere microbial community of Pinus tabulaeformis varied significantly between seasons, with the greatest diversity and abundance in autumn, followed by summer, and the least in spring. The higher temperature and humidity in summer and autumn contribute to higher diversity and richness than in spring, while the higher altitude and longer period of low temperature in winter lead to lower diversity and abundance (Isabelle et al., 2016). Airborne microorganisms can settle directly onto the leaf surface, and their diversity and density may vary with time (daily and seasonal patterns) and with other environmental events (Liu et al., 2020). In addition, agricultural practices such as harvesting and planting also affect the movement of airborne microorganisms via disturbance of the leaf surface, precipitation or rain splash, and soil contamination (Redford et al., 2009).
The Mantel test found that elevation, distance from water, tree diameter at breast height, mean bamboo height, mean bamboo base diameter, tree height, shrub coverage and the number of shrubs all significantly affected the phyllosphere microbial community (Table S5). Jackson and colleagues (2006) found that changes in the phyllosphere bacterial community of resurrection ferns were related to rainfall and humidity. Similarly, Laforest and colleagues (2016) found that host species, habitat and climate (average summer temperature and precipitation) drove phyllosphere bacterial community structure in temperate trees. In this study, elevation had the strongest relationship with the phyllosphere microbial community. Elevation was significantly and positively correlated with the abundance of some bacterial phyla, such as Proteobacteria (Fig. 5). However, with increasing elevation, both the Shannon and Sobs indices declined (Fig. S6). Higher elevations generally have fewer and more widely distributed water sources, and lower temperatures can limit the fluidity of microbial cell membranes and proteins, conditions that are not conducive to microbial reproduction and growth (Zhang et al., 2014).
Changes to the phyllosphere microbial community could affect the degradation and absorption of plant nutrients and the metabolism of enzymes (Fazal et al., 2021). In this study, we used PICRUSt to predict the functions of the phyllosphere bacteria of the three bamboo species foraged by giant pandas. Our data show that the gene function spectrum of the phyllosphere bacteria of F. ferax was significantly different from that of the other two bamboo species. The relative abundances of gene types in the membrane transport secretion system, signal transduction and oxidative phosphorylation metabolism at the third level of the KEGG pathway were lower for F. ferax than for the other two bamboo species (Fig. 6). This indicates that the low-altitude F. ferax may need less energy and protein to maintain the physiological activities of some phyllosphere bacteria, while the high-altitude A. spanostachya and Y. lineolate may require more material and energy to adapt to the colder, lower-oxygen environment. It is worth noting that transporters and ABC transporters were the most highly represented pathways within membrane transport at level 2. Hamana and colleagues (2012) revealed that ABC transporters can protect animals against toxic substances. In addition, genes related to replication and repair may help reduce damage to biomolecules and may help bamboo adapt to high-altitude environments. However, our results are based only on predicted metagenomes and do not represent the actual functions of the phyllosphere bacteria.
Food microbes can affect the gut microbes of animals (Kohl and Dearing 2014; Kohl et al. 2016). Lei and colleagues (2020) found significant associations of certain bacteria and fungi between bamboo and the gut of the giant panda. The diversity of bamboo bacteria was also positively correlated with that of gut bacteria in the giant panda. Giant pandas prefer to consume bamboo that grows naturally at high altitudes, probably because the total number of endophytic bacteria at high altitudes tends to be lower (Helander et al., 2013). Much work is still needed to fully understand the relationship between the food microbes and gut microbes of pandas. To explore the relevance of these genes to the environmental adaptability of giant pandas and bamboo, further research should directly sequence the metagenome of the phyllosphere microbial community and determine whether it contains specific enzymes related to digestion in giant pandas.
Conclusions
In this study, high-throughput sequencing was used to explore the factors that influence phyllosphere bacterial composition in three bamboo species (A. spanostachya, Y. lineolate and F. ferax) foraged by giant pandas in different seasons (spring vs. autumn) in Liziping National Nature Reserve (Liziping NR), China. Our findings suggest that the diversity of F. ferax phyllosphere bacteria was greater than that of the other two bamboo species in both seasons, indicating that low altitude promotes bamboo phyllosphere microbial richness and diversity. In addition, phyllosphere bacterial diversity was significantly higher in autumn than in spring, implying that season-related changes in environmental factors (e.g., temperature and moisture) may influence bacterial communities. In autumn, giant pandas prefer bamboo at higher altitudes, which has lower abundance and diversity of phyllosphere bacteria; lower abundance and diversity of phyllosphere bacteria were also found at lower altitudes in spring than in autumn. The giant panda's preference for bamboo with lower phyllosphere bacterial abundance and diversity may therefore be attributable to such bamboo being nutrient-rich and growing in a suitable climate, and may also be considered an adaptation that partially meets the higher energy requirements needed for survival in the harsh cold and hypoxic environment. These findings could provide a reference for the restoration and management of giant panda habitat and food resources in this area, especially for the small isolated giant panda populations in the Xiaoxiangling mountains.
Declarations

Acknowledgements
Thanks to all the students who assisted with data collection and the experiments.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
|
v3-fos-license
|
2017-06-05T10:29:29.020Z
|
2006-10-01T00:00:00.000
|
42696607
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/bor/v20n4/14.pdf",
"pdf_hash": "87fe35f57439f825798c0002f34b831301dfca09",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:689",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "87fe35f57439f825798c0002f34b831301dfca09",
"year": 2006
}
|
pes2o/s2orc
|
Association between clinical parameters and the presence of active caries lesions in first permanent molars
The aim of the present study was to evaluate the association between clinical parameters and the presence of active caries lesions on the occlusal surface of first permanent molars. Forty-eight children (5.8-13.8 years old) with at least one first permanent molar present were selected. The clinical parameters evaluated were gender, age, DMF-T and dmf-t, presence of active white spots in other teeth, general plaque index, the tooth's dental arch (upper or lower), the tooth's side (right or left), presence of visible plaque and eruption degree of the first permanent molars. The first permanent molars were evaluated through visual inspection by two examiners in order to assess the presence of active or inactive caries lesions on the occlusal surface. Univariate and multivariate analyses were performed to determine the association between clinical parameters and the presence of active caries lesions in these teeth. The presence of active white spots in other teeth was associated with the presence of active caries lesions in the first permanent molars in both the univariate and multivariate analyses (Odds ratio = 8.8 and 1.9, respectively). The presence of abundant visible plaque on the occlusal surface of the first permanent molars (Odds ratio = 3.5 in the univariate analysis and 3.9 in the multivariate one) also showed a significant association. In conclusion, the presence of active white spots in other teeth and the presence of considerable visible plaque were associated with the presence of active caries lesions on the occlusal surfaces of first permanent molars.
INTRODUCTION
The diagnosis of dental caries has traditionally been limited to the detection of caries lesions, classifying them on physical criteria such as size and presence of cavitation.18 However, since dental caries is a highly dynamic process, the diagnosis of caries activity is essential for a correct treatment decision.9,17,18 The assessment of caries activity comprises the evaluation of etiological factors associated with the clinical examination of caries lesions.2 Bacterial plaque, the tooth and other biological factors act as determining factors of dental caries, along with several behavioral and socioeconomic factors that influence caries development.11 Regarding the assessment of caries lesion activity, longitudinal studies could evaluate the progression of lesions. Nevertheless, clinical changes assessed by visual inspection would only be detected after two or three years, and, for ethical reasons, dentists could not leave caries lesions without intervention during this period. Therefore, many authors have proposed diagnostic criteria based on a single visual inspection for distinguishing active and arrested caries lesions.9,10,20 A set of clinical diagnostic criteria to assess caries lesion activity has been proposed with good reliability,20 and it presented construct and predictive validity in previous studies.19 However, there are some difficulties involved in using this visual score system in clinical practice. For that particular reason, examiners should be extensively trained to achieve good reliability in distinguishing active and arrested caries lesions,10,20 because visual inspection is a qualitative and subjective method. Thus, evaluation of some clinical parameters associated with caries lesion activity could help clinicians perform a correct examination.
The teeth most susceptible to dental caries are the first permanent molars. These teeth have a long eruption time, and dentists must be careful during this period.8 They should also be able to perform an accurate assessment of caries lesion activity in these teeth, which is crucial to establish correct management and preventive approaches. Thus, the aim of this study was to evaluate the association between several clinical parameters and the presence of active caries lesions, assessed by visual inspection, on the occlusal surface of first permanent molars.
MATERIAL AND METHODS
The Ethical Committee of the São Leopoldo Mandic Dental College, Campinas, Brazil, approved the study. Informed consent forms were signed by the patients' parents.
Examiners' training
Prior to the clinical examinations, two examiners (MBS and JMQ) were trained for two weeks to perform visual inspection according to the criteria described by Nyvad et al.20 (1999). The training was performed with practical exercises using pictures of representative teeth for each visual score. Thereafter, examinations were performed in five children until the two examiners reached a consensus.
Sample selection and clinical examination
Forty-eight children (range = 5.8-13.8 years old), living in Araras, Brazil, who had at least one erupted first permanent molar, participated in this study.
The examinations were carried out in a conventional dental chair under standard illumination. The children attended twice for examination. During the first examination, one examiner used the simplified plaque index13 in order to obtain an overview of the oral hygiene condition. The presence of visible plaque on the occlusal surfaces of the first permanent molars was detected using previously described criteria.9 The teeth were then cleaned using a rotating bristle brush with a pumice/water slurry and rinsed with water. After that, the examiner evaluated the DMF-T and dmf-t for each patient, considering as decayed only the cavitated teeth. The presence of active white spots in other teeth was also recorded.20 The eruption degree of the first permanent molars was recorded according to the index described by Ekstrand et al.8 (2003).
In the second examination, the first permanent molars were cleaned again, and two examiners performed a visual inspection of their occlusal surfaces according to the visual score system proposed in a previous study.20 The examiners were instructed to analyse each tooth independently.
During the examinations, all subjects were positioned in a dental unit and were examined using an operating light, a 3-in-1 syringe, cotton rolls, a plane buccal mirror and a non-sharpened explorer.
First permanent molars with restorations or fissure sealants, hypoplastic pits, an advanced degree of fluorosis, frank occlusal cavitation or large carious lesions on smooth or proximal surfaces were excluded. The total number of first permanent molars included in the study was 151.
Statistical analysis
Ten children were examined twice by one examiner with an interval of one week between examinations to assess the intra-examiner reliability of the clinical parameters. Cohen's kappa coefficient6 was used to assess intra-examiner reliability, and the values were classified according to the interpretation proposed in an earlier study.16 Inter-examiner reproducibility of the visual inspection of the first permanent molars was also assessed using Cohen's kappa coefficient at the tooth level.6 Reliability was calculated using all seven scores of the visual index. Thereafter, reproducibility was calculated by comparing categories as follows: sound teeth (score 0) vs. carious teeth (scores 1 to 6); and sound teeth and teeth with inactive caries lesions (scores 0, 4, 5 and 6) vs. active caries lesions (scores 1, 2 and 3).
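For illustration, a minimal sketch of these reliability calculations is shown below, assuming Python with scikit-learn; the paired score vectors are hypothetical, not the study data.

```python
# Cohen's kappa on the full seven-score index and after dichotomizing (sound vs. carious).
from sklearn.metrics import cohen_kappa_score

examiner_1 = [0, 1, 2, 3, 0, 4, 5, 6, 2, 0]
examiner_2 = [0, 1, 2, 2, 0, 4, 5, 6, 3, 0]

kappa_full = cohen_kappa_score(examiner_1, examiner_2)

dich_1 = [0 if s == 0 else 1 for s in examiner_1]   # sound (0) vs. carious (1-6)
dich_2 = [0 if s == 0 else 1 for s in examiner_2]
kappa_dich = cohen_kappa_score(dich_1, dich_2)

print(round(kappa_full, 3), round(kappa_dich, 3))
```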
A univariate analysis was performed to assess the association between the studied clinical parameters - age, gender, general plaque index, visible plaque on the occlusal surface of the first permanent molars, eruption degree of the first permanent molars, arch position of the tooth (upper or lower, right or left), active white spot caries lesions in other teeth, and DMF-T and dmf-t - and the presence of active caries lesions clinically assessed on the occlusal surface of the first permanent molars. Odds ratios (OR) and 95% confidence intervals (CI) were calculated. Significance was determined using the chi-square test or Fisher's exact test. For all comparisons, the significance level was set at p < 0.05.
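A minimal sketch of this univariate step for one parameter, assuming Python with NumPy and SciPy; the 2x2 counts are hypothetical.

```python
# Odds ratio with a Wald 95% CI, plus chi-square and Fisher's exact tests (toy 2x2 table).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# rows: parameter present / absent; columns: active lesion present / absent
table = np.array([[30, 10],
                  [15, 45]])

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

chi2, p_chi2, _, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"chi-square p = {p_chi2:.4f}, Fisher p = {p_fisher:.4f}")
```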
A backward logistic regression model was also developed to assess the association between the same clinical parameters and caries lesion activity in the first permanent molars. The significance level for entry into the model was set at p < 0.05. For the included variables, OR and 95% CI were calculated. These analyses considered only the teeth for which the diagnoses obtained by visual inspection by the two examiners coincided.
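A minimal sketch of backward elimination for a logistic model is given below, assuming Python with statsmodels; the predictors and outcome are simulated stand-ins, not the study data, and the variable names are illustrative only.

```python
# Backward-elimination logistic regression: drop the least significant predictor
# until every remaining predictor has p < 0.05, then report odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.integers(0, 2, size=(150, 3)).astype(float),
                 columns=["active_ws_other_teeth", "abundant_plaque", "lower_arch"])
y = (0.8 * X["active_ws_other_teeth"] + 0.8 * X["abundant_plaque"]
     + rng.normal(0, 1.0, 150) > 0.8).astype(int)

cols = list(X.columns)
while True:
    model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05 or len(cols) == 1:
        break
    cols.remove(pvals.idxmax())                      # eliminate the weakest predictor

print(np.exp(model.params.drop("const")))            # odds ratios of retained variables
print(np.exp(model.conf_int().drop("const")))        # 95% CIs for those odds ratios
```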
RESULTS
The intra-examiner reproducibility of the clinical parameters reached substantial or almost perfect agreement. The Cohen's kappa values ranged from 0.611 (presence of visible plaque on the occlusal surface of first permanent molars) to 0.917 (eruption degree of the first permanent molars) (Table 1). Regarding the visual inspection, the examiners achieved an inter-examiner reproducibility of 0.776 (substantial). When the visual score system was divided into sound teeth vs. carious teeth, the agreement was almost perfect (0.932). When the score system was divided into sound teeth and inactive caries lesions vs. active caries lesions, the reliability value was 0.755 (substantial) (Table 1).
In the univariate analysis, a significant association was observed between the presence of at least one active white spot in another tooth and the presence of active caries lesions on the occlusal surface of the first permanent molars; the OR was 8.80 (p < 0.001) (Table 2). Abundant visible plaque on the occlusal surface of the first permanent molar was another significantly associated clinical parameter (OR = 3.54, p < 0.05) (Table 2). The remaining clinical parameters did not present significant associations with active caries lesions in the first permanent molars (Table 2).
In the multivariate analysis, the backward-elimination logistic regression retained two variables associated with active caries lesions in the first permanent molars: the presence of active white spots in other teeth (OR = 1.92; p < 0.001) and abundant visible plaque accumulation on the occlusal surface of the first permanent molars (OR = 3.88; p < 0.05) (Table 3). None of the remaining clinical parameters was retained in the regression model.
DISCUSSION
Evaluation of caries lesion activity is more important than caries detection.17 However, clinical examination of caries lesion activity is difficult because visual inspection is a subjective method, and an assessment of etiological factors could facilitate this evaluation. In the present study, the association between some clinical parameters and the presence of active caries lesions was investigated.
Caries lesion activity was evaluated by assessing the clinical characteristics of the lesions in a single examination. For this purpose, we used the diagnostic criteria system proposed by Nyvad et al.20 (1999). This system has presented good inter- and intra-examiner agreement.20 Moreover, despite the impossibility of an appropriate gold standard, this method presented construct and predictive validity.19 In order to minimize the subjectivity of the clinical examination in the present study, the examiners were trained. The inter-examiner reproducibility of the visual inspection in the present study was substantial or almost perfect, in agreement with a previous study.20 Furthermore, only the coincident results of the two examiners were considered in the analyses. Nevertheless, the difficulty in evaluating the actual caries lesion activity was a limitation of the present study.
Concerning the association between clinical parameters and the presence of active caries lesions in the first permanent molars, there was no significant association between the presence of active caries lesions and the general plaque index in the present work. Studies involving the plaque index and caries lesions have presented controversial results.1,15 It seems that dental caries prevention is more related to the widespread use of fluoride toothpastes than to plaque removal.3 On the other hand, there was a significant association between the presence of abundant visible plaque on the occlusal surfaces of the first permanent molars and the presence of active caries lesions in these teeth. Since caries lesions develop precisely under bacterial plaque,11,21 the association between the presence of active caries lesions on the occlusal surfaces of first permanent molars and the presence of visible plaque on the same sites is more understandable than that with the general plaque index.
These facts are in agreement with the results of a previous study,12 but they contrast with the results of another study.9 These disagreements could be explained by the different statistical approaches used in those studies. In addition, the assessment of caries lesion activity in the latter study was performed using a different diagnostic system, mainly based on color changes of arrested caries lesions,9 although color evaluation must not be used as the sole indicator of caries lesion activity.18 Some authors have asserted that the first permanent molars are more susceptible to developing caries lesions during the eruption period.4,8,14 It has been observed that partially erupted first permanent molars had more abundant visible plaque and a greater proportion of active caries lesions than fully erupted ones.4 However, in the present study, there was no significant association between the eruption degree of the first permanent molars and the presence of active caries lesions in these teeth. The presence of abundant visible plaque is probably more important for the development of dental caries than the eruption degree of the teeth. Thus, children with partially erupted molars could compensate for this effect with efficient plaque removal. These facts could explain the significant association with visible plaque but not with eruption degree. In fact, careful plaque removal in partially erupted first permanent molars was able to arrest occlusal caries lesions.5 The presence of active caries lesions in first permanent molars and the presence of active white spots in other teeth were also associated. In contrast, DMF-T and dmf-t did not show a significant association. While activity reflects the dynamic nature of dental caries, DMF-T and dmf-t correspond to past and present caries experience. Therefore, it is understandable that the presence of active caries lesions in other teeth is more related to the presence of active caries lesions in first permanent molars than DMF-T and dmf-t are. Several studies have shown that past caries experience is a significant caries risk factor.15 The majority of these studies, however, considered cavitated caries lesions regardless of the activity of these lesions. In fact, in the present research, there was a significant association between dmf-t/DMF-T and the presence of caries lesions in the first permanent molars, independently of activity (OR = 2.09; 95% CI: 0.95-4.64; p < 0.05; unpublished data).
Evaluation of these clinical parameters could aid dentists in assessing caries lesion activity in first permanent molars. However, the cross-sectional design used in this study has limitations, because it does not allow a cause-effect relationship to be established.7 Thus, further longitudinal studies should be performed to corroborate the results of the present study.
CONCLUSION
The presence of active white spots in other teeth and the presence of abundant visible plaque were associated with the presence of active caries lesions on the occlusal surfaces of first permanent molars.
TABLE 1 - Reproducibility values of several clinical parameters.
TABLE 3 - Results of backward multivariate logistic regression analysis of several clinical parameters on the presence of active caries lesions assessed by visual inspection on occlusal surfaces of first permanent molars.
TABLE 2 - Results of univariate analysis of clinical parameters and association with the presence of active caries lesions on the occlusal surfaces of first permanent molars.
OR = Odds Ratio. CI = confidence interval. *Significance evaluated by the Chi Square test or Fisher's exact test. ns = no statistically significant association (p > 0.05). WS = active white spots in other teeth.
|
v3-fos-license
|