Fields: added (string), created (string), id (string), metadata (dict), source (string), text (string), version (string)

added: 2018-12-05T01:19:36.003Z
created: 2001-10-10T00:00:00.000
id: 56050623
metadata:
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://aab.copernicus.org/articles/44/489/2001/aab-44-489-2001.pdf",
"pdf_hash": "0e5e5c0309e4d1d43e9fed62f7fa43f724af32a2",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45484",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "0e5e5c0309e4d1d43e9fed62f7fa43f724af32a2",
"year": 2001
}
source: pes2o/s2orc
text:
Heritability of reproductive traits in Asturiana de los Valles beef cattle breed
Heritability was estimated for four reproductive traits in the Asturiana de los Valles breed in order to evaluate the possibility of including this information in the breed's current improvement program. The estimations were done using an animal model except for calving ease score. For this last trait, a threshold model under a sire model was fitted, with the sire effect as the only random effect in the model besides the residual. Estimated heritabilities for calving interval, age at first calving and gestation length, as dam traits, and for calving ease were 0.12, 0.27, 0.15 and 0.42, respectively. The estimated heritabilities for calving interval and age at first calving could justify a sire selection program in the Asturiana de los Valles breed taking into account their female offspring's reproductive characteristics. Heritability estimates for gestation length and calving ease suggested a close genetic relationship between these two traits and birth weight. Further research is needed to estimate the genetic (co)variances between these three traits to allow the use of this information in a breed improvement program to reduce dystocia without affecting preweaning growth traits.
Introduction
Selective breeding in beef cattle has focused on increasing animals' growth rates. However, whatever the production system, reproductive traits appear to be the most economically important in a beef cattle improvement program. In fact, reproductive traits dramatically affect productivity. Nevertheless, no suitable selection criterion exists due to the difficulty of finding easily measurable traits under paddock mating (the most frequent case in beef cattle) which are genetically related to reproduction. Asturiana de los Valles is a Spanish beef cattle breed, exploited mostly under traditional and semi-extensive conditions in the north of Spain (CANON et al., 1994). The presence in the population of a high number of dams presenting the muscular hypertrophy syndrome has increased the breeders' interest in maintaining good maternal characteristics. The breeders' association (ASEAVA) has defined four major reproductive traits: calving interval, age at first calving, gestation length and calving ease. Calving interval is the trait chosen to measure cows' fertility. Although the calving date trait (BOURDON and BRINKS, 1982) could be a better measure of fertility because it has a clearer economic significance and higher heritability, this trait has not been selected by ASEAVA. In the Asturiana de los Valles breed, calvings are uniformly distributed throughout the year (GOYACHE et al., 1995). In this case, the calving date trait does not show better characteristics than calving interval (MacGREGOR and CASEY, 1998). Age at first calving is a crucial trait for the cow's reproductive performance. There seems to be a high correlation between the age at first calving and the age at subsequent calvings, as well as between the age at calving and the interval between subsequent calvings. In consequence, it does not seem possible to compensate for a late first calving with short intervals between calvings (MICHAUX et al., 1987). Gestation length has been proposed as a selection goal to reduce birth weight without affecting preweaning growth traits (BOURDON and BRINKS, 1982). Gestation length is expected to show a high genetic determination, with moderate to high heritability (ANDERSEN and PLUM, 1965), and high genetic correlations with birth weight (BOURDON and BRINKS, 1982) and dystocia (NADARAJAH et al., 1989). Gestation length has been included in some sire selection indices (AMER et al., 1998). Calving difficulty dramatically affects economic performance. Dystocic calvings influence calf survival, culling and fertility rates, and the need for veterinary assistance (MEIJERING, 1984). It is difficult to estimate the influence of the different environmental and genetic factors on calving ease due to the subjectivity of recording and the lack of a linear relationship between the effects involved. It is usually accepted that reproductive trait heritabilities range between 0.03 and 0.05 (FREEMAN, 1984). Nevertheless, the available information is limited and cannot always be compared between different production systems: a long time interval is needed to record data, and the traits included in the breeding goal are not always the same. The objective of this paper is to estimate the genetic parameters for the major reproductive traits in the Asturiana de los Valles breed in order to evaluate the possibility of including this information in the breed's current improvement program.
Material and methods
The Regional Government of Principado de Asturias, through the Asturiana de los Valles Breeders Association (ASEAVA), has implemented performance recording based on nuclei grouping farms according to their proximity and their production system. Only single calving records including calf sex and calving number were considered. Animals with identification errors or ambiguous birth dates were eliminated. Records beyond 3.0 standard deviations from the mean were deleted for calving interval, age at first calving and gestation length. Calving ease was recorded using BIF Guidelines with the following scores: 1 (no assistance), 2 (minor assistance), 3 (hard assistance), 4 (caesarean section) and 5 (abnormal presentation). Score 5 was not considered for the estimation of the genetic parameters. Genetic parameters affecting calving interval, age at first calving and gestation length, considered as cow traits, were analysed with Meyer's DF-REML program (1991) under an animal model. The program was re-started with different a priori values of the genetic parameters to avoid confusion arising from the possible existence of local maxima. The structure of the data for the three traits is shown in Table 1. The fitted model included four fixed effects for calving interval and gestation length: management group by year of calving as a comparison group, season of calving, calving number and sex of calf. Sex of calf did not show a significant influence on age at first calving; consequently, sex of calf was not included in the fitted model for this trait. The random effects included the additive genetic effect, whose variance-covariance matrix is proportional to the additive numerator relationship matrix, the maternal permanent environmental effect for calving interval and gestation length, and the residual.
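For readers unfamiliar with the additive numerator relationship matrix mentioned above, a minimal sketch of Henderson's tabular method for building it from a pedigree is given below. This is illustrative Python only, not the DF-REML code used in the paper, and the animal coding convention (parents before offspring, 0 for unknown parents) is our assumption:

```python
import numpy as np

def numerator_relationship_matrix(parents):
    """parents[i] = (sire, dam) of animal i+1; 0 means unknown parent.
    Animals must be coded 1..n in birth order (parents before offspring)."""
    n = len(parents)
    A = np.zeros((n + 1, n + 1))          # row/col 0 is a dummy unknown parent
    for i, (s, d) in enumerate(parents, start=1):
        A[i, i] = 1.0 + 0.5 * A[s, d]     # diagonal: 1 + inbreeding coefficient
        for j in range(1, i):
            A[i, j] = A[j, i] = 0.5 * (A[j, s] + A[j, d])
    return A[1:, 1:]

# Example: animals 1 and 2 are unrelated base animals; 3 is their offspring.
print(numerator_relationship_matrix([(0, 0), (0, 0), (1, 2)]))
```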
Calving ease score is a categorical trait (Table 2), and the usual linear model procedures for estimating heritability would not be appropriate. Consequently, a threshold model under a sire model was fitted to analyse the genetic parameters affecting calving ease. The sire effect was the only random effect in the model besides the residual. The structure of the data prevented the fitting of a sire-maternal grandsire model, which would have eliminated 70% of the dams with records. The runs were carried out with the program written by MISZTAL et al. (1988), which uses an EM algorithm to solve REML. The fitted model included the same four fixed effects described above.
Results and Discussion
Calving interval and age at first calving
Estimates of the genetic and environmental parameters are shown in Table 3. Estimated heritabilities for age at first calving and calving interval in the Asturiana de los Valles breed were 0.27 and 0.12, respectively. These estimates are higher than those usually found in the literature. Nevertheless, the available papers are scarce and usually based on small samples. The mean heritability for calving interval of cows calculated from 4 published papers was 0.10 (KOOTS et al., 1994). When each of these estimates was weighted by the inverse of its sampling variance, the mean heritability was 0.01. The same authors, from 7 published estimates for heifers, calculated unweighted and weighted heritability means of 0.09 and 0.06, respectively. For age at first calving and 7 published estimates, the calculated unweighted and weighted heritability means were 0.14 and 0.06. The low heritabilities estimated for these traits could be explained by: 1) the small number of animals available for the estimations, 2) the existence of a very important environmental influence on these traits, 3) the decrease of genetic variability coming from the culling policy, which essentially affects non-regular cows, 4) the need for better adjustment of fixed effects, 5) failure to consider the influence of some other reproductive traits (gestation length or days open) on calving interval and age at first calving, and 6) the use of fitted models that cannot sufficiently explain the population structure (HANSET et al., 1989; LOPEZ de TORRE and BRINKS, 1991; REGE and FAMULA, 1993; HAILE-MARIAM and KASSA-MERSHA, 1994). The Asturiana de los Valles breed heritability estimates have been calculated using an animal model. The animal model takes into account all animals' relationships back to the base population, so that it can obtain a better estimate of the additive genetic variance. HAILE-MARIAM and KASSA-MERSHA (1994), in Boran cattle exploited in tropical conditions, obtained heritabilities of 0.07 and 0.04 for age at first calving and calving interval, respectively, using REML under an animal model. BRAGA LOBO (1998), using an animal model in zebu cows, estimated heritabilities of 0.14±0.01 and 0.29±0.09 for calving interval and age at first calving, respectively. The heritabilities reported by these authors are in close agreement with those estimated in the present paper. On the other hand, there was a relatively large phenotypic variance for both traits in the current database. For calving interval, this situation allows the analysis of data from cows that become pregnant 10-11 months after the previous calving. Under these conditions it is possible to find higher heritability values for reproductive traits (LOPEZ de TORRE and BRINKS, 1991).
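As an aside, the inverse-variance weighting used by KOOTS et al. (1994) is simply a precision-weighted mean, as in the following sketch (the numbers are hypothetical, for illustration only):

```python
# Inverse-variance weighted mean of heritability estimates: precise
# (low sampling variance) studies dominate the pooled value.
def weighted_mean(estimates, sampling_vars):
    weights = [1.0 / v for v in sampling_vars]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

h2 = [0.05, 0.08, 0.12, 0.15]       # hypothetical published h2 estimates
se2 = [0.001, 0.004, 0.010, 0.020]  # hypothetical sampling variances
print(weighted_mean(h2, se2))       # pulled toward the low-variance estimates
```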
Gestation length
Estimated heritability in the Asturiana de los Valles breed for gestation length was 0.15, with a low permanent environmental effect (0.01). This result contrasts with the published range for gestation length heritability of between 0.25 and 0.50 (ANDERSEN and PLUM, 1965). NADARAJAH et al. (1989) in Canadian Holstein cattle and WRAY et al. (1987) in American Simmental cattle estimated direct heritabilities of 0.33 and 0.374, respectively, with a sire-maternal grandsire model. There are no available papers using an animal model to estimate genetic parameters of gestation length. Usually, genetic parameters of gestation length have been calculated under a sire or sire-maternal grandsire model (AZZAM and NIELSEN, 1987; NADARAJAH et al., 1989) assuming a zero covariance between direct and maternal genetic effects. These models could lead to an overestimation of the direct and maternal genetic variances if the covariance is moderate to high and negative (MEYER, 1994).
In addition, most papers treated gestation length as a calf trait, confounded with birth weight as a genetic character (BOURDON and BRINKS, 1982). Nevertheless, the present results are consistent with those obtained considering gestation length as a dam trait. De FRIES et al. (1959) obtained a heritability of 0.19 with a permanent environment value of 0.02; BOURDON and BRINKS (1982) obtained a gestation length repeatability of 0.20; SAPA et al. (1992) found a heritability of 0.21 in Charolais, Limousin and Blond d'Aquitaine heifers maintained on station, using a Henderson III method. However, gestation length, as a dam trait, could be related to the maternal genetic effect on birth weight. GUTIERREZ et al. (1997), fitting different models, reported estimates of the maternal genetic effect for birth weight in the Asturiana de los Valles breed ranging from 0.09 to 0.20.
Calving ease
The calculated heritability for the direct genetic effect of calving ease was 0.42. This result is higher than most previous estimates found in the literature. According to PHILIPSSON (1979), the heritability of the calving ease direct genetic effect ranges between 0.03-0.20 for heifers and 0.00-0.08 for adult cows. MEIJERING (1984) reported mean heritabilities for calving ease in heifers of 0.06 and 0.23 for the observable categorical scale and the underlying normal scale, respectively. The mean values for multiparous cows were 0.075 and 0.21, respectively. KOOTS et al. (1994), reviewing more than 70 published estimates in adult cows, reported a mean heritability for the calving ease direct genetic effect of 0.16. Weighting these estimates by the inverse of their sampling variance, the mean heritability was 0.13. The calculated unweighted and weighted heritability means for the same trait in heifers were 0.13 and 0.10, respectively. It is not surprising that the heritability calculated in the current analysis was higher than the values usually found in the literature. Most analyses calculated the calving ease heritability on the observed scale. The usual linear model procedures would underestimate the 'real' heritability on the underlying scale (MEIJERING, 1984; MANFREDI et al., 1991; VARONA et al., 1999). Nevertheless, the heritability calculated in this analysis could be overestimated. The estimate could be biased by fitting the additive genetic effect as the only random effect besides the residual. GUTIERREZ et al. (1997) suggested that models which ignore maternal effects tend to overestimate direct heritability. There could be an important maternal genetic effect affecting calving ease. The magnitude of this maternal genetic effect would be similar to that of the direct genetic effect. KOOTS et al. (1994) reported mean heritability values for the maternal genetic effect of calving ease ranging from 0.09 to 0.12. VARONA et al. (1999), using a threshold model, estimated direct and maternal heritability values of 0.23 and 0.10, respectively. When these authors fitted a linear model, the direct and maternal heritability values were 0.18 and 0.08, respectively. Nevertheless, calving ease would be closely genetically related to birth weight. GUTIERREZ et al. (1997) reported heritability estimates of 0.37 and 0.57 for birth weight in the Asturiana de los Valles breed using a sire and an animal model, respectively, where the direct genetic effect was the only random effect in the model.
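The observed-scale versus underlying-scale issue discussed above is classically handled with the Dempster-Lerner transformation. The sketch below illustrates it; the incidence p of assisted calvings is an assumed value, since the paper does not report one:

```python
from math import exp, pi, sqrt
from statistics import NormalDist

# Dempster-Lerner (1950): convert an observed (0/1 scale) heritability to the
# underlying liability scale. p is the incidence of the category of interest;
# z is the standard normal density at the corresponding threshold.
def liability_h2(h2_observed: float, p: float) -> float:
    t = NormalDist().inv_cdf(1.0 - p)          # liability threshold for incidence p
    z = exp(-0.5 * t * t) / sqrt(2.0 * pi)     # N(0,1) density at the threshold
    return h2_observed * p * (1.0 - p) / (z * z)

print(liability_h2(0.06, 0.15))  # observed h2 of 0.06 with an assumed 15% incidence
```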
Table 1
Means, standard deviations, coefficients of variation and structure of the available data for the estimation of genetic parameters for calving interval, gestation length and age at first calving in the Asturiana de los Valles breed
Table 3
Estimated parameters and standard errors (below) for calving interval, age at first calving, gestation length and calving ease in the Asturiana de los Valles beef cattle breed
version: v3-fos-license

added: 2021-10-17T15:07:12.110Z
created: 2021-10-01T00:00:00.000
id: 239471768
metadata:
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/26/20/6218/pdf",
"pdf_hash": "97fe2abfca3517be953b304b46df7ce55c11ae67",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45485",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "aef2937b90785a84e5a8f960ea7468aad24395be",
"year": 2021
}
source: pes2o/s2orc
text:
Polarisation of Electron Density and Electronic Effects: Revisiting the Carbon–Halogen Bonds
Electronic effects (inductive and mesomeric) are of fundamental importance to understand the reactivity and selectivity of a molecule. In this article, polarisation temperature is used as a principal index to describe how electronic effects propagate in halogeno-alkanes and halogeno-alkenes. It is found that as chain length increases, polarisation temperature decreases. As expected, polarisation is much larger for alkenes than for alkanes. Finally, the polarisation mode of the carbon–fluorine bond is found to be quite different and might explain the unusual reactivity of fluoride compounds.
Introduction
Linus Pauling was one of the most prominent scientists of the 20th century. His contribution to theoretical chemistry is especially linked to his book "The nature of the chemical bond", published in 1939 and cited several million times [1], in which, among many other contributions, he introduced a quantitative estimate of atom electronegativity. The scale he developed was based on thermodynamical data, and it is still used for the semi-quantitative analysis of bonds. It must be underlined that in this usual model, electronegativity scales as the square root of an energy. This concept has been instrumental since its inception in characterizing chemical bonds. Indeed, bonds linking atoms of similar electronegativity will mainly be covalent, while they are expected to be more polar or more ionic when they involve elements with different electronegativity values. It is noteworthy that these concepts of covalence and ionicity of bonds can also be investigated from the valence bond point of view, a theory [2] that was strongly promoted by Pauling himself. Discussing the nature of bonds in a molecule remains a cornerstone in chemical interpretation, and while it can be basically tackled from Pauling's electronegativity perspective, alternative approaches are possible.
Indeed, a few months later, Mulliken introduced another definition [3] for electronegativity, namely the average of the ionization potential (IP) and the electron affinity (EA). This scale has been less used because of the difficulty of experimentally measuring EAs at that time. By contrast with Pauling's approach, within Mulliken's scheme electronegativity scales as an energy and is absolute. Over the years, other electronegativity scales have also been proposed, where it scales either as a force or a potential, or is dimensionless [4][5][6][7]. Some are even fairly recent [8,9]. Therefore, any relationship between all these scales can only be approximate, the fits being in general constrained to give values approaching 4 for fluorine and 2 for hydrogen. A thorough description of these scales is reported in [10]. Indeed, Mulliken's definition is just a numerical approximation (linearisation) of the opposite of the electronic chemical potential µ, defined by Parr et al. in 1978 [11] within the framework of Density Functional Theory (DFT):

µ = (∂E/∂N)_{v(r)}, (1)

where E is the electronic energy, N the total number of electrons and v(r) the external potential. This was the first chemical concept derived from DFT, giving rise to a bunch of indexes and concepts (hardness, linear response function (LRF)...) [12,13], making the so-called Conceptual DFT (C-DFT) a scientific area in itself. As previously mentioned, electronegativity allows the characterization of chemical bonds. We can thus expect various C-DFT descriptors to also be relevant to this purpose. More specifically, we will focus in the present paper on polarisation descriptors, which we recently derived using a time-independent Rayleigh-Schrödinger (RS) perturbation framework, and which we aim to apply to the study of the electronic effects at stake in chemical bonds. Indeed, following his work on electronegativity, Pauling developed the notion of ubiquitous electronic effects (such as inductive and mesomeric ones) in chemical bonds, which are obviously connected to the idea of electron density polarisation as a response to an external perturbation ("what happens in place B when the electronic system is perturbed in place A").
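As a worked illustration of Mulliken's linearisation of −µ, using experimental atomic values for fluorine (the function name is ours):

```python
# Mulliken electronegativity: chi_M = (IP + EA) / 2, a finite-difference
# estimate of -dE/dN at fixed external potential, i.e. -mu.
def mulliken_chi(ip_ev: float, ea_ev: float) -> float:
    return 0.5 * (ip_ev + ea_ev)

# Fluorine: IP ~ 17.42 eV, EA ~ 3.40 eV  ->  chi_M ~ 10.41 eV, mu ~ -10.41 eV
print(mulliken_chi(17.42, 3.40))
```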
As a proof of concept, we have thus decided here to concentrate on halogen-carbon bonds, which cover an interesting span of bonding types, since fluorine is the most electronegative element, whereas iodine features an electronegativity value close to that of carbon (2.66 and 2.55, respectively, on Pauling's scale). Accordingly, depending on the halogen X, some C-X bonds are predicted to rank among the most polarised single bonds in organic chemistry, C-F showing the largest polarisation of all. Intuitively, one would then expect the C-F bond to be the most reactive in the series. Yet, experimentally, C-F bonds are known to be much more inert than the other halogen-carbon bonds, a feature that is also reflected in the bond dissociation enthalpies (115, 84, 72, 58 kcal/mol for the H3C-X bonds, X from F to I) [14]. From our point of view, C-DFT descriptors are hence tools of choice to unravel these different effects and to cast light on polarisation in these particular bonds. To this aim, this paper is built as follows: in the next section, the basics of the description of polarisation within C-DFT are briefly reviewed. The theoretical methods used are then described (Section 3), before an in-depth discussion of the results (Section 4).
Theoretical Background
Conceptual Density Functional Theory is a field of quantum chemistry in which one aims at understanding and rationalizing chemical rules from an electron density perspective [13,15,16]. In the last few years, a great deal of attention has been paid to the static linear response function (LRF) [17][18][19], which was shown to be effective in retrieving fundamental electronic effects such as inductive and mesomeric ones. The LRF is expressed as the first derivative of the electron density ρ with respect to the external potential (as defined in Hohenberg-Kohn theory):

χ(r,r′) = (δρ(r)/δv(r′))_N. (2)

This non-local kernel is to be interpreted as the variation of the electron density at point r when the external potential is changed at another location r′ (and vice versa, since this function is symmetric under the exchange of its own coordinates). Its connection with energy can be safely built using a second-order Taylor expansion of the electronic energy with respect to an infinitesimal change of the external potential at fixed electron number, within the so-called E[N,v] canonical ensemble:

E[N, v + δv] = E_0 + ∫ ρ_0(r) δv(r) dr + (1/2) ∫∫ χ(r,r′) δv(r) δv(r′) dr dr′, (3)

where E_0 denotes the ground-state electronic energy of the unperturbed system. Very recently, this equation has been put forward through a statistical physics analysis of electronic polarisation [20]. Identifying the electronic cloud as the thermodynamic system of interest, the external potential can act as an external energy reservoir, susceptible to exchange both heat and work with the system. Then, it can be shown that the first-order correction to the energy (the first integral in the right-hand side of Equation (3)) can be seen as the work exchanged between the molecule and the perturbation:

δW = ∫ ρ_0(r) δv(r) dr. (4)

Still using the statistical physics perspective, the second-order correction to the energy corresponds to the heat exchange according to:

δQ = δE⁽²⁾ = (1/2) ∫∫ χ(r,r′) δv(r) δv(r′) dr dr′, (5)

which can be interpreted as a polarisation energy. Here, as we consider throughout this paper that no particle exchange occurs with the surroundings (in other words, the number of electrons remains conserved), the polarisation density δρ(r) = ∫ χ(r,r′) δv(r′) dr′ integrates to zero over the space coordinates:

∫ δρ(r) dr = 0. (6)

To evaluate the reshuffling of the electron density, one can instead use the number of electrons that have been shifted by the polarisation induced by the external potential variation:

δN_shifted = (1/2) ∫ |δρ(r)| dr. (7)

In practice, the best way so far to compute the static LRF is through the well-known Berkowitz-Parr formula, which stems from traditional RS perturbation theory [21]:

χ(r,r′) ≈ −2 Σ_{k>0} ρ_k0(r) ρ_k0(r′) / (E_k − E_0), (8)

where ρ_k0(r) is the transition density between the ground state and excited state k (we have here implicitly considered that all involved wavefunctions are real-valued, so that ρ_k0 and ρ_0k are identical), i.e., the product of the ground-state wavefunction by the k-th excited-state wavefunction, integrated over all spin coordinates and over all spatial coordinates but r. E_0 and E_k are the energies of the ground state and of state k, respectively. It can be noticed that within this approximated form the LRF is diagonal. This is not an "exotic" form since, as the LRF is a symmetric kernel, it can always be exactly diagonalised [22,23].
With this at hand, the electron density polarisation and the associated energy [24] can be rewritten as

δρ(r) = −2 Σ_{k>0} c_k ρ_k0(r), with c_k = [∫ ρ_k0(r) δv(r) dr] / (E_k − E_0), (9)

δE⁽²⁾ = −Σ_{k>0} c_k² (E_k − E_0). (10)

In summary, Equation (10) shows that everything goes as if the polarisation energy corresponds to the stabilisation energy the system experiences when the fraction of electron c_k² is promoted into the k-th excited state of the unperturbed system, the (major) fraction of electrons remaining in the ground state being c_0² = 1 − Σ_{k>0} c_k². The set of c_k² with k ∈ (0, 1, 2, ..., ∞) can be seen as the distribution of electrons in the perturbed system within the eigenstates of the unperturbed system. A polarisation spectrum can be defined by the representation of this distribution with respect to the excited-state energies, c_k² = f(E_k) (see more details in our recent papers). A polarisation entropy can also be computed through the well-known Shannon formula:

δS_pol = −k_B Σ_k c_k² ln c_k². (11)

It should be noticed that while the LRF is an intrinsic property of the system, the polarisation density, polarisation entropy and polarisation energy are not, since they depend on the shape, orientation and position of the additional potential. However, these latter quantities can account for the evolution of an electron system when it is submitted to an external perturbation, such as the approach of an electrophile or a nucleophile, which can be simulated by such an additional potential. Moreover, as pointed out by Geerlings and De Proft, the LRF is somewhat cumbersome to deal with since it is a function of two sets of spatial coordinates. Conversely, the electron density polarisation, the number of shifted electrons, the polarisation entropy and the polarisation energy are either local or global quantities, hence much simpler to picture and more practical to use than a fully non-local kernel.
A temperature can also be defined as soon as one can calculate both a heat exchange and an entropy. The polarisation temperature reads:

T_pol = δQ / δS_pol. (12)

As the derivative of one extensive quantity with respect to another, the polarisation temperature is actually an intensive quantity. Therefore, it does not come as a surprise that wherever the external potential perturbation is located, there is a linear relationship between polarisation heat and polarisation entropy, the slope being the polarisation temperature. Contrary to both the polarisation energy and entropy, the polarisation temperature allows a comparison between systems with different numbers of electrons. It may be noted that such a temperature is found to depend only on the magnitude of the perturbation, and not on its position in space; hence this quantity is a global descriptor of the system under study.
Lastly, we will discuss some special formulations of this perturbing external potential. The simplest ones considered here are a uniform static electric field (EF) and a point charge (which can easily be extended to a collection of point charges by a superposition principle). In the first case, the potential associated with a space-independent and time-independent infinitesimal EF, δF_c = δF_c û (where û is a unit vector), is (up to an arbitrary additive constant) δv_F(r) = −δF_c · r. Equation (3) then becomes:

δE = −δF_c û · ∫ ρ_0(r) r dr + (δF_c²/2) ∫∫ χ(r,r′) (û·r)(û·r′) dr dr′. (13)

The first integral in the right-hand side is no more than the electronic part of the molecular dipole moment d_e. For the sake of simplicity, we now choose û along the z axis, so that:

δE = −δF_c d_{e,z} + (δF_c²/2) ∫∫ χ(r,r′) z z′ dr dr′. (14)

The traditional second-order Taylor expansion for the energy with respect to the EF is:

E = E_0 − d_z δF_c − (α/2) δF_c², (15)

where α denotes the relevant component of the molecular polarisability tensor, which thus identifies with the second integral in Equation (14). The link between polarisability and the LRF is then fully established and suggests that polarisability should also be considered when dealing with bond polarity. Finally, we consider an infinitesimal point charge perturbation δq at point R_c, generating an infinitesimal external potential δv_q(r) = δq/|r − R_c|. It is straightforward to see that the first-order correction, the work, is the product of δq by the electronic part of the molecular electrostatic potential. On the other hand, using the definition of the polarisation energy and the Berkowitz-Parr relationship (Equation (10)), one can evaluate the heat exchanged with the surroundings:

δQ = −δq² Σ_{k>0} (1/(E_k − E_0)) ∫∫ [ρ_k0(r)/|r − R_c|] [ρ_k0(r′)/|r′ − R_c|] dr dr′. (16)

It is plain to see that the integrand is the product of one function depending only on r with another depending only on r′, so that the double integral can simply be written as the product of two simple integrals, which are, by symmetry, equal. The last equation thus simplifies to:

δQ = −δq² Σ_{k>0} [∫ ρ_k0(r)/|r − R_c| dr]² / (E_k − E_0). (17)

It immediately follows from this expression that the polarisation energy is always negative, regardless of the sign of the point charge: this electron density polarisation definitely triggers a stabilisation of the electronic system.
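To make Equations (9)-(12) and (17) concrete, here is a minimal numerical sketch for a point-charge perturbation given transition densities sampled on a grid. This is not the authors' Fortran90 code; the array layout, the grid quadrature, and the k_B = 1 and T_pol = δQ/δS_pol conventions are our assumptions:

```python
import numpy as np

def polarisation_descriptors(rho_k0, E_exc, grid_xyz, R_c, dq=0.1, dV=1.0):
    """Polarisation energy, entropy and temperature for a point charge dq at R_c.

    rho_k0:   (n_states, n_grid) transition densities rho_k0(r) on a grid
    E_exc:    (n_states,) excitation energies E_k - E_0 (> 0), atomic units
    grid_xyz: (n_grid, 3) grid point coordinates; dV: grid volume element
    """
    dv = dq / np.linalg.norm(grid_xyz - R_c, axis=1)   # point-charge potential dv_q(r)
    v_k = (rho_k0 * dv).sum(axis=1) * dV               # integrals of rho_k0 * dv
    c_k = v_k / E_exc                                  # first-order coefficients, Eq. (9)
    w = c_k ** 2                                       # fractions promoted to state k
    dQ = -np.sum(w * E_exc)                            # polarisation energy, Eqs. (10)/(17)
    p = np.concatenate(([1.0 - w.sum()], w))           # include ground-state weight c_0^2
    p = p[p > 1e-15]
    dS = -np.sum(p * np.log(p))                        # Shannon entropy, Eq. (11), k_B = 1
    return dQ, dS, dQ / dS                             # temperature as the ratio, Eq. (12)
```

In the perturbative regime (small dq), the summed promoted fractions stay well below 1, so the ground-state weight remains dominant, as the text assumes.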
Materials and Methods
All DFT calculations were performed using orca (rev. 3.0 and 4.0) [25]. Geometry optimisations were carried out without any constraints at the B3LYP/def2-SVP level, and frequency calculations conducted at the same level of theory to ensure no imaginary frequencies were present. The first 50 excited states were then computed under the Tamm-Dancoff approximation (TDA) [26], at the B3LYP/aug-cc-pVTZ level of theory. Large basis sets and diffuse functions are indeed often necessary to correctly model excited states. Conversely, their impact on geometries is often rather small, although they significantly increase computation time. Hence, optimisations with large basis sets become prohibitively long for the largest molecules in our study, suggesting a compromise needed to be found.
To assess the validity of our compromise (small basis set optimisation, large basis set for electronic properties), we ran additional calculations using the aug-cc-pVTZ basis set both for geometry optimisations and excited state computations for the smallest systems under study (carbon chains from 1 to 4 atoms). Satisfactorily, results matched those obtained using mixed basis sets.
Polarisation descriptors (energy, entropy, temperature) were then computed using a home-made Fortran90 program, which is available on request to the authors, using cube files for the transition densities (see Ref. [24] for the calculation details).
The LRF was computed in the so-called frozen molecular orbital approximation (actually corresponding to that of the fictitious non-interacting Kohn-Sham (KS) system) for closed-shell systems, according to eq. 53 in the Geerlings-De Proft review [27]:

χ(r,r′) ≈ 4 Σ_i Σ_b φ_i(r) φ_b(r) φ_i(r′) φ_b(r′) / (ε_i − ε_b),

where φ_i(r) denotes a doubly occupied KS molecular orbital (MO) with energy ε_i, while φ_b(r) is a vacant (i.e., virtual) one with energy ε_b. Atomic and diatomic condensation requires orbital overlaps (see eq. 86 in the previous reference), which were here computed within the framework of Bader's Quantum Theory of Atoms-In-Molecules (QTAIM) [28] by our own implementation in the ADF software [29,30]. For these calculations, the ADF TZ2P Slater-type basis set was used. Atomic polarisabilities in the molecules (also called "distributed polarisabilities") were computed using the procedure developed by Macchi and co-workers [31]. In a nutshell, QTAIM atomic dipole moments were evaluated in the presence of a finite external uniform static EF (with the recommended magnitude equal to 0.050 atomic units) in the six possible directions (x, −x, y, −y, z, −z). The atomic polarisability tensor can then be reconstructed by finite differences. Mean values are finally estimated by taking one third of the trace of this tensor. Such calculations were performed using our own interface between the Gaussian09 [32] and AIMAll [33] packages at the B3LYP/aug-cc-pVTZ level of theory.
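As an illustration of the finite-field step just described, a minimal sketch follows; the atomic_dipole callback is a hypothetical stand-in for the QTAIM condensation performed with Gaussian09/AIMAll:

```python
import numpy as np

F = 0.050  # field magnitude in atomic units, as recommended above

def mean_atomic_polarisability(atomic_dipole):
    """atomic_dipole(field) -> (3,) QTAIM atomic dipole under a uniform field
    (placeholder for the actual electronic-structure + QTAIM workflow)."""
    alpha = np.zeros((3, 3))
    for j in range(3):               # field along x, y, z (and the opposites)
        f = np.zeros(3); f[j] = F
        d_plus = atomic_dipole(+f)
        d_minus = atomic_dipole(-f)
        alpha[:, j] = (d_plus - d_minus) / (2.0 * F)  # central difference
    return np.trace(alpha) / 3.0     # isotropic mean: one third of the trace
```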
Studied Systems
The present investigation aims at exploring the inductive and mesomeric effects in halogeno-alkanes and halogeno-alkenes (see Figure 1). To achieve this goal, a set of linear molecules has been computed. To allow an unbiased comparison, the only halogens considered are fluorine, chlorine and bromine, since for iodine relativistic effects must be included.
To explore the electron donating/withdrawing effects of the halogen upon the carbon backbone, the external potential that polarises the electron density has been located either at the nucleus of the carbon bonded to the halogen or at the halogen itself. Polarisation densities, energies, entropies and shifted electron densities have then been calculated with the potential set up this way, using a 0.1 e perturbing charge. After that, the potential has been successively shifted onto each nucleus of the molecule. It can be noticed that applying the perturbation directly at a nucleus allows one to modify the actual screening of the nuclear charge by the electron density and is also reminiscent of the H* method [34].
Saturated Compounds
We present in Figure 2 the evolution of the polarisation energy and entropy for the various halogeno-alkanes. As already stated, no comparison based upon both δE⁽²⁾ and δS_pol is possible, since these two quantities are extensive. However, it is plain to see from Figure 2 that while the chlorine and bromine derivatives happen to follow the same pattern, the fluorine derivatives seem to follow a different one, especially for the shortest carbon chains. Another important tendency is that, whatever the halogen derivative, both the polarisation energy and entropy converge to similar values as the number of carbons in the backbone increases. The convergence appears to be achieved when the backbone reaches 5 carbons. An interpretation of these two observations will be provided further on.
The polarisation temperatures, represented in Figure 3, are indeed more straightforward to analyze. For each halogen, the polarisation temperatures tend to decrease as the carbon chain length increases. Noticeably, this decrease diminishes along the series, suggesting that temperatures may eventually saturate to a constant value. This is in line with what has been observed for both the polarisation energy and entropy. Some elements can be put forward to account for this. Indeed, the polarisation temperature describes how easily a system may distort its electron density in response to a given perturbation (the lower the temperature, the more polarisable the system is). In principle, perturbation by a negative point charge should result in an electron density displacement away from the location of the point charge, and one may expect that the further away the displaced electron density can go, the lower the electrostatic repulsion. Hence, the larger the system, the lower the temperature. Now, of course, the nature of the chemical system at hand must be taken into account. In the case of saturated compounds, electron density distortion will occur through the σ bond system, ultimately relying on inductive effects. It has long been anticipated that inductive effects strongly diminish along a carbon chain; hence it can be expected on first principles that polarisation effects should reach a plateau. This proposed explanation also rationalizes what has been observed for both the polarisation energy and entropy. This simple interpretation is nicely corroborated by the shape of the electron density reshuffling isosurfaces depicted in Figure 4 (only alkyl chlorides are represented). In the case of the longest carbon chain (d), most of the electron density reorganisation is located within the first three σ bonds from the perturbed nucleus. Actually, the density response barely reaches the fourth and is almost non-existent for the fifth. Figure 4 is quite a nice illustration of the well-known organic chemistry textbook rule: "butyl is futile". Interestingly, a similar conclusion can be drawn by looking at the variation of the QTAIM-condensed linear response kernel between the halogen atom and each carbon atom, χ(X,C), as represented in the left graph of Figure 5. As already noticed by Geerlings, De Proft and collaborators in related compounds, inductive effects decrease quickly with the distance between the two considered atoms, the values for Cl and Br being slightly higher than those for F. Now turning to the results for a given carbon chain length, an additional trend can be delineated. Indeed, we observe that the temperatures for the fluoro derivatives are higher than those of the bromo and chloro compounds. In fact, it appears that the heavier the halogen, the lower the temperature. A comparable observation can be made for the shifted fraction of electrons (see Figure 6, computed for a perturbation on the substituted carbon): the chloro and bromo derivatives show comparable behaviours, while the fluoro derivatives are rather systematically associated with a lower value. Here too, some elements can be put forward to account for these observations. Indeed, we expect the polarisability of the halogens to increase with their atomic number. This is indeed the case, as shown by the right graph of Figure 5. The distributed polarisabilities for the halogen atoms slightly increase with the carbon chain size and follow the expected F < Cl < Br trend.
However, the three curves are fully separated and do not exhibit the crossings or convergence observed at some points, for instance, in Figures 2 and 3. This is certainly due to the fact that these polarisabilities are computed assuming a uniform external electric field, which strongly differs from the field generated by a point charge, which decreases with distance. One may even argue that uniform external EFs are actually a very poor model for the electric field created by a chemical environment, which is in general anisotropic, so that such atomic polarisabilities should, from our point of view, be interpreted with caution. From our previous analysis, stronger responses to perturbation can be expected for the bromo and chloro derivatives compared to the fluoroalkanes, but at this stage nothing explains the clear differentiation of F from Cl and Br. A partial explanation is proposed in the following section.
C-Halides Polarisation Mode
In Figure 7, the density polarisation 3D maps of the methyl halides are displayed. It is worth noticing that CH3Cl and CH3Br look quite alike, while the density polarisation of CH3F exhibits a quite different feature. An investigation of the polarisation mode shows that only a few excited states display significant contributions to the perturbation response at the first carbon nucleus. For the heavier halides (Cl and Br), the most representative contribution is constructed by a loss of electron density at the first carbon atom and a gain at the halogen, along the interatomic axis (σ-orbital-type response). In the language of molecular orbital (MO) theory, polarisation in these cases is piloted by the promotion of a fraction of an electron from a bonding σ(C-X) MO to the associated antibonding σ*(C-X) MO. This assertion has been confirmed by a natural transition orbital (NTO) analysis (not reported here). This phenomenon provokes a weakening of the C-X bond and is certainly at the origin of the first stage of a first-order nucleophilic substitution (SN1). The process that weakens the C-X bond by polarisation ends with the carbon and halogen being drawn apart from one another as the bond breaks. The polarisation pattern of methyl fluoride turns out to be rather different. The response develops perpendicular to the internuclear axis, as if the response were supported by a π-like system. Two different situations are encountered. In the case of CH3F, in MO terms the polarisation response is triggered by an electron promotion from an occupied "lone-pair"-type π MO (constructed through a combination of a 2p(F) AO with C-H contributions) to a similar MO relying on a 3p(C) AO. This is schematized in Figure 8. In the case of longer carbon chains, the "accepting" orbital is an antibonding σ*(C-C) MO. In both cases, no significant weakening of the C-F bond is expected. This is perfectly in line with the lower reactivity ascribed to these bonds compared to other halogen-carbon bonds.
A more detailed study of the MO diagram of these alkyl halides helps to understand this strong difference in behaviour. Indeed, the bonding σ(C-X) MO is deeply buried in the case of CH3F, compared to CH3Cl and CH3Br. Promotion of an electron from this MO is thus severely hampered, and the π-type response becomes preferred.
Unsaturated Compounds
If we now turn our attention to unsaturated compounds, some differences emerge. As was observed for the haloalkanes, temperatures also decrease for a fixed halogen as the length of the carbon chain increases (see Figure 9). However, the same is not true for the shifted fraction of electrons; on the contrary, this descriptor increases (as shown in Figure 9). Nevertheless, the chloro and bromo derivatives once again present comparable features, while the fluoro compounds stand out.
Here also, some elements can be provided to account for these observations. Besides their inductive effects, halogens also present mesomeric effects, which may be active in these compounds. Conjugation with the unsaturated chain thus allows electron density movements to spread over a rather large distance in the compounds, as can be seen in Figure 10. However, another feature is also evident from this figure: besides the π-system response, an opposite response of the σ-backbone is present. From Equation (5), we may expect these "counter"-responses to mitigate the stabilisation from the π-system reorganisation, and to relate to the inductive effects one may expect from halogens (inductive acceptor, mesomeric donor groups). This effect can be seen as an application at the electron level of the well-known Le Chatelier's rule, or as a molecular electronic Lenz's law. Hence, two opposite factors appear to be active in these cases:
• conjugation, which allows a larger electron density movement, reflected in the larger δN_shifted values compared to the saturated compounds (and the concomitant increase of their value with the elongation of the conjugated chain);
• "counter"-polarisation of the σ backbone, stemming from the inductive effects of the halogen and resulting in a slower decrease of temperature than could be expected.
To a lesser extent, these effects also seem to be present for the halogeno-alkanes, but, as expected, induction prevails over mesomerism in such cases.
Conclusions
In a fairly recent paper [20], several new descriptors, such as the polarisation temperature, were derived from a statistical physics view of density polarisation. In the present contribution, these indexes have been used to investigate how electronic effects develop and propagate in halogeno-alkanes and halogeno-alkenes. As expected, the investigation has shown that the longer the carbon backbone, the more polarisable the molecule. Moreover, it has been found that the density polarisation is barely measurable beyond four carbons for an alkyl chain, while it develops further for alkenes. This confirms the organic chemistry rule "butyl is futile". A sort of Le Chatelier rule for halogeno-alkenes has also been observed. Indeed, as the density polarisation propagates through the π bonding system, the σ bond backbone reacts to counterbalance the density reshuffling. Finally, for both halogeno-alkanes and halogeno-alkenes, the polarisation mode of fluorine is different from that of the other halogens. This difference in pattern might be at the origin of the unusual chemistry of fluoride derivatives.
version: v3-fos-license

added: 2018-04-03T05:49:20.296Z
created: 2013-06-11T00:00:00.000
id: 12357193
metadata:
{
"extfieldsofstudy": [
"Psychology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/np/2013/103949.pdf",
"pdf_hash": "9d441d15397397c35db4ee29800e633ae68e77d6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45486",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "22819ac4577696a7893c06e3512af96458737fac",
"year": 2013
}
source: pes2o/s2orc
text:
Is Sleep Essential for Neural Plasticity in Humans, and How Does It Affect Motor and Cognitive Recovery?
There is a general consensus that sleep is strictly linked to memory, learning, and, in general, to the mechanisms of neural plasticity, and that this link may directly affect recovery processes. In fact, a coherent pattern of empirical findings points to a beneficial effect of sleep on learning and plastic processes, and changes in synaptic plasticity during wakefulness induce coherent modifications in EEG slow wave cortical topography during subsequent sleep. However, the specific nature of the relation between sleep and synaptic plasticity is not yet clear. We report findings in line with two models that conflict with respect to the underlying mechanisms, that is, the "synaptic homeostasis hypothesis" and the "consolidation" hypothesis, and some recent results that may reconcile them. Independently of the specific mechanisms involved, sleep loss is associated with detrimental effects on plastic processes at the molecular and electrophysiological levels. Finally, we review growing evidence supporting the notion that plasticity-dependent recovery could be improved by managing sleep quality, while monitoring EEG during sleep may help to explain how specific rehabilitative paradigms work. We conclude that a better understanding of the sleep-plasticity link could be crucial from a rehabilitative point of view.
Introduction
In 1971, Rechtschaffen [1] stated that "if sleep does not serve an absolute vital function, then it is the biggest mistake the evolutionary process ever made...". Indeed, almost all animal species, from the largest mammals to the fruit flies [2], show a behavioral state that can be considered sleep-like. Sleep seems to be a crucial need, as much as drinking or eating, to such an extent that chronic sleep deprivation in rats produces cellular and molecular changes in the brain [3] that make the animal die within a matter of weeks [4].
Different hypotheses have been suggested to explain the functions of sleep, but a general consensus exists today that sleep is strictly linked to memory, learning and, in general, to the mechanisms of neural plasticity. Indeed, cognitive impairments, especially in learning and memory tasks [5][6][7], are one of the main consequences of sleep deprivation. Although the link between sleep, memory, and neural plasticity has been widely investigated, this relation is not yet completely understood. Several findings are in line with the hypothesis of a homeostatic, sleep-mediated synaptic downregulation [8,9], while other studies support a "consolidation" model based on the reactivation, during sleep, of the same areas that were active during wakefulness [10,11]. Since it is widely accepted that synaptic plasticity mechanisms underlie motor and cognitive recovery, understanding the relationship between sleep and plasticity is essential also from a rehabilitation perspective.
The main aim of the present paper is to review studies showing how sleep-dependent plasticity could be involved in functional recovery from different neuropsychological conditions (poststroke brain damage, obstructive sleep apnea, Alzheimer's disease, and autism) and to provide insights on how the efficacy of rehabilitation protocols could be improved by methods which enhance sleep-dependent plasticity. The term "functional recovery" has different implications for the various clinical conditions considered, depending on the hypothetical mechanisms underlying each disorder. In the current review, we will adopt a very general definition and will refer to it as a process involving improvement, or slowing down of deterioration, in the different areas (motor and/or cognitive) affected by the illness. Aiming to propose a possible role of sleep-dependent plasticity in functional recovery, we will shortly summarize how sleep and plastic processes are related. Namely, we will discuss the role of sleep in ensuring the consolidation of plastic changes and the possibility of learning new things every day. To this aim, we will highlight the findings about how plastic changes during wakefulness affect subsequent sleep and try to understand what kind of plastic modifications occur during sleep. Then, we will report empirical evidence about the molecular and electrophysiological consequences of sleep deprivation on plastic mechanisms. Finally, we will discuss how sleep-dependent plasticity could influence functional recovery, and we will suggest how to increase the efficacy of rehabilitative protocols by enhancing sleep quality, reducing sleep disorders, and promoting sleep-dependent plasticity.
Influence of Plastic Changes during Wakefulness on Subsequent Sleep
Spontaneous wakefulness is characterized by molecular changes associated with long-term potentiation (LTP) [12,13] and by an increase of synaptic density [14]. The synaptic homeostasis hypothesis [8,9] posits that the mechanisms of synaptic potentiation during wakefulness are directly related to the enhancement of slow wave activity (SWA; 0.5-4.5 Hz) in the electroencephalogram (EEG) during the subsequent sleep. SWA is considered a measure of sleep need, and there is well-established evidence that SWA increases with the time spent awake and progressively decreases during sleep [15]. This hypothesis is based on the observation that increased expression of LTP markers during wakefulness is followed by higher levels of SWA during the subsequent sleep [16]. A demonstration that the increase of SWA during sleep depends directly on the LTP mechanisms, and not on wakefulness as such, comes from animal studies showing that when the expression of LTP-related molecules is reduced, due to damage to the noradrenergic system, the SWA peak during sleep is blunted [12,17,18].
The relation between changes in cortical plasticity during wakefulness and quantitative changes of SWA in subsequent sleep is topographically specific; a higher LTP in a particular cortical area is directly correlated with an increase of SWA in that area. Huber and co-workers [19] investigated the effects on sleep of a specific visuomotor task (adaptation to a rotated frame of reference) that activates the right parietal cortex [20] and seems to be related to a synaptic potentiation mechanism. Results showed that this task, when compared with a subjectively indistinguishable control task, induced during the subsequent sleep an increase of SWA that was limited to the right parietal area. This finding supports the hypothesis of the activation of a local homeostatic process during sleep. Similarly, a declarative learning task induced an increase of SWA and spindle activity (12-15 Hz) in the left frontal area during sleep after a training session, which was positively correlated with changes in memory performance [21]. Finally, a potentiation of TMS-evoked EEG responses induced in the premotor cortex, through the application of 5 Hz repetitive TMS (rTMS), was positively correlated with an increase of SWA in the following sleep, which was topographically specific for the same premotor site [22].
What happens if a synaptic depression process occurs during wakefulness? Huber and co-workers [23] found that short-term arm immobilization was associated with impaired motor performance and smaller somatosensory evoked potentials (SEPs) and motor evoked potentials (MEPs), and was followed by a local reduction in SWA during the subsequent sleep episode. In summary, plastic changes occurring during wakefulness seem to induce coherent and topographically specific local changes in SWA during the subsequent sleep.
Further support for the hypothesis that changes in cortical plasticity lead to homeostatic modifications in SWA during sleep comes from studies using the TMS paired associative stimulation (PAS) protocol, a method that allows plastic changes to be induced in the human cortex by coupling a peripheral electric stimulus with a magnetic pulse on the scalp, with a fixed interstimulus interval that allows the sensory impulse to reach and energize the S1/M1 cortex. The direction of the plastic changes (potentiation or depression) depends on the interval between the stimuli [24][25][26]. By using the PAS protocol, it has been observed that LTP and long-term depression (LTD) lead, respectively, to an increase and a reduction in SWA during sleep in the somatosensory cortex [27]. Other studies have found different, albeit coherent with the general hypothesis, patterns of topographic changes in SWA [28,29] and spindle activity [28] after the PAS protocol. These findings still support the hypothesis of a link between cortical plasticity and sleep homeostasis but stress the need for a better knowledge of the underlying neurophysiological mechanisms.
In computational studies, it has been observed that stronger synaptic connections are associated with higher SWA [30,31]. Moreover, an increase in SWA during sleep which is topographically specific for the motor cortex, induced in vivo in rats by learning a motor task, was associated with a post-training increase, in the same cortical area, of c-Fos and Arc levels [32], two activity-dependent proteins involved in motor learning [33][34][35]. Finally, a causal relation between brain-derived neurotrophic factor (BDNF) expression during wakefulness and subsequent sleep homeostasis has been observed in vivo; higher levels of BDNF lead to increased SWA during subsequent sleep [36,37]. Together, these data suggest that changes in cortical plasticity during wakefulness lead to homeostatic modifications in SWA during sleep, supporting the hypothesis of a direct relation between cortical plasticity and sleep regulation.
Synaptic Renormalization during Sleep
According to the synaptic homeostasis hypothesis [8,9], sleep has a functional role in promoting the so-called synaptic downscaling. This process would have the function of restoring the total synaptic strength to a sustainable energy level, favouring memory and performance gains. In other words, the experience-dependent synaptic potentiation occurring during wakefulness is proportionally reflected in the following sleep, and in particular during NREM sleep, a condition which is characterized by slow oscillations and is an ideal scenario for the experimental induction of LTD-like mechanisms [38]. SWA would support downscaling through the alternation of a depolarization phase (up-phase) and a hyperpolarization phase (down-phase) [39].
The hypothesis of a link between plastic changes and SWA has received considerable empirical support from the finding of significant fluctuations of different molecular and electrophysiological markers within the sleep-wake cycle. Absolute levels of brain metabolism significantly decrease after a period of sleep [40], consistent with the hypothesis of a synaptic downscaling process during sleep. Moreover, while molecular processes associated with LTP reach lower levels during sleep than during wakefulness [12,41], molecular changes involved in LTD-like mechanisms increase during sleep compared with waking time [13,41]. At the electrophysiological level, animal studies found that different markers of synaptic efficacy increase after prolonged wakefulness and decrease following a period of sleep [41,42].
Notwithstanding this evidence, it is likely that the synaptic homeostasis hypothesis does not allow for a comprehensive understanding of the processes that occur during sleep. Frank [43] points out that different molecules, like Arc, observed at high levels in the cerebral cortex after waking, mediate both LTP and LTD processes [13,44] as well as the synthesis of GABA in inhibitory interneurons [45,46]. Their presence, then, is not a clear indication of synaptic potentiation or depression. Moreover, different studies show that some neuromodulators involved in synaptic potentiation increase during NREM sleep [47,48], and some potentiation-related molecular changes have been observed during sleep [49][50][51][52].
These studies seem to question some predictions of the synaptic homeostasis hypothesis in favour of a "consolidation" model, which predicts an increased potentiation of specific neural circuits as a prerequisite of consolidation mechanisms. In this perspective, the consolidation process should involve a sleep-dependent reorganization and redistribution of the newly acquired information between different brain systems, so that it can be defined as a "system consolidation" process [53]. In particular, the parallel activation of specific neocortical and hippocampal networks during wakefulness, which represent, respectively, the long-term and temporary stores in the declarative memory system, should induce a selective reactivation of the same circuits during subsequent sleep [10,11,54,55]. Several findings support this model, pointing to the crucial role played by sleep-specific brain activity in favouring the consolidation of memory traces [19,56-60]. According to the model, slow oscillations originating during SWS in neocortical networks [61] allow the formation of spindle-ripple events that mediate the hippocampus-to-neocortex transfer of memory information. Specifically, the depolarizing up-phase of slow oscillations drives the hippocampus to trigger the reactivation of memory representations, which are gradually transferred to the neocortex via thalamo-cortical spindles. In this view, sharp-wave hippocampal ripples represent the reactivation of memory traces, while thalamo-cortical spindles, modulating cortical Ca2+ influx, provide the background suitable for the changes in neocortical synaptic connections underlying the long-term storage of memory traces in the respective neocortical networks [56-59,62].
Several lines of evidence in both animals and humans point to an association between learning processes during wakefulness and a coherent neural re-activation during subsequent sleep. Monocular deprivation studies [63,64] showed that ocular dominance plasticity in cats undergoes a sleep-mediated consolidation process that involves both synaptic potentiation and depression mechanisms (monocular deprivation induces stronger cortical responses to the open eye and weaker responses to the deprived one after sleep than before). The same pattern of neuronal activation observed during song rehearsal of the Zebra finch was found during sleep [65], leading to a temporary deterioration of song quality that correlated with globally enhanced learning [66]. A neural re-activation during sleep has been observed in rats after simple spatial tasks [54,67,68]. Moreover, engagement in a learning task results in increased slow oscillations, spindle activity, and hippocampal ripples that seem to be associated with improved performance [19,53,56,57,60,62].
Results from human studies are consistent with the findings in animals, showing a sleep-dependent re-activation of brain regions involved in previous learning. Using positron emission tomography (PET), Maquet and co-workers [69] found such a specific re-activation during REM sleep following training on a serial reaction time task, while other studies found a re-activation during SWS following a declarative learning task [70,71]. In more detail, Rasch and co-workers [71] showed that presenting during sleep an odor cue previously associated with a visual-spatial learning task (memory for card locations) induces a greater hippocampal activation, which positively affects task performance. This study provides the first evidence of a causal link between re-activation during sleep and consolidation of memory traces. Interestingly, the enhancement of memory took place only if the odor was presented during SWS, not during REM, and not in the case of a procedural learning task [71]. These findings are consistent with the hypothesis of a differential role of SWS and REM in the consolidation of different memory functions [72]. In our view, this dissociation should be taken into consideration in a rehabilitative perspective that takes sleep features into account during treatment.
Traditionally, according to the "dual processes" theory, the post-sleep improvement in procedural memory has been ascribed to REM sleep, while SWS is responsible for the consolidation of declarative memory [73,74]. Studies using declarative memory tasks showed better retention performance after sleep if it consists mainly of SWS [21,71,73,75]. In others, boosting SWA enhanced word-pair retention [76-78], but not the retention of procedural memories [76]. Conversely, the recall of a mirror-tracing task improved more after a retention interval spent in REM sleep than in SWS [73], and post-sleep performance improvement in a finger-tapping task was correlated with the time spent in REM sleep [79]. However, this differential role for REM and SWS has not been confirmed by other studies. Several nondeclarative tasks, like perceptual discrimination [80] and rotation adaptation [21], also benefit from SWS, whereas REM sleep in some instances seems to mediate the consolidation of emotional aspects of declarative memory [81].
Thus, the "dual processes" theory should be considered an oversimplification, and, actually, a large body of evidence seems to support a different theory, the "sequential" or "double-step" hypothesis [82], which states that consolidation process involves both SWS and REM sleep, regardless of the memory system the traces belong to. According to this hypothesis, what leads to the consolidation of a still labile memory trace during sleep is the repeated pattern of non-REM sleep followed by REM sleep. This view is compatible with the "consolidation" model so that the integration of the two perspectives suggests that consolidation processes would consist in the repetition of sleep cycles with SWS favouring a "system consolidation" of memory by a reactivation and a redistribution of memories to the neocortical "longterm storage. " In this view, subsequent REM sleep allows local processes of consolidation at synaptic level, whereby cortical memory representations are further stabilized [53,83].
With respect to the different roles of REM and slow wave sleep (SWS), Chauvette and co-workers [84], using electrical stimulation at low frequency (1 Hz) of the medial lemniscal fibers in cats, studied the somatosensory cortical evoked local field potential (LFP) responses before and after a period of SWS. The amplitude of the responses in the somatosensory cortex was increased after SWS compared to the previous waking period, suggesting that SWS induces synaptic upscaling, rather than downscaling. On the other hand, the analysis of firing rates across cycles of NREM-REM-NREM sleep in rat hippocampal pyramidal cells and interneurons showed a significant increase in the firing rates during NREM sleep, while a substantial decrease was associated with REM sleep [85]. In addition, the decrease in the firing rates between the first and the last episode of NREM sleep was positively correlated with the amount of theta activity (4-7 Hz) during REM sleep. Together, these findings suggest that different kinds of synaptic renormalization occur during sleep, and the roles of REM sleep and theta waves must be better analysed. The results by Grosmark and co-workers [85] could represent a link between the "consolidation" model and the downscaling process. In particular, Born and Feld [86] suggest that a local upscaling of specific memories could occur in concomitance with global downscaling processes. More studies are needed to better understand this intriguing issue.
Neural Plasticity and Sleep Deprivation
Since the early empirical observations of De Manacéine [87], it has been well known that sleep loss degrades alertness and performance. If sleep is a behavioral state in which the body recovers physical and mental energies, the lack of sleep can jeopardize the execution of neurocognitive, psychological, and behavioral processes [88].
There is abundant empirical evidence of the harmful consequences of chronic sleep loss, such as drowsiness, reduced alertness, communication difficulties, and cognitive deficits [89,90]. In particular, different forms of learning are negatively affected by sleep deprivation in humans and animals [5-7], albeit recent findings suggest that long-term consolidation does not seem to be affected by sleep loss in adolescents [91].
Memory consolidation is also impaired in different clinical samples characterized by disturbed sleep. Patients with primary insomnia showed decreased sleep-dependent memory consolidation in procedural and declarative learning, associated with the reduction of REM sleep [92,93] and SWS [94], respectively. Moreover, most neurodegenerative diseases, like Alzheimer's disease (AD), Parkinson's disease (PD), or dementia with Lewy bodies (DLB), which are usually characterized by memory impairment, share a common pattern of sleep features. In these diseases, sleep is usually more fragmented, SWS is decreased, and spindles, K-complexes, and REM sleep are often reduced (for a review, see [95]).
The detrimental effects of sleep loss on memory suggest a deterioration of the underlying neuronal processes. In particular, alterations of LTP/LTD mechanisms may underlie at least a part of the behavioural alterations observed during sustained wakefulness.
One of the most well-established consequences of sleep deprivation is the increase of delta and theta EEG activity [96,97], mainly in frontal cortical areas [98-100], interpreted as an index of a higher "recovery need" [98]. It has been proposed that experience-dependent plasticity is directly linked to local changes in the electrophysiological expression of sleep need, as indexed by increased SWA after prolonged wakefulness [8,9].
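In practice, SWA is quantified as EEG spectral power in the slow/delta band. As a purely illustrative sketch (not drawn from the studies cited above; the sampling rate, epoch length, and band edges are assumptions), the following R code estimates delta-band power of a single EEG epoch from the raw periodogram:

```r
# Minimal sketch (assumed parameters): SWA as delta-band (0.5-4 Hz)
# spectral power of one EEG epoch, computed from the raw periodogram.
swa_power <- function(x, fs, lo = 0.5, hi = 4) {
  n    <- length(x)
  X    <- fft(x - mean(x))             # remove the DC offset first
  psd  <- (Mod(X)^2) / (n * fs)        # two-sided periodogram (units^2/Hz)
  f    <- (seq_len(n) - 1) * fs / n    # frequency axis of the FFT bins
  keep <- f >= lo & f <= hi            # positive-frequency delta bins only
  2 * sum(psd[keep]) * (fs / n)        # one-sided band power
}

fs  <- 100                             # sampling rate in Hz (assumed)
eeg <- rnorm(30 * fs)                  # toy 30-s epoch standing in for EEG
swa_power(eeg, fs)
```

Comparing such epoch-wise estimates across scalp channels before and after extended wakefulness is, in essence, how the frontal SWA increase described above is detected.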
An intriguing question is whether and how prolonged periods of wakefulness affect plastic processes. Different in vitro studies show that sleep loss inhibits LTP in the hippocampus but enhances LTD mechanisms [101-103], suggesting that the induction of LTP could be saturated after sleep deprivation. More recently, it has been observed in cat and mouse cortical slices that the amplitude and frequency of miniature excitatory postsynaptic currents increase after wakefulness and decrease after sleep [42]. Moreover, an increase in the number and size of central synapses was observed in Drosophila melanogaster after prolonged wakefulness, and a subsequent decrease was possible only after sleep [104]. In this study, qualitative characteristics of wakefulness were also evaluated; richer experiences during wake were followed not only by a higher sleep need but also by greater synaptic growth.
Vyazovskiy and co-workers [41] found that synaptic strength, in terms of the amplitude and slope of local field potential (LFP) responses, increases after a period of sustained wakefulness in rats, and that the induction of LTP is easier after sleep rather than after a waking period. Furthermore, cortical neurons fire at higher frequency [105] and cortical excitability is increased [106] after sleep deprivation.
At the molecular level, these changes involve the delivery of postsynaptic glutamatergic AMPA receptors (AMPARs) containing the GluR1 subunit [41]. In particular, GluR1-containing AMPAR levels increase with time spent awake and decrease during sleep [41]. These morphological and functional changes are likely connected with the variations in the firing rates of cortical and hippocampal neurons recently described for the wake-sleep cycle and sleep deprivation in rats [105]. Furthermore, high levels of different pre- and postsynaptic proteins and of proteins involved in neurotransmitter release have been found in Drosophila melanogaster after sleep deprivation, while their levels were low after sleep [107].
Considering that brain metabolism accounts for about 20% of whole-body resting metabolism [108] and that around 75% of the brain's energy consumption is due to glutamatergic synaptic signalling (action potentials, postsynaptic potentials, and repolarization) [109] (glutamatergic signalling alone thus accounts for roughly 0.20 × 0.75 ≈ 15% of whole-body resting metabolism), it is reasonable to assume that a widespread cortical increase of firing rate during wakefulness, combined with a rapid and progressive increase of cortical extracellular glutamate levels [110], raises brain metabolic costs. In contrast, NREM sleep, characterized by low firing rates [105] and low cortical extracellular glutamate levels [110], is associated with a reduction in energy demand.
Taken together, these findings suggest that synaptic strength progressively increases with time awake, leading to high energy costs and saturating learning processes. Sleep seems to be necessary for a homeostatic renormalization of cortical synapses, but it remains unclear how sleep propensity accumulates during wakefulness. In recent years, adenosine has assumed increasing importance as a mediator of the connection between brain activity during wake and sleep regulation, while the basal forebrain seems to be the "adenosine sensor" of the brain, responsible for the adenosinergic modulation of sleep-wake states (for a review, see [111]). In brief, adenosine, through its action on the A1 receptor, promotes the transition from wakefulness to SWS by inhibiting wake-active neurons in the basal forebrain, which are connected to cortical regions. Sleep begins when the activity of the wake-active cells decreases sufficiently. During sleep, neuronal activity decreases, causing a reduction of the extracellular adenosine concentration. A sufficient reduction of extracellular adenosine frees the wake-active cells in the basal forebrain from adenosine inhibition, resulting in the initiation of a new waking period [111]. Consistent with this hypothesis, an experimentally induced energy depletion by infusions of 2,4-dinitrophenol (DNP, a molecule which prevents the synthesis of ATP) in the basal forebrain (but not outside it) increases subsequent sleep need [112].
Taken as a whole, results from animal studies suggest that sleep deprivation has detrimental effects on synaptic plasticity. However, it should be borne in mind that processes like neuronal firing, metabolic activity, and synaptic potentiation represent different aspects of neuronal functioning; the relationship between them is not simple, and the current state of knowledge does not yet provide a complete unifying framework for these different aspects of neural functioning.
At present, there is no direct evidence in humans of modifications of plastic processes after sleep loss. Nevertheless, changes in cortical excitability have been widely investigated (mainly by means of TMS), but results are not univocal. Many studies point to an increase of cortical excitability during prolonged wakefulness in healthy subjects, in terms of the modulation of motor evoked potentials [99,113] and TMS-evoked potentials [114]. It should be remembered that an increase of cortical excitability after prolonged wakefulness is usually observed in epileptic patients [115,116] and that sleep loss increases seizures [117]. On the other hand, other studies were not able to provide any evidence of a modulation of cortical excitability by sleep deprivation [118,119] or found conflicting results [120]. Although such negative findings can be explained by a lack of statistical power due to the small sample sizes of these last studies, the mechanisms underlying changes in human cortical excitability under sleep loss remain unclear. Moreover, in most cases only changes in frontal and prefrontal cortical excitability have been investigated. The only study that assessed how sleep deprivation modulates the responsivity of the somatosensory cortex showed an increase in the amplitude of early SEP components, but it did not account for the potential influence of circadian factors [121].
The Role of Sleep-Dependent Plasticity in Motor and Cognitive Rehabilitation
As discussed in the previous paragraphs, we still lack a complete understanding of the plastic processes occurring during sleep. Nevertheless, a coherent pattern of empirical findings shows that plastic changes during wake can affect subsequent sleep, which in turn has a beneficial effect on plastic mechanisms and learning processes. Thus, the existence of a link between sleep and synaptic plasticity is widely accepted. Since plastic processes are involved in functional recovery from different neuropsychological disorders and after brain damage, understanding how SWA during sleep can affect rehabilitation-dependent plasticity is a compelling issue. If sleep has a role in modulating cortical plasticity, rehabilitative protocols should be designed considering how sleep could improve recovery. In the present section, we consider the possible role of sleep-dependent synaptic plasticity in rehabilitation from different neuropsychological conditions.
Stroke.
Animal models show that ischemic stroke can induce an SWS increase and a paradoxical sleep decrease in mice [122] and rats [123]. Mice treated with γ-hydroxybutyrate (GHB), a drug used to promote SWS in humans [124], showed a faster recovery of grip strength in the paretic forelimb compared with those treated with vehicle saline [125]. Moreover, sleep disturbance and sleep disruption negatively affect post-stroke recovery in rats [123,126], impairing axonal sprouting and neurogenesis [126], two cellular processes associated with functional recovery [127-129].
Only a few human studies on this issue are available at the moment, but they show promising results. Several experiments have been designed to understand whether sleep can affect motor learning in post-stroke patients [130-132], based on the observation of a sleep-dependent memory consolidation of different motor tasks in healthy subjects [133,134]. Results showed that sleep enhances offline implicit and explicit motor learning of a continuous sequencing task [130,131] and improves spatial tracking accuracy and the anticipation of upcoming movements during a continuous tracking task [132]. Sleep also seems to induce a selective enhancement in sequential motor learning and performance in patients with prefrontal lesions, with no improvement in verbal and working memory [135]. A limitation of these studies is the absence of EEG recordings, so hypotheses on the electrophysiological basis of the influence of sleep on motor performance in post-stroke patients remain speculative. Nevertheless, these findings suggest that an appropriate management of the sleep-wake cycle in these patients could promote motor recovery. Siengsukon and Boyd [136] stated that sleep between therapy sessions should be encouraged, with the aim of promoting off-line learning, and that a quiet environment for sleep should be ensured for patients. Moreover, post-stroke patients are often affected by different sleep disorders like hypersomnia, insomnia, sleep-related breathing disturbances, or restless legs syndrome (for a review, see [137]), and other factors like depression or the side effects of pharmacological treatments could negatively affect sleep in these patients. The importance of managing these factors for the rehabilitation outcome should not be underestimated.
Obstructive Sleep Apnea.
Obstructive sleep apnea (OSA) is a common breathing and sleep disorder characterized by cessations or reductions of respiration, due to pharyngeal collapse during sleep, that induce intermittent hypoxia and sleep fragmentation [138,139], increasing daytime sleepiness [139] and the risk of cardiovascular disease [140]. OSA is associated with neurocognitive impairment, with a negative influence on vigilance, attention, executive functioning, and memory [141,142]. Nevertheless, the neural basis of cognitive impairment in OSA patients is not yet well understood. Animal models show that hypoxia induces apoptosis in cortical and hippocampal neurons [143], but neural functioning may already be compromised before the beginning of the apoptotic process [144]. Results from recent studies suggest that changes in synaptic plasticity could account for the cognitive impairment in OSA patients [144,145]. In fact, Xie and co-workers [145] found that chronic intermittent hypoxia in mice induced an impairment of hippocampal early- and late-phase LTP and a reduction of the expression of brain-derived neurotrophic factor (BDNF), a neurotrophin that modulates synaptic plasticity [146,147]. Moreover, the increase in oxygen and nutrient demand induced by chronic intermittent hypoxia may lead to adaptive homeostatic changes in the blood-brain barrier that, in the long term, could influence the brain's microenvironment, inducing an impairment of plastic processes and cognitive performance [148]. A better understanding of the plastic changes occurring in OSA patients, as well as of their possible role in cognitive impairment, is of great importance at a therapeutic level. At present, although many studies show that continuous positive airway pressure (CPAP), the treatment of choice for OSA, improves different cognitive functions [149,150], a recent meta-analysis points out that only a small recovery in the attention domain can be observed after CPAP treatment [151]. Therapeutic strategies based on the enhancement of synaptic plasticity may be useful to improve neurocognitive functioning in OSA patients, particularly with regard to memory consolidation. For example, treatment with multiple intraventricular injections of BDNF in mice has beneficial effects on the LTP impairment induced by hypoxia [145] and has been proposed as a possible method to reduce neurocognitive impairments [144].
Alzheimer's Disease.
Sleep in patients affected by Alzheimer's disease (AD) is characterized by a general accentuation of the sleep modifications observed in normal aging [152]: sleep fragmentation, decreased SWS and REM sleep, an increased percentage of stage 1, and alterations of sleep spindles and K-complexes [153-155]. Moreover, different studies have found EEG slowing during resting wakefulness in AD and Mild Cognitive Impairment (MCI) patients [156-161]. A similar phenomenon of EEG slowing has been observed in these patients also during REM sleep, particularly in temporoparietal and frontal sites, with increased delta and theta frequencies and reduced alpha and beta activity compared with normal elderly [162,163]. Recently, melatonin treatment for the management of sleep has been proposed as a possible therapeutic strategy in AD patients, and many studies have shown its beneficial effects on sleep quality, depressive symptoms, and neuropsychological performance in MCI patients (for a review, see [164]). Although the mechanisms underlying the beneficial effects of melatonin in AD and MCI remain unclear, Kang and co-workers [165] found in mice that sleep reduces the synaptic anomalies associated with amyloid precursor protein (APP), one of the typical synaptic alterations observed in AD [166]. These data raise the possibility that the positive effects of sleep in AD and MCI are associated with an enhancement of synaptic plasticity. Since plastic processes are strongly impaired in AD patients [167], a reduction of sleep alterations could help restore synaptic plasticity and limit or slow down the cognitive decline in such patients. Future studies should be designed to understand whether synaptic plasticity in AD patients benefits from the restoration of specific sleep features (i.e., SWS) or whether a general improvement of sleep quality is needed.
Autism.
The prevalence of sleep disorders in children with autism ranges from 40% to 80% [168-170]. Sleep in autistic children is characterized by long sleep latency, nocturnal awakenings, short sleep duration, low sleep efficiency, circadian rhythm disturbances, increased REM density and stage 1 sleep, reduction of REM sleep and SWS, and decreased spindle activity (for a review, see [171]). Moreover, behavioural insomnia syndromes and REM sleep behaviour disorder have often been observed [172].
Different studies have found a deficit of melatonin secretion in autistic patients [173-175] that seems to represent a risk factor for (and not a consequence of) autism [176]. Recently, it has been proposed that learning disabilities in autism are related to abnormally high LTP linked with pineal hypofunction, low serum melatonin levels, and sleep dysfunction [177]. According to the authors, promoting sleep by means of a melatonin treatment may reduce learning disabilities by restoring synaptic plasticity. Melatonin treatment improves sleep quality in autistic patients [178,179], and secretin, a hormone that stimulates melatonin [180], induces a temporary improvement of autism symptoms [181,182].
Sleep-Dependent Plasticity and Rehabilitation: Future Directions
The promotion of sleep between therapy sessions, the prompt treatment of associated sleep disorders, and the improvement of sleep quality through an adequate environment and melatonin administration can be considered general indications to support the beneficial effects of sleep on synaptic plasticity in different clinical conditions. However, future studies should be designed to develop methods that directly enhance sleep-dependent plasticity, in order to optimize its role in functional recovery. Massimini and co-workers [183] found that TMS at a frequency of <1 Hz applied during NREM sleep triggers slow waves in humans. It would be interesting to understand whether this kind of stimulation has an impact on memory performance. Recently, it has been observed that the application of a weak anodal electric current at 0.75 Hz (the frequency of sleep slow oscillations in humans) during NREM sleep induces an increase in slow oscillations and spindle activity and facilitates declarative memory consolidation [77]. Conversely, transcranial direct current stimulation (tDCS) at 5 Hz (theta frequency) during NREM sleep provokes a general decrease in slow oscillations and a frontal reduction of slow EEG spindle power, and decreases declarative memory consolidation, as well as increasing gamma activity when applied during REM sleep [184]. Slow oscillations and spindle activity can also be enhanced by auditory stimulation in phase with the ongoing oscillatory EEG activity (auditory closed-loop stimulation) during sleep, again with beneficial effects on declarative memory consolidation [78]. These findings suggest that we can influence subsequent memory performance by modulating different EEG rhythms during sleep, probably affecting synaptic plasticity [77]. Future studies should investigate the plastic processes underlying the effects of transcranial stimulation and auditory closed-loop stimulation during sleep on memory, and whether such different kinds of stimulation can have long-term beneficial effects on memory performance in patients affected by clinical conditions characterized by memory impairment. Similar questions arise regarding the possible clinical usefulness of the previously quoted (see Section 3) phenomenon of improved memory consolidation induced by re-exposure to odor cues during SWS [71]. An implication of the link between sleep and plasticity is that monitoring SWA during sleep in patients undergoing a rehabilitative treatment may be a useful method to better understand how a specific rehabilitative protocol works. For example, Sarasso and co-workers [185] recently used the quantitative analysis of sleep high-density EEG in post-left-hemisphere-stroke patients with nonfluent aphasia to assess the plastic changes induced by the Intensive Mouth Imitation and Talking for Aphasia Therapeutic Effects (IMITATE) protocol, a computer-based therapy for post-stroke aphasia rehabilitation [186]. Results showed that a single intensive IMITATE session can induce an SWA increase during the initial 30 minutes of the first NREM sleep cycle over the left premotor and inferior parietal areas. An SWA increase was also observed in both left and right frontal areas, with a peak at the right sites. At a behavioural level, IMITATE induced an improvement in language skills, as assessed by the Western Aphasia Battery Repetition Scale [187].
Although these are only very preliminary results (only four patients), they are indicative of the possible applications of sleep EEG topography analysis for the assessment of plastic changes induced by a rehabilitation protocol, opening new perspectives for the comprehension of the neurophysiological basis of rehabilitation.
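One building block shared by the stimulation approaches discussed above is the estimation of the instantaneous phase of the ongoing slow oscillation, so that a stimulus can be timed to its depolarizing up-phase. The following R sketch is a purely offline illustration of that phase-targeting step on a toy signal (a real closed-loop system must operate on streaming data in real time; the sampling rate, band edges, and signal are assumptions):

```r
# Offline sketch of slow-oscillation phase targeting (all parameters assumed)
fs  <- 100                                   # sampling rate (Hz)
t   <- seq(0, 30, by = 1 / fs)
eeg <- sin(2 * pi * 0.75 * t) + rnorm(length(t), sd = 0.3)  # toy 0.75-Hz signal

# FFT-based band-pass filter around the slow-oscillation band
bandpass <- function(x, fs, lo, hi) {
  n <- length(x)
  f <- (seq_len(n) - 1) * fs / n
  f <- pmin(f, fs - f)                       # absolute (two-sided) frequency
  X <- fft(x)
  X[f < lo | f > hi] <- 0                    # zero all bins outside the band
  Re(fft(X, inverse = TRUE) / n)
}

# Analytic signal via FFT (discrete Hilbert transform) for instantaneous phase
analytic <- function(x) {
  n <- length(x)
  h <- rep(0, n); h[1] <- 1
  if (n %% 2 == 0) { h[n / 2 + 1] <- 1; h[2:(n / 2)] <- 2 }
  else             { h[2:((n + 1) / 2)] <- 2 }
  fft(fft(x) * h, inverse = TRUE) / n
}

so   <- bandpass(eeg, fs, 0.5, 1.5)          # slow-oscillation component
phi  <- Arg(analytic(so))                    # phase in (-pi, pi]; 0 = peak
hits <- which(diff(sign(phi)) > 0)           # upward zero crossings = up-phase peaks
head(hits)                                   # samples where a stimulus would fire
```

In an actual auditory closed-loop setup, the clicks would be delivered at the moments marked by `hits`.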
Conclusions
The nature of the link between sleep and synaptic plasticity is not yet completely defined. Different processes of synaptic renormalization seem to occur during sleep, but the definition of their functional roles needs further investigation. Nevertheless, sleep and synaptic plasticity appear to be strongly related. The induction of plastic changes during wake produces coherent and topographically specific local changes in SWA during subsequent sleep. Moreover, sleep seems to restore synaptic plasticity, with beneficial effects on learning processes, while sleep deprivation induces alterations in LTP/LTD mechanisms, increases cortical excitability, and has negative consequences on learning. Growing evidence suggests that promoting sleep may be useful to restore synaptic plasticity in different pathological conditions. Since plastic processes are essential for functional recovery, management of the sleep-wake cycle in patients and an adequate treatment of associated sleep disturbances could be crucial for the rehabilitation outcome. Moreover, different methods like TMS, tDCS, auditory closed-loop stimulation, and odor cues, applied during sleep, can promote memory performance, probably by enhancing synaptic plasticity. Future studies should test the possibility of using such techniques to support functional recovery in neuropsychological patients. Finally, monitoring SWA during sleep may help to understand how a rehabilitative protocol affects plastic mechanisms.
Conflict of Interests
This paper contains no actual or potential conflict of interests on the part of any of its authors.
Analysis of News Media-Reported Snakebite Envenoming in Nepal during 2010–2022
Background: Snakebite envenoming is a well-known medical emergency, particularly in the Terai of Nepal. However, there is an epidemiological knowledge gap. News media data available online provide substantial information on envenomings, and assessing this information can be a fresh approach for understanding snakebite epidemiology and conducting knowledge-based interventions. We first analyzed news media-reported quantitative information on the conditions under which bites occur, the treatment-seeking behavior of victims, and the outcomes of snakebite envenomings in Nepal.
Methodology/Principal findings: We analyzed 308 Nepalese snakebite envenomed cases reported in 199 news media articles published between 2010 and 2022, using descriptive statistics, Wilcoxon, and Chi-square tests to determine why and how victims were bitten, their treatment-seeking behavior, and the outcomes. These envenomed cases with substantial information represented 48 districts of Nepal, mostly located in the Terai region. Envenomings mostly occurred in residential areas and affected children. Overall, envenomings did not differ significantly between males and females, but in residential areas females were envenomed more often than males. Victims' extremities were most often bitten while they were active, whereas bites occurring at night, indoors, and in the immediate surroundings of houses typically involved passive victims. Snakebite deaths were fewer among referred than non-referred cases, among males than females, and among victims bitten while active rather than passive.
Conclusion/Significance: Most of the reported envenomed patients were children, and most envenomings were due to cobra bites. Consultation with traditional healers complicated snakebite management. Deaths occurring without medical intervention were a severe snakebite consequence in Nepal. Further, several deaths in urban areas and in the mountains and higher hills of Nepal suggest an immediate need for snakebite management interventions in the most affected districts. Therefore, Nepalese snakebite victims should be admitted immediately to nearby snakebite treatment centers without adopting non-recommended prehospital interventions. Strategies for preventing snakebite and controlling venom effects should also include hilly and mountain districts where snakebite-associated deaths are reported.
Introduction
Globally, snakebite results in 1.8 to 2.7 million envenomings [1,2] and 81,000 to 138,000 deaths annually [3]. The highest incidence of snakebite envenomings and deaths occurs in South Asia [1], particularly in India and Pakistan [4], where it has considerable social and economic impacts. In Nepal, 20,000 to 37,661 people are bitten by snakes annually, resulting in 1,000 to 3,225 deaths [5,6]. However, there is an epidemiological knowledge gap due to inconsistent and incomplete hospital medical records of admitted snakebite cases [7,8], and limitations exist in community-based snakebite studies [9]. One recent community-based study in the Terai region of Nepal reported a snakebite envenoming fatality rate of 22.4 per 100,000 [6], over five times the recent estimate for India [4]. In their cross-sectional survey between 30 November 2018 and 7 May 2019 in 23 districts of the Terai region of Nepal [6], the authors excluded towns and cities, where snakebite envenomings and associated mortalities are frequently reported [10-12] but where envenomings form a smaller proportion of a larger population. The inclusion of rural areas, with high snakebite incidence but smaller populations, yielded a higher proportion of envenomings and associated consequences compared with urban areas. Hence, the Terai of Nepal may indeed have a higher fatality rate than that reported in India. At the very least, this exclusion left snakebite envenoming in the tropical urban areas of Nepal's Terai unrepresented.
The news media data available online provide useful information on envenomed cases even from towns and cities (cases treated in healthcare centers of Nepal, consulting traditional healers, or adopting other domestic remedies [13]), with addresses of the snakebite locality, demographics, treatment-seeking behavior, and outcomes. Such information can be used to understand the epidemiological situation. Further, these data are useful for designing regionally or nationally representative snakebite studies that include rural, semi-urban, and urban areas at risk of snakebite, to determine the actual burden of snakebite envenomings.
Nepal has progressed significantly in the print and online news media and journalism sectors since the restoration of multiparty democracy in 1990 [14], although the first print news medium, Gorkhapatra, was published in 1858 [15]. The number of outlets and the news media coverage have rapidly increased across the country. At least 863 news media outlets publish news regularly, predominantly in the Nepali language, followed by English and indigenous languages [16].
Assessing news media-reported snakebite case reports can be a novel approach for examining the demographics and circumstances of envenoming, the treatment-seeking behaviour of snakebite patients, and the outcomes in Nepal, a country with inadequate healthcare facilities, although this approach has advantages and some limitations [17]. Since data from news media are typically available in near real-time and provide earlier estimates of epidemic issues [18], the trend of snakebite envenomings and associated consequences can be understood by analyzing media-reported snakebites. Therefore, analyzing news media-reported snakebite envenomings is essential to understand snakebite epidemiology and conduct knowledge-based interventions [19].
Although our news media-based data set is a subset of all envenomed cases known from healthcare facilities and out-of-hospital settings, and is likely to represent a skewed sample due to various reporting biases, it captures out-of-hospital cases and deaths, which do not appear in hospital-based data sets. Further, this subset of cases includes substantial details of snakebite events from urban to rural areas in the Terai, hills, and mountains of Nepal, which are often absent from hospital- and community-based studies. However, there has yet to be an analysis of media-reported snakebite cases in this country. Therefore, assessing media-reported snakebite envenomings with substantial information can be a new approach for bringing these under-studied envenomings to attention as a critical issue, especially because envenomings are also extending vertically towards the hills and mountains.
Herein, we analyzed news media-reported snakebite envenomings that occurred in Nepal between 2010 and 2022 to determine the distribution of envenomings across districts, altitudes, and climatic zones, and to understand the demographics (age, sex, and occupation of envenomed patients), the circumstances (locality/places of envenomings, victims' activities when bitten, bitten body parts, time, month, and season of envenomings, and snakes responsible for envenomings), the victims' time to hospital arrival and treatment-seeking behavior, and the outcomes. Further, we tested hypotheses about whether envenomings occurred according to the demographics and circumstances, and identified associations between multiple variables (for example, human habitations and seasons, or activeness of victims and sex).
Study site and population
Nepal has diversified terrain extending from 60 to 8,848 m above sea level (asl). Nepal's Terai (i.e., the plains running parallel to the lower ranges of the southern Himalayas of Nepal) is characterized by a hot tropical climate, the mid-elevation Chure hills by a mild subtropical climate, and the high-elevation Mahabharat ranges by temperate and subalpine climates (Fig 1). The Terai comprises the northernmost Ganges plains, extending south to north by 30-40 km at 60-200 m asl and occupying 4% of the total area of the country; the Chure hill range is the Siwalik region, extending at 200-1,500 m asl and occupying roughly 13% of the country's area. The Mahabharat ranges are the middle hills, extending east to west continuously at 1,500-4,000 m asl and occupying 68%, and the mountains are highlands above 4,000 m asl and up to 8,848 m, occupying 15% of the total area of the country [20] (Fig 1).
Nepal is traversed by three main river systems: the Koshi in eastern, the Narayani in central, and the Karnali in western Nepal. During the monsoon season, large floods occur in these river systems. This increases human-snake confrontations, resulting in an increase of snakebites [21]. Therefore, we defined seasons according to the rainfall pattern as the pre-monsoon (March-May), the monsoon (June-September), and the winter (October-February) to understand the seasonal influence on snakebite envenomings. In Nepal, at least 18 species of medically relevant venomous snakes are distributed [22,23] within a small area (147,181 km², representing about 0.1% of the global landmass) extending east to west by 885-900 km and from the Himalaya in the north to the Terai in the south by 130-260 km [20,24]. Elapid snakes, particularly the Spectacled Cobra (Naja naja) and the Common Krait (Bungarus caeruleus), cause most of the mortality in the Terai of Nepal [25]. Viperid snakes, particularly pitvipers, are the cause of snakebite morbidity mostly in the hills and mountains up to 5,000 m asl [22]. Russell's Vipers (Daboia russelii) are distributed mainly in the Terai region of Nepal, where their venom effects are associated with chronic wounds (which may sometimes lead to amputation of the bitten body part) [10,22,26]. Krait species are active at night, cobras at dawn and dusk, and vipers during day and night. Kraits (B. caeruleus, B. niger, and B. lividus) and cobras (N. naja) are perianthropic species [10,22,27-29]. Humans and these snakes often interact in residential areas. Similar interactions with Russell's Viper (D. russelii) are often reported from agricultural lands in this country [10]. Pitvipers (mostly Trimeresurus spp. and Ovophis monticola) often encounter humans in bushy areas in the hills and mountains [30,31].
According to the Nepal Population Census 2021, the total population of Nepal is 29,192,480. The total population of the 48 districts (Table 1) from which snakebite narrations were reported in the news (S1 Table) is 21,620,037, i.e., 74% of the national population (Table 1). Since 2015, the Federal Democratic Republic of Nepal has been divided into seven provinces (the first-level administrative units), 77 districts (the second-level units), 753 local bodies (the third-level units: six metropolitan cities, 11 sub-metropolitan cities, 276 municipalities, and 460 rural municipalities), and 6,443 wards (the last-level units) (cited in: https://en.wikipedia.org/wiki/Village_development_committee_(Nepal), accessed 17 July 2023). Formerly, Nepal was divided into five development regions (the first-level units), 14 zones (the second-level units), 75 districts (the third-level units), and 58 municipalities and 3,157 VDCs (the fourth-level units). Each VDC was divided into, on average, nine wards (the last-level units), depending on the population.
Data sources and sample size
During August-December 2021 (while searching for traditional snakebite healers for a study [13]) and from January 2022 to 11 January 2023, we retrospectively searched international, national, and local news outlets, including to some extent radio and television channels (e.g., BBC, Al Jazeera), for snakebites that occurred between January 2010 and December 2022 in Nepal. We searched at google.com using specific keywords, the "All filters and News" setting, and the customized date range and relevance options under the "Tools" menu at https://www.google.com/. We also tracked the original web links for snakebite news (if any) posted on Facebook (we did not search other social media). We used the snakebite-related terms "snakebite," "snake bite," "bitten by a snake," "Krait bite," "Cobra bite," and "Viper bite" in association with "Nepal," the names of the snakebite-prone districts listed by Bista et al. [32], and the names of hospitals supplied with antivenom, in English as well as in the Nepali language, to find Nepalese snakebite-related newspaper articles. Nepali Unicode software (https://nepali-unicode-romanized.software.informer.com/download/) was applied while using the respective keywords in Nepali. We followed additional news links for snakebite cases in the initially selected newspaper if links for other snakebite issues from Nepal were provided. Additionally, we used the search option for old archived news whenever available. Also, we retrospectively searched printed newspapers. These data were primarily based on Nepalese hospitals (Hs)/snakebite treatment centers (STCs) and Nepal Police Offices (NPOs). In Hs, snakebite cases were seen, but antivenom availability was unclear. The STCs are healthcare facilities dedicated to snakebite treatment, where the Nepal Government supplies antivenom free of cost for use by Nepali citizens. To estimate the appropriate sample size (i.e., the expected number of envenomings from the aforementioned population of Nepal), we used the Yamane formula [33]: n = N / (1 + N d²), where n = sample size, N = population size, and d = margin of error (aka precision), taken as 10% of the average prevalence of snakebite envenomings (i.e., 58%) reported in different studies conducted in hospitals of western Nepal (54.3% and 69%, respectively [34,35]), a tertiary care center in the eastern Terai (88%, [36]), and communities of south-central (42%, [37]) and southeastern Nepal (52%, [38]). The estimated sample size was 297. As our samples (i.e., snakebite envenomings) were taken randomly from the media reports and represented 62% of the districts and 74% of the total population of this country, the sample was adequate to test the hypotheses of the research objectives.
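For concreteness, a minimal R sketch of this calculation with the numbers reported above (d taken as 10% of the 58% average prevalence, i.e., 0.058):

```r
# Yamane sample-size formula: n = N / (1 + N * d^2)
yamane_n <- function(N, d) N / (1 + N * d^2)

N <- 21620037          # population of the 48 source districts
d <- 0.10 * 0.58       # margin of error: 10% of the 58% average prevalence
round(yamane_n(N, d))  # 297, matching the estimate reported above
```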
Inclusion and exclusion criteria
We included news media reporting Nepalese snakebite cases that occurred between January 2010 and December 2022. Eligible reports were of two types: individual case reports, or incidence and case reports. For both types, only those with an identifiable location were included. Identifiable locations included certain healthcare facilities (hospitals or STCs), NPOs, or geographic locations (addresses, districts, or provinces where the snakebite occurred). The location of the victim(s) was required to be in Nepal, even if the case was referred to the Indian healthcare system. In addition to the date and location, case reports were also included if they contained demographics, circumstances of the bite, prehospital care, and outcome.
We excluded incidence-only data, i.e., reports giving only the date, location, and counts of envenomings, deaths, bites without envenomings, and undetermined snakebites. We excluded news media-reported snakebites that occurred outside Nepal (herein, Indian snakebite patients receiving treatment from Nepalese healthcare systems across the Terai of Nepal bordering India). We excluded news media describing only STCs or antivenom shortages, venoms, antivenom production, expert opinions or recommendations on snakes and/or snakebites, the "Naagpanchami" (a snake festival celebrated by Hindus), familiarization of snake rescuers, or the involvement of traditional snakebite healers (TSHs) in snakebite care. We excluded journal articles describing snakebites, personal websites, and blogs. We also excluded Facebook posts describing snakebites without links to the original news media webpages. Further, we excluded duplicate news reports of the same cases.
Data collection and management
We amassed information such as the districts where bites occurred; the age, sex, and occupation of snakebite cases; the circumstances of the snakebites (i.e., time, month, and year of the bites, localities where the bites occurred, activity of the victims at the time of the bites, body parts bitten, and types of snakes involved); prehospital intervention (the duration to reach the STCs/hospital [DOH]) and treatment-seeking behavior (i.e., taken to TSHs and/or healthcare facilities); and outcomes (discharged with complete recovery or disability, referred to a higher center, or death, including the places where deaths occurred, such as personal homes, TSHs' homes, hospitals, or on the way to hospitals, and the length of time between the snakebite and the declaration of death [LOD]). We managed the data using MS Excel. We cross-checked patients' names and demographics (including the addresses where snakebites occurred, sex, and age) and assigned alphanumeric codes to avoid multiple entries of snakebite cases. We used https://www.hamropatro.com/date-converter to convert the Nepali dates of snakebites and news publications into the corresponding English (Gregorian) dates. To avoid duplication of snakebite incidences, we cross-checked the year and month of the reports with the names of the sources from which journalists extracted snakebite data. However, we extracted data from multiple news items to build a more comprehensive database of case reports when news articles with duplicated titles carried additional information about the cases (S1 Table).
Data analysis
We analyzed only envenomed case reports with substantial information. For the analyses of the demographics, circumstances, prehospital interventions, and outcomes of 308 snakebite envenomings, we analyzed envenomed cases with known geographic locations (i.e., village, town, municipality, and district) where the snakebite occurred, or the STCs which provided the case reports, together with sex, age, or age/age group without the sex of the patient. We included envenomed cases with known: i) geographic locations and ii) sex (when we found information on everything else including sex but not age, we included the case in the analyses) and/or age/age group without sex (when we found information on everything else including age or age group but not sex, we included the case in the analyses). However, when we found information on everything else but neither sex nor age/age group, we excluded the case from the analyses. Except for one envenomed case reported only as from the Terai of Nepal, the remaining 307 cases were provided with addresses, for which we generated the latitude, longitude, and elevation of the localities where the envenomings occurred using Google Earth Pro. We grouped victims aged 1-17 years (y) into children, those aged 18-40 y into adults, and those aged 41 y and above into elders. We grouped cases described as "child (i.e., Balak in Nepali)" or "young or youth (i.e., Yuwa in Nepali)" without a specific age into children and adults, respectively. Further, to examine the occurrence and trend of envenoming by sex and by age intervals of 10 y, we developed a line graph.
To analyze the circumstances of envenoming, we categorized the place of the bites as either in or out of human habitations (i.e., buildings including their yards/backyards, barns, and sheds). Further, we defined victim activity based on the description of what they were doing when actually bitten: snakebites occurred during the victim's inactive status (such as sleeping or resting) or active status (e.g., walking or playing, among other activities). We also analyzed the bitten body parts under the reported circumstances. To interpret the reported time and seasonal patterns of snakebite, we grouped the bite times as early morning (03:00-04:59 h), morning (05:00-09:59 h), day (10:00-16:59 h), evening (17:00-19:59 h), and night (20:00-02:59 h) bites, and the bite seasons as pre-monsoon, monsoon, and winter bites. Whenever the period was mentioned without the specific time of the bite, we placed the case into the respective category as such. For reports without a defined month of the bite, we considered the month of news publication as the month of the actual snakebite, because news media publish snakebite events quickly to draw their readers' attention. We then analysed the risk periods and months/seasons of snakebites using frequency distributions to understand the temporal and seasonal influence on snakebites.
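The grouping rules for age and time of day described in this and the preceding paragraph can be expressed compactly. The sketch below is illustrative only; the function and variable names are ours, not from the original dataset:

```r
# Age classes: 1-17 y children, 18-40 y adults, 41 y and above elders
age_group <- function(age_y) {
  cut(age_y, breaks = c(0, 17, 40, Inf),
      labels = c("children", "adults", "elders"), include.lowest = TRUE)
}

# Bite-time periods as defined above; "night" wraps around midnight
bite_period <- function(hour) {
  h <- hour %% 24
  ifelse(h >= 3 & h < 5,   "early morning",
  ifelse(h >= 5 & h < 10,  "morning",
  ifelse(h >= 10 & h < 17, "day",
  ifelse(h >= 17 & h < 20, "evening", "night"))))
}

age_group(c(0.75, 9, 27, 55))           # children, children, adults, elders
bite_period(c(3.5, 8, 13, 18, 23, 1))   # one period label per bite time
```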
Whenever a newspaper reported the type of snake responsible for the bite, we expected journalists to have correctly identified the snake, based on photographs of the responsible snakes taken by victims or their family members, or on dead snakes brought to healthcare facilities, in consultation with snake experts. Based on available guidelines to identify snakes [22,39], we assigned English, genus, or species names to the given vernacular names of snakes involved in envenoming. We divided these snakes into elapids [i.e., Bungarus spp. (kraits), Naja spp. (cobras)], viperids [i.e., Daboia russelii (Russell's Viper), Trimeresurus spp. (Green Pitvipers), Ovophis monticola (Mountain Pitviper)], and unidentified venomous snakes.
We analyzed prehospital interventions by assessing the time to hospital arrival [DOH, converted into hours (h)] and whether victims consulted TSHs and/or modern healthcare institutions for snakebite treatment. Outcomes were measured in terms of the number of envenomed cases who survived, died, or were under treatment. Further, we measured the length of time between the snakebite and the declaration of death [LOD].
Continuous data (herein, the altitudinal ranges where envenomings occurred, age, DOH, and LOD) for the envenomed cases with substantial information were not normally distributed. We used box plots, the Grubbs test, and interquartile ranges (IQR) to identify any outliers in the dataset, and histograms and the Shapiro-Wilk test to examine the normality of the data distribution. Hence, we presented continuous data as medians, IQRs, and ranges. We measured the 95% confidence intervals (CI) of the elevation of areas where envenoming occurred and of the age, DOH, and LOD of patients using the two-tailed Wilcoxon signed rank test with continuity correction, in which the median (instead of the mean) was used; this CI estimates the location of the expected population median. We analyzed categorical data as proportions with percentages of the total eligible envenomed cases (for example, 308), not of the article numbers (i.e., 533, 296, and 199, Fig 2), and reported these percentages next to each mentioned number of patients. We defined the absence of values for the aforementioned variables in news articles as "Not available" or "NA". These values were missing completely at random and unrelated to any of the variables involved in our analysis. Therefore, we adopted a complete case analysis [40] and excluded all missing values (i.e., NA) from the variables involved in the inferential statistical analysis and figures; this reduced the number of cases below the total of 308 in some percentage analyses.
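A minimal R sketch of this descriptive workflow on toy data (the actual variables were age, elevation, DOH, and LOD):

```r
x <- c(19, 8, 45, 2, 60, 17, 33, 5, 71, 12)   # toy values, e.g., ages in years

shapiro.test(x)                  # Shapiro-Wilk check of normality
boxplot.stats(x)$out             # quick IQR-based outlier screen
median(x)                        # reported instead of the mean
quantile(x, c(0.25, 0.75))       # IQR bounds
range(x)

# Two-tailed Wilcoxon signed rank test with continuity correction;
# conf.int = TRUE returns a CI for the (pseudo)median
# (here used for its CI rather than for the test against mu = 0)
wilcox.test(x, conf.int = TRUE, correct = TRUE)
```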
We compared snakebites across age classes, sexes, occupations, snakebite locations, activities performed by victims when bitten, bitten body parts, temporal patterns (time, months, and seasons) of envenomings, and types of snakes involved using Pearson's Chi-squared test for goodness of fit (PCTGF). To compare categorical variables, we used Fisher's exact test (FET) when at least one cell of the contingency table had an expected frequency below five (variables: activeness and occupations of victims, activeness and bitten body parts, activeness and time of snakebites, human habitations and time of envenomings, etc.) and Pearson's Chi-square test for independence (PCTI) when the expected frequency in each cell was at least 5 (variables: human habitations and sex, activeness of victims and age groups, activeness and localities of snakebites, etc.).
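The decision rule between FET and PCTI described above can be made explicit in R; the contingency table below uses made-up counts, not the study data:

```r
# Toy 2x2 table: sex x habitation (illustrative counts only)
tab <- matrix(c(40, 25, 30, 35), nrow = 2,
              dimnames = list(sex = c("female", "male"),
                              habitation = c("in", "out")))

expected <- chisq.test(tab)$expected    # expected counts under independence
if (any(expected < 5)) fisher.test(tab) else chisq.test(tab)   # FET vs PCTI

# Goodness of fit (PCTGF) against a uniform expectation,
# e.g., envenomings across the three seasons (illustrative counts)
chisq.test(c(premonsoon = 30, monsoon = 250, winter = 28))
```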
We performed all analyses with the R statistical program (R version 4.1.2 (2021-11-01), The R Foundation for Statistical Computing), Microsoft Excel, and Google Earth Pro. All statistical tests were performed at the 5% significance level. We reported p-values less than 0.001 as <0.001.
Ethics statement
No ethical approval was sought because we analyzed publicly available data. We used the names of patients and known geographic locations only to cross-check and avoid duplications. The known geographic locations where the envenomings occurred were used to generate approximate coordinates of localities and to identify any urban areas where envenomings were commonly reported. To support the design of a more sophisticated survey that also includes snakebite-affected urban areas, we reported some locality information down to the level of towns. However, considering ethics, we analyzed the data anonymously and without mentioning the last level of administrative units.
i) Demographics
The median age of the 249 patients with known age was 19 y [CI: 21.5-27.5 y, range: 0.75-83 y, IQR: 10-40 y, Fig 4]. Among the 59 cases without known age, 23 had a known age group (20/23 with known sex; 3/23 children reported without sex) and 36 had no known age group but a known sex, in addition to other substantial information. Among the 249 cases with known age, most victims were children (n = 115, 46%), with a median age of 9 y ranging from 0.75 to 17 y (CI: 7.8-9.5 y, IQR: 4-13 y), followed by adults (n = 74, 30%; median: 27 y) and elders (n = 60, 24%) (Table 2). However, the distribution of envenomings according to age groups was independent of both the activeness of the victims (Table 3) and the habitations used by them at the time of the bites (Table 4).
Sex was known in 304 cases. Among the four cases without sex, one had a known age and three an age group only. Although slightly more bite victims were females (n = 160, 53%) than males, envenomings in males and females were not significantly different (p-value: 0.359, Table 2). Among the 248 cases with known sex and age, more victims were females (n = 133, 54%, median age 18 y, CI: 18.5-25.5 y, IQR: 9-35 y, range: 0.75-78 y) than males (n = 115, 46%, median: 23 y, CI: 23-33 y, IQR: 11-50 y, range: 1-83 y). However, there was no association between the sex and age class of patients [p-value: 0.076 (PCTI)]. Further, the occurrence of envenomings by sex was independent of the activeness of victims (p-value: 0.591, Table 3) but dependent on the habitations used by victims at the time of the bites (p-value: 0.019, Table 4). Among the 195 cases in which both the sex of envenomed patients and the type of human habitation where the snakebite occurred were known, envenomings most often involved females in residential areas (n = 84, 43%, Table 4). Among known occupations, although students (n = 11, 44%) and farmers and workers (n = 6, 24%) were more frequently envenomed than people of other occupations, envenomings were not associated with occupation (p-value: 0.089, Table 2). The occupational occurrence of envenomings was independent of the victims' activeness (p-value: 0.381, Table 3) and of their use of places in and out of human habitations (p-value: 0.922, Table 4).
c) Bitten body parts
Among the 47 cases in which both the body parts involved and the activeness of victims at the time of the bites were known, bites were most often to the upper extremities (p-value: 0.017, Table 2). Which body parts were bitten depended on the activeness of victims (p-value: 0.017, Table 3): among inactive victims, the head and neck were bitten, whereas the extremities were most often bitten while victims were active. However, there was no association between the body parts involved and the types of habitations used by victims at the time of the bites (p-value = 0.378, Table 4).
d) Temporal patterns of snakebite envenomings
A total of 267 out of 308 (87%) envenomings occurred during the rainy months of the year, mainly during the night (p-value: <0.001 each, Table 2). Although envenomings occurred in all periods of the day throughout the year, they peaked during July, August, and June, and at night (Table 2). The distribution of envenomings across time periods was dependent on the activeness of victims and the types of habitations they used (p-value: <0.001 each, Tables 3 and 4). These periods, however, were not associated with the months [p-value: 0.608 (FET)] or seasons [p-value: 0.669 (FET)] of snakebite envenomings.
Discussion
This is the first study to analyze news media-reported Nepalese snakebite envenomings with substantial information. Dependency on traditional snakebite healers, the long distance between snakebite localities and STCs, and snakebites occurring at night were the major barriers to accessing healthcare facilities for people bitten by snakes in Nepal. These barriers are also common in India and Bangladesh [41]. Hence, this study's findings have significant policy implications in Nepal and in other countries where the socio-economic, cultural, and geo-climatic context of envenoming and its consequences are similar.
i) Demographics
The highly productive population of Nepalese communities was largely affected by snakebite envenomings (95% CI: 21.5-27.5 y), with a direct impact on the national economy. Our finding that children were affected by snakebite envenomings more than other age classes, mostly in and around houses, is consistent with the carelessness of minors and with the surroundings in and around Nepalese houses that snakes prefer [10,21]. A multicluster survey carried out in the Terai of Nepal [6,42] also reported children to be at risk of venomous snakebites. We found both sexes to be equally vulnerable to snakebite envenomings (Table 2). However, female dominance was reported in recently conducted studies in different parts of Nepal [a survey carried out in the Terai [6,42] and a study of snakebite envenomings admitted to Bheri Hospital, whose service areas span the western Terai to the lower-middle hills [11]]. Older studies (a study of snakebite envenomings admitted to Bharatpur Hospital, whose service areas span the south-central Terai to the lower-middle hills [43], and a subsequent study of confirmed Common Krait and Russell's Viper envenomings from western, central, and eastern Nepal [10]) showed male dominance. Interestingly, we found females to be most affected by snakebite envenomings when these episodes occurred in residential areas (p-value: 0.019, Table 4). The prevalent patriarchal culture, which concentrates female activities at home, and the trend of more male than female laborers out-migrating [11,44] might explain the slight female preponderance of snakebite envenomings in and around houses. This suggests that females engaged in residential areas are at particular risk of venomous snakebites, reflecting the influence of patriarchal communities in Nepal, where most work in residential areas is allocated to females.
Although students and farmers were frequently reported as the most envenomed groups in different parts of Nepal [a Ph.D. dissertation covering eastern, central, and some parts of western Nepal [10] and a subsequent study at a referral hospital in mid-western Nepal [11] reported envenomed students (27-35%) and farmers (24-43%)], we found people of all occupations to be equally vulnerable to venomous snakebites (Table 2). Our analyses showed that people engaged in non-agricultural occupations (Table 2) were also affected by snakebite envenoming. Journalists largely omitted the occupations of snakebite victims, although this information is essential for understanding the impact of snakebites on occupational groups. Educational interventions for prehospital care and pragmatic prevention of snakebites, particularly in residential areas, should therefore include children and consider socio-cultural contexts to minimize the risks of envenomings and associated deaths.
ii) Circumstances of snakebite envenoming
We identified that envenomings in rural and non-rural (i.e., urban and semi-urban) areas of Nepal were not significantly different (p-value: 0.068, Table 2). Further, the remoteness of envenomings was independent of victims' activeness (p-value: 0.236, Table 3) and of their use of places in and out of human habitations (p-value: 0.877, Table 4). The metropolitan cities, sub-metropolitan cities, and municipalities of Nepal include both urban and semi-urban areas (and sometimes extremely rural areas, too). Therefore, most of these administrative bodies located in the tropical Terai region have agrarian communities and possess a greater diversity and abundance of medically relevant snakes [10,22]. A Ph.D. dissertation also reported several confirmed Common Krait and Russell's Viper bites from urban, semi-urban, and rural areas of Nepal [10]. Furthermore, our analyses indicated an 'outbreak of snakebite envenomings' in urban areas of three districts. Translating a clinical consensus from an international panel of toxicologists, who recommended that as few as three toxic cases appearing within 72 hours in the same city be considered an 'outbreak of toxicity' [45], we believe that outbreaks of snakebite envenomings exist in several urban localities of Nepal due to intensive activities of snakes and humans, particularly during the monsoon in the dark hours of the day. Similar to our findings, Igawe et al. [46] reported a snakebite outbreak on the 9th, 19th, and 25th of May 2016 in communities (wards) of the Donga Local Government Area of Nigeria. Therefore, our findings highlight the severity of envenomings and the urgent need to empower healthcare systems. These outbreaks, even in urban areas, suggest that large-scale community-based surveys [6] should also include cities and towns, because urban perianthropic ecosystems, particularly in the high snakebite-prone tropical Terai districts of Nepal, might be associated with human-venomous snake conflicts. However, a recently conducted multicluster random survey of snakebites in the Terai of Nepal excluded urban communities [6]. Our findings therefore suggest the need to design a large-scale, nationally representative epidemiological survey that includes snakebite-prone rural as well as non-rural areas of the country.
Our district-wise incidence of envenomings (Table 1) can support Nepalese medical authorities in estimating antivenom and healthcare personnel requirements and their appropriate distribution in snakebite-affected areas, because the snakebite risk area extends further in some parts of Nepal (Fig 5). Recent habitat suitability modeling for medically relevant snakes of Iran [47] showed a similar extension of snakebite risk into its mountains. The records of envenomings and deaths in the higher hills and mountain ranges of Nepal (Fig 5) suggest that venomous snakes are spreading beyond the Terai, probably due to increased road transport in the Chure and Mahabharat ranges (Fig 1); snakes could be transferred along with goods carried in trucks. Further, global warming might support their flourishing in those regions [48]. For effective snakebite management in the districts representing the higher hills and mountains (Table 1), an immediate response from Nepal Government authorities is needed.
Human-inhabited areas, particularly in the tropical regions of Nepal, were at the highest risk of snakebite envenomings. These envenomings mostly occurred while people slept (Tables 2 and 3). Similar observations were reported in a hospital-based study of envenomated cases from 11 districts [11] representing far- and mid-western Nepal and in a community survey in 23 districts [6] across the Terai region. Although envenomings under victims' active and passive conditions were not significantly different (p-value: 0.823, Table 2), people should be wary at bedtime in residential areas, particularly in the tropical regions of Nepal, because we found the maximum number of envenomings to occur while sleeping (p-value: <0.001, Table 3).
Unlike a report from a similar study in the USA [17], we found that intentional human-snake interactions rarely caused envenomings. However, in western Nepal, some harmful religio-cultural practices, such as sleeping or resting in a Chhaupadi hut (a special house used by women and girls while menstruating in the Chhaupadi culture) [49], and reverence for venomous snakes residing on the premises [21], may increase the risk of snakebites. These traditions continue to cause snakebite envenomings and deaths in remote areas of Nepal.
Unlike the findings of a cross-sectional survey in villages of 23 districts of the Terai region of Nepal [6], such envenomings occurred mainly while victims were asleep, as in the envenomed cases from a subsequent study performed in far- and mid-western Nepal [11]. Although recall bias in community-based surveys might cause this variation, our findings correspond to the distribution and diversity of the nocturnal, medically relevant snake species of Nepal [10,11,25,27].
Similar to a contemporary hospital-based study in western Nepal [11], we found that the next highest number of envenomings occurred under natural conditions during agricultural activities, by stepping on or placing the hands or legs near an unseen snake. Envenoming also occurred during intentional human-snake interactions (i.e., during attempts to keep snakes away from populated areas); interestingly, a similar case of taking revenge on a venomous snake was exhibited by a person inhabiting Ajnawa village of Mahisagar District in Gujarat, India (https://www.onlinekhabar.com/2019/07/782014). Overall, the risky places and activities vary depending on the snake species involved [50]; the noticeably greater number of envenomings during sleeping and during agricultural and labor-intensive activities than during other activities of victims (p-value: <0.001, FET) also suggests the need for educational interventions ensuring safety from snakebites while sleeping and during agricultural work in all snakebite-affected districts of Nepal.
Unlike previous studies [6,10], we found the upper extremities to be bitten most often (Table 2), particularly while victims were active (Table 3). A study of envenomated cases representing far- and mid-western Nepal also observed the majority of cases (i.e., 40%, n = 57) being bitten on the upper extremities [11]. In contrast, the lower extremities of victims were exposed when bitten by kraits and Russell's Viper, which are distributed along the Terai of Nepal [10]. Overall, the bitten upper and lower extremities suggest that the extremities disturbed snakes, which in turn caused snakebites. Further, we found 58% of envenomings occurring on the extremities while victims were active, which could expose these body parts to venomous snakes.
The highest risks of envenoming occur in July and at night in Nepal (Table 2). Similar risks were also reported in other fragmentary studies in this country [10,13]. A study of snakebite envenomings based at Bheri Hospital in western Nepal reported the peak of envenomings in September, but also at night [11]. This suggests that nights from July to September carry a noticeably high risk of snakebite envenomings, mainly while people are in passive conditions (Table 3) and in human-inhabited areas (Table 4). Except during daytime hours, the envenomings reported in other periods occurred predominantly in human-inhabited areas (Table 4). Similarly, envenomings at night happened while people were inactive (Table 3). Our finding that most envenomings occurred during the dark hours of monsoonal months while people were inactive and using residential areas corresponds to the increased activity of snakes for food and breeding and of people for crop cultivation and harvest.
The majority of cases envenomed by cobras (p-value: <0.001, Table 2) corresponds to the high frequency of cobras known from a study of medically relevant snakes based at nine STCs in Nepal [10] and from a snake-photo album displayed to surveyed participants in the Terai of Nepal [6]. However, fragmentary studies of snakebite envenoming carried out at Bharatpur Hospital in the south-central [43] and Bheri Hospital in the south-western Terai of Nepal [11] reported kraits (Bungarus spp.) to be the most common cause of envenoming in the service areas of the respective hospitals, whereas cobra bites are more common than krait bites in south-eastern Nepal [25]. Bites by these snakes were independent of the activeness of victims and of places in and out of human habitations (Tables 3 and 4). There are proven envenomings due to pitviper bites in the hilly districts of Nepal [a Mountain Pitviper (O. monticola) envenoming in Kathmandu [30], a White-Lipped Green Pit Viper (T. albolabris) envenoming in Gorkha [31], and pitviper (species not mentioned) envenomings in Achham [51]]. Further, the envenoming due to the Greater Black Krait (B. niger) at 1515 m asl in Ilam District [29], as well as our finding of several fatalities in the hilly districts (Fig 5), suggest an urgent need for a multi-center, large-scale study of preserved snakes brought in by bitten patients or their visitors, to determine how the composition of species involved in envenoming varies altitudinally and across eastern, central, and western Nepal. Additional studies on the geographical distribution patterns and movement behavior of these medically highly relevant snakes [10,22] can give more detailed insight into their involvement in snakebites.
Media reports of snakebite envenoming are increasing in Nepal. The biases in snakebite envenomings reported by journalists are likely due to unseen snakes. Similar to news media-reported snakebite analyses in the USA [17], our finding of 228 envenomings without the type of snake identified (Fig 2) indicates that envenoming by unseen snakes appears to be the norm in Nepal. Therefore, evidence-based information on snakes and their ecology is essential to understand the precise distribution patterns and movement of the medically relevant snake species involved in envenomings. Snake population studies may also indicate whether snakebite envenomings are correlated with the populations of medically relevant snake species. Because of snakes' ecological roles and medicinal values [21], eradication of snakes is impractical. Therefore, pragmatic educational interventions targeted at snakebite-prone regions should address the aforementioned risk factors for snakebite envenomings.
iii) Consequences of prehospital intervention (DOH) and treatment-seeking behavior
Delayed hospital admission, with a median of 3.3 h, is still a challenge for snakebite management in Nepal. The consultation of traditional snakebite healers (TSHs) by many snakebite victims, instead of seeking antivenom therapy directly (Table 5), delayed hospital admission and kept victims at risk of death. TSHs typically apply tourniquets, incise the bitten body part and suck out blood, chant, and use herbal and non-herbal remedies [13]. For appropriate care, the World Health Organization recommends a maximum travel time of 1 h to reach healthcare facilities supplied with antivenom and ventilators [52], although this duration varies greatly depending on the venom effects of the snake species and the mode of transport. A hospital-based study of envenomings in south-western Nepal reported that 0.3 to 95 h (median 5 h, IQR: 2-14 h) were required to reach a qualified healthcare facility [11]. Therefore, Nepal requires mass awareness campaigns to increase the timely admission of envenomed patients to the STC nearest to where the snakebite occurred and to minimize fatalities due to delayed antivenom treatment.
Dependency on traditional healers is still commonplace in the Terai and the hills of Nepal. A recent study of snakebite traditional healers in eight districts of Nepal [13] identified the ingrained faith of people in traditional healing, the unaffordability of modern care, and the wish for early treatment of snakebites as the leading causes of this dependency. Similar consultations of TSHs by snakebite-envenomed patients were reported in a Ph.D. dissertation carried out in Nepal (i.e., 15%) [10]. Seeking treatment for snakebites from TSHs exposes patients to useless and/or non-recommended interventions. Therefore, consulting TSHs is a major cause of delay in receiving definitive care for snakebite in Nepal [13,37,38,43,53]. However, consulting TSHs for snakebite treatment is less common than in the past, when hospitals were further away and roads and transport networks were inadequate. This might explain the lack of association between consulting TSHs and outcomes (Table 5), although seeking treatment at a healthcare facility was associated with victims' consultations of TSHs (p-value = 0.001, PCTI). Compared to past reports, the treatment-seeking behavior of people bitten by snakes in Nepal has improved. Nevertheless, reliance on TSHs for snakebite treatment remains a major challenge for snakebite management in this country.
This dependency also exists in other countries in Asia [Vietnam [54], Myanmar [55], India [56], Sri Lanka [50]] and Africa [Ghana [57], Kenya [58,59]], where it contributes to increasing snakebite fatality rates. Therefore, mass awareness campaigns and training on proper prehospital care are essential to ensure immediate access to a snakebite treatment centre in areas where people mostly rely on TSHs for snakebite care. This approach can diminish the dependency on TSHs, help to improve the health of rural populations [37], and play a vital role in reducing the number of fatalities resulting from venomous snakebites. Further, additional causes of this dependency [13] should be identified and addressed to improve the treatment-seeking behavior of people inhabiting snakebite-prone regions.
iv) Outcomes
The geographical locations of reported deaths (Fig 5) identify, for the first time, snakebite hotspots from the Terai to the Mahabharat ranges of Nepal. These deaths occurred more frequently in tropical regions than in subtropical and temperate regions [p-value: <0.001 (PCTGF)]. The reports of snakebite deaths from some mountain and hilly districts, however, raise more serious concerns for managing snakebites nationwide. A significantly large number of deaths (55%, n = 106) occurred before victims reached the healthcare system [p-value: <0.001 (PCTGF)], i.e., on the way to a hospital, at home, at a traditional healer's home, or in Chhaupadi huts. Our finding that outcomes depended on victims' seeking of modern treatment (p-value = 0.002, Table 5) is consistent with the predominance of out-of-hospital deaths. Seeking traditional healing prior to antivenom therapy, referral of cases to distant healthcare centers, inadequate roads and transport networks, and people's ignorance of snakebite envenoming and of the STCs near their activity areas delayed the timely receipt of antivenom therapy, resulting in deaths. Similar consequences were reported in several fragmentary hospital-based studies of envenoming (south-central [43] and western Nepal [11,60]) and in a community-based study in south-eastern Nepal [38]. The majority of out-of-hospital deaths are reported in India, too [61]. However, the large share of deaths at STCs (n = 92, 47%) indicates an improvement in the treatment-seeking behavior of people bitten by snakes in Nepal. Nearly two decades ago in south-eastern Nepal, 40% of deaths occurred in the village, 40% on the way to hospital, and only 20% in hospital [38]. A recent hospital-based study carried out in south-western Nepal showed 60% prehospital deaths and 40% deaths at the hospital during treatment [11]. Although treatment-seeking behavior in Nepal has improved, adoption of dual care systems (Table 5) can be detrimental; such a dilemma is prevalent in Nepal [13] and elsewhere [62]. To measure the declining influence of TSHs on snakebite care, community- and hospital-based snakebites need to be analyzed together [56], as this depicts the whole gamut of the burden of snakebite envenomings, prehospital care practices, and their consequences.
An envenomed patient referred from Taplejung Hospital reached the snakebite treatment centre at Charali of Jhapa District on the third day after the snakebite. By that time, his condition was serious, and he was referred to the BP Koirala Institute of Health Sciences, Dharan, Sunsari District, where doctors managed to save him by amputating the bitten finger (Fig 2). Referred cases often died on the way because the higher healthcare centers were far from the original centers. Therefore, if cases are referred to a higher center located far from the existing center, all necessary medications and ventilation support should be arranged adequately. The frequency of medical attention after snakebite (Table 5) suggests a need for multifaceted community health education programs [63,64] to expedite patients' timely access to antivenom therapy, which increases the share of survival/recovery and decreases fatalities. Children's carelessness and inadequate knowledge about snakes and snakebites [65] expose them to human-snake conflicts [21], resulting in more envenomings and subsequent deaths. To understand the greater mortality of children than of adults and elders, a more sophisticated study comparing the effects of coagulotoxic and neurotoxic snake venoms in envenomed minors and adults is needed [66]. Considering the large number of minors' deaths and the greater proportion of injected venom per kg of body weight, pediatric intensive care units should be established in STCs at an accessible distance from the origin of snakebites to ensure timely treatment.
As in India [61], the majority of deaths occurred during the rainy season (Table 5) and in the tropical Terai of Nepal. That more snakebite envenomings and associated deaths occurred in the tropical region than in other regions could be associated with the greater diversity of highly venomous, medically relevant elapid snake species (Tables 2 and 5, Fig 5) and their population density in the tropical region of Nepal. Although outcomes of envenomings were independent of the remoteness of snakebites (Table 5), the greater proportion of snakebite envenomings and associated deaths in urban and semi-urban areas indicates that populations inhabiting non-rural areas are at risk of snakebite envenomings and deaths, too. However, a more sophisticated study is needed to confirm these associations. Envenoming of sleeping people delays treatment compared to those who are alert, because krait bites are often painless, the snakes are not seen, and victims notice the effects of venom late; the delay in proper treatment then causes fatalities. Therefore, apparent envenoming of sleeping people should be taken seriously, and victims should be carried to the nearest STC as soon as possible.
The majority of deaths (i.e., 43%) were caused by Russell's Viper (Daboia russelii) in India [61], whereas cobras (Naja spp.) followed by kraits (Bungarus spp.) were responsible for the majority of deaths in Nepal. Although the snake types involved were not associated with outcomes, the larger number of deaths among victims who were passive at the time of the snakebite (Table 5) suggests envenoming by nocturnal elapids (herein, kraits) intruding into residential areas in search of prey animals [27]. Therefore, knowing the movement ecology and food habits of these snakes in snakebite-prone zones (Fig 5) is essential to evaluate any association of snake ecology with patterns of envenoming and with the behaviour of people engaged in different activities at the time of snakebites. This helps to predict high-risk factors for snakebite envenomings and to develop pragmatic prevention strategies against cobras (Naja naja, Naja spp.), kraits (Bungarus caeruleus, B. fasciatus, Bungarus spp.), the true viper (Daboia russelii), and pitvipers (Ovophis monticola and Trimeresurus spp.), which often cause envenomings and deaths in Nepal.
Because the proportions of deaths depend on multiple other causes (Table 5), the duration between the time of snakebite and the declaration of death of envenomated patients was highly variable, as was the length of time between snakebite and the first noticeable symptoms [in an elapid snakebite from Bharatpur Metropolitan City of Nepal [43], ptosis developed 26 h after the snakebite]. However, the public misconception of immediate death after snakebite [13,65,67] can be countered by referring to our finding of a noticeably long median interval (i.e., 6 h) between snakebite and declaration of death; this helps to reassure envenomed victims. Notably, studies of the situations and activities of snakes in ecosystems where community people frequently interact with them, public perceptions of snakes [21,67], and people's knowledge of the availability of antivenom treatment facilities within an accessible distance of the origin of snakebites should be integrated to develop evidence-based control measures, minimize snakebite deaths, and meet the World Health Organization's goal of halving snakebite-related deaths and disabilities in South-East Asia by 2030 [68].
v) Limitations
Our search could collect only a subset of all envenomed cases in Nepal. Follow-up reports of the envenomed cases were not available, so we were unable to report the outcomes of cases still under treatment. Many snakebites that occurred in far-flung villages might not yet have been reported in the news media included in this study. Similarly, we might have missed some reports because of the search keys and engines that we used retrospectively. Many factors not available in this study might confound effective treatment and outcomes. Searching news media data prospectively and using additional search strategies could increase the number of case reports. We primarily accessed newspapers; searching snakebite reports broadcast on television and radio may further increase the number of cases. Further, encouraging journalists to publish detailed snakebite case reports, including clearly defined age, sex, occupation, means of transport used, dosage of antivenom administered, length of time between snakebite and hospital access, discharge, death, referral (if any) to a higher healthcare center, and authentically identified venomous snake types, would enable more powerful statistical analyses by prospective researchers.
Conclusions
Snakebite envenomings mostly affected children, and males and females had an equal probability of being envenomed in Nepal. These envenomings occurred most frequently on the upper extremities and were due to bites of cobras followed by kraits. The risk of envenoming is highest in residential areas at night and in the monsoonal months (July, followed by August and June), particularly while people sleep. The noticeable number of fatalities among victims envenomed while sleeping suggests an urgent need to formulate pragmatic, research-based prevention and prehospital care strategies. Properly designed community-based educational interventions, integrating governmental institutions (e.g., hospitals, schools), epidemiologists, clinicians, toxinologists, serpentologists, and community-based non-governmental organizations, are essential to spread measures for snakebite prevention and prehospital care. Such measures include improving people's sleeping practices (encouraging sleeping on a cot-bed under a mosquito net, examining the bed-sheet and pillow before each bedtime, and discouraging sleeping on a floor-bed) and improving housing with screened doors and windows, which also keep snake prey animals out of houses.
Dependency on TSHs for snakebite treatment is still a major challenge for snakebite management in this country. Therefore, mass education of people about venom effects (i.e., the evolution of symptoms) and their consequences (death and morbidity) can motivate people to seek appropriate treatment in time. To further attract snakebite patients to modern care, the Nepal Government should establish health insurance policies covering travel and treatment costs for impoverished snakebite victims. Carrying people to hospital in a timely manner after a snakebite mitigates the challenges of snakebite management, increasing survival and decreasing fatalities. The several snakebite deaths known from the hills and mountains suggest an urgent need for snakebite treatment facilities there. Therefore, prevention and treatment strategies should cover the entire Terai region and some mountain and hilly districts of Nepal to reduce snakebite fatalities substantially. Further, the Nepal Government should update the envenoming risk map to ensure effective and efficient management of snakebites nationwide. Our findings can be used to design a more representative epidemiological study to precisely extrapolate the snakebite burden by considering the epidemiological situation in snakebite-prone, tropical urban and semi-urban areas of Nepal's Terai. Further, to reduce snakebite-associated fatalities and morbidities, our epidemiological mapping of snakebite envenomings can be used to protect populations at risk of envenoming and death, particularly children, and to carry out more effective interventions nationwide and in other regions with similar geo-climatic conditions and socio-economic status, particularly in countries where the government does not register cases of snakebite envenoming.
Fig 1.
Fig 1. Divisions of Nepal's topography and geographic locations of the major sources (i.e., the "STCs", which stands for Snakebite Treatment Centers; hospitals where antivenom supply was unclear; and Nepal Police Offices, S2 Table) for the news media-reported snakebites included in this study. Those news reports (S1 Table) covered snakebites from 53 districts, displayed in numerals 1 through 53 from eastern to far-western Nepal below:
Fig 5.
Fig 5. Geographic locations of the news media-reported snakebite envenomings and deaths with substantial information [locations of envenomed cases are shown as black open triangles, and localities from where deaths were reported are indicated with red "X" symbols]. Details on the sources of envenomings and deaths are given in Table 1, S3 Table, and S1 Data. The coordinates of the mentioned locations are in the data file (i.e., S1 Data). [The first author of this study created this map in ArcGIS 10.1. The basemap shapefile onto which the data were plotted was taken from an openly available source (https://gadm.org/data.html)]. https://doi.org/10.1371/journal.pntd.0011572.g005
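A map of this kind could also be approximated with open-source tools rather than ArcGIS. The sketch below assumes a GADM boundary shapefile and a CSV of coordinates with hypothetical column names, and plots envenomings as open triangles and deaths as red "X" symbols, as described in the caption; it is a sketch of the described workflow, not the authors' actual ArcGIS project.

```python
# Rough open-source re-creation of the Fig 5 workflow; file and column names
# are assumptions for illustration.
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd

nepal = gpd.read_file("gadm41_NPL_0.shp")        # assumed GADM country outline
pts = pd.read_csv("s1_data_coordinates.csv")     # assumed lon, lat, outcome columns

fig, ax = plt.subplots(figsize=(10, 5))
nepal.boundary.plot(ax=ax, color="gray", linewidth=0.5)

env = pts[pts["outcome"] == "envenoming"]
died = pts[pts["outcome"] == "death"]
ax.scatter(env["lon"], env["lat"], marker="^", facecolors="none",
           edgecolors="black", label="Envenoming")
ax.scatter(died["lon"], died["lat"], marker="x", color="red", label="Death")
ax.legend()
plt.savefig("fig5_snakebite_map.png", dpi=300)
```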
Table 4.
(Continued) Footnotes: Ω = 'In' for human residential areas (e.g., yard, indoor) and 'Out' for places outside human residential areas (e.g., crop fields, roads); % = percent; ¥ = Pearson's chi-squared test for goodness of fit; • = workers included 1 labor-worker and 1 worker without a defined type of work; •• = employment included 2 healthcare professionals (1 midwife and 1 healthcare volunteer), 2 armed forces (1 army and 1 police), and 2 teachers; ••• = others included 1 social worker and 1 traditional healer; k = municipalities and metropolitan cities; † = village councils, aka rural municipalities. https://doi.org/10.1371/journal.pntd.0011572.t005
|
v3-fos-license
|
2023-02-23T15:11:26.351Z
|
2021-10-21T00:00:00.000
|
257091407
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41885-021-00100-8.pdf",
"pdf_hash": "0cf1d5ff760fd933005b321b2a7801c300be3867",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45489",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Economics"
],
"sha1": "0cf1d5ff760fd933005b321b2a7801c300be3867",
"year": 2021
}
|
pes2o/s2orc
|
Extreme Weather Events and Internal Migration: Evidence from Mongolia
This article examines the effects of extreme weather events on internal migration in Mongolia. Our focus is on dzuds, extremely harsh winters characterized by very cold temperature, snowfall anomalies, and/or storms causing very high livestock mortality. We exploit exogenous variation in the intensity of extreme winter events across time and space to identify their causal impacts on permanent domestic migration. Our database is a time series of migration and population data at provincial and district level from official population registries, spanning the 1992-2018 period. Results obtained with a two-way fixed effects panel estimator show that extreme winter events cause significant and sizeable permanent out-migration from affected provinces for up to two years after an event. These effects are confirmed when considering net change rates in the overall population at the district level. The occurrence of extreme winter events is also a strong predictor for declines in the local population of pastoralist households, the socio-economic group most affected by those events. This suggests that the abandonment of pastoralist livelihoods is an important channel through which climate affects within-country migration.
Introduction
Extreme weather events, like droughts, floods, storms, and hot spells, cause considerable economic losses. Rural farm households in developing countries suffer more than others from such weather shocks due to their geographical exposure and their dependency on rain-fed agriculture (Harrington et al. 2018). In the absence of effective post-shock coping and long-term adaptation strategies, exposed households may resort to migration when climate-sensitive livelihoods are threatened by extreme weather events (Jha et al. 2018). The sudden occurrence of extreme weather events may lead to migration choices that are forced, rather than the result of a carefully planned process (Berlemann and Steinhardt 2017). If changing climatic conditions indeed matter for population mobility, the number of climate migrants is likely to accelerate in the years to come (Hunter and Nawrotzki 2016), as extreme weather events are predicted to increase both in their frequency and their intensity with global warming (Pachauri et al. 2014;WMO 2020). This outlook has stimulated an increased interest among policy stakeholders and the academic community alike in what role changing climatic conditions play as drivers for internal and international migration (Hoffmann et al. 2020).
The empirical literature on the climate-migration nexus has evolved rapidly in the new millennium (Berlemann and Steinhardt 2017). While most studies identify extreme weather conditions as a relevant driver for population mobility, the empirical evidence does not provide a clear-cut picture. Different results are obtained for internal and international migration as well as for the effects of gradual climate change and sudden weather events. The effect size also varies substantially across approaches and data used (Hoffmann et al. 2020). A further source of heterogeneity in results stems from the specific weather conditions and the institutional context considered. Besides climate, migration is also shaped by cultural, geographical, institutional, and socio-economic factors at the place of origin (Grecequet et al. 2017). In order to advance the state of knowledge, studies examining the climate-migration nexus in individual countries are particularly warranted (Berlemann and Tran 2020), as climate-related internal migration flows are more pronounced than cross-border migration (Hoffmann et al. 2020). This is particularly relevant in the context of developing countries, where internal migration is often more feasible and affordable to households as compared to costly international movements (Beine and Parsons 2015).
In this paper, we investigate internal migration dynamics in Mongolia. This East Asian country is particularly exposed to extreme winter events, locally referred to as dzuds, which puts Mongolia among the countries most severely affected by natural hazards globally (CRED 2020). 1 Such extreme winter events cause mass livestock mortality by starving or freezing animals to death. Through livestock mortality, winter events destroy the income, consumption, and asset base of pastoralist households, thereby directly threatening the livelihood of large parts of the rural population that live from animal husbandry. Our analysis aims at quantifying if, and to what extent, extreme winter events drive internal migration in Mongolia. We draw on a long time series of annual in- and out-migration and population data from population registries at the provincial and district level, spanning the 1992-2018 period. Using a two-way fixed effects panel estimator, we exploit spatial and temporal variation in the intensity of extreme winter events to identify the causal effects of these events on migration and population dynamics.
The exceptionally rich data at hand allow us to expand the existing literature on internal migration and climate in four ways. First, existing macro-level studies almost exclusively proxy internal migration with urbanization rates, a rather imprecise measure that overlooks certain forms of internal migration, such as rural-to-rural migration (Hoffmann et al. 2020). In contrast, the availability of annual population registry data at the provincial and district levels allow us to study internal migration dynamics across administrative units throughout the country. Second, in existing micro-level studies that draw on population census data, a timely attribution of extreme weather events is complicated by long census intervals and potential biases stemming from self-reported migration information. Studying longer-term within-country migration dynamics using existing household panel surveys is also challenging, as panel surveys are scarce for developing countries and often only cover narrow regional settings and time periods. Against this backdrop, the yearly migration data at hand spanning almost three decades allow us to directly link the occurrence of extreme weather events with migration responses. Third, our analysis provides insights into the channels through which extreme weather events affect migration by considering net changes in the local population of pastoralist households whose livelihood is immediately affected by these events. Fourth, while existing studies on climate-induced migration focus on extreme temperatures (Hirvonen 2016;Thiede and Gray 2017), precipitation (Thiede et al. 2016), flood (Ruiz 2017), storms (Groeger and Zylberberg 2016;Koubi et al. 2016;Mahajan and Yang 2017), and drought (Dallmann and Millock 2017;Ruiz 2017), we provide evidence from another type of extreme weather event that has received less scholarly attention -extremely harsh winter conditions featuring extremely cold temperature, snowfall anomalies, and/or storms.
Results from the two-way fixed effects panel estimator show that extreme weather events occurring during the 1992-2018 period trigger significant and sizeable net out-migration from affected provinces for up to two years after an event. The finding is robust to the inclusion of time-varying controls for provincial characteristics. The district-level analysis confirms these results: We find a significant, negative, and strong effect of extreme weather events on the net population change rate across districts. Lastly, both province and districtlevel analyses reveal that extreme weather events significantly reduce the local population of pastoralist households.
The paper is organized as follows. Section 2 reviews the existing literature on climate-related internal migration dynamics. Sections 3 and 4 provide background information on extreme weather events and migration patterns in Mongolia. Section 5 introduces the empirical model and the data employed in the study. Results are discussed in Section 6, while Section 7 summarizes the key findings and concludes.
Review of the Evidence on Climate-Induced Internal Migration
The Intergovernmental Panel on Climate Change (IPCC) stressed the significance of climate-related migration and displacement as early as 1990. In that year, the IPCC put forth that the single most significant impact of climate change could be human migration, with millions of people displaced by shoreline erosion, coastal flooding, and agricultural disruption (Brown 2008). Since then, various predictive studies have aimed to estimate the expected number of climate-induced migrants in the decades to come. For internal migration, Rigaud et al. (2018) estimate that without global and national climate action, climate change will displace more than 143 million people within their countries by the year 2050 in Sub-Saharan Africa, South Asia, and Latin America alone.
The empirical literature quantifying whether and how climate change affects migration flows only started to evolve in the early 2000s (Berlemann and Steinhardt 2017). With the increased availability of weather and migration data, the literature has developed rapidly since then. Existing studies differ in a number of dimensions, including the type of migration considered (international versus internal), the push factors analyzed (extreme weather events versus gradually changing climate), and the approach taken (micro versus macro). In the following, we outline developments and limitations in the empirical literature on climate and within-country migration. 2 The two main approaches used in existing research -macro-level approaches capturing gross migration flows and micro-level approaches building on survey and census data -are discussed in turn.
In the existing macro-level literature, internal migration flows are most commonly proxied by national urbanization rates (Hoffmann et al. 2020). For instance, Barrios et al. (2006) estimate the impact of rainfall shortages on urbanization patterns in a cross-country dataset, using a year- and country fixed effects approach. Findings suggest that declines in rainfall are an important determinant of urbanization in Sub-Saharan Africa. Applying a similar approach to fine-grained data, Henderson et al. (2017) exploit district-level heterogeneity in precipitation to analyze the determinants of urbanization rates across Sub-Saharan African countries and find that drier conditions increase urbanization in regions where cities are likely to be manufacturing centers. In contrast, in market cities that provide local services to farmers and lack structural transformation, drying has little impact on urbanization or total urban incomes. A positive link between rainfall shortages and urbanization is also documented by other studies of Sub-Saharan Africa (e.g., Brueckner 2012; Marchiori et al. 2012). Beine and Parsons (2015) examine urbanization in the aftermath of both sudden-onset disasters recorded by the Centre for Research on the Epidemiology of Disasters (CRED) as well as gradual changes in precipitation and temperature patterns, using cross-country data. When looking at the sub-sample of developing countries, Beine and Parsons find that extreme weather events significantly increase urbanization, while no significant effects are found for international migration. One limitation of these macro-level approaches is the rather narrow focus on rural-to-urban migration, thus providing an incomplete picture of overall internal migration dynamics (Hoffmann et al. 2020). Furthermore, the multi-country approach and, in turn, the usage of a country fixed effects specification make it impossible to examine the role of socio-economic and contextual factors that are specific to individual countries.
Micro-level studies building on census or survey data from a single country are partly able to overcome these limitations as micro data usually allow controlling for a wide array of local characteristics (e.g., Carvajal and Pereira 2009;Goldbach 2017;Gray and Mueller 2012;Groeger and Zylberberg 2016;Koubi et al. 2016;Paul 2005). Given that surveys are often only collected in selected regions within a country, movements of whole households outside the survey area lead to attrition bias if households are not traced. For this reason, studies building on household survey data tend to focus on the determinants of migration decisions of individual household members. In most of these studies, variants of the gravity model are used as a flexible approach to modeling spatial interactions, assuming that migration varies with the degree of the force of attraction and is inversely proportional to distance (Poot et al. 2016). In more advanced applications, the gravity model is extended by variables representing economic, climatic, and other characteristics of the place of origin and destination (Tsegai and Bao Le 2010). Within this group of studies, gradual climate change and extreme weather events are often found to matter for migration (e.g., Carvajal and Pereira 2009;Gray 2009;Groeger and Zylberberg 2016;Koubi et al. 2016). Notwithstanding, the overall evidence remains mixed, with other studies finding no systematic evidence of weather-induced within-country mobility (Bohra-Mishra et al. 2014;Di Falco et al. 2012;Goldbach 2017;Gray and Mueller 2012;Paul 2005).
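For concreteness, a typical log-linearized gravity specification of the kind sketched above can be written as follows; the notation is generic and illustrative rather than taken from any single cited study:

```latex
% Generic log-linearized gravity model of migration flows (illustrative notation)
\ln M_{od} = \beta_0 + \beta_1 \ln P_o + \beta_2 \ln P_d
           + \beta_3 \ln D_{od} + \gamma' X_o + \delta' X_d + \varepsilon_{od}
```

Here M_{od} is the migration flow from origin o to destination d, P_o and P_d are the origin and destination populations, D_{od} is the distance between them, and X_o and X_d collect push and pull characteristics (including climatic conditions); the distance coefficient β_3 is expected to be negative.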
Most closely related to our approach, another group of studies analyzes migration flows between regions by aggregating micro-level data to higher administrative divisions. In a study on Costa Rica, Robalino et al. (2015) analyze the impact of extreme weather events on internal migration flows over the 1995-2000 period. The authors use population census data from 2000, aggregated to the canton level, which are combined with the DesInventar database that records both the frequency and intensity of natural disasters, such as floods and landslides. In a gravity model framework, cross-canton gross migration rates are modeled as a function of population size, distance, as well as a set of push and pull factors that influence migration decisions, such as education levels, health infrastructure, security, and amenities. Robalino et al. find that natural disasters resulting in fatalities decrease outmigration, while the opposite holds for those disasters not causing numerous deaths. Using a similar approach, Saldana-Zorilla and Sandberg (2009) draw on population census data from Mexico, aggregated to the municipality level, and merge this database with secondary data on the occurrence of natural disasters. An increase in disaster frequency significantly increases out-migration from affected municipalities between 1990 and 2000. This effect is particularly pronounced in those regions defined as marginalized by government agencies, where agricultural production continues to constitute the main resource of livelihoods. In a study on Vietnam, Berlemann and Tran (2020) test if exposure to natural disasters cause households to temporarily or permanently emigrate from their communes. The database used in this study is commune-level data collected as part of a household panel survey implemented in 2012, 2014, and 2016. The measure of shock intensity -whether a given commune was affected by floods, typhoons, and droughts in the one and two years preceding each survey wave and whether each disaster type became worse over the last decade -is recorded from administrative officials in the commune questionnaire. Using a commune and year fixed effects approach, Berlemann and Tran show that droughts primarily cause temporary out-migration, while flood events tend to induce permanent out-migration from affected communes. In contrast, typhoons remain without any significant effects in both the short and the long runs.
Yet, studies building on micro-level data are often constrained by the quality and availability of suitable data. Specifically, there is a trade-off between the frequency of observations and geographical coverage in micro data, a common issue in the migration literature (Berlemann and Steinhardt 2017). While drawing on population census data allows for undertaking nationwide studies of internal migration, these are typically only available at five-year intervals. This makes a timely attribution of adverse weather effects difficult, which is problematic for sudden-onset disasters. Although socio-economic household surveys tend to be collected at shorter intervals, these are often limited in their geographical scope. In addition, household panel survey data is typically not available for long time horizons. This is especially the case for developing countries, where migration in response to adverse climatic conditions is likely to be most pronounced. Further, information on migration is usually self-reported through retrospective survey questions on censuses and surveys, while several studies also rely on measures of weather shocks that are self-reported by respondents. This renders studies based on micro-level data prone to reporting and reinterpretation biases.
We contribute to the existing literature by exploiting a long time series of annual in- and out-migration data at the provincial and district levels in Mongolia, which allows us to capture heterogeneity in the frequency and intensity of extreme weather events across time and space. The availability of yearly data allows us to attribute the effects of extreme weather events to migration patterns in the same year. From a methodological perspective, the rare possibility to exploit long-term population registry data at the sub-national level offers two advantages. First, as the measure of migration is based on reliable registration data, our analysis is not subject to reporting bias. Second, we capture migration rates across administrative units, while existing studies often have to draw on urbanization rates as an imprecise proxy for internal migration dynamics. Thus, our analysis overcomes common shortcomings of both micro- and macro-level studies on internal migration in the aftermath of extreme weather shocks.
Rural Livelihoods and Extreme Weather Events in Mongolia
Mongolia is already severely impacted by climate change. Temperature data recorded at weather stations across the country show that the annual mean air temperature has increased by 2.24 degrees Celsius between 1970 and 2015, a figure well above the global average (Ministry of Environment and Tourism 2018). Evidence also suggests that precipitation patterns and intensities are changing (Nandintsetseg et al. 2021;Goulden et al. 2016). Aside from gradual climatic changes, the country is increasingly exposed to extreme winter events (Palat Rao et al. 2015;Nandintsetseg and Shinoda 2015). In Mongolian, extreme winter events are referred to as dzuds, which literally means the mass deaths of livestock without attributing an exact underlying cause. 3 Extreme winter events may result from the interplay of several unfavorable weather phenomena, while the exact triggering conditions differ across winters (Lehmann-Uschner and Kraehnert 2018). The Mongolian language uses various terms to distinguish between different types of dzud (Hahn 2017, p. 42 f.;Murphy 2011, p. 32 f.): In tsagaan dzud, deep snow inhibits animals from reaching the grass underneath the snow cover, thus causing animals to die from starvation. A khar dzud is characterized by a lack of snow (often in combination with harsh and cold winter storms), thereby reducing the available forage and the main source of drinking water for animals during winter. A tumer dzud features excessive precipitation during the winter months, followed by a sudden temperature drop that creates a shield of ice that is impenetrable for animals and, in turn, leads to animal starvation. A khuiten dzud is characterized by extremely low temperatures, causing animals to freeze to death, which may occur jointly with harsh winter storms. Lastly, a khavsarcan dzud is identified by a combination of deep snow and extremely cold temperature.
Extreme winter events have severe impacts on the rural economy, especially the agricultural sector. In 2018, 40% of the labor force living outside of the capital of Ulaanbaatar derived their livelihood solely from animal husbandry (NSO 2021). Herding continues to be the single most important occupation in rural areas. Most pastoralist households keep large shares of their wealth in their herds, holding an average of 288 animals as of 2018 (ibid.). The five most commonly held species -goats, sheep, horses, cattle, and camels -not only provide food and income to households but also serve as collateral for loans (Hahn 2017). Practicing an extensive system of livestock production, Mongolian pastoralists graze their animals on open rangelands year-round, which makes them directly dependent on weather conditions. Most, though not all, pastoralists are either semi- or fully nomadic, moving their herd between two and 14 times per year (Teickner et al. 2020), typically performing the same cycle of movements every year. Extreme winter events that cause livestock to freeze to death or die of starvation within short periods of time pose an immediate threat to the viability of pastoralist livelihoods (Hahn 2017). Sudden mass livestock mortality is often aggravated if drought conditions in the preceding summer led to a situation where animals do not start the winter months at full strength (Palat Rao et al. 2015).
Extreme winter events have occurred throughout Mongolian history. However, such events have become both more frequent and more severe, with historic records indicating that 15 extreme winter events in the eighteenth century were followed by 31 extreme winter events in the nineteenth century and 43 events in the twentieth century (Hahn 2017). Figure 1 illustrates that five extreme winter events that affected large parts of the country occurred between 1992 and 2018, the period under study here. Further extreme events occurred in specific regions over this time period.
With more than 10 million dead animals, the 2009/10 extreme winter caused roughly 24% of the total national livestock to die, the largest livestock losses recorded in a single winter in the last 50 years (NSO 2021). Some 40% of all herding households lost more than half of their herd during the 2009/10 winter (UNDP NEMA 2010). With these tremendous losses of livestock, it can take years for herders to rebuild their herds following an extreme winter event (Bertram-Huemmer and Kraehnert 2018). Exposure to extreme winter events also increases the likelihood that pastoralists are forced to abandon the herding economy (Lehmann-Uschner and Kraehnert 2018), particularly if their herd size is pushed below the threshold of 100 animals that is often considered the minimum necessary for sustaining a pastoralist livelihood in the long term (Goodland et al. 2009). There is also large spatial heterogeneity in the intensity of any given extreme winter event (Middleton et al. 2015). Figure 2 shows that different areas of the country were hit by extreme events in different years, while their intensity differed even across neighboring provinces and districts.
Migration Patterns in Mongolia
Throughout the twentieth century and beyond, internal migration has played an important role in Mongolia (IOM 2018a). In the era of the centrally planned economy, influenced by the Soviet Union, industrial centers were established in urban areas across the country. While migration was controlled by the administration, large parts of the rural population were attracted to these centers because of employment prospects (Guinness and Guinness 2012). As a result, the share of the urban population rose dramatically. The fall of the Iron Curtain brought profound political changes and the collapse of large parts of the industrial sector. The resulting freedom of movement led to reverse migration dynamics from urban to rural areas (ibid.). The herding economy offered a promising prospect for many Mongolian families, as livestock ownership was privatized and collectives disappeared.
Since the late 1990s, Mongolia has experienced renewed rural-to-urban migration, in particular to the capital of Ulaanbaatar. The percentage of Mongolians living in urban areas increased from 53% in 1995 to 68% in 2018 (NSO 2021). From one million inhabitants in 2007, Ulaanbaatar grew to 1.5 million in 2018 (United Nations 2021). Most migrants arriving in Ulaanbaatar seek shelter in the so-called ger districts, where most dwellings are traditional Mongolian tents (gers) (Sigh et al. 2017). In 2014, approximately 60% of Ulaanbaatar's population lived in these districts, which are mostly located on the outskirts of the city (Engel 2015). As many of the ger districts formed in a hasty and uncontrolled way, they are often poorly connected to urban utilities and infrastructure. Many inhabitants have to buy and fetch drinking water from government-run kiosks, overall sanitation is poor, and waste removal is organized irregularly (Henreckson 2018). As dwellings in ger districts are poorly insulated and stoves are fired by coal or other biomass, ger districts are hotspots for air pollution, especially in winter.
Intensive urbanization is a major topic in Mongolian politics and urban planning. At the beginning of 2017, the governor of Ulaanbaatar, together with the mayor of the capital, issued a law officially prohibiting domestic permanent migration from rural areas to the capital city (IOM 2018b). In 2018, the migration ban was extended to the beginning of 2020 (NSO 2020). As the law prevents migrants from rural areas from registering in the capital, the number of unregistered migrants in Ulaanbaatar has risen. Without an official resident status, it is difficult to find stable employment and impossible to access basic services, such as schools, daycare, health care, and social welfare, causing high vulnerability among unregistered migrants (IOM 2018b).
In the Mongolian context, the driving forces of internal migration are not well understood. The few existing qualitative studies associate rural to urban migration dynamics with poverty, low agricultural incomes at origin, income opportunities at destination, environmental degradation, and climate change (IOM 2018a; IOM 2018b; Guinness and Guinness 2012). We are only aware of a single study, by Xu et al. (2021), that examines the drivers of internal migration with a quantitative approach, using cross-sectional data from the Mongolian Labor Force Surveys implemented in 2006 and 2010. The authors find that being male, young, better educated, and married are strong predictors of rural to urban migration decisions of single household members. Yet, the analysis does not examine the effects of extreme winter events on migration.
Empirical Strategy
We exploit plausibly exogenous variation across time and space in the occurrence of extreme winter events to study their impacts on internal migration dynamics. We estimate the following two-way fixed effects model:

M_{i,t} = β_0 Extreme event_{i,t} + Σ_{k=1}^{3} β_k Extreme event_{i,t−k} + γ′ C_{i,t} + α_i + λ_t + μ_i t + ε_{i,t}    (1)

As the outcome of interest, we employ various proxies for internal migration M_{i,t}, measured at province (or district) i in year t. Extreme event_{i,t} measures the intensity of an extreme winter event in a given province (or district) and year. Several lags of this measure are included to examine the timing of migration in the aftermath of such events. C_{i,t} is a vector of time-varying control variables. Province (or district) fixed effects α_i control for unobserved time-constant heterogeneity across administrative units. Year fixed effects λ_t capture events and developments affecting all administrative units in the same way, while province- (or district-) specific linear time trends μ_i t control for different long-run trends in the migration figures of individual provinces (or districts). ε_{i,t} denotes the unexplained residual. Standard errors are clustered at the province (or district) level.
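For concreteness, the sketch below shows how a specification like eq. (1) could be estimated in R with base lm() plus province-clustered standard errors from the sandwich and lmtest packages. All data and column names (panel, net_migration, shock, shock_lag1 to shock_lag3, unemployment, province, year) are hypothetical placeholders, not the authors' actual code.

    # Minimal sketch of eq. (1); `panel` is a hypothetical province-year data frame.
    library(sandwich)
    library(lmtest)
    m <- lm(net_migration ~ shock + shock_lag1 + shock_lag2 + shock_lag3 +
              unemployment +                      # stand-in for the control vector C_{i,t}
              factor(province) + factor(year) +   # two-way fixed effects
              factor(province):year,              # province-specific linear time trends
            data = panel)
    coeftest(m, vcov = vcovCL(m, cluster = ~province))  # province-clustered SEs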
The empirical analysis builds on data at the level of provinces and districts, the first- and second-level administrative subdivisions of Mongolia, respectively. Aside from the capital city, the country consists of 21 provinces (aimags), which are subdivided into 331 districts (soums). The capital of Ulaanbaatar is subdivided into nine so-called düüregs, which are often considered equivalent to districts. In the empirical analysis, we follow this categorization and, furthermore, treat Ulaanbaatar as one province. Our sample consists of 22 provinces and 340 districts, for which yearly data spanning the 1992-2018 period is available.
Province-Level Analysis
The first outcome is the net migration rate per province and year, which we calculate as follows:

M(net migration rate)_{i,t} = (I_{i,t} − E_{i,t}) / (P_{i,t} / 1,000)    (2)

where I_{i,t} stands for the number of in-migrants entering province i during year t, E_{i,t} captures the number of out-migrants leaving province i during year t, and P_{i,t} is the mid-year population of province i in year t. A positive value reflects net immigration, a situation where more persons enter a given province than leave it, while a negative value mirrors net emigration, a situation where more persons leave a given province than enter it. Specifically, a value of −10 in the net migration rate for a given year means that 10 out of 1,000 inhabitants leave their province in the course of one year. Across all provinces and years, the net migration rate has a negative mean (−10.3) because of international out-migration. 4 Data on the number of in- and out-migrants and the total population come from official population records maintained by the National Statistical Office of Mongolia (NSO 2013). Mongolian law requires migrants to de-register at their place of origin and re-register at their destination within ten days after moving.

4 Note that the mean does not account for population sizes in the respective provinces. The mean national-level net migration rate is −0.169 for the 1992-2018 period.
Note that this outcome does not capture temporary migration, for instance, individuals searching for seasonal employment outside of their registered place of residence or nomadic herders crossing province boundaries as part of their annual cycle of movements. One limitation of using official registration data is that it does not capture any form of unofficial movement by migrants who either choose not to register or are banned from registering at their destination. 5 Our results should thus be considered lower bound estimates of total internal migration. The second outcome is the net change rate in the local number of pastoralist households, 6 the socio-economic group we expect to be most immediately affected by extreme weather events:

M(net change rate pastoralist households)_{i,t} = (PPend_{i,t} − PPbegin_{i,t}) / (PPmid_{i,t} / 1,000)    (3)

where PPbegin_{i,t} is the number of pastoralist households in province i at the beginning of year t, PPend_{i,t} is the number of pastoralist households in province i at the end of year t, and PPmid_{i,t} represents the mid-year number of pastoralist households in province i and year t.
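As a quick illustration of eqs. (2) and (3), both outcomes are simple per-1,000 rates; the R sketch below uses hypothetical inputs.

    # Minimal sketch of eqs. (2) and (3); all inputs are hypothetical counts.
    net_migration_rate <- function(in_migrants, out_migrants, pop_mid) {
      (in_migrants - out_migrants) / (pop_mid / 1000)
    }
    net_change_rate <- function(n_begin, n_end, n_mid) {
      (n_end - n_begin) / (n_mid / 1000)
    }
    net_migration_rate(1200, 3400, 82000)  # -26.8, i.e., net out-migration of ~27 per 1,000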
We proxy the intensity of extreme winter events with livestock mortality per province and year. Livestock mortality is considered an appropriate measure for the intensity of such events (Murphy 2011; Skees and Enkh-Amgalan 2002). 7 The data come from the annual Mongolia Livestock Census, which the NSO has been implementing since 1918. Each year in December, enumerators record the number of livestock held by herders across the country as well as the number of livestock that died in the previous 12 months, both broken down for each of the five commonly held species. Based on this historical data, we proxy the occurrence of an extreme winter event with a dummy variable taking the value of one if the average livestock mortality rate across species exceeds 6% for a given province and year. Our choice of the 6% threshold is informed by the operating index-based livestock insurance, where a livestock mortality rate of 6% triggers indemnity payouts to insured households. 8 We employ an alternative threshold as well as the continuous livestock mortality rate as robustness tests.
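The construction of the shock dummy and its lags is mechanical; the following R sketch, with hypothetical column names, illustrates one way to build them within provinces.

    # Minimal sketch; `panel` is a hypothetical province-year data frame.
    panel <- panel[order(panel$province, panel$year), ]
    panel$shock <- as.integer(panel$mortality_rate > 0.06)  # 6% insurance trigger
    lag_by_prov <- function(x, k) {
      ave(x, panel$province, FUN = function(v) c(rep(NA, k), head(v, -k)))
    }
    panel$shock_lag1 <- lag_by_prov(panel$shock, 1)
    panel$shock_lag2 <- lag_by_prov(panel$shock, 2)
    panel$shock_lag3 <- lag_by_prov(panel$shock, 3)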
5 A survey conducted by the International Organization for Migration in 2018 of some 1000 migrant households arriving in the provinces of Selenge, Dornogovi, and Ulaanbaatar finds that only one-third of the migrant households registered within the legal timeframe (IOM 2018a). In urban areas of Selenge and Dornogovi, about 93 and 71% of the surveyed households registered, respectively, while in Ulaanbaatar only 49% registered. While the high number of non-registered migrants in the capital is likely due to the migration ban, the figures suggest that a considerable share of migrants remains unregistered.
6 The NSO defines pastoralist households as one or several herders and their nuclear family who conduct livestock husbandry around the year for their main purpose of livelihood and source of income (NSO 2015). The number of pastoralist households is recorded each year in December as part of the Mongolia Livestock Census.
7 Modeling the intensity of extreme winter events with weather data is challenging because the specific weather conditions triggering each extreme winter vary considerably across events. As outlined in section 3, triggering conditions include, but are not limited to, extremely cold temperatures, harsh winter storms, excessive snowfall, rainfall combined with sudden temperature drops, as well as summer-season drought conditions. Among climate scientists modeling extreme weather events in Mongolia, there is no consensus regarding what weather data and variables best measure the intensity of extreme winter events (Tachiiri et al. 2008; Palat Rao et al. 2015; Nandintsetseg et al. 2018).
8 Note, however, that a livestock mortality rate of 6% at the district level triggers index insurance payouts, while we define livestock mortality at the province level.
To select a set of province-level control variables, we draw on the existing literature that employs variants of the gravity model of migration (e.g., Berlemann and Tran 2020; Borjas 1987; Dallmann and Millock 2017; Tsegai and Bao Le 2010). 9 To proxy local economic performance, we include the revenue of the provincial government, which consists of tax income; revenues from interests, dividends, and fines; as well as transfers and grants from the central government and the fund for local development. Another economic measure is the unemployment rate, which reflects the economic attractiveness of a province. As a proxy for the quality of the local infrastructure, we account for the number of households with access to potable water. As this indicator does not represent the situation of all nomadic households, we additionally include the total number of water supply stations per province. 10 The number of physicians per 10,000 inhabitants is used as a measure of the quality of health care provision. Lastly, we employ the share of students continuing from the first to the fifth grade as a proxy for the quality of the local educational system. Overcontrolling is an issue that often arises when quantifying the impact of major shocks on an outcome variable in a gravity model framework (Berlemann and Steinhardt 2017; Dell et al. 2014). If a control variable is itself influenced by the shock, any empirical migration model that includes such a control is likely to capture only partial effects of the shock on the outcome (Berlemann and Tran 2020). We approach this issue by presenting results both from a model without controls and with the full set of controls.
District-Level Analysis
We further estimate migration dynamics at the district level, which increases the number of observations by a factor of 15. As data on the number of migrants is not publicly available at the district level, we employ the net population change rate as a proxy for overall migration dynamics as the main outcome, which we define as follows:

M(net population change rate)_{i,t} = (Pend_{i,t} − Pbegin_{i,t}) / (Pmid_{i,t} / 1,000)    (4)

with Pbegin_{i,t} representing the resident population in district i at the beginning of year t, Pend_{i,t} the population in district i at the end of year t, and Pmid_{i,t} the mid-year population of district i in year t. Besides migration, the net population change rate is also shaped by the number of births and deaths. 11 A value of −10 in the net population change rate for a given year means that the district population decreased by 10 out of 1,000 inhabitants in the course of one year. The second outcome is the net change rate in the total number of pastoralist households per district, which is calculated analogously to eq. (3) above.
9 The gravity model assumes that individuals compare characteristics between destination and origin regions and maximize their utility while accounting for the cost of migration. Aside from wage differentials, regional attributes, such as infrastructure, environmental conditions, and socio-economic characteristics at origin and destination, are typically taken into account (Tsegai and Bao Le 2010). For Mongolia, only single-sided migration data is available at an aggregate level, which renders it impossible to estimate a full gravity model. Instead, we refer to the gravity model in more general terms as a reference point for selecting control variables.
10 Water supply stations are wells designated to supply drinking water. Wells are either connected to the central water supply system or filled by water tank trucks of authorized entities.
11 We did not come across reports of human casualties caused by extreme winter conditions.
For the period of interest, annual district-level data from the Mongolia Livestock Census is only available for the total number of living animals (by species), but not for the number of deceased animals. We approximate district-level livestock mortality with year-to-year changes in overall livestock numbers. 12 To proxy the occurrence of an extreme winter event, we define a dummy variable that takes the value of one if the yearly growth rate in total livestock numbers across species is below −6% for a given district and year. As no time-varying controls are available at the district level, we employ the same (province-level) controls as in the province-level analysis.
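A minimal R sketch of this district-level proxy, again with hypothetical names, could look as follows.

    # Minimal sketch; `herd` is a hypothetical district-year data frame sorted by
    # district and year, with total livestock numbers in column `livestock`.
    herd$growth <- ave(herd$livestock, herd$district,
                       FUN = function(v) c(NA, diff(v) / head(v, -1)))
    herd$shock  <- as.integer(herd$growth < -0.06)  # growth below -6% flags an event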
Summary statistics of the key variables of interest are tabulated in Table 1.
Results
Results from the baseline two-way fixed effects OLS regression on the determinants of the province-level net migration rate are displayed in Table 2. When considering the impact of extreme winter events on the net migration rate in the same year (column 1), the estimated coefficient of the extreme events proxy is negative, indicating higher total emigration, albeit not statistically significant at conventional levels. When lagging the shock measure by one year (column 2), the effect more than triples in magnitude and is statistically significant at the 1% level. The occurrence of an extreme winter event on average decreases the net migration rate by 7.045 in the year after the event strikes. This corresponds to a net out-migration of more than 7 individuals per 1,000, or 0.7% of the provincial population. This is a sizable effect, constituting roughly 70% of the sample mean and 36% of the standard deviation. 13 The effect of extreme winter events on the net migration rate in affected provinces remains statistically significant (at the 10% level), although slightly smaller in magnitude, two years after an extreme weather event (column 3). Exposure to such events has no significant effect on net migration rates three years after an event (column 4). Column 5 displays results when including the full set of time-varying province-level controls in model 2, which yields the statistically strongest results. 14 The effect of the extreme event is highly significant, though smaller in magnitude compared with the model without controls. This is in line with expectations, as more, and possibly endogenous, controls absorb parts of the total effect. In column 6, we interact the shock proxy with the five regions of Mongolia. The marginal effects of the lagged extreme weather event on the net migration rate are negative and significant in the Western, Khangai, and Central regions, while the effect is positive and significant for the capital city of Ulaanbaatar. This finding is in line with qualitative reports suggesting that internal migration in Mongolia is particularly driven by rural-to-urban migration.

All baseline results hold when we restrict the sample to narrower time windows, namely the 1995-2018, 1992-2016, and 1995-2016 periods (Table 7 in the Appendix). These findings assure us that potential anomalies in the net migration rate in the direct aftermath of the fall of the Iron Curtain and the migration ban to Ulaanbaatar enacted in 2017 are not individually or jointly driving the results. In a further robustness test, we only consider the most extreme events, in which the annual livestock mortality rate exceeded 15% in a given province and year (Table 8 in the Appendix). As in the baseline specification, the effect of particularly severe extreme events on the net migration rate is statistically significant and economically large one year after the event (column 2). In contrast to the baseline results, the effect is also significant in the year the disaster occurs (column 1). We obtain similar findings, that is, statistically significant and sizable effects of livestock mortality on the net migration rate in the same year as well as one and two years later, when proxying shock intensity with a continuous measure of livestock mortality (Table 9 in the Appendix). 15

Table 3 displays results from the district-level model. The outcome is now defined more broadly as the net population change rate, which, besides migration, is also shaped by the number of births and deaths.
Results confirm the patterns found in the baseline model: the occurrence of a local extreme weather event significantly and strongly lowers the population in affected districts for up to two years after the event (columns 1-3), while the effect is no longer significant three years after an event (column 4). Again, the estimated effect remains comparable in significance and magnitude when including the full set of time-varying controls (column 5). The inclusion of regional interaction terms in column 6 shows that the negative effects on the population change rate are particularly pronounced in the Western and Khangai regions. When differentiating between the net population change rate among the female population (column 7) and the male population (column 8), the estimated coefficients of extreme weather events are of similar size (we cannot reject the null hypothesis of equality of coefficients; the p value of a Wald chi-square test accounting for the simultaneous (co)variance matrix of the coefficients of extreme weather events is 0.47). This suggests that entire households migrate in response to extreme events.
Next, we investigate the effects of extreme winter events on the net change rate of pastoralist households, the population sub-group whose livelihood is most immediately affected by such events and climate change in general (Table 4). The occurrence of an extreme event significantly reduces the number of pastoralist households in affected districts in the year the disaster strikes by about 6% (column 1). The effect becomes smaller, but remains significant at the 1% level, one year after the disaster (column 2). Including the full set of time-varying controls yields similar results (column 5). 16 When differentiating the effect by region (column 6), we find that extreme events significantly reduce the population of pastoralists in all five regions of Mongolia, including Ulaanbaatar.
Lastly, we explore how extreme winter events affect the number of pastoralist households with different herd sizes. We separately estimate the model for six wealth categories of pastoralist households: (1) households with less than 51 heads of livestock, (2) 51-100 livestock, (3) 101-200 livestock, (4) 201-500 livestock, (5) 501-999 livestock, and (6) 1000 and more livestock. Data on the number of pastoralist households in each wealth category in each province is collected as part of the annual Mongolia Livestock Census. Table 5 displays the results. When using the net change rate in the total number of pastoralist households per province as outcome, irrespective of wealth (column 1), we obtain qualitatively similar findings as in the district-level analysis for the 1992-2018 period displayed in Table 4. When differentiating the effects by wealth category (columns 2-7), we find that extreme winter events significantly reduce the number of pastoralist households owning more than 100 heads of livestock. Indeed, the effect size is largest (−53%) for the wealthiest category of pastoralists owning more than 999 heads of livestock and becomes smaller with decreasing herd size. In contrast, extreme winter events have a significant and positive impact (8%) on the net change rate of pastoralist households in the poorest wealth category, who own 1-50 heads of livestock (column 2). We draw two conclusions from these results. First, the occurrence of extreme winter events not only reduces the total number of Mongolian pastoralist households over time; it also increases the number of pastoralists with marginal herd sizes that are considered too small to sustain a herding livelihood in the long term in the harsh Mongolian environment. The poorest category of herders is particularly vulnerable to the impacts of future extreme events. Hence, there is reason to expect that the downward trend in the population of pastoralists will persist if extreme weather events continue to strike in the future. Second, it appears that livestock wealth does not protect households from the adverse effects of extreme winter events, as even the group of wealthiest pastoralists diminishes after such events. This again underlines the sheer magnitude of such extreme events.
Robustness
One potential threat to our identification strategy is positive autocorrelation across extreme weather events over time, which would result in biased estimates. If the occurrence of extreme weather events is positively autocorrelated, inhabitants of strongly exposed areas may systematically differ in their migration behavior from households in low-risk areas. To test for positive autocorrelation across events, the province-level livestock mortality rate for the 1992-2018 period is regressed on its lagged values in a two-way fixed effects model (Table 6). In addition, we employ an Arellano-Bond estimator. Across specifications, we only find significant effects of livestock mortality lagged by two years on livestock mortality in the current year. As none of the employed models reports positive effects, positive autocorrelation and foresighted migration decisions should not be a major concern.
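The lag regression underlying this check can be sketched in R as follows; variable names are hypothetical and the Arellano-Bond step (e.g., via the plm package) is omitted.

    # Minimal sketch: regress livestock mortality on its own lags with
    # two-way fixed effects and province-clustered standard errors.
    library(sandwich)
    library(lmtest)
    ac <- lm(mortality_rate ~ mort_lag1 + mort_lag2 + mort_lag3 +
               factor(province) + factor(year), data = panel)
    coeftest(ac, vcov = vcovCL(ac, cluster = ~province))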
As an additional falsification test, the baseline model is estimated with lead values of the shock proxy (Table 10 in the Appendix). In line with our expectations, extreme weather events that lie 1, 2, or 3 years in the future do not have significant effects on the province-level net migration rate (columns 1-3, respectively).
Conclusion
Our analysis documents that the occurrence of extreme weather events is an important push factor for internal migration in Mongolia. The country is increasingly affected by extremely harsh winters that result in very high livestock mortality, thereby threatening the livelihood of large parts of the rural population. We examine the causal impacts of extreme winter events on internal migration spanning the 1992-2018 period in a two-way fixed effects panel estimator, drawing on migration and population data at the province and district levels.
Findings show that extreme winter events have significant, negative, and sizeable effects on internal migration in Mongolia. The local occurrence of an extreme winter event triggers net out-migration from affected provinces and reduces the overall population in affected districts. Reductions in the local population are strongest in the year an event strikes and remain statistically significant for up to two years following an event. The negative effects on population dynamics are particularly pronounced in the Western and Khangai regions. Results are robust to the inclusion of time-varying controls, alternative definitions of the shock proxy and the outcomes, as well as the restriction of the data to narrower time windows. Furthermore, results do not appear to be driven by positive autocorrelation of extreme weather events over time.
Moreover, the local occurrence of extreme winter events significantly and strongly reduces the population of pastoralist households. This effect is observed across all regions of Mongolia, including the capital city. The wealthiest pastoralists, owning 1000 animals or more, face the strongest reductions in their population size in the aftermath of a winter event. In contrast, the group of pastoralists in the poorest wealth category, owning up to 50 animals, grows significantly in the aftermath of an extreme winter.
One limitation of our study is that, with the official population registry data at hand, we are unable to capture informal migration, where individuals either choose not to register or are banned from registering. Additionally, temporary migration, such as for seasonal employment, is not covered by the available data. The obtained results should therefore be interpreted as a lower bound of the actual effects.
Extreme winter events in Mongolia have been increasing in both intensity and frequency throughout the 20th century and beyond (Hahn 2017). The pressure on the herding economy as well as migration responses to weather disasters will in all likelihood further intensify. This has three policy implications. First, there is a need to accommodate a growing urban population, especially in the capital city. This may warrant investments in urban infrastructure, for instance by expanding the water supply network, sewage system, electricity grid, and transport system, as well as educational and medical services. Former pastoralists may need to acquire labor market qualifications tailored to the urban job market, while the demand for job opportunities in urban areas will likely rise in general. Second, there is a need to assist pastoralists in adapting to and better coping with future extreme events. Index-based livestock insurance (Bertram-Huemmer and Kraehnert 2018) and early action cash transfers (FAO 2018) appear to be promising tools. Third, more generically, there is a need to foster climate change mitigation and to avoid high-end climate scenarios.
Identification of Gene-Set Signature in Early-Stage Hepatocellular Carcinoma and Relevant Immune Characteristics
Background: The incidence of hepatocellular carcinoma (HCC) is rising worldwide, and therapeutic efficacy is limited due to tumor microenvironment heterogeneity and difficulty in early-stage screening. This study aimed to develop and validate a gene set-based signature for early-stage HCC (eHCC) patients and further explored the dysregulation mechanisms of specific markers as well as immune characteristics.
Methods: We performed an integrated bioinformatics analysis of genomic, transcriptomic, and clinical data from three independent cohorts. We systematically reviewed the crosstalk between specific genes, tumor prognosis, immune characteristics, and biological function in samples of different pathological stages. Univariate and multivariate survival analyses were performed in The Cancer Genome Atlas (TCGA) patients with survival data. Diethylnitrosamine (DEN)-induced HCC in Wistar rats was employed to verify the reliability of the predictions.
Results: We identified a Cluster gene set that potentially segregates patients with eHCC from non-tumor samples, through integrated analysis of expression, overall survival, immune cell characteristics, and biological function landscapes. Immune infiltration analysis showed that lower infiltration of specific immune cells, such as CD8 Tem and cytotoxic T cells (CTLs) in eHCC, may be responsible for significantly worse prognosis in HCC (hazard ratio, 1.691; 95% CI: 1.171-2.441; p = 0.012). Our results identified that the Cluster C1 signature presented high accuracy in predicting CD8 Tem and CTL immune cell infiltration (receiver operating characteristic (ROC) = 0.647) and cancerization (ROC = 0.946) in the liver. As a central member of Cluster C1, overexpressed PRKDC was associated with higher genetic alteration in eHCC than in advanced-stage HCC (aHCC) and was also connected to immune cell-related poor prognosis. Finally, the predicted outcomes of Cluster C1 and PRKDC alteration were confirmed in DEN-induced eHCC rats.
Conclusions: As a tumor prognosis-relevant gene set-based signature, Cluster C1 offers an effective approach to predict cancerization in eHCC and its related immune characteristics, with considerable clinical value.
INTRODUCTION
Liver cancer is the fourth leading cause of cancer-related deaths worldwide (1). Hepatocellular carcinoma (HCC) accounts for more than 90% of liver cancers and is responsible for most of these deaths (2). Over the past 20 years, detection of HCC has increased, and surgical resection has markedly improved the 5-year overall survival (OS) (3); even so, recurrence rates remain high and no effective adjuvant therapies are currently available (4). Moreover, metastasis is responsible for most HCC-associated morbidity and mortality (5,6). Investigating the molecular and systemic mechanisms of HCC may be useful for predicting early-stage HCC (eHCC) and preventing progression to advanced stages. HCC tumorigenesis and metastasis are multistep processes known to be regulated by the tumor immune microenvironment (7,8). Although immune disorder in HCC has been well studied (9,10), the dysregulated tumor immune microenvironment in eHCC is still far from clarified, especially regarding immune cell composition. A better understanding of how specific cellular tumor transcriptome functions contribute to HCC stratification and the specific tumor microenvironment (TME) is needed to enable customized treatment design and novel immunotherapy exploitation.
Oncogene-driven immune mediators allow tumor cells to evade immune surveillance and thrive in the TME (11). Most studies of HCC have shown that oncogene expression is associated with patients' OS, somatic driver mutations, and abnormal immune cells (12,13), but whether heterogeneity in different subtypes of HCC can be stratified by a gene set-based signature has not been well established. Furthermore, genetic alteration-related gene expression plays an important role in HCC formation and is significantly higher in eHCC (14). Studies have shown that the treatment response and survival outcome of HCC patients depend not merely on tumor stage but are also associated with TME heterogeneity and molecular features (15)(16)(17)(18). Strategies to identify the subsets of HCC likely to have distinct transcriptome and immune characteristics are important for diagnosis and additional clinical therapy (19)(20)(21). Biomarkers, especially gene expression in tumor tissues, are reliably related to HCC prognosis and TME characteristics (22,23). Recently, the higher mutation rate of PRKDC has been regarded as a new target for checkpoint blockade immunotherapy; PRKDC was identified as one of the most frequently mutated DNA repair genes in liver cancer (24). In addition, PRKDC knockout has been shown to enhance anti-PD-1 antibody treatment in tumor models (24). Therefore, further analysis based on large and comprehensive datasets, in combination with more potential markers, may provide an opportunity to identify a signature for eHCC and to improve personalized medicine.
The TME, consisting of heterogeneous populations including the tumor cells themselves, infiltrating immune cells, and secreted factors, has been reported to be highly associated with tumor progression, prognosis, and therapeutic responses (18,25,26). The interaction between tumor cells and immune cells was gradually recognized and was incorporated into the emerging hallmarks of cancer in 2011 (27). In the liver, it is important to distinguish the TME of eHCC, a common condition in primary HCC, from the TME of non-tumor tissue. TME components estimated by computational evaluation have been utilized to predict cancer prognosis and design more effective therapeutic strategies (28)(29)(30), and they are also connected with tumor subtype stratification (31). Recently, Zeng et al. established a comprehensive TME model as a prognostic biomarker and indicator of immunotherapeutic benefit in stomach cancer (32). To date, the comprehensive landscape of TME-related gene set-based signatures in eHCC has not been elucidated.
To address these issues, we stratified HCC according to clinical stage and integrated multiple cohorts with gene expression data to develop and validate individualized gene set-based survival, mutational, and gene expression signatures for eHCC. Furthermore, the relationship between stratified HCC and TME immune characteristics was estimated to investigate immune-disorder mechanisms and therapeutic targets. Finally, we applied eHCC rat models for experimental verification to prove the stability and reliability of the gene set's predictive value and the potential target.
Samples and Clinical Data Description
We systematically searched for publicly available HCC gene expression datasets reported with pathological stage annotations and downloaded the expression data for filtering and analysis. In total, 18 eligible HCC cohorts were divided into three groups according to their expression platforms (TCGA, Affymetrix Human Genome U219, and Affymetrix Human Genome U133). We downloaded the raw Affymetrix "CEL" files (Table S7) from the Gene Expression Omnibus (GEO) accession viewer and adopted a robust multiarray averaging method with the affy package default parameters to perform background adjustment and quantile normalization. Gene expression values of all probes were adjusted with the dplyr package in each dataset. To identify patients at risk of HCC development and most likely to suffer from genetic dysregulation, we defined eHCC (BCLC 0-A or early marker) and advanced-stage HCC (aHCC; BCLC B-C or advanced marker) patients in the GEO datasets. For the TCGA and GTEx datasets, TCGA tumor and GTEx non-tumor RNA-seq expression data (transcripts per million reads (TPM) values) were downloaded from the UCSC Xena browser; the data were extracted and preprocessed with the Toil workflow with default parameters [a reproducible, open-source scientific workflow for big biomedical data analysis in UCSC (33)]. The Toil pipeline uses a single script to compute gene- and isoform-level expression across platforms, which efficiently decreases batch effects via the normalized TPM values. Baseline information for each eligible sample, such as available follow-up time and pathological stage, was obtained from TCGA. The eHCC (Stage I-II) and aHCC (Stage III-IV) samples from the TCGA dataset were used in the current study. Batch effects between datasets within the same platform were adjusted with the ComBat algorithm (34). We then used data from three platforms for analysis: 1) TCGA and GTEx; 2) Affymetrix Human Genome U219: GSE63898; and 3) Affymetrix Human Genome U133: GSE101685, GSE45436, GSE6222, GSE62232, GSE6764, GSE9843, GSE102079, GSE121248, GSE49516, GSE112790, GSE19665, GSE29721, GSE45267, GSE58208, GSE84402, and GSE88839.
Somatic Mutation and Copy Number Variation
The somatic mutation data (MuTect2) of TCGA LIHC patients were obtained from the TCGA data portal (https://portal.gdc.cancer.gov/) and summarized using maftools (35). For each gene, the mutation frequency in the corresponding eHCC patients was ranked in order. The LIHC dataset from the Affymetrix SNP 6.0 platform was applied for individual copy number variation (CNV) analysis. The sequence data for the cis-expression quantitative trait locus (cis-eQTL) study were filtered based on the somatic mutation files, and forward stepwise conditional analysis implemented in MatrixEQTL was conducted (36).
Generation of Immune Cell Infiltration
We established a predictive immune infiltration pattern partly from immune cell metagenes, combining the sources reported by Ru et al. (38) and Bindea et al. (39). The selected immune cell metagenes include 15 categories of T cell-related immune cells, such as regulatory T cells (Tregs), dendritic cells (DCs), and subtypes of T cells. To quantify the proportions of immune cells in the HCC samples, we used the single-sample gene-set enrichment analysis (ssGSEA) algorithm to evaluate the relative abundance of each infiltrating cell type in the three independent cohorts.
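A minimal sketch of this ssGSEA step is given below, assuming `expr` is a normalized gene-by-sample matrix and `cell_sets` a named list of marker genes per immune cell type (both hypothetical); note that recent GSVA releases wrap this call in a parameter object (ssgseaParam), so the exact interface depends on the installed version.

    library(GSVA)
    # gene-by-sample expression matrix and named list of immune-cell marker genes
    infiltration <- gsva(as.matrix(expr), cell_sets, method = "ssgsea")
    # rows of `infiltration` now hold one enrichment score per cell type and sample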
Correlation Between Cluster Gene Signature and Other Related Biological Processes
For crosstalk analysis of the different elements in HCC, we integrated the Cluster gene signatures to further investigate their function in subtypes of HCC; we term the result the signature score. The expression of each gene in a Cluster was first transformed into a z-score. Then, principal component analysis (PCA) was applied to the selected Cluster gene signature, and principal component 1 was extracted to serve as the signature score. This approach has the advantage of focusing the score on the set with the largest block of well-correlated (or anticorrelated) genes while down-weighting contributions from genes that do not track with other set members. Subsequently, the estimated signature score was used to infer the correlation between the different clusters and immune cell infiltration in subtypes of HCC. Correlation coefficients were computed by Pearson's test.
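A minimal R sketch of this scoring step, assuming `expr` is a hypothetical matrix of the cluster genes (rows) by samples (columns):

    # z-score each gene across samples, then take PC1 as the signature score.
    z <- t(scale(t(expr)))
    pca <- prcomp(t(z), center = FALSE)  # samples as observations, genes as variables
    signature_score <- pca$x[, 1]        # one score per sample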
Construction of Overall Survival and Prognostic Signature
After removal of the patients without complete clinical information in TCGA, 365 samples with complete OS information were obtained and used for further analysis. Survival analyses of selected differentially expressed genes (DEGs) were performed with the Kaplan-Meier method, and the cutoff point for each dataset subgroup was determined using the survminer R package. The "surv_cutpoint" function, which iteratively tests all possible cut points in order to find the maximum rank statistic, was adopted to dichotomize patients into low- and high-risk groups based on the maximally selected log-rank statistics, reducing the batch effect of the calculation (threshold filtering >30%). Meanwhile, patients were divided into multiple groups according to cluster subgroup, pathological stage, and immune infiltration. Multivariate survival curves for the above groups were generated via the Kaplan-Meier method, with log-rank tests to determine the significance of differences. Moreover, based on the "surv_cutpoint" function, we obtained the immune cell-related higher-risk group with the maximum rank statistic for poor prognostic signature analysis. The poor prognostic signature frequencies were calculated by the maximum rank statistic in both tumor and non-tumor samples.
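A minimal sketch of the cutpoint and log-rank steps with the survival and survminer packages follows; the data frame `df` and its columns OS_time, OS_event, and score are hypothetical.

    library(survival)
    library(survminer)
    cut <- surv_cutpoint(df, time = "OS_time", event = "OS_event",
                         variables = "score", minprop = 0.3)   # keep >30% per group
    grp <- surv_categorize(cut)                # labels each sample "high" or "low"
    fit <- survfit(Surv(OS_time, OS_event) ~ score, data = grp)
    survdiff(Surv(OS_time, OS_event) ~ score, data = grp)      # log-rank test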
Functional and Pathway Enrichment Analysis
Gene Ontology (GO) function (40) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway (41) enrichment of the DEGs in different stratifications of HCC was analyzed using the clusterProfiler R package (42). GO functions include biological process (BP), molecular function (MF), and cellular component (CC). To explore the underlying functions distinguishing the high- and low-immune-infiltration groups, GSEA (http://software.broadinstitute.org/gsea/index.jsp) was implemented to determine the enrichment of a certain gene rank in pre-defined BPs. p < 0.05 was chosen as the cutoff criterion. The developing R package enrichplot (https://github.com/GuangchuangYu/enrichplot), which implements several visualization methods for interpreting enrichment results, was adopted to visualize immune-relevant gene clusters. Furthermore, we measured the functional similarity among Cluster C1 proteins by ranking their average value inside the interactome. Functional similarity, defined as the geometric mean of the semantic similarities in the BP, MF, and CC aspects of GO, measures the strength of the relationship between each protein and its partners and was computed with the GOSemSim package (43) using the Wang method.
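For the GO step, a minimal clusterProfiler sketch might look as follows, assuming `cluster_genes` is a hypothetical vector of Entrez gene IDs.

    library(clusterProfiler)
    library(org.Hs.eg.db)
    ego <- enrichGO(gene = cluster_genes, OrgDb = org.Hs.eg.db,
                    ont = "BP", pvalueCutoff = 0.05, readable = TRUE)
    head(as.data.frame(ego))   # top enriched biological processes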
Protein-Protein Interaction Network Analysis
The genomic associations between Cluster C1 genes and PRKDC were queried in STRING (44), and their network was explored based on the retrieval of interacting genes/proteins. The combined score was generated from co-expression, experimentally determined interactions, homology, database annotation, and automated text mining.
Animal Model
Five-week-old male Wistar rats (Nomura Siam International, Bangkok, Thailand) were housed and acclimated in specific pathogen-free cages at the laboratory animal center, Chiang Mai University, under a 12-h light/dark cycle at 21°C ± 1°C and 50% ± 10% humidity. All animals had free access to food and water. The quality of life of all animals was monitored during the experiments according to the suggestions of the animal ethics committee. For construction of the HCC model, rats were intraperitoneally injected with diethylnitrosamine (DEN; Sigma) at 50 mg/kg (b.w.) once a week and were then continuously housed without DEN induction for 4 weeks (defined as eHCC) or for 8 weeks (defined as aHCC). Healthy control rats were intraperitoneally injected with normal saline (4 ml/kg, b.w.) once a week for 4 weeks and were then continuously housed without any induction for 8 weeks. At the time of sacrifice, rats were anesthetized using isoflurane, and liver tissues were collected for histological analysis and RNA sequencing. The animal experiments were approved by the Animal Ethics Committee of Chiang Mai University.
To verify our predictions in this HCC rat model, two rats per group (normal, eHCC, and aHCC) were selected for RNA sequencing. Selection of eHCC and aHCC initially relied on the gross appearance of tumor nodules in the liver at the time of sacrifice. Livers of DEN-induced rats without a clear tumor nodule were chosen as the eHCC model, while those bearing several tumor nodules were chosen as the aHCC model. In addition, H&E staining was conducted to investigate morphological changes in each group.
RNA Isolation and Library Preparation
Total RNA from liver tissue of normal, eHCC, and aHCC rats was isolated using the NucleoSpin RNA Plus kit (Macherey-Nagel, Catalog no. 740984.50). The quality and integrity of total RNA were checked on an Agilent Bioanalyzer 2100 system and by agarose gel electrophoresis. After the quality control (QC) procedure, mRNA was purified using poly-T oligo-attached magnetic beads, and cDNA libraries were constructed according to the manufacturer's recommendations (Novogene Corporation, Beijing, China). All libraries were sequenced on the Illumina HiSeq platform with 150-bp paired-end reads (PE150). Library construction and sequencing were performed by the Novogene Corporation. The raw RNA-seq data were first processed with Hisat2 software (default parameters) to remove rRNA contamination, and user-specified adaptor sequences were filtered with Python scripts. The purified data were passed through a QC tool (tmkQC.py) with both quality checks (base threshold >20, proportion of low-quality bases in reads <10%) and data processing capability. The high-quality, clean reads were then aligned to the mm10 mouse reference with the UCSC assembly using Hisat2 with default parameters. Raw read counts for the rat model were assigned to gencode.vM23 genes. Gene-level read counts were generated with htseq-count, normalized to fragments per kilobase of exon per million mapped fragments (FPKM), and converted to TPM.
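The FPKM-to-TPM conversion is a simple renormalization; the sketch below assumes hypothetical inputs `counts` (gene-by-sample raw counts) and `len_kb` (gene lengths in kilobases).

    fpkm <- sweep(counts / len_kb, 2, colSums(counts) / 1e6, "/")  # reads per kb per million mapped
    tpm  <- sweep(fpkm, 2, colSums(fpkm), "/") * 1e6               # rescale each sample to 1e6
    colSums(tpm)  # each column sums to 1e6 by construction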
Statistical Analysis
All statistical analyses were performed using R (version 4.0.2) with several publicly available packages and GraphPad Prism 8.0. Kruskal-Wallis tests were used for difference comparisons of three or more groups (45). IGV software was used for sequencing data visualization (46). Correlation coefficients between gene expression values were computed by Pearson's and distance correlation analyses. The pROC package (47) was used to construct receiver operating characteristic (ROC) curves and to ascertain the area under the curve (AUC) and confidence intervals, estimating the diagnostic accuracy of specific genes for eHCC and immune characteristics. p-values of less than 0.05 were considered statistically significant, and all p-values were two-sided.
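As an illustration of the ROC step with pROC, assuming hypothetical vectors `label` (1 = eHCC, 0 = non-tumor) and `score` (e.g., the Cluster C1 signature score):

    library(pROC)
    r <- roc(label, score)
    auc(r)      # area under the ROC curve
    ci.auc(r)   # confidence interval (DeLong method by default)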
Development and Identification of the Specific Early-Stage Hepatocellular Carcinoma Gene Set in The Cancer Genome Atlas and Gene Expression Omnibus Cohorts
In the present study, we followed three major steps to establish an accurate and reliable gene-set signature for eHCC (Figure 1). First, DEGs in both TCGA (training set) and GEO (validation sets) were filtered and integrated to retain those with higher mutation frequency and significant prognostic value. Then, the selected DEGs were clustered into three groups in the training set and validated in the two independent validation sets. The significant cluster group was further used for immune characteristics analysis. Next, through GO function analysis, we identified the biological functions and interactions of the selected signature genes. The expression, immune characteristics, and genetic alterations related to the core marker PRKDC were also analyzed. Finally, we constructed an HCC rat model, and the sequencing data were used to validate the signature and PRKDC functions and related characteristics. The immune cell characteristics of the different HCC subtypes and their connection to the specific gene set-based signature were investigated. A total of 879 patients with HCC and 529 non-tumor samples from 19 independent gene expression datasets with available clinical information were included in the study (Table S1). A DEN-induced HCC rat model was constructed to verify the reliability of the predictions.
For commonly altered genes in eHCC, Kaplan-Meier survival analysis revealed that the expression levels of 53 DEGs were significantly associated with prognosis in TCGA HCC. Among these, the polarized prognostic risk signature between the two types of DEGs was concurrent (Figure 2B), including 38 poor prognosis indicators (upregulated DEGs) and 15 favorable prognosis indicators (downregulated DEGs) (Table S3). PCA utilizing all prognostic risk signature DEGs revealed clear separation between the eHCC/aHCC and non-tumor groups (Figure 2C). However, the same trend was not observed between HCC patients in early and advanced stages. To explore the biological behaviors among these distinct DEG patterns, we performed gene-set variation analysis (GSVA) enrichment analysis between the three groups. As shown in Figure 2D and Table S4, these were markedly enriched in cell-cycle carcinogenic activation pathways such as E2F targets, the PI3K/AKT/mTOR pathway, and the Wnt/beta-catenin pathway (48)(49)(50).
Dual Analysis of Prognostic Indicator Gene Expression Identifies Subgroup Function of Hepatocellular Carcinoma
In order to stratify eHCC and aHCC based on these prognostic indicator gene expression levels, we utilized transcriptome data from TCGA and validated the results in the GEO database.

FIGURE 1 | Flowchart of the study. Nineteen public HCC datasets containing 1,408 tumor and non-tumor cases were included and categorized into three independent cohorts according to the data platform. We developed the eHCC-related gene-set signature in the training set and two validation sets. Further, we integrated the gene-set signature with immune characteristics, prognosis, genetic alteration, and biological functions to investigate the prognostic value. HCC, hepatocellular carcinoma; eHCC, early-stage hepatocellular carcinoma.

To enrich for tumor-specific mRNA, filtering was performed to exclude the non-tumor samples in the three cohorts. Genes belonging to the reactome gene sets "upregulated DEGs" (n = 38) and "downregulated DEGs" (n = 15) were selected for analysis. Previous studies have demonstrated HCC heterogeneity in gene expression, including metastasis, relapse, and prognosis, between biologically distinctive tumor types (51,52). To aid in selecting genes co-regulated within each group and relevant to subtypes of HCC, we applied consensus clustering and identified two groups of robustly co-expressed upregulated DEGs (Cluster C1; n = 13) and downregulated DEGs (Cluster C3; n = 15) to be used for eHCC evaluation in TCGA (Figure 3A). The unsupervised clustering results were concordant in both the TCGA (Figure 3A) and GEO datasets (Figure S1). In TCGA, the median expression of Cluster C1 and Cluster C3 genes was calculated for each sample and used to assign one of four prognostic signature profiles associated with the cluster subtypes: quiescent, poor prognosis, favorable prognosis, and mixed (Figure 3B). Expression levels of Cluster C1 and Cluster C3 genes across the subgroups, including non-tumor samples, are presented in Figure 3C. To determine whether the Cluster C1- and Cluster C3-based DEGs have an impact on different subgroups of HCC, we generated multi-group Kaplan-Meier survival curves using the identified four phenotypes in TCGA eHCC (log-rank test; p = 0.028) and aHCC (log-rank test; p = 0.034) (Figure 3D). Notably, a poor survival outcome was observed in cases with aHCC, associated with shorter median survival (log-rank test; p < 0.0001, left). Moreover, the Cluster C1-defined poor prognosis cases had the shortest median OS in both the eHCC and aHCC subgroups (Figure 3D, right panel).
Construction of the Prognostic Gene Signature and Immune Functional Annotation
To identify the underlying biological characteristics of these prognostic gene modification phenotypes in eHCC, we focused on the TCGA cohort and validated the results in the two GEO cohorts, which together comprised more than 590 eHCC, 210 aHCC, and 600 non-tumor cases and offered the most comprehensive functional annotation. There were significantly distinct patterns of the Cluster C1 and Cluster C3 signatures across the three proposed subtypes in TCGA, consistent with the above outcomes (Figure 4A). A higher Cluster C1 signature score was associated with poorer OS (Figure 4B). The stratification between non-tumor, eHCC, and aHCC was significantly accompanied by decreased Cluster C1 and increased Cluster C3 signature scores. Then, we evaluated the association between the two clusters and T cell-related immune cell infiltration from transcriptomic data in both the TCGA and GEO validation sets. The eHCC patients were characterized by the type of dysregulated immune cells and presented variable associations with the different cluster types. The Cluster C1 signature showed negative associations with activated CD8 T cell (CD8 Tam), effector memory CD8 T cell (CD8 Tem), and cytotoxic T cell (CTL) infiltration levels in the eHCC and aHCC groups, while presenting positive correlations in the non-tumor group; the Cluster C3 signature showed positive associations with CD8 Tam, CD8 Tem, and CTL infiltration levels in the eHCC, aHCC, and non-tumor groups (Figure 4C). In addition, differences in clinical subgroups of HCC were assessed in the TCGA series, and lower CD8 Tem and CTL levels were significantly associated with tumor development from normal to aHCC (Figure 4D). Consistent with these findings, the correlation analyses between the Cluster C1 signature and CD8 Tem and CTLs in the two GEO validation cohorts showed encouraging results (Figure 4E), and CD8 Tem and CTLs were also negatively associated with HCC development in the two independent cohorts (Figure 4F).
To further characterize the clinical differences in immune cells among these HCC patients, we subdivided tumors into two subtypes: a high-infiltration group and a low-infiltration group. Differences in the CD8 Tem- and CTL-based molecular subtypes were evaluated in the TCGA cohort, and lower infiltration in HCC was significantly associated with poor prognosis (HR, 1.606; 95% CI: 1.112-2.321; p = 0.002; Figure 4G). Specific immune cell infiltration was also investigated in eHCC and aHCC patients to explore whether immune disorder affected the ability of Cluster C1 to predict eHCC and survival outcomes; the survival disadvantage of low infiltration was most obvious in both eHCC and aHCC patients (log-rank test; p < 0.0001) (Figure 4H). In addition, within the mRNA expression data of the eHCC and non-tumor samples, we used CD8 Tem- and CTL-related high/low infiltration as a pattern recognition variable. Based on 13 Cluster C1 members from TCGA and 25 Cluster C1 members from GEO validation set 2, the clustering and changing trends of each sample were visually displayed on PCA maps (Figure 4I). Despite individual variability, the graphics show appreciable separation of infiltration condition in the two cohorts. We next interrogated the predictive value of the Cluster C1 and Cluster C3 signatures in the TCGA and GEO validation cohorts, evaluating the diagnostic performance of the two clusters in discriminating eHCC from the non-tumor group. In the TCGA cohort, the analysis demonstrated that the Cluster C1 (AUC = 0.946; 95% CI: 0.924-0.968) and Cluster C3 (AUC = 0.977; 95% CI: 0.963-0.99) signatures possessed high accuracy in predicting eHCC (Figure 4J, upper left). Moreover, combining the Cluster C1 and Cluster C3 signatures improved the predictive value compared with that of Cluster C1 or Cluster C3 alone in both TCGA and GEO validation set 2 (Figure 4J, upper right). We then evaluated the predictive value of the Cluster C1 and Cluster C3 signatures in the TCGA eHCC group, and the predictive value of Cluster C1 (AUC = 0.647; 95% CI: 0.569-0.725) for CD8 Tem- and CTL-related immune infiltration was also confirmed (Figure 4J, lower left). Meanwhile, combining Cluster C1 and Cluster C3 slightly improved the predictive value and presented a similar tendency in the three independent cohorts (Figure 4J, lower right). Moreover, GO enrichment analysis of Cluster C1/C3 signature gene function in the eHCC immune subgroups (TCGA) was conducted using the R package clusterProfiler to discover potential regulatory relationships among these signature genes in biological functions. The BPs with significant enrichment are summarized in Table S6. These Cluster genes showed distinct BPs between the high- and low-infiltration groups (Figure 4K), especially in cell cycling and proliferation regulation in the eHCC TME. Notably, PRKDC showed enrichment in BPs related to mitotic cell cycle transition and DNA replication, together with break repair-related MCM family member genes (Figure 4L). Consistent with the gene mutation frequencies in Figure 2A, this cell-regulatory potential confirmed again that PRKDC plays a non-negligible role in the eHCC TME. These findings demonstrate that the Cluster C1 signature and PRKDC modification patterns can potentially predict eHCC and tumor immune microenvironment formation.
Association of PRKDC Dysregulation and Immune-Related Prognosis Risk
Interactomics holds great promise for understanding the molecular mechanisms of cells affected by biological factors. To examine Cluster C1-related proteins and their protein-protein interactions (PPIs), the STRING database (53) was used to deduce enriched proteins and generate a PPI network (Figure 5A). The PPI network depicted functional links between PRKDC and Cluster C1-related proteins, including CDC25C, MCM4, and TOP2A. To further identify the essential members of the PRKDC interactome in eHCC, we ranked Cluster C1 members by their average functional similarity within the interactome (54). MCM2/3/4 and PRKDC (cutoff value >0.54) were the two types of top-ranked proteins potentially playing central roles in Cluster C1 (Figure 5B). PRKDC, which had not previously been identified as an important partner in eHCC, has been reported to play an important role in HCC (55) and T cell-related immunodeficiency (56). As PRKDC possessed the highest average functional similarity in our analyses, it was selected for further investigation.
To determine oncogenic events across the different subgroups, we investigated the indels and CNVs affecting gene expression in eHCC. There was a moderate correlation between PRKDC expression and copy number-altered values in the TCGA cohort, which was higher in eHCC (Spearman's correlation rho = 0.2, p = 0.012) than in aHCC (Spearman's correlation rho = 0.04, p = 0.77) (Figure 5C). Meanwhile, we noted a significant increase in PRKDC expression between eHCC and non-tumor samples, which was associated with single-nucleotide polymorphisms (SNPs). The eQTL analysis showed that the snp_01 site (statistic r: 2.263; b-score: 2.726) and the snp_07 site (statistic r: 1.952; b-score: 2.355) were positively associated with PRKDC expression in eHCC (Figure 5D). Furthermore, poor prognostic signature analysis based on the immune cell profiles of adaptive and innate immunity, as well as pathological stages and PRKDC CNVs, was examined in TCGA. For each type of immune cell, the corresponding maximum rank survival statistic was selected as a poor prognostic indicator and used to dichotomize the HCC and non-tumor samples. In the TCGA cohort, the CD8 Tem- (40% vs. 34%) and CTL-related (55% vs. 53%) risk frequencies were higher in the PRKDC genetically altered group of HCC patients (Figure 5E). Concurrently, lower CD8 Tem- (non-tumor: 22%, early: 34%, advanced: 45%) and CTL-related (non-tumor: 21%, early: 51%, advanced: 62%) risk frequencies were observed in non-tumor and eHCC samples compared with aHCC (Figure 5F).
Evaluation of Gene-Set Signature and PRKDC in Different Hepatocellular Carcinoma Rats
Next, we used the DEN-induced rat eHCC model to test whether Cluster C1 and PRKDC play primary roles in tumorigenesis. The application of DEN has an irreversible carcinogenic effect in rodents (58), and repeated injection of low-dose (50 mg/kg) DEN generates disease that more closely resembles the human pathology. H&E staining demonstrated a clear morphological change in eHCC and aHCC compared with normal tissue (Figures 6A, B; Figure S2). Overall, we observed that Cluster C1 from TCGA presented a higher signature score in the eHCC and aHCC groups than in the control group, consistent with its adverse role in accelerating HCC malignant behaviors (Figure 6C). In addition, a significantly higher level of PRKDC was detected in eHCC/aHCC compared with normal rat tissue (p = 0.0036) (Figure 6D). Immune characteristics and dysregulated biomarker gene expression are very common and typically have a profound impact on the TME. Unexpectedly, our results showed that CD8 Tem and CTL infiltration levels were markedly decreased in the eHCC/aHCC rat groups and negatively associated with PRKDC expression (Figure 6E). In the context of PRKDC dysregulation, structural alterations result in genomic mutation and substantial tumor-regulating roles in eHCC pathogenesis. In support of this hypothesis, evaluation of the mRNA transcripts revealed read count amplification and nucleotide alterations in three exon regions of eHCC/aHCC compared with normal rat tissues (Figure 6F). The splice junction track shows the exon interactions in the genomic landscape map. The above results, in line with our predictions in the public datasets, confirm that, in eHCC development, the unsupervised Cluster C1 signature and PRKDC dysregulation are important predictors and are associated with CD8 Tem and CTL characteristics in the TME.
DISCUSSION
In the present study, we showed increased gene expression and mutation in eHCC in association with tumorigenesis and the immune milieu, supporting dysregulated specific metagenes and immune cells as potential mechanisms and predictors.
Having demonstrated the tumor-specific DEGs of Cluster C1, we found that the role of PRKDC was associated with CD8 Tem and CTL infiltration levels in eHCC. Of note, we observed that an increased Cluster C1 signature and decreased CD8 Tem and CTLs were both independent poor prognostic factors for survival in HCC patients. Due to the absence of specific symptoms in eHCC and the lack of early diagnostic markers, most patients with HCC are diagnosed at an advanced stage with poor prognosis (59,60). Identifying the genomic and immune-environment characteristics of HCC initiation and development will therefore enhance our understanding of novel diagnostic markers for eHCC and TME patterns and guide more effective immunotherapy strategies. The role of the tumor field effect of genomic instability and oncogene overexpression in HCC has gained much interest in recent years (61)(62)(63), and an altered TME is currently considered a promoter of cancer (64,65). Although under physiologic conditions immune disorder is an adaptive response to genetic alteration (66,67), when the immune disorder stimuli persist, the non-resolved immunodeficiency contributes to carcinogenesis (15,68). Along these lines, the concept of genetic alteration and the tumor immune microenvironment, such as TP53/GATA4 mutation, CXCL10 expression, and infiltrating immune cells (monocytes, T, B, and NK cells), has previously been associated with cancerization in the liver (61,69,70). With this study, we provide a comprehensive description of a diagnostic signature and immune microenvironment characteristics underlying eHCC. To this end, we first identified 414 upregulated and 272 downregulated DEGs related to HCC development and assessed mutational significance in eligible samples to define new biologically and clinically relevant genes not previously appreciated. From the 77 DEGs shown to have a higher mutation level and an association with OS, 53 feature genes were further screened. The analysis showed that PRKDC possessed the highest mutation frequency in the eHCC group compared with the aHCC group and was also significantly associated with poor OS in HCC patients (HR = 1.79; 95% CI: 1.26-2.53). This finding is in line with previous reports suggesting that PRKDC mutation is closely connected to various tumors (24). However, integrated analysis of these feature genes' expression revealed the difficulty of stratifying pathological stage and functional annotation in HCC patients. At the same time, we recognize that a more rigorous approach should be used to split the data into different groups with acceptable statistical power (71,72).
The heterogeneity in HCC gives rise to distinct tumor subclasses based on environmental factors, genetic heterogeneity, inflammation, and immune infiltration (73)(74)(75)(76), leading to growing interest in translating this information into clinical practice for HCC treatment and prediction, as well as in developing personalized therapies based on unique intrinsic molecular signatures. To identify the most promising candidates for eHCC diagnosis, we conducted a patient-based unsupervised analysis using a compendium of feature gene sets recapitulating the tumor's specific molecular signature in three independent cohorts. The association between poor- and favorable-prognosis DEG expression and the expression level of Cluster C1/C3 genes lends biological significance to HCC stratification and supports targeting tumor-specific markers as a means of reprogramming an aggressive tumor type. Importantly, Cluster C1 (upregulated DEGs) expression showed a better ability to distinguish HCC strata, being significantly associated with the shortest OS in the aHCC/eHCC subgroups and with poorer prognosis in aHCC compared with eHCC. Meanwhile, the survival benefit associated with Cluster C3 expression (downregulated DEGs) could be indirect evidence supporting the primary role of Cluster C1. Therefore, the correlation of Cluster C1 expression with prognostic subtypes corroborated the role of unique molecular signatures in tumor development in HCC.
Emerging data support the idea that TME cells play a crucial role in liver cancerization, HCC development, chemoresistance, and recurrence (15,(77)(78)(79)(80). Here, we revealed a comprehensive landscape of crosstalk between the specific prognostic clusters, the clinical characteristics of HCC, and immune cell infiltration. With the help of several computational algorithms, integrated analysis revealed that Cluster C1 not merely acts as a prognostic biomarker for eHCC but is also significantly associated with immune cell dysregulation in patients of different clinical subtypes. Patients with lower levels of CD8 Tem and CTLs, reflecting the immunosuppressive nature of the HCC TME and reduced protection against external stimuli (81)(82)(83), were significantly related to Cluster C1 signature scores in eHCC. Moreover, lower T lymphocyte infiltration in HCC was previously reported to be associated with innate immunosuppression and tumor mutation burden, including Tregs, cytokines (TGF-β and IL-10), and marker gene mutation frequency (84)(85)(86). Considering the changes in the TME between non-tumor tissue and HCC (87,88), our Cluster C1 signature showed a predictive advantage in distinguishing the specific eHCC immune cell (CD8 Tem and CTL) infiltration level from non-tumor samples. In this respect, in line with previous studies (89)(90)(91)(92), these two immune cell types (CD8 Tem and CTLs) markedly elucidate the immune characteristics of HCC initiation and progression and have also shown benefit in improving patient prognosis in both eHCC and aHCC. Therefore, we infer that the effect of the Cluster C1 signature on eHCC patients is probably related to the remodeling of specific immune cells in the TME. By applying ROC curve analysis (47), we also demonstrated the predictive value of the Cluster C1 signature for liver cancerization and for CD8 Tem/CTL-based immune infiltration in three separate cohorts of patients with eHCC. Of note, combining the Cluster C1 and C3 signatures can slightly improve the prediction accuracy, although the diagnostic accuracy of the Cluster C3 signature alone was not acceptable. Taken together, these results provide new insights for immune cell omics research on the mechanisms by which specific genes regulate the survival of eHCC patients.
To explore potential therapeutic target mechanisms for HCC patients with poor immune infiltration, we further performed biological functional analysis using gene expression data from TCGA. The results for the Cluster C1 gene set showed that the core molecule PRKDC and its associated genes were significantly correlated with cell cycling and DNA replication. Previous studies indicated that both cell cycling and DNA replication impairment are related to T-cell inhibition and tumor cell death (93)(94)(95). Furthermore, our verification in TCGA-HCC patients confirmed that PRKDC dysregulation was mostly associated with its genomic instability, especially in eHCC patients. In addition to transcriptional regulation, SNPs in eHCC are also significant cis-eQTLs for PRKDC expression. SNP loci in cancer have been demonstrated to influence checkpoint gene-related immune disorder and target gene expression (96,97), suggesting that locus variation plays an important role in linking gene expression and tumorigenesis. At present, heterozygous PRKDC mutation has been reported to impair DNA double-strand break (DSB) repair and contribute to immunodeficiency (57). Not surprisingly, PRKDC genetic alteration is emerging as a predictive biomarker and drug target for anti-tumor immunotherapy in various malignancies (24). PRKDC mutation in patients has been shown to produce a skewed cytokine response typical of Th2 and Th1 cells (56) and to influence immune responses (98). Moreover, higher PRKDC mutation and expression were correlated with ER − breast cancer immune pathway functions (99). In HCC, PRKDC expression was shown to be associated with shorter OS and immune cell infiltration (100). Our finding is interesting given the important role of PRKDC in specific immune cell (CD8 Tem and CTL)-related poor survival rates in the context of elevated CNV in HCC patients. Consistent with this, the lower prognostic frequencies for CD8 Tem and CTLs suggest these immune cells' clinical effects in the initiation and progression of HCC. On the other hand, in the DEN-induced eHCC rat model, our experimental verification confirmed that both the Cluster C1 signature and PRKDC expression were positively associated with tumorigenesis, as well as with downregulated CD8 Tem and CTL infiltration levels. In principle, somatic mutation exerts its primary effects on the expression of cancer-relevant genes in tumorigenesis, indicating that it is a powerful driver of intratumoral heterogeneity and progression (101). Thus, evaluating PRKDC genomic instability and expression to enable a better understanding of tumorigenesis is an effort to provide fresh insights for developing a biomarker in combination with bioinformatics prediction. Taken together, these preliminary findings suggest a diversity in the HCC TME, offering a comprehensive view of the relative levels of immune subtypes and providing insights into the crosstalk between specific target genes, eHCC, and immune characteristics.
CONCLUSION
In conclusion, we identified a gene set-based prognostic signature using a large number of individuals that effectively differentiates eHCC from aHCC and non-tumor controls with high accuracy. Our study demonstrated that eHCC is characterized by disorder of specific immune cells, namely, CD8 Tem and CTLs, both of which were closely associated with the Cluster C1 signature. Of note, given the correlation among genome instability, PRKDC expression, and the immune cell-related poor prognostic signature, PRKDC is a potential candidate for the early diagnosis of HCC patients and their selection for immunotherapy. These findings have implications for specific gene-signature and tumor immune environment characteristics in HCC patient stratification and could be of benefit in developing novel immunotherapies.
DATA AVAILABILITY STATEMENT
All the original data of the current study are available from the corresponding author on reasonable request. The publicly available datasets are summarized in Table S7. The rat RNA-seq data are available from the SRA under BioProject number PRJNA772097 (https://www.ncbi.nlm.nih.gov/sra/?term=PRJNA772097).
ETHICS STATEMENT
All animal experiments were approved by the Animal Ethics Committee of Chiang Mai University, Thailand (Protocol Number: 2563/RT-0015).
AUTHOR CONTRIBUTIONS
QZ: conceptualization, data curation, formal analysis, and writing-original draft. AC: methodology (animal experiments). RW: methodology (animal experiments). ZX: conceptualization, writing-review and editing, and supervision. CP: conceptualization, methodology (animal experiments), writing-review and editing, supervision, and funding acquisition. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We would like to apologize for the omission of any primary citations. We thank the patients and investigators who participated in UCSC, TCGA, and GEO for providing data. We also thank the Laboratory Animal Center, Chiang Mai University, for the animal facilities.
Future of Mathematical Modelling: A Review of COVID-19 Infected Cases Using S-I-R Model
The spread of the novel coronavirus disease (COVID-19) has resulted in chaos around the globe. The infected cases are still increasing, with many countries still showing a trend of growing daily cases. To forecast the trend of active cases, a mathematical model, namely the SIR model, was used to visualize the spread of COVID-19. For this article, a forecast of the spread of the virus in Malaysia has been made, assuming that all Malaysians will eventually be susceptible. With no vaccine or antiviral drug currently developed, we visualize how the peak of infection can be reduced (namely, flattening the curve) to minimize the effect of the COVID-19 disease. For Malaysians, let us follow the rules and obey the SOP to lower the R0 value from time to time, hoping that the virus will vanish one day.
Introduction:
In Wuhan, Hubei Province, China, a novel strain of coronavirus (SARS-CoV-2) was detected in December 2019, causing a severe and possibly fatal respiratory syndrome, i.e., COVID-19. Since then, the disease, declared a pandemic on 11th March 2020 by the World Health Organization (WHO), has spread worldwide. Currently, social distancing, self-quarantine, and wearing a face mask have emerged as the most widely adopted methods for preventing and managing the pandemic in the absence of proper medication or vaccination.
Mathematical modelling is a field utilized to translate a problem into mathematical equations, which give an overview of, answers to, and forecasts for the problem. Researchers produce mathematical equations that can describe the nature of the problem and prepare solutions to prevent it from becoming a disaster (1). To demonstrate the possible outcome of a pandemic and inform public health interventions, mathematical models may project how infectious diseases spread. Moreover, to determine parameters for different infectious diseases, models employ simple assumptions or gathered statistics along with mathematics to measure the results of various interventions (2). Modeling can determine which intervention(s) may prevent the disease, guide testing, or forecast potential development trends (3).
In the context of COVID-19, mathematical models have become one of the main components in strategizing preventive measures to minimize the effect of the spread (4). Through such forecasts, the government and Ministry of Health (MOH) may take proper action to prevent the spread of the virus (5), for example, initiating a lockdown and movement control order (MCO) (MOH statement on 18th March 2020), social distancing, mandating the use of masks and hand sanitizer in public, and so on. The results obtained can also help the government prepare its response, mainly to prevent hospital beds from being overwhelmed by a higher number of COVID-19 patients.
Numerous studies have been performed on attributes related to the COVID-19 pandemic using the SIR model. For example, (6) used a procedure to fit a set of SIR and SIRD models with a time-dependent contact rate to reported case data. Moreover, (7) contributed to understanding the COVID-19 contagion in Italy, developing a modified SIR model for the contagion and using official data of the pandemic up to 30th March 2020 to identify the parameters of this model. The non-standard part of the approach is that they considered, as unknowns alongside the model parameters, the initial number of susceptible individuals and the proportionality factor relating the detected number of positives to the actual (and unknown) number of infected individuals. Identifying the contagion, recovery, and death rates, as well as the mentioned parameters, amounts to a non-convex identification problem that was solved by employing a two-dimensional grid search in the outer loop, with a standard weighted least-squares optimization problem as the inner step. Additionally, (8) reported new analytical results and numerical routines suitable for parametric estimation of the SIR model. That manuscript introduces iterative algorithms approximating the incidence variable, which allows analysis of the model parameters from the numbers of observed cases. The numerical approach is exemplified with data from the European Centre for Disease Prevention and Control (ECDC) for several European countries from Jan 2020 to Jun 2020.
With limited medical facilities and a high transmission rate, COVID-19 progression and its subsequent trajectory must be analyzed in Malaysia. Consolidating these additional parameters helps provide a broader picture of COVID-19 dissemination in Malaysia. Forecasts of where and when the disease will occur may be of great use to public decision-makers, as they give them time to intervene in the local public health systems.
SIR Model:
The SIR model (Susceptible - Infected - Recovered) is an epidemic model used to explain a virus's spread (9). The construct of the simple SIR model was first articulated by (10). There are three components in this model: Susceptible - people who might be infected; Infected - people who got infected; and Recovered - patients who recovered. Several studies have also categorized the death of a patient under R (11). The SIR model may provide observations and forecasts about how the virus is transmitted into populations that the recorded data alone cannot convey. Our study highlights the value of modeling the distribution of COVID-19 via the SIR model, as it aids in making useful forecasts to determine the effects of the disease. From these three components, the following assumptions can be made, producing the related differential equations (12).

1st ASSUMPTION
Susceptible people become infected through contact with infected people, so S(t) decreases over time. The differential equation is given by

dS/dt = -a S(t) I(t), (1)

where a denotes the rate of infectivity.
2nd ASSUMPTION
Given that infected people will infect susceptible people, the number of infected people I(t) will increase. However, the amount will eventually decrease once they recover or die, denoted as R(t). Thus, the differential equation produced is given by

dI/dt = a S(t) I(t) - (1/λ) I(t), (2)

where 1/λ is the rate of removal (recovery or death).
3rd ASSUMPTION
In this case, consider that I(t) will eventually recover from the disease. Thus, the differential equation is given by

dR/dt = (1/λ) I(t). (3)
Under the condition that S + I + R = 1, these three equations constitute the SIR model.
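To make the dynamics concrete, a minimal numerical sketch of equations (1)-(3) in Python follows (the original analysis used Wolfram Mathematica). The initial infected fraction I(0) is an assumed value, since the text does not state it; the peak fraction does not depend on I(0), but the peak time does.

```python
# Sketch of the normalized SIR system (1)-(3); parameter values follow the text.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, a, lam):
    S, I, R = y
    dS = -a * S * I           # equation (1)
    dI = a * S * I - I / lam  # equation (2)
    dR = I / lam              # equation (3)
    return [dS, dI, dR]

a, lam = 0.25, 14.0                  # a = R0/lam with R0 = 3.5
y0 = [1.0 - 1e-4, 1e-4, 0.0]         # assumed small initial infected fraction
sol = solve_ivp(sir, (0, 200), y0, args=(a, lam), dense_output=True)

t = np.linspace(0, 200, 2001)
I = sol.sol(t)[1]
print(f"peak I = {I.max():.3f} at t = {t[I.argmax()]:.0f} days")
# The peak fraction comes out near 0.356, matching the I(70) = 0.356282 quoted
# below; the peak time shifts with the assumed I(0) and approaches the reported
# t = 70 days for a sufficiently small initial infected fraction.
```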
Parameter Setting:
Several parameters need to be set. First of all, it is considered that there are no import cases (11). λ is the incubation period of the infected patient (i.e., the duration for which a patient remains infectious); for this case, λ = 14 days is assumed, as this value is considered the number of days required to undergo a mandatory quarantine for COVID-19 patients (13). The value of a is obtained using the following formula (11):

a = R0/λ. (5)

From Equation (5), with R0 = 3.5, the value a ≈ 0.25 is obtained; the results were analyzed using Wolfram Mathematica. Figure 2 shows that the infection reaches its peak at t = 70 days with I(70) = 0.356282 (around 36% of the total population), whereas the infection plateaus after t = 140 days.
Figure 2. The graph for Susceptible S(t), Infected I(t), Removed R(t), based on the values R 0 = 3.5, a = 0.25, and incubation period of 14 days.
Data included in this analysis are those from 18th March 2020 onwards. The graph of Fig. 2 is then compared with the official data released by MOH (Fig. 3). Upon comparing Fig. 2 with Fig. 3, the peak occurs on 7th April 2020 (data from MOH), 20 days after the MCO was implemented. This shows a clear difference in terms of when the cases reach their peak. It might indicate that our assumptions are wrong or the data are incorrect. Several pieces of literature were studied to investigate parameters that would give a more favorable result. From (14), the value given for the parameter a is 0.6931163, which results in a more favorable output (with a = 0.6931163 and the incubation period of 14 days). Figure 4 shows that the infection reaches its peak at t = 22 days with I(22) = 0.661743 (around 66.17% of the total population), whereas the infection plateaus after t = 90 days. The active case peak thus occurs at t = 22 days, closer to the MOH data (20 days). This also implies that the R0 value at t = 0 is closer to 9.7. Suppose that the MCO implementation on 18th March 2020 could reduce the initial number of susceptible persons to 20%. There is then a significant drop in the peak of active cases, as shown in Figs. 5 and 6. Figure 6 shows that the reduction of the susceptible population S(0) may result in lower active cases in Malaysia. This is possible with the implementation of the MCO and EMCO and by abiding by the Malaysian government's SOP, such as wearing a facemask, practicing social distancing, frequently washing hands, practicing good respiratory hygiene, etc. By comparing the reported data with the data from our modeling, we infer that if adequate controls and firm policies are enforced to monitor infection rates early after the spread of the disease, the spread of COVID-19 could be monitored in all the populations considered.
Discussion:
According to the SIR model, there are significant differences in the expected number of infected people depending on how the S(0) parameter is defined. In addition, this paper did not consider Malaysians returning from overseas (import cases). Active case detection also becomes a problem given the limited testing kits at the early stage, and contact tracing may affect the data. There is also a possibility of unreported cases, as stated by (15). During an epidemic situation, restricting human mobility reduces the spread of infectious disease. Minimizing the spread of the contagious disease will help the health care community by reducing or eliminating the surge of people coming to treatment facilities. Hospitals' surge capability is always small, and due to insufficient available resources, a higher than average number of patients may result in poor treatment. Consequently, measures such as the lockdown of a region or a whole province, as implemented by Malaysia during the Movement Control Order (MCO), would result in "flattening the epidemic curve" to some degree. However, this model can forecast the time, t, taken for active cases to reach their peak, given that no source of infection from outside occurs (import cases). Since COVID-19 is a new virus spreading worldwide, many parameters still remain unknown and more research needs to be conducted. However, the visualization of the spread of COVID-19 can be made via a mathematical model. First, let us discuss the R0 parameter, known as an indicator of the infection rate or disease spread (16). The reproduction number, R0, denotes the average number of new cases of a disease that arise from a single case (17). From equations (2) and (3), the equation for R0 is given by

R0 = a λ S(0),

which for S(0) = 1, a = 0.25 and λ = 14 days gives R0 = 0.25 × 14 = 3.5.
a. If R0 < 1, every 'one' case will result in less than 'one' new positive case in the community. In this case, the virus will die out and the spread will eventually stop.
b. If R0 = 1, every existing case will result in 'one' new case.
c. If R0 > 1, every 'one' case will result in more than 'one' new positive case. For R0 > 1, there is the potential for the virus to keep spreading, resulting in a pandemic causing more severe trouble in the future (17).
From the formula above, the government can strategize on how to lower the R0 value. This measure is essential to ensure that hospital capacity is sufficient to treat COVID-19 patients, thus flattening the curve. First of all, assume that the parameter λ remains unchanged, since there is no antiviral drug specifically for the virus; in other words, all treatment protocols are based on the symptoms. Therefore, only two parameters can be used to lower the value of R0. The first is the value of S0, which signifies that the number of initially susceptible persons can be reduced by administering a vaccine or initiating a lockdown. Since the vaccine is still being developed, the only possible way is to initiate a lockdown in the affected areas. From Fig. 7, the lockdown (MCO implementation by the government) was able to reduce the peak of infection significantly.
Figure 7. The difference between when lockdown is not initiated (blue line) and when lockdown is initiated (red line).
The second measure that can be taken is lowering the infectivity rate, a. This can be done by following the SOP outlined by the MOH, such as wearing a face mask in public, using hand sanitizer, avoiding going out if symptoms are shown, and social distancing. By abiding by the SOP, the infectivity rate can be lowered, as shown in Fig. 8. The SIR model was constructed based on three compartments, depending on our assumptions. It is difficult to predict the exact figures due to a lack of information and how the outbreak is handled in real life (18). The SIR model cannot predict a sudden increase in cases due to import cases (Malaysians coming from abroad and illegal immigrants). Proper quarantine measures are needed to prevent the virus from spreading to the community, as happened with the Sivagangga cluster: the failure of those who returned from abroad to quarantine themselves increased active cases in Malaysia, specifically in Kedah.
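A small extension of the earlier sketch illustrates, under the same assumptions, how the two measures above lower the peak: reducing S(0) (lockdown) and reducing a (SOP compliance). The specific scenario values are illustrative, not fitted.

```python
# Sketch: effect of lockdown (lower S0) and SOP compliance (lower a) on the
# peak infected fraction; the SIR right-hand side is repeated so this snippet
# is self-contained. Scenario values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, a, lam):
    S, I, R = y
    return [-a * S * I, a * S * I - I / lam, I / lam]

def peak_infected(a, lam, s0, i0=1e-4, t_end=400):
    """Peak infected fraction for infectivity a and initial susceptibles s0."""
    sol = solve_ivp(sir, (0, t_end), [s0, i0, 0.0], args=(a, lam), dense_output=True)
    t = np.linspace(0, t_end, 4001)
    return sol.sol(t)[1].max()

lam = 14.0
print("no intervention (a=0.693, S0=1.0):", peak_infected(0.6931163, lam, 1.0))  # ~0.66
print("lockdown        (a=0.693, S0=0.2):", peak_infected(0.6931163, lam, 0.2))  # much lower
print("SOP compliance  (a=0.250, S0=1.0):", peak_infected(0.25, lam, 1.0))       # ~0.36
```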
Conclusion:
COVID-19 hit the world hard, including Malaysia. Much research is still ongoing to study the effect and spread of COVID-19, along with producing vaccines and cures. On the bright side, mathematical modeling can be used to forecast the spread of COVID-19, although it is not perfect due to the unknown nature of the virus. Some researchers incorporate an Exposed compartment (SEIR) in the modeling, and others consider the spread in the form of waves, among other approaches. With several assumptions, a forecast for active cases can be produced, prompting better implementation of proper actions to stop the spread of COVID-19. For Malaysians, make sure to follow the rules and obey the SOP to ensure R0 is lowered from time to time, hoping that the virus will vanish one day.
Thermal analysis for the HTS stator consisting of HTS armature windings and an iron core for a 2.5 kW HTS generator
Most present demonstrations of high-temperature superconducting (HTS) synchronous motors/generators are partially superconducting, installing HTS coils only on the rotor as excitation windings. The possible applicability of HTS armature windings is an interesting research topic because these windings can certainly increase the power density owing to a potentially high armature loading capacity. In this study, we analysed the thermal behaviours of a developed 2.5 kW-300 rpm synchronous generator prototype that consists of an HTS stator with Bi-2223-Ag armature windings on an iron core and a permanent magnet (PM) rotor. The entire HTS stator, including the iron core, is cooled with liquid nitrogen through conduction cooling. The rated frequency is set at 10 Hz to reduce AC loss. The properties of the HTS windings and the iron core are characterized, and the temperatures in the HTS stator under different operation conditions are measured. The estimated iron loss is 11.5 W under operation at 10 Hz at liquid nitrogen temperature. Conduction cooling through the silicon iron core is sufficient to cool the iron core and to compensate for the temperature increment caused by iron loss. The stable running capacity is limited to 1.6 kW when the armature current is 12.6 A (effective value) due to the increasing temperature in the slots as a result of the AC loss in the HTS coils. The thermal contact between the HTS coils and the cooling media should be improved in the future to take away the heat generated by AC loss.
Introduction
Much effort has been exerted to develop high-temperature superconducting (HTS) rotating machines since the commercialized production of HTS wires. Given the advantages of HTS materials, namely, a high current carrying capacity and near-zero ohmic loss, replacing the copper windings of rotating machines with HTS coils can increase power density, thereby reducing volume and weight [1,2]. The benefit of HTS rotating machines is especially prominent for applications in which volume and weight are restricted, such as wind turbine generators [3], ship propulsion motors [4] or aircraft power sources [5].
The power of a synchronous generator or motor can be expressed as follows [6]:

P ∝ kw · n · B0 · A · D²L. (1)
Equation (1) shows that power P is proportional to the product of rotating speed n, gap field B0, armature loading A, winding factor kw and volume parameter D²L (diameter D and length L). In general, B0 and A are related to the current capacity of the DC excitation windings and AC armature windings, respectively. In most present demonstrations of HTS synchronous motors/generators, researchers have focused on the partial HTS configuration, installing HTS coils only on the DC exciting windings to increase B0 values over those of traditional copper ones [7][8][9][10][11][12].
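As a rough illustration of this scaling, the snippet below evaluates equation (1) up to the omitted machine constant, so only ratios are meaningful; the 1.5x and 2x enhancement factors are assumed round numbers, not values from any particular machine.

```python
# Sketch: relative power scaling implied by equation (1); the machine constant
# is omitted, so only the ratio between the two cases is meaningful.
def relative_power(n, B0, A, kw, D, L):
    return kw * n * B0 * A * D**2 * L

base = relative_power(n=300, B0=0.8, A=1.0, kw=0.95, D=1.0, L=1.0)
hts  = relative_power(n=300, B0=0.8 * 1.5, A=2.0, kw=0.95, D=1.0, L=1.0)  # assumed factors
print(f"same volume and speed, power ratio: {hts / base:.1f}x")           # 3.0x
```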
Equation (1) also indicates that the power density can be further improved by combining HTS excitation windings and HTS armature windings. This improvement may enhance both B0 and A, thereby producing fully HTS synchronous rotating machines. However, HTS armature windings cause AC loss, which can increase the heat load on cryogenic systems. Therefore, HTS armature windings have attracted minimal attention thus far. Several groups have conducted preliminary studies on HTS armature windings. A group from Cambridge University developed an HTS motor with YBCO armature coils and a YBCO bulk rotor [13]. At Kyoto University, Nakamura established a squirrel-cage-type HTS induction-synchronous motor and tried to replace the copper armature windings with Bi-2223-Ag coils [14]. At Fukui University, Sugimoto generated an axial-flux-type HTS motor with HTS armature windings and permanent magnets (PMs) [15]. Li proposed a radial-gap-type HTS generator design with HTS armature windings and PMs installed on the rotor [16]. Furthermore, groups have also initiated investigations into fully HTS generators based on YBCO and MgB2 wires [17][18][19][20][21][22].
To study the feasibility of HTS armature windings, we successfully developed a 2.5 kW synchronous generator prototype with an HTS stator and a PM rotor (HTS-PM type) [23]. The HTS stator mainly consists of Bi-2223-Ag HTS coils and an iron core made from silicon steel sheets. AC loss is the main obstacle for HTS armature windings; nonetheless, this loss may be reduced to an acceptable level if the HTS armature windings operate in a low-frequency situation, as in wind turbines [23]. For ordinary rotating machines, B0 would be 0.8 T or higher. The critical current (Ic) of HTS coils at 77 K is small in such a field. The iron core can significantly reduce the magnetic field on the HTS coils, thereby increasing their current carrying capacity.
In this particular generator configuration considering an HTS stator, the means of efficient cooling is one of the major concerns. Two approaches can be considered: the conduction cooling and immersion cooling methods. Most of the developed motors/generators using HTS armature windings in stators [13][14][15] were immersion cooled. In [13,14], both the HTS stator and the rotor were entirely immersed in a liquid nitrogen bath. In [15], the HTS armature winding was separately cooled in a liquid nitrogen bath while the stator iron core and rotor were kept warm. The cryostat material was chosen as FRP to avoid eddy loss. Although the immersion cooling method is more efficient than the conduction cooling method, there are still some technical problems. For the total immersion situation, the rotor will stir the coolant, which leads to additional mechanical loss as well as troubles for the rotary sealing. If the HTS armature winding is separately cooled, the cryostat has to be built with non-metal materials to avoid eddy currents, and its structure will be quite complicated.
In our prototype machine [23], the HTS armature coils together with the iron core were cooled using the conduction cooling method, considering that it can be handled easily. The HTS armature coils were not cooled directly by liquid coolant but through the iron core or other parts. The uncertainty is whether or not the thermal conduction through the iron is fast enough to remove the heat generated from the iron core and the AC loss; otherwise, the temperature increase would greatly influence the current carrying capacity of the HTS coils. Thus, analysing and testing the temperature evolution of the HTS stator under the conduction cooling method during generator operation is necessary.
In [23], some preliminary electric properties of our generator in short-time running were tested. To test the long-term running properties and verify the feasibility of the HTS armature through conduction cooling, in the present study, the thermal behaviours of the HTS stator in our HTS-PM generator were investigated through three main aspects: conduction cooling efficiency, iron loss at liquid nitrogen temperatures, and the influence of AC loss on the thermal stability of the HTS stator.
General structure of the HTS-PM generator
The HTS-PM generator prototype contains a four-pole inner rotor composed of Nd-Fe-B PMs and a six-slot outer stator with HTS armature windings. Some of the basic parameters of this prototype are shown in table 1. All the Bi-2223/Ag racetrack coils were placed side by side in the slots to prevent coil end interference. The iron core was made of silicon steel sheets and was pressure mounted into the hollow cylinder cryostat. The outer surface of the iron core made close contact with the inner surface of the cryostat. To improve the contact condition, the outside surface of the iron core was painted with STYCAST 2850 FT before assembly. The structure is illustrated in figure 1(a).
As depicted in figure 1(b), HTS coils were placed into the iron core slots and fixed with different slot wedges. The slots were also filled with STYCAST 1266 mixed with aluminium oxide powder to enhance the cooling conditions. Two aluminium rings were fixed at the ends of the iron core to support the HTS coils and to generate additional cooling paths for the end parts of HTS coils. Both the HTS stator and the PM rotor were placed into a vacuum chamber. Liquid nitrogen was poured into the cryostat through the bottom pipe, and the vaporized nitrogen gas was released to the atmosphere through the top pipe. Additional structure details are provided in [23,24].
Experimental details
Six PT-100 platinum temperature sensors were buried in the middle of each slot to measure the temperature evolution of the HTS coils and in the iron core during cooling and operation, as shown in figure 1(b). The measured slot temperature (T slot ) can approximate the iron core temperature at the same radial position, given that the slot area is smaller than the entire iron core cross section and the slots were filled with epoxy. The other PT-100 sensors were placed at the ends of HTS coils for safety monitoring. All the resistances of these PT-100 sensors were measured with a Keithley 2700 multimeter. The vacuum degree was measured with a thermocouple gauge. Furthermore, the HTS-PM generator was driven by an 11 kW induction motor whose speed was controlled with a frequency converter. The rated working frequency of this prototype was 10 Hz.
To facilitate the thermal analysis of the HTS stator, the thermal conductivity λ within the plane and the specific heat C p of the silicon steel sheet (grade 50DW400, according to the Chinese GB standard) were measured with a Physical Property Measurement System at various temperatures.
All the experiments were performed once the HTS stator was cooled to a stable temperature above 77 K. Before cooling, the pressure in the vacuum chamber was pumped down to 3.1 Pa. After cooling, the HTS-PM generator was driven by the induction motor at the rated rotation speed of 300 rpm. The frequency of the rotating magnetic field was 10 Hz. First, the temperature evolution in the slots (T slot) was measured in the no-load running test, during which the HTS armature windings were not connected to an external circuit and there were no working alternating currents I A,rms. Subsequently, T slot values were measured at different I A,rms during the with-load running tests. The I A,rms values were adjusted from 3.2 A to 23.7 A by reducing the load resistance from 13.7 Ω to 1.8 Ω.
Besides this, the I c values of a typical coil were also measured at various temperatures using the four-probe method.
Cool down process
During the cooling process, the HTS stator was cooled from room temperature. Figure 2 displays the typical T slot evolution processes recorded by PT-100 sensors at different positions. The liquid nitrogen level in the cryostat ramped up gradually from bottom to top, thus T slot at different positions varied a little during cooling. T slot dropped to a minimum of 82.1 K after over 3.5 h of cooling, and the pressure in the vacuum chamber decreased to 1.6 Pa accordingly. T slot dropped gradually after 3.5 h, after which a stable state was reached.
The stable temperatures in different slots should be the same due to symmetry. However, these T slot values vary slightly because of material inhomogeneity and geometry inaccuracy. In this study, a typical slot was selected for T slot measurement to represent the most probable situation.
No-load running test
T slot increased during the no-load running test due to the iron loss induced by the rotating magnetic field. Figure 3 exhibits a typical T slot as a function of the operation time during this test in both low-temperature and room-temperature situations. In cryogenic conditions, T slot increased rapidly from 82.1 K during the starting period. After approximately 1.5 h, this value increased very slowly, thus suggesting that the iron temperature was almost stable. The maximum increment in T slot was approximately 1.3 K in this test. In room-temperature conditions, T slot increased almost linearly during the test, and the increment would have been higher than that in the cryogenic situation had the test continued. The evolution differences were caused by different thermal boundary conditions.
The measurement results showed that all T slot curves in different slots displayed similar evolution behaviours, including the increase rate and final increment value, though the absolute temperature values differed slightly. In this study, we selected a typical curve to represent the most probable situation.
In [24], an aluminium foil was glued to the inner surface of the iron core to shield the radiative heat transfer from the rotor. However, analysis proved that the aluminium foil became an induction heat source in the air gap. Thus, this foil was removed before the experiments described in this paper were conducted; the resultant temperature change behaviours deviated from those explained in [24].
I c of the HTS coil
Ic values of the HTS coils were measured before and after assembly. The results are shown in [23]. Before assembly, the Ic of a separate HTS coil was around 52 A to 58 A at 77 K (self-field). After assembly, it dropped to 30 A to 33 A at 82.1 K. The Ic values were also measured at different temperatures during the cooling process, as shown in figure 4. Ic dropped linearly with increasing temperature, and the linear slope was −1.67 A K⁻¹.
The temperature increment during the load test
Once the HTS armature winding was connected to the external circuit, the HTS armature coils carried the working alternating currents, and power was output. Figure 5 shows the relationship between the output power and the armature current I A,rms (effective value). The armature current induces AC loss and increases the temperature further. Figure 6 depicts the evolution of the selected T slot measured at different I A,rms. The frequency was maintained at 10 Hz. The PT-100 sensors were stuck directly to the straight side of the racetrack coils; thus, T slot can represent the coil temperature. According to the previous discussions, T slot of a typical slot was selected to represent the typical condition.

Figure 3. T slot evolutions during the no-load running period in both low-temperature and room-temperature situations. In the cryogenic condition, T slot increased rapidly from 82.1 K before 1000 s. After 5500 s, the increasing rate decelerated significantly. The maximum T slot increment in this test was approximately 1.3 K. In room-temperature conditions, T slot increased almost linearly, and the increment was about 1 K; it would have been higher had the test continued.
Given I A,rms = 3.2 and 6.4 A, the HTS generator worked quite stably for 1 h without quench, and the T slot curves were similar to those in the no-load condition. Nonetheless, at higher I A,rms, T slot increased much faster. When I A,rms = 9.6 A, the measurement lasted for 40 min and the T slot increment was approximately 2 K. The final T slot increment was roughly 2.5 K for I A,rms = 12.6 A. This test lasted for only 27 min and did not reach a stable slot temperature because the drive motor's overheating protection intervened.
When I A,rms = 15.6 and 18.2 A, T slot increased rapidly within a short period and showed no inclination to reach the stable state before the protection triggered. At I A,rms = 18.2 A, T slot increased by 4.2 K within 500 s and continued to increase, whereas the temperature of another coil increased from 86 K to 92 K after 400 s and the coil finally quenched, as shown in figure 7. The Ic of the quenched coil at 92 K is less than 18 A, as per figure 4.
The thermal properties of silicon steel at cryogenic temperature
The measurement results of the thermal conductivity λ and specific heat Cp of silicon steel are shown in table 2. At an almost-stable cryogenic temperature, λ is 9.6 W/(m·K) at 76.7 K, and Cp is 160.5 J/(kg·K) at 79.5 K. These results are applied in the calculations that follow.
Heat leakage from the rotor to the stator
In this HTS-PM generator, the rotor is expected to remain warm when the stator is cooled down; this induces heat leakage from the rotor to the stator and stabilizes T slot above 77 K, as shown in figure 2. A simple 2D model was built to analyse the heat leakage and to estimate the temperature distribution in the stator, as indicated in figure 8. Rotor, stator and cryostat wall regions were generated from the interior to the exterior. The cold source is the liquid nitrogen that makes contact with the cryostat; this source removes the heat generated from the iron core and the AC loss, as well as the heat leakage from the rotor. The iron core is simplified to a hollow cylinder in which heat is generated uniformly. The PT-100 sensor is positioned in the middle of the slot; as a result, the fiberglass tube at the end of the stator cannot appreciably influence the temperature measurement in the slot. Thus, the heat conduction through the fiberglass tubes installed at both ends of the stator is ignored for simplicity. Heat conduction through the air and radiation are the main sources of heat leakage. In principle, the quantity of heat leakage in the worst condition can be estimated.

Figure 6. T slot evolution curves given different armature currents I A,rms. When I A,rms = 3.2 and 6.4 A, the temperature increase was similar to that in the case of no-load running. The dashed-dotted line represents the predicted T slot values when I A,rms = 12.6 A. When I A,rms was high, that is, 15.6 and 18.2 A, T slot increased rapidly and was not inclined to reach the stable state before the experiment stopped due to protection.
According to [25], the radiation heat flow power Q1,rad from the rotor to the stator can be expressed as

Q1,rad = ε_eff σ A1 (T_rot⁴ − T_sta⁴), (2)

where σ is the Stefan-Boltzmann constant, ε_eff is the effective emissivity between the rotor and stator surfaces, A1 is the rotor surface area, and T_rot and T_sta are the rotor and stator surface temperatures. The maximum heat leakage through air conduction can also be calculated. The pressure in the chamber is 1.6 Pa when the stator is cooled to a stable condition. The mean free path of air at this pressure is between 0.68 and 3.9 mm, which is close to the air gap length of 5 mm. In this case, the heat transfer equation in the free-molecule regime can be used to estimate the heat leakage power through air conduction Q1,air [25]. The following equation is established:

Q1,air = k a p A1 (T_rot − T_sta). (4)

In (4), k is a constant; the value for air is 1.2. a = 1 is a dimensionless factor depending on surface conditions for air [26], and p = 1.6 Pa is the pressure. Thus Q1,air is estimated to be 34.5 W. The total heat leakage power Q1 = Q1,rad + Q1,air is 36.6 W. Q1 is overestimated because the temperature on the outer surface of the rotor should be lower than 273 K; thus, Q1 should be lower than 36.6 W. The radiative heat transfer is rather small compared with the heat conduction through the air. The vacuum degree is the key factor limiting heat leakage. If the vacuum can be improved to 0.1 Pa, the total heat leakage can be reduced to 4.3 W, which is one order of magnitude lower than that at 1.6 Pa. Hence, the temperature of the iron core can approach 77 K.
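The leakage estimate above can be reproduced approximately with the following sketch. The rotor dimensions and the effective emissivity are assumed values chosen only to land near the quoted totals; they are not the exact machine dimensions.

```python
# Sketch of the worst-case heat leakage estimate (equations (2) and (4));
# geometry and emissivity are assumed illustration values.
import math

SIGMA = 5.67e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4
r_rot, L_rot = 0.0725, 0.20       # assumed rotor outer radius / length, m
A1 = 2 * math.pi * r_rot * L_rot  # rotor lateral surface area, m^2
T_rot, T_sta = 273.0, 82.0        # warm rotor and cold stator temperatures, K

eps_eff = 0.1                     # assumed effective emissivity
Q_rad = eps_eff * SIGMA * A1 * (T_rot**4 - T_sta**4)

k, a, p = 1.2, 1.0, 1.6           # free-molecule constants and pressure (Pa), per the text
Q_air = k * a * p * A1 * (T_rot - T_sta)

print(f"Q_rad ~ {Q_rad:.1f} W, Q_air ~ {Q_air:.1f} W, total ~ {Q_rad + Q_air:.1f} W")
# With these assumed dimensions the total lands near the 36.6 W quoted above.
```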
The stable temperature of the stator after cooling
In the stable state, the temperature distribution equation applied to the stator and cryostat wall regions is expressed as

∇²T = 0.

On the exterior surface of the cryostat, the heat flux is transferred to the liquid nitrogen via the nucleate-boiling regime, given that the heat leakage is slight. The heat transfer flux Q3 in nucleate-boiling mode is calculated by [25]

Q3 = 5×10⁴ A3 (T3 − T3′),

where A3 is the exterior surface area of the cryostat and T3 and T3′ are the wall and liquid nitrogen temperatures, respectively. The temperature differs across the interface between the stator and the cryostat wall, that is, T2 − T2′ = 2.9 K. This variation is caused by the equivalent thermal conductance C_eq across this interface, which can be expressed as

C_eq = Q1 / [A2 (T2 − T2′)],

where Q1 is the heat transfer flux across that interface and A2 is the area of that interface. Then, C_eq = 85 W m⁻² K⁻¹. The thermal conductance on the boundary is related to the pressure and surface conditions. At 77 K, C_eq for a 4.45 MPa pressed steel-steel interface is 3×10³ W m⁻² K⁻¹ [25], which is considerably larger than the C_eq calculated previously. Increasing the contact pressure and polishing the iron core surface can increase the thermal conductance C_eq further.
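The interface conductance follows directly from the measured 2.9 K step; a minimal check is sketched below, with the interface area A2 an assumed value consistent with the machine's rough dimensions.

```python
# Sketch: equivalent interface conductance from the measured temperature step.
Q_dot = 36.6   # heat crossing the iron/cryostat interface, W (from the leakage estimate)
A2 = 0.15      # assumed interface area, m^2
dT = 2.9       # measured temperature step across the interface, K

C_eq = Q_dot / (A2 * dT)
print(f"C_eq ~ {C_eq:.0f} W m^-2 K^-1")  # ~84, consistent with the 85 quoted above
```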
The stable T1′ is determined by the air pressure p and the equivalent thermal conductance C_eq for this machine. Considering equations (2)-(9), the relationship between T1′, p and C_eq can be obtained, as shown in figure 9. Compared with C_eq, the air pressure is more controllable. Keeping a good vacuum degree, such as 0.1 Pa or lower, can keep the temperature of the iron core close to that of liquid nitrogen; C_eq has a wide acceptable range at this air pressure, which is more convenient for the assembly process. In the case of p = 1.6 Pa, C_eq has to be kept above 600 W m⁻² K⁻¹ in order to keep T1′ < 80.02 K. In the case of p = 0.1 Pa, C_eq above 30 W m⁻² K⁻¹ is sufficient to keep T1′ < 78.8 K. It should be pointed out that the mean free path of air molecules is much smaller than the air gap length in the case of p > 5 Pa. The heat transfer through the air should then be calculated using heat convection equations, which give a much larger result than (4).

Figure 9. The contour of the stable T1′ determined by p and C_eq. The horizontal axis denotes the air pressure in the machine and the vertical axis denotes the equivalent thermal conductance across the iron core and cryostat interface. When the air pressure is 0.1 Pa or lower, C_eq has a wide range and T1′ is close to 77 K. For instance, when C_eq is as low as 30 W m⁻² K⁻¹, T1′ is still lower than 79 K.
To predict the behaviour of generators with a larger power capacity, we may consider a model with a larger iron core according to figure 8. The inner radius of the iron (r1) was varied from 0.075 m to 1 m. The thickness of the iron was changed proportionally to r1. The thicknesses of the air gap and the cryostat wall remained the same. For better cooling efficiency, the air pressure was fixed at 0.1 Pa, and the stable T1′ at different r1 and C_eq is shown in figure 10. It is seen that for the case r1 < 0.4 m, T1′ can stay below 78.83 K over a wide range of C_eq. However, when r1 > 0.5 m, T1′ is never lower than 78.83 K, even for C_eq = 10⁴ W m⁻² K⁻¹, which exceeds the equivalent thermal conductance of a steel-steel interface at 77 K and 4.5 MPa. A probable remedy is to put distributed LN2 pipes across the stator iron, in addition to the present external cryostat. This helps to further reduce the distance between the cold source and the cooling target.

Figure 10. The contour of the stable T1′ at different iron inner radii (r1) and C_eq. The horizontal axis denotes the iron inner radius and the vertical axis denotes the equivalent thermal conductance across the iron core and cryostat interface. The air pressure is 0.1 Pa. The iron core dimensions change proportionally.
The conduction cooling method can cool the iron core effectively for the present HTS-PM generator. The heat leakage to the HTS stator and the equivalent thermal conductance C_eq between the stator iron and the cryostat are the two main factors that influence the cooling effect. If the vacuum and the thermal conductance on the boundary are improved, the stable cryogenic temperature of the iron core can approach 77 K after cooling, even for larger iron core dimensions.
Temperature increment caused by iron loss
When the HTS generator begins rotating, the temperature of the iron core increases due to iron loss. The gap field in this prototype is 0.8 T, and the maximum flux density in the iron core is approximately 1.6 T. The iron loss in the silicon steel sheet (50DW400) is 0.96 W kg⁻¹ at 0.8 T and 3.8 W kg⁻¹ at 1.6 T when f = 50 Hz, according to the data provided by the manufacturer. The iron loss is approximately proportional to f^1.3 (f represents the frequency) [27]; thus, the iron loss is roughly 0.118 W kg⁻¹ at 0.8 T and 0.469 W kg⁻¹ at 1.6 T when f = 10 Hz. Given an iron weight of 33.14 kg, the iron loss of the stator at room temperature when f = 10 Hz is expected to be between 3.91 and 15.54 W. If the distribution of magnetic flux density within the iron is considered, the total iron loss at room temperature when f = 10 Hz is estimated to be 11.5 W by ANSYS Maxwell; this value stays in the expected range.
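The f^1.3 scaling of the manufacturer's 50 Hz data can be checked directly; the short sketch below reproduces the per-kilogram figures and the 3.91-15.54 W bounds quoted above.

```python
# Sketch: scale the manufacturer's 50 Hz loss data to 10 Hz using the f^1.3 law.
def iron_loss_at(f, loss_50hz, f_ref=50.0, exponent=1.3):
    return loss_50hz * (f / f_ref) ** exponent

mass = 33.14  # stator iron weight, kg
for B, w50 in [(0.8, 0.96), (1.6, 3.8)]:   # flux density (T), loss at 50 Hz (W/kg)
    w10 = iron_loss_at(10.0, w50)
    print(f"{B} T: {w10:.3f} W/kg -> {w10 * mass:.2f} W total")
# Prints ~0.118 and ~0.469 W/kg, i.e. roughly 3.9 to 15.5 W for the whole
# stator, bracketing the 11.5 W FEA estimate quoted above.
```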
The resistance of silicon steel decreases with temperature, and the iron core is subject to compression stress due to the mismatch in thermal contraction between the cryostat and the iron core. Both effects enhance iron loss [28][29][30][31][32]. At 50 Hz, the iron loss of the silicon steel sheet increases by approximately 18% at 77 K compared with that at room temperature [29]. However, the working frequency of the present generator is 10 Hz, which is considerably lower than 50 Hz; thus, the enhancement of the iron loss caused by the reduced resistance is limited [28].
If the iron core is simplified to a hollow cylinder that generates heat homogeneously, then the heat transfer equation is expressed as

ρ Cp ∂T/∂τ = λ∇²T + q̇_iron, (10)

where q̇_iron, λ, ρ and Cp represent the iron loss per volume, the thermal conductivity, the material density and the specific heat per weight, respectively, and τ is the time. At the beginning of the no-load running process, the temperature distribution starts to shift from the stable state with ∇²T ≈ 0. Then, (10) can be expressed as

G0 = (∂T/∂τ)|τ=0 = q̇_iron/(ρ Cp), (11)

where G0 is the initial temperature increase rate at τ = 0. G0 can be calculated as approximately 2.28×10⁻³ K s⁻¹ by setting the silicon steel density ρ = 7650 kg m⁻³ and the specific heat per weight Cp at the cryogenic state as per table 2 and by applying the measured temperature data represented in figure 3. Then, q̇_iron can be calculated as 2.80 kW m⁻³. The total iron loss is approximately 12.6 W, which is similar to the previously estimated value.
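Equation (11) makes the volumetric loss a one-line calculation from the initial slope of T slot; a sketch follows, where the iron volume is inferred from the quoted 33.14 kg weight.

```python
# Sketch: volumetric iron loss from the initial slope of T_slot via equation (11).
rho = 7650.0   # silicon steel density, kg m^-3
c_p = 160.5    # specific heat near 79.5 K, J kg^-1 K^-1 (table 2)
G0  = 2.28e-3  # initial temperature rise rate, K s^-1 (from the figure 3 data)

q_iron = rho * c_p * G0   # W m^-3; ~2.8 kW m^-3
V_iron = 33.14 / rho      # iron volume inferred from the 33.14 kg weight, m^3
print(f"q_iron ~ {q_iron / 1e3:.2f} kW/m^3, total ~ {q_iron * V_iron:.1f} W")
# ~12 W, the same order as the 12.6 W quoted above.
```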
When the HTS generator operates in the no-load running state and the temperature stops increasing, the temperature increase rate ∂T/∂τ is almost 0. Thus, (10) is written as follows for the iron core region:

λ∇²T + q̇_iron = 0. (12)

In the iron regime (between A1′ and A2) of the simple 2D model shown in figure 8, the temperature distribution is expressed as

T(r) = −q̇_iron r²/(4λ) + C3 ln(r) + C4, (13)

where C3 and C4 are constants. The temperature distribution in the cryostat wall is the same as in (7). The heat leakage on the inner surface of the iron core, the equivalent thermal conductance C_eq across the interface between the iron core and the cryostat, and the liquid nitrogen heat transfer regime are similar to those described in section 5.2. Thus, the temperature at the PT-100 position can be calculated as 83.3 K by setting an iron loss of 11.5 W (estimated via FEA software) and the λ of the silicon steel to that indicated in table 2. This temperature is 1.2 K higher than the stable temperature prior to no-load running. The experimental result (figure 3) shows that the temperature increment is approximately 1.3 K, which is close to the calculated result. The iron loss can greatly influence the HTS stator temperature. Figure 11 shows the T1′ evolution with different C_eq and iron loss in the cases of air pressure 1.6 Pa and 0.1 Pa. It is clear that T1′ increases with increasing iron loss and decreasing C_eq. It is also seen that T1′ is generally higher at 1.6 Pa than at 0.1 Pa.

Figure 11. T1′ contour at different iron loss and C_eq in the case of air pressures (a) 1.6 Pa (present machine situation) and (b) 0.1 Pa (preferred situation). The horizontal axis denotes the iron loss and the vertical axis denotes the equivalent thermal conductance across the iron core and cryostat interface. The temperature rise with increasing iron loss in (a) is slower than that in (b).

In the case where C_eq = 85 W m⁻² K⁻¹ and p = 1.6 Pa, T1′ will increase to about 89 K if the iron loss reaches 60 W. With the same C_eq, T1′ will be 85 K if p decreases to 0.1 Pa. Lower air pressure leads to smaller heat leakage across the air gap, which also contributes to the lower T1′. Improving C_eq can also sufficiently suppress the T1′ increment when the iron loss becomes larger. If C_eq were improved by one order of magnitude to 850 W m⁻² K⁻¹, T1′ would be smaller than 83.09 K in the case of air pressure 1.6 Pa and iron loss 60 W, which is even lower than the experimental result of the present machine. For the case of air pressure 0.1 Pa, the evolution trend is nearly the same. Therefore, improving C_eq is important for reducing the temperature increment caused by the iron loss.
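A hedged numerical sketch of equations (12)-(13) is given below. The radii, stack length and sink temperature are assumed values (not the machine's exact dimensions), chosen so that the heat totals roughly match the figures above; with them, the mid-slot temperature comes out near the calculated 83.3 K.

```python
# Sketch: steady radial temperature profile of the heated iron annulus,
# equation (13), with heat leakage entering at r1. All geometry values are
# assumed for illustration only.
import math

lam = 9.6                        # iron thermal conductivity, W m^-1 K^-1 (table 2)
q = 2800.0                       # volumetric iron loss, W m^-3
r1, r2, Lz = 0.08, 0.12, 0.18    # assumed inner/outer radius and stack length, m
Q1 = 36.6                        # heat leakage entering at r1, W
C_eq = 85.0                      # interface conductance, W m^-2 K^-1
T_wall = 79.0                    # assumed cryostat inner-wall temperature, K

# Outward-flux boundary at r1: (q*r1/2 - lam*C3/r1) * 2*pi*r1*Lz = Q1
C3 = (q * r1 / 2 - Q1 / (2 * math.pi * r1 * Lz)) * r1 / lam
# Interface condition at r2 fixes T(r2) via the total heat crossing it
Q_tot = Q1 + q * math.pi * (r2**2 - r1**2) * Lz
A2 = 2 * math.pi * r2 * Lz
T_r2 = T_wall + Q_tot / (C_eq * A2)

def T(r):
    """Equation (13), anchored at the outer radius r2."""
    return T_r2 - q * (r**2 - r2**2) / (4 * lam) + C3 * math.log(r / r2)

print(f"T at mid-slot radius 0.10 m: {T(0.10):.1f} K")  # ~84 K, near the 83.3 K above
```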
From the above estimation, for our present HTS generator, the heat generated by the iron loss at 10 Hz is rather small, and the temperature increment can be kept below 2 K by conduction cooling through the iron core. Improving the equivalent thermal conductance at the interface will be necessary to suppress the temperature increase in future designs of HTS rotating machines with a similar structure.
Temperature increment caused by AC loss
The AC loss of the HTS coils is a key problem in HTS armature windings, especially when conduction cooling is used. When the HTS generator prototype begins rotating, the HTS coils generate heat that raises the temperature in the slot, in addition to the conducted heat leakage and the iron loss.
Before assembly, the transport AC loss of a separate HTS coil without the iron core was measured at 11 Hz and 77 K (self-field) [23] via the method described in [33]. At 20 A (effective value), the AC loss is approximately 0.4 W with I c = 58 A. It is difficult to measure the AC loss of the assembled coils directly because these coils are coupled with the surrounding stator iron, so the measurement data would include the contribution of the iron losses. When I A,rms = 20 A, I c = 32 A and f = 10 Hz, the transport AC loss of a coil is estimated to be 0.75 W [23]. The real AC loss of an assembled coil should exceed this estimate, since both the coupling with the iron [34] and the leakage alternating flux generated by the rotor increase the AC loss.
Although we do not know the precise AC loss value, we can see its influence on the slot temperatures. Figure 6 depicts the temperature increment of the HTS coils when different I A,rms are carried during the load test. When I A,rms is low (3.2 and 6.4 A), the temperature curves are close to those of the no-load running condition. This finding suggests that the heat generated by the AC loss at low I A,rms can be conducted away easily through conduction cooling; thus, the temperature hardly increases any further and the HTS generator can run stably. At I A,rms = 12.6 A, the temperature increment after 1500 s is over 2.5 K, which is higher than the 1.3 K observed in the no-load running test. The influence of the AC loss at high I A,rms is therefore clearly important. For I A,rms > 12.6 A, the AC loss produces more heat than the cooling capacity of the present method can remove. Thus the temperature increase in the slot is prominent and stable running cannot be achieved.
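This time evolution can be mimicked with a lumped slot model, C_th dT/dt = Q_ac − G_th (T − T₀). The thermal capacitance C_th and conductance G_th below are hypothetical placeholders, not values fitted to the machine; the sketch only shows why sub-watt losses saturate at a small rise while a few watts push the slot well past the observed increments.

```python
import math

# Lumped slot model: C_th * dT/dt = Q_ac - G_th * (T - T0).
# The rise saturates at Q_ac / G_th with time constant C_th / G_th.
def slot_rise(q_ac: float, g_th: float = 0.3, c_th: float = 450.0,
              t: float = 1500.0) -> float:
    """Slot temperature rise (K) after t seconds for an AC loss q_ac (W)."""
    tau = c_th / g_th
    return (q_ac / g_th) * (1.0 - math.exp(-t / tau))

# Sub-watt AC losses barely move the slot temperature, while a few watts
# give a rise of the order seen at I A,rms = 12.6 A and beyond.
for q_ac in (0.1, 0.75, 3.0):
    print(f"Q_ac = {q_ac:4.2f} W -> rise ~ {slot_rise(q_ac):.2f} K after 1500 s")
```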
As per figure 1(b), the AC loss in the HTS coils is a local heating source within the slots. The absolute value of this loss may not be very high because of the low working frequency; however, the loss per volume is large because the volume of the HTS coils is considerably smaller than that of the iron core. In the present HTS-PM generator, the heat generated by the AC loss must be conducted across the epoxy, the contact interface and the stator iron. Moreover, the cooling condition within the slots determines the stable temperature during load running. Improving the cooling condition of the HTS coils is therefore important for increasing the stable working current and, consequently, the power of the HTS generator.
Summary
A prototype HTS generator was developed to determine the feasibility of HTS armature windings. The HTS coils can be cooled down to 82.1 K through conduction cooling. The maximum heat leakage from the rotor was estimated to be 34.6 W at a vacuum degree of 1.6 Pa. The iron loss was about 2.8 kW m⁻³ at 10 Hz. The equivalent thermal conductance C_eq between the iron core and the cryostat was estimated to be 85 W m⁻² K⁻¹. The final temperature increment of the stator during the no-load running period was 1.3 K, very close to the estimated value. At low alternating currents (I A,rms = 3.2 and 6.4 A), the AC loss of the HTS coils hardly influenced the slot temperature. At elevated working currents, however, the heat generated by the AC loss produced a rapid increase of T slot, and even led to the quench of one HTS coil in the case of I A,rms = 18.2 A.
These results suggest that the conduction cooling method is sufficient to cool down the iron core and can also compensate for the temperature increment caused by the iron loss. The heat leakage to the HTS stator and the equivalent thermal conductance C_eq between the stator iron and the cryostat are the two main factors that influence the cooling effect. Using this analysis method, the thermal behaviour of HTS stators with a similar structure can be predicted. To further reduce the stable temperature of the iron core and to compensate for the temperature increase in a higher-iron-loss situation, the air pressure should preferably be 0.1 Pa or lower, and C_eq should be improved. Furthermore, the thermal conduction of the HTS windings inside the slots of this prototype may require improvement to remove the heat generated by the AC loss.
|
v3-fos-license
|
2021-07-25T06:17:02.274Z
|
2021-07-01T00:00:00.000
|
236211811
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/22/14/7372/pdf",
"pdf_hash": "1c2b8230752c8b14eee79490ef4dad28402a0f18",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45495",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "1ac06c974ce0e2f0ac0178069c1a50f73fdc1c2e",
"year": 2021
}
|
pes2o/s2orc
|
Bacterial Infection and Non-Hodgkin B-Cell Lymphoma: Interactions between Pathogen, Host and the Tumor Environment
Non-Hodgkin B-cell lymphomas (NHL) are a heterogeneous group of lymphoid neoplasms with complex etiopathology, rich symptomatology, and a variety of clinical courses, therefore requiring different therapeutic approaches. The hypothesis that an infectious agent may initiate chronic inflammation and facilitate B lymphocyte transformation and lymphogenesis has been raised in recent years. Viruses such as EBV, HTLV-1, HIV and HCV, and parasites such as Plasmodium falciparum, have been linked to the development of lymphomas. The association of chronic Helicobacter pylori (H. pylori) infection with mucosa-associated lymphoid tissue (MALT) lymphoma, of Borrelia burgdorferi with cutaneous MALT lymphoma, and of Chlamydophila psittaci with ocular adnexal MALT lymphoma is well documented. Recent studies have indicated that other infectious agents may also be relevant in B-cell lymphogenesis, such as Coxiella burnetii, Campylobacter jejuni, Achromobacter xylosoxidans, and Escherichia coli. The aim of the present review is to provide a summary of the current literature on infectious bacterial agents associated with B-cell NHL and to discuss their role in lymphogenesis, taking into account the interaction between infectious agents, host factors, and the tumor environment.
Introduction
Non-Hodgkin B-cell lymphomas (NHL) are the most common hematological malignancies worldwide and the fifth most common cancer. They are a heterogeneous group of lymphoid neoplasms, including indolent types such as marginal-zone lymphomas (MZL) and follicular lymphomas, and aggressive diseases such as diffuse large B-cell lymphoma (DLBCL) and Burkitt's lymphoma, with complex etiopathology, a variety of symptomatology and clinical courses, and consequently different therapeutic approaches [1]. MZL are indolent diseases that arise from the small B cells of the marginal zone, which surrounds the lymphoid follicle and lies outside of the mantle zone. According to the latest WHO classification, there are three subtypes of MZL, namely extranodal MZL (EMZL), nodal MZL (NMZL) and splenic MZL (SMZL) [2,3]. Of the EMZLs, which are termed mucosa-associated lymphoid tissue (MALT) lymphomas and account for 70% of MZLs, the most common lesions are located in the stomach (approximately 30-50%), with less frequent occurrences in the lung, skin, ocular adnexa, salivary glands, thyroid, breast and other sites (overall 8% of all NHLs). Furthermore, it is thought that autoimmune diseases such as Sjögren's syndrome and Hashimoto's disease may contribute to the development of MALT lymphomas, but the main driving factor has been attributed to chronic inflammatory processes [2,4,5]. For aggressive DLBCL and Burkitt lymphoma the epidemiology is different, and an intensive diagnostic and therapeutic approach should always be implemented as soon as possible.
Some specific bacterial species have been identified that correlate strongly with cancers. The microorganisms include Salmonella typhimurium, which associates with hepatobiliary carcinoma.
H. pylori and MALT, DLBCL, and Burkitt lymphoma
H. pylori infection affects more than half of the world's population. In approximately 10-20% of infected individuals, it manifests as chronic gastritis and ulcers, and in less than 1% it leads to gastric cancer or MALT lymphoma [16]. Although the role of H. pylori in the development of adenocarcinoma is well documented and the bacterium has been recognized as a class I carcinogen, it is still not entirely clear why only a small percentage of infected individuals trigger carcinogenesis and develop gastric cancer and MALT lymphomas [17,18]. Gastric MALT lymphoma is a low-grade NHL, and is known to be a consequence of chronic inflammatory processes induced by H. pylori [9,19]. Intensive research in recent years using modern molecular techniques along with cellular and animal models has demonstrated the importance of H. pylori as a carcinogen and has greatly expanded the knowledge of the influence of these spiral bacteria on lymphoproliferative processes [20][21][22]. The pathogenesis of H. pylori-dependent chronic gastritis leading to metaplasia and activation of carcinogenic processes is still the subject of intense research. H. pylori has unique properties that allow it to colonize the human gastric mucosa for extended periods of time [23]. The most important virulence factors are the CagA protein and the VacA toxin [20]. The CagA protein induces epithelial cells to secrete interleukin 8 (IL-8) through activation of NF-κB [20,23]. CagA is transported into gastric epithelial cells through the type IV secretion system (TFSS), where Src and Abl family kinases can phosphorylate the tyrosine residues of the EPIYA (Glu-Pro-Ile-Tyr-Ala) fragment of CagA. After phosphorylation, CagA binds to Src homology-2 domain phosphatase (SHP-2) and modulates multiple signaling pathways by activation of the extracellular signal-regulated kinases (ERK), the p38 mitogen-activated protein kinase (MAPK), and other downstream signaling cascades.
The H. pylori VacA cytotoxin can form intracellular vacuoles, leading to damage and disintegration of gastric epithelial cells. The activity of different VacA toxin alleles affects its cytotoxicity [30]. H. pylori strains housing the vacAm2 allele were less biologically active in vivo, and less vacuolating in vitro; these strains are thought to be the predominant form in patients with gastric MALT lymphoma. Furthermore, H. pylori strains possessing the vacAs1m2 genotype associated with iceA1 variants were found in MALT lymphoma patients at levels 5 times those of chronic gastritis [31]. Studies on overexpression of the p52 fragment of VacA toxin show that VacA induces the production of TNF-α, IL-1β, nitric oxide, and oxygen radicals in THP-1 cells, and induces cell apoptosis (Figure 1). Additionally, it causes activation of NF-κB, which triggers a cascade of reactions leading to the secretion of pro-inflammatory cytokines and cell apoptosis [32]. The VacA cytotoxin also depolarizes the cell membrane, alters mitochondrial membrane permeability, disrupts endosome and lysosome function, activates tumor process kinases, and inhibits antigen presentation and T cell activity [33,34].
H. pylori and Host Factors in Gastric Lymphogenesis
Research conducted in recent years has demonstrated that H. pylori colonization is crucial in the early phase of lymphogenesis (Figure 2) [35][36][37]. Chronic H. pylori infection induces the secretion of pro-inflammatory cytokines by macrophages and dendritic cells, such as interleukin-1 beta (IL-1β), TNF-α, IL-8 and IL-6, and allows differentiation of Th1 and Th17 cells, which are involved in establishing persistent inflammation [38]. In gastric MALT lymphoma, H. pylori-induced antigenic stimulation results in the formation of T-cell infiltrates that invade and destroy gastric glands [39]. These reactive Th1 CD4+ lymphocytes produce high levels of IFN-γ, and facilitate the proliferation of neoplastic B cells through CD40L-CD40 co-stimulation and the secretion of Th2 cytokines including IL-4 [40]. The majority of CD4+ T-cells are suppressive CD25+ forkhead box P3 (FOXP3)+ regulatory T-cells (Tregs), which are themselves recruited by tumor B-cells; it has been suggested that higher numbers of tumor-infiltrating FOXP3+ cells are associated with better response to H. pylori eradication therapy [41]. Recent findings indicated that a proliferation-inducing ligand (APRIL) expressed by neutrophils, eosinophils and tumor-infiltrating macrophages seems to be important for gastric lymphogenesis induced by H. pylori [42]. Recently, Blosse et al. characterized the inflammatory response associated with gastric MALT lymphoma in the stomach of transgenic C57BL/6 mice, and additionally found APRIL-producing eosinophilic polynuclear cells in the lymphoid infiltrates of patients with gastric MALT lymphoma [39]. The authors demonstrated that the Treg-balanced inflammatory environment is an important contributor to gastric lymphogenesis [39]. In recent years, studies have examined the role of the PD-1 pathway in the context of H. pylori colonization, and T-cell dysfunction and PD-1 expression have been observed in these patients [43]. High levels of PD-L1 were found in human gastric biopsies taken from patients infected with H. pylori, when compared to H. pylori-negative controls [44]. Shen et al. have demonstrated that the PD-1/PD-L1 checkpoint is involved in intraepithelial neoplasia and early-stage gastric cancer [45]. Furthermore, Holokai et al. hypothesized that H. pylori-induced PD-L1 expression within the gastric epithelium is mediated by the Shh signaling pathway during infection. The authors demonstrated that metaplastic cells may survive chronic inflammation by expressing the immunosuppressive ligand PD-L1, which would account for the persistence of the infection and progression to cancer [46]. The role of H. pylori-induced PD-L1 expression in lymphogenesis needs to be determined, especially in the context of new therapeutic options, such as PD-1 and PD-L1 inhibitors, that act to control the immune checkpoints.
Previous observations on lymphogenesis have shown that host factors may also be important in the development of MALT lymphomas. It has been demonstrated that polymorphisms in TNF-α, GSTT1 and CTLA4 genes are associated with the risk of gastric MALT lymphoma [26]. Moreover, polymorphism of IL-22 was also associated with the susceptibility to gastric MALT lymphoma (Figure 2) [47]. Liao demonstrated that the C allele at rs1179246, C allele at rs2227485, A allele at rs4913428, A allele at rs1026788 and T allele at rs7314777 were significantly associated with increased risks of the disease, and the pattern of IL-22 expression in gastric mucosa predicted treatment responses to H. pylori eradication in patients with H. pylori-induced gastric MALT lymphoma [47]. These findings suggest that host factors and their alterations may be crucial for the pathogenesis of MALT lymphomas.
The Role of Epigenetic Factors and Molecular Factors in Gastric Lymphogenesis
The mechanisms underlying the progression of gastric MALT lymphomas are less clear, and are currently the subject of intensive research. It has been suggested that in later phases of the disease process other factors come into play, such as host, genetic and molecular factors, as well as changes in the tumor microenvironment, making the role of H. pylori less relevant than during the initial stages of the disease [26,48]. Tumor progression is now known to be driven by an interaction between B-cell receptor (BCR)-derived signals and T-helper (Th) cell signals [42]. Moreover, it has been shown that chemokine receptors play a crucial role in malignant B-cell migration and transformation. It has been found that the expression of the chemokine receptor CXCR3 and its ligand Mig on activated T-cells and malignant B-cells may be correlated with the metastatic migration of neoplastic B-cells into other organs. It has been suggested that this migration may indicate a loss of dependence on H. pylori and progression to advanced-stage MALT lymphoma [26]. Deutsch et al. further suggested that the development of gastric MALT lymphoma is associated with increased CCR7, CXCR3 and CXCR7 expression and a loss of CXCR4. Transformation of gastric MALT lymphoma to extranodal DLBCL was accompanied by significant upregulation of the chemokine receptors CCR1, CCR5, CCR8, CCR9, CXCR6, CXCR7 and XCR1 [49].
Epigenetic and genetic changes are also involved in lymphogenesis. Recent reports in murine models have revealed that microRNAs (miRs), small non-coding RNA molecules, play important roles in the regulation of cellular proliferation and apoptosis. Blosse et al. have demonstrated that miR-155, miR-150, miR-196a and miR-138 are upregulated, whereas miR-7 and miR-153 are downregulated, in gastric lymphomagenesis in human samples compared to gastritis control samples [50]. In addition, it was found that the expression of miR-203 and its target ABL1 was dysregulated in MALT lymphoma biopsy samples [51]. miR-155 deserves special attention, as it plays a crucial role in the regulation of inflammation and immune responses. In vivo animal experiments have shown that miR-155 is necessary to control H. pylori infection through Th1 and Th17 responses contributing to bacterial persistence [50]. Furthermore, it has been demonstrated to be expressed at much higher levels in H. pylori-independent tumors than in H. pylori-dependent tumors [26,52].
These findings indicate that miRs are important molecules in the tumor environment; however, further studies are needed in order to clarify the exact mechanism of miR dysregulation in lymphogenesis. Other studies highlight that epigenetic modifications through chromatin and DNA methylation are key events in the progression of the early stages of MALT lymphoma [26]. It was demonstrated that H. pylori causes DNA methylation and hypermethylation of CpG islands, which result in the deletion and loss of expression of tumor suppressor genes [53]. Park et al. have demonstrated that methylation of the cyclin-dependent kinase inhibitor p16INK4A occurred in 75% of patients with gastric MALT lymphoma, and more frequently in t(11;18)(q21;q21)-negative gastric MALT lymphomas [54]. Aberrant CpG methylation within certain genes, such as p16, MGMT and MINT31, was associated with H. pylori infection [26].
In light of the above, it cannot be excluded that other species of the Helicobacter genus, such as Helicobacter heilmannii, and interactions of the gut microbiota with H. pylori play important roles in lymphogenesis [55,56]. Recent studies have focused on intestinal microbes that may have impacts on local and distant tumor formation through disturbances in the ratio of their components, infection, microbial products, or modulation of tumor immunosurveillance; however, the exact role of the microbiota is not entirely clear [57,58].
H. pylori Eradication in B-Cell Lymphomas
Based on ESMO guidelines and Maastricht consensus recommendations, eradication of H. pylori infection using a combination of antibiotics and proton-pump inhibitors (PPI) for 7 days is the first-line treatment of gastric MALT lymphoma [59,60]. Epidemiological and clinical studies conducted over a number of years demonstrated regression of infiltrative lesions following H. pylori eradication in 60-90% of patients with low-grade gastric MALT lymphomas or early-stage DLBCL [20,61,62]. Based on clinical observations, eradication therapy is usually recommended for H. pylori-positive lymphomas but is also indicated in H. pylori-negative lymphomas, showing response rates of 83% [19,60,62]. Regression of lymphoma infiltrates localized in the submucosa has been reported in more than 80% of cases, whereas regression was present in around 50% of cases with deeper invasion [63]. In a study conducted by Moleiro et al., it was demonstrated that relapse occurred in 14% of the patients after a mean period of 21 months [64]. The authors indicated that relapse rates were higher in patients with H. pylori re-infection, in cases where more than one eradication regimen was used, and in cases with lymphomas localized in the corpus [64]. Based on the latest Maastricht consensus, bismuth-based quadruple therapy or concomitant quadruple therapy can be used as the first line of empirical therapy [60]. Due to insufficient numbers of prospective clinical studies, the optimal therapy regimens for gastric lymphomas with or without H. pylori infection have not been determined [19,65]. Other treatment options, including radiotherapy, chemotherapy and immunotherapy, should be considered in all patients who do not respond to eradication therapy, or in patients with H. pylori-negative lymphomas. Moreover, similar recommendations should be made for patients with tumors housing the t(11;18) translocation, which are less susceptible to eradication treatment. An emerging problem in all regions of the world is increasing H. pylori antibiotic resistance, particularly to clarithromycin. This means that clarithromycin-based treatment cannot be considered without a further testing regime to see whether the organisms are sensitive [66,67]. Resistance of H. pylori strains to various antibiotics, mainly clarithromycin and levofloxacin, may be one of the reasons for treatment failure.
There are some findings demonstrating that the lack of tumor regression after eradication therapy may be related to a complex process of multiple changes in the tumor immunological environment, such as a lack of macrophage activity, upregulated expression of p-SHP, p-ERK and Bcl-XL, nuclear translocation of NFATc1, and CD56+ NK cell activity. NFATc1, a member of the nuclear factor of activated T cells family, has been detected in B-cell lymphomas such as Burkitt lymphoma, Hodgkin lymphoma and MALT lymphoma, and is thought to be involved in lymphogenesis [26].
H. pylori and Gastric DLBCL
The association between H. pylori infection and DLBCL localized in the gastrointestinal tract has been theorized for a long time. DLBCL is the most common subtype of NHL and comprises a heterogeneous group of tumors. Among extra-nodal manifestations, primary gastrointestinal lymphomas are the most common presentation [63,68]. In some cases, MALT lymphoma may gradually progress to an aggressive DLBCL. For many years it was believed that MALT lymphoma is H. pylori-dependent and that this dependence is lost with high-grade transformation, but many reports published so far have demonstrated H. pylori dependence in both low-grade and high-grade gastric MALT lymphomas [28,29,69]. According to the WHO classification from 2016, it is recommended that when gastric MALT lymphoma patients demonstrate transformation into large-cell lymphoma, they should be re-classified as DLBCL; a distinct classification from de novo DLBCL, without histological infiltration of centrocyte-like cells in the lamina propria, and typical lympho-epithelial lesions [3,63,68]. H. pylori eradication is not a standard treatment for primary gastric DLBCL; however, some prospective studies have shown the regression of H. pylori-positive early-stage (Lugano stage IE or II1) gastric DLBCL lymphoma after H. pylori eradication, with complete response in two-thirds of patients [70]. Moreover, it was demonstrated by Kuo et al. that in patients with H. pylori-positive DLBCL successfully treated with first-line eradication treatment, the overall survival was significantly higher than in H. pylori-negative DLBCL cases (76.1% vs. 39.8%, p < 0.001) [71]. Interestingly, it was suggested that H. pylori-dependent gastric DLBCL possessed better overall survival than H. pylori-independent cases [71]. In a recently published study, Ben Younes et al. highlighted the role of the AKT signaling pathway in H. pylori-induced tumorigenesis and progression of MALT lymphoma into DLBCL [68]. The authors indicated that CagA localization in B-cells may predispose them to accumulate multiple genetic and epigenetic changes leading to loss of PTEN and/or cyclin A2 overexpression, which were significantly associated with consecutive AKT activation [68]. Moreover, recent findings reported by Tsai et al. further demonstrate that most H. pylori-dependent tumors express CagA and BCL10, and CagA expression in post-H. pylori eradication biopsies is downregulated [70]. The authors pointed out that immunohistochemical assays for nuclear BCL-10 or NF-κB (p65) and CagA expression can help to predict H. pylori dependence in patients with early-stage gastric MALT lymphoma and gastric DLBCL (MALT) who receive first-line eradication treatment [70]. Researchers have also shown that some patients diagnosed with de novo DLBCL achieved complete remission after H. pylori eradication [69]. Moreover, Cheng et al. have demonstrated that positive H. pylori status was associated with better prognosis in patients with gastric de novo DLBCL [72]. Nevertheless, DLBCL is an aggressive tumor that may progress rapidly, and in some cases relapses or progression after eradication treatment have been reported. This highlights that further studies are needed to help stratify the group of patients at risk of progression, to establish when H. pylori eradication therapy can be safely recommended, and when and in which group of patients other treatment strategies should be implemented.
H. pylori and Burkitt Lymphoma
There are few cases in the literature, presented mainly as case reports, describing patients with gastric Burkitt lymphoma associated with H. pylori infection [73][74][75][76][77]. Burkitt lymphoma is a very aggressive type of NHL that is commonly localized in the gastrointestinal tract, although rarely in the stomach. While gastric Burkitt lymphoma in adults has a very low incidence, in children its occurrence is even rarer. Most reports on the relationship between H. pylori and gastric Burkitt lymphoma presented cases of patients with a tumor mass primarily restricted to the stomach [74][75][76]. The possible role of H. pylori infection in the pathogenesis of Burkitt lymphoma needs to be evaluated, and some authors have speculated on a possible link related to the host immune response to CagA or other H. pylori virulence factors [76]. Another hypothesis is that gastric Burkitt lymphoma shares common developmental pathways with MALT lymphoma of the stomach [77]. The relationship between plasmocytomas and H. pylori, and more specifically the disappearance of these tumors after eradication treatment, has also been reported [78].
Campylobacter jejuni and IPSID
Campylobacter jejuni (C. jejuni) is a microaerophilic Gram-negative bacillus that is responsible for asymptomatic carriage and mild, self-limiting gastroenteritis. However, more severe infections can lead to sepsis, and it is a known initiating agent of autoimmune diseases such as Guillain-Barre syndrome and reactive arthritis. Infection is quite common in humans worldwide and is associated with the consumption of contaminated food, mainly poultry and dairy products. Chronic antigenic stimulation in the course of campylobacteriosis has been linked to immunoproliferative small intestinal disease (IPSID), also known as alpha chain disease (ACD) [19,53]. This disease is classified as a variant of MALT lymphoma which is localized primarily in the small intestine, but can also be detected in the stomach, colon, rectum, mesenteric lymph nodes, as well as other organs [79]. The disease has been described in young adults from the developing world, including India, the Mediterranean Basin, the Middle East, Africa, and South America, where low socioeconomic status, poor sanitation, malnutrition, and frequent enteric infections are common [80]. The disease is characterized by lymphoplasmocytic intestinal infiltrates with monotypic α-heavy chain expression. The pathogenesis is poorly understood, and the association of C. jejuni infection with the development of infiltrative lesions is based on the detection of the genetic material of these bacteria in tissue from IPSID patients [81,82]. Moreover, clinical observations have shown the regression of infiltrative lesions in the course of IPSID after antibiotic treatment, mainly with drugs from the macrolide or tetracycline group combined with antiparasitic drugs [81]. Nevertheless, the disease may progress from early plasmocytic lesions of low malignancy to a high-stage immunoblastic disease [19,80]. Cases of plasmoblastic lymphoma and DLBCL as progressions of IPSID have also been described [80,83]. The exact mechanism by which C. jejuni may contribute to the development of IPSID lesions is not clear. Some authors have hypothesized a role for the C. jejuni CDT cytotoxin. It can be speculated that, as in the case of H. pylori, the vacuolating CDT cytotoxin enables the destruction of intestinal villi and invasion of the intestinal mucosa. Furthermore, CDT toxin leads to DNA damage, as demonstrated in mouse models [84]. It has been demonstrated that cells exposed to CDT DNA-damaging agents suffer extensive genetic modifications that could cause apoptosis; hence, researchers speculate that the human clinical isolate C. jejuni 81-176 is able to promote colorectal cancer [85]. Adhesion of bacteria to endothelial cells in most cases leads to a strong immune response. However, it seems that, similar to H. pylori, Campylobacter infection can lead to asymptomatic fecal carriage in immunocompromised hosts [82]. The role of C. jejuni in intestinal lymphogenesis requires further investigation, along with a host of other pathogenic bacteria associated with the development of IPSID, such as Campylobacter coli, H. pylori, Vibrio fluvialis and Escherichia coli [80].
Borrelia burgdorferi and Cutaneous NHL
Primary cutaneous NHL represent the second most common location of NHL after lymphomas of the gastrointestinal tract, with B-cell lymphomas accounting for approximately 30% of all cutaneous lymphomas. The association of Borrelia burgdorferi infection with the development of indolent lymphomas such as cutaneous MZL and follicular lymphoma (FL), but also aggressive lymphomas such as DLBCL with cutaneous localization and mantle cell lymphoma, has been described [86]. Borrelia burgdorferi is a spiral bacterium responsible for Lyme disease, and the spirochetes are transmitted by Ixodes dammini ticks. Chronic infection in the course of Lyme borreliosis, such as acrodermatitis chronica atrophicans and polyradiculoneuritis, may lead to the development of B-cell lymphomas. The DNA of Borrelia burgdorferi has been detected in biopsy samples of patients from Europe and Australia diagnosed with cutaneous MALT lymphomas, and also in tissue samples from patients diagnosed with FL and DLBCL [14,53]. Furthermore, case reports of patients with low-stage cutaneous MALT lymphoma treated with antibiotics alone, mainly from the cephalosporin and tetracycline groups, have documented regression of infiltrative lesions [5,86]. Based on these reports, therapy with oral antibiotics is currently acceptable as first-line treatment [87]. Travaglino et al., in a meta-analysis, showed that Borrelia burgdorferi was significantly associated with primary cutaneous lymphoma in endemic areas of North America and Europe (from southern Scandinavia into the northern Mediterranean countries of Italy, Spain, and Greece, east from the British Isles into central Russia, and the northeastern and north-central United States) [86,88]. It seems that in the course of local Borrelia infection, atypical lymphoid follicles are formed in the skin, and lymphocytes may further infiltrate the dermis and produce a borrelial "lymphocytoma" which can be difficult to distinguish histologically from MZL. There are data demonstrating that persistent inflammation in the course of Borrelia infection may lead to monoclonal B-cell proliferation and BCL-2 protein expression [89,90]. Cutaneous MALT lymphoma may also be associated with Borrelia afzelii, based on a single case report [91]. However, the role of other Borrelia species in the development of B-cell lymphoma is still controversial.
Chlamydophila psittaci (Ch. psittaci) and Ocular MALT
Another example of the relationship between the inflammatory process induced by intracellular bacteria and lymphogenesis is the association of Chlamydophila species with ocular adnexal lymphoma. Ocular adnexal MALT lymphomas account for approximately 5-15% of all MALT lymphomas, and their incidence has been increasing in recent years [92]. The prevalence is highest in patients older than 65 years of age living in rural areas with a history of chronic conjunctivitis [53]. The lymphoma lesions occur principally in the conjunctiva, orbital soft tissue, and lachrymal apparatus, with bilateral involvement in 10-15% of cases without ocular infiltration [53,93]. The etiology of ocular adnexal MALT lymphoma is currently unknown, and observations have yielded conflicting results. Some reports demonstrated the presence of Ch. psittaci DNA in 11-87% of biopsy samples from patients in different geographical regions [92][93][94]. However, and conflicting with the previous research, Zang et al. did not confirm the association between Ch. psittaci and ocular lymphomas [95]. In a recent meta-analysis, Travaglino et al. showed that the prevalence of Chlamydia in patients with lymphoma varies widely, being most common in Korea and Italy [92]. Furthermore, these authors demonstrated that not only was Ch. psittaci detected in tissues in the course of ocular adnexal lymphoma, but also Ch. pneumoniae in patients in China and Ch. trachomatis in patients in Great Britain [92]. It has been speculated that Chlamydia may play a role in the development of MALT lymphoma at other sites, such as the lung, skin, uterus, bowel, and stomach [92,93,96]. Chlamydia was isolated from conjunctival swabs and from blood samples taken from patients with lymphoma [14,97]. Chlamydiae are intracellular bacteria that infect humans through contact with infected birds, most commonly leading to asymptomatic infections, but they can also cause chronic conjunctivitis, pneumonia, hepatitis, and pericarditis [14,94]. The properties of Chlamydia that allow it to establish persistent infections are related to its complex developmental cycle and its occurrence in three forms: the elementary body (EB), which is its metabolically inactive infectious form; the metabolically active intracellular growth-stage form called the reticulate body (RB); and the intermediate body (IB). The ability of the bacteria to modify their life cycle in response to a changing environment leads to resistance of the infected cell to apoptosis [97,98]. However, a link between the chlamydial life cycle and lymphogenesis needs to be established. Persistent infections that induced polyclonal B-cell expansion and proliferation were evidenced by detection of somatically hypermutated immunoglobulin genes with an ongoing mutation pattern. Moreover, chronic antigenic stimulation may lead to chromosomal abnormalities with genetic and epigenetic alterations resulting in activation of the NF-κB pathway [98,99]. Persistent infection leads to the formation of cells that gradually become independent of their involvement in the microenvironment. Chronic stimulation due to Ch. psittaci infection may be favored by molecular mimicry, as Ch. psittaci is able to induce immune reactions that cross-react with host self-antigens, leading to a failure to eliminate the pathogen and induction of lymphogenesis [26,98].
Furthermore, the progression of ocular adnexal MZL (OAMZL) to the more aggressive DLBCL may be independent of the chronic antigenic stimulation provided by the microorganism and instead be induced by mutations of tumor suppressor genes such as p53 and p16 [26,98,99].
Considering that both this type of lymphoma and Ch. psittaci infection are rare diseases, there is currently no universal recommendation regarding the therapeutic approach. The lesions are often located superficially, are indolent, and rarely progress to more malignant types of lymphoma. Regression of lesions after antibiotic therapy with doxycycline was observed in 65% of patients in phase II randomized controlled trials [100]. Other treatment options include surgery and observation, radiotherapy, immunotherapy, radioimmunotherapy and immunomodulating agents in relapsed cases [97,98].
Role of Bacteria in Pulmonary Lymphogenesis
Pulmonary MALT lymphoma, also known as bronchus-associated lymphoid tissue (BALT) lymphoma, is the most common B-cell lymphoma of the lungs, and accounts for 7-8% of all MALT lymphomas [101]. Considering that chronic inflammatory and autoimmune disorders play an important role in the pathogenesis of these lymphomas, a link between various microorganisms and pulmonary MALT lymphoma has been sought, but so far no clear link with a particular infectious agent has been identified. This disease is characterized by a slow-growing tumor, which infiltrates epithelial tissue and forms lympho-epithelial lesions. As with other localizations of extranodal MALT lymphoma, the disease is in most cases limited (stage IE or IIE), and a specific but relatively rare cytogenetic aberration for MALT lymphomas is t(11;18)(q21;q21), which is found in approximately 30-50% of MALT lymphomas with gastrointestinal and pulmonary localizations [101].
Achromobacter xylosoxidans and BALT
Achromobacter xylosoxidans is one candidate pathogen associated with pulmonary MALT lymphoma; however, recent work on the association of this bacterium with lymphogenesis has yielded conflicting results. Adam et al. examined lung tissue from 124 European patients with pulmonary MALT lymphoma, and genetic material of this Gram-negative rod of the Alcaligenaceae family was detected in 46% of patients vs. 18% of controls, with a significant difference in prevalence between geographic regions, ranging from 33% to 67% [102]. In another study, of a Japanese cohort analyzing tissue samples from 52 patients with pulmonary MALT lymphoma and 18 patients with pulmonary DLBCL, Achromobacter xylosoxidans DNA was found in only 11% of DLBCL cases and 2% of BALT cases. The authors of the second paper speculated that differences in the prevalence of this bacterium between Europe and Asia may significantly affect the results of the analysis. Moreover, the diagnosis of pulmonary MALT lymphoma itself is an important issue and may be difficult, as it requires histopathological confirmation of clonal proliferation in all BALT cases to differentiate it from reactive lymphoid hyperplasia [103]. Achromobacter xylosoxidans is an opportunistic bacterium with low virulence potential, although it is very frequently resistant to antibiotics. The pathogen is usually isolated from patients with cystic fibrosis [104]. Clinical manifestations of infection are seen mainly in immunocompromised patients and include pneumonia, urinary tract infections, meningitis, and sepsis [104]. The question of whether this bacterium plays a role in pulmonary MALT lymphoma analogous to that of H. pylori in gastric MALT lymphoma is still open. In contrast, a recent metagenomic study by Borie et al. found no evidence that any bacterial, fungal, viral or parasitic pathogen is associated with pulmonary MALT lymphoma [105].
An association between BALT lymphoma and other bacterial pathogens such as Mycobacterium tuberculosis, Mycobacterium avium, Chlamydophila pneumoniae, Chlamydia trachomatis, Chlamydophila psittaci and Mycoplasma pneumoniae has been suggested [93,106,107]. Two cases of successful antibiotic therapy with clarithromycin for the treatment of BALT lymphoma have been described [108]. The role of infections in the development of BALT lymphoma remains unclear. Some authors suggest a role for pre-existing autoimmune disorders such as Sjögren's syndrome, reactive arthritis or Hashimoto's thyroiditis. Because of the rarity of these types of lymphoma and their indolent clinical course, the optimal treatment strategies are not well defined and include surgery, radiation, chemotherapy and immunotherapy [109,110].
Escherichia coli and Primary Bladder MALT
Another pathogen that may be associated with MALT lymphomas is Escherichia coli (E. coli). Recurrent infections with these bacteria have been reported in patients with primary bladder MALT lymphoma, an exceedingly rare form of lymphoma [111]. Based on the literature, 58 patients with this type of lymphoma have been reported, mainly from Asia and the UK, with a female predominance. A case of MALT lymphoma in both the stomach and the bladder has also been described [111,112]. Chronic, recurrent cystitis of E. coli etiology has been observed in approximately 30% of patients, and it can be speculated that persistent cystitis is a necessary precursor of the lymphoma. There are no treatment guidelines, with surgical excision, chemotherapy, radiation, or combined modalities being utilized [111,113]. Furthermore, some patients were successfully treated with antibiotics [111,114,115]. E. coli is a Gram-negative, rod-shaped bacterium. Harmless E. coli strains are part of the natural bacterial microflora of the human body and colonize the human digestive tract, but there are also pathogenic E. coli strains responsible for many infections, including urinary tract infections, wound infections, pneumonia, diarrhea, meningitis and sepsis. Uropathogenic E. coli (UPEC) strains can permanently colonize the urinary tract, leading to an acute inflammatory process in about 70% of cases. More importantly, these strains can also lead to recurrent and persistent infections [116]. It is not entirely clear why some strains can invade the bladder mucosa and others cannot, and further research is needed on the exact mechanisms of the complex interactions of E. coli with host cells. In vitro studies have shown that UPEC strains, through virulence factors (mainly adhesins, toxins, complex iron-uptake systems, and immune evasion strategies), have the ability to persist intracellularly in bone marrow-derived macrophages and uroepithelial cells and establish long-term infection [116,117]. UPEC are able to exist in intracellular bacterial communities, and inhibit the immune response by actively blocking TLR-4 signaling, NF-κB activity and pro-inflammatory cytokine production in urothelial cells [117]. However, the exact mechanisms of E. coli's role in lymphogenesis require further study. Given that E. coli infection is common worldwide, as is H. pylori, whereas primary bladder MALT lymphoma is very rare, the role of this bacterium in the formation of bladder MALT lymphoma requires further investigation.
Coxiella burnetii (C. burnetii) and Various NHL
The list of bacterial pathogens involved in lymphogenesis has also recently come to include C. burnetii, the Gram-negative intracellular pathogen responsible for Q fever [118]. The role of this bacterium in the development of DLBCL and other NHLs was first reported in 2016, when C. burnetii lymphadenitis and hemophagocytic syndrome were linked to NHLs such as DLBCL, MALT lymphoma, FL, mantle cell lymphoma and chronic lymphocytic leukemia [14]. C. burnetii infection is primarily zoonotic, acquired by respiratory droplets, and is mainly asymptomatic in humans; however, acute infection may manifest as a flu-like illness, pneumonia or hepatitis, and a small percentage of patients may develop persistent infection, which manifests as endocarditis, vasculitis, and lymphadenitis. C. burnetii was detected in monocytes, macrophages and dendritic cells both in chronic lesions such as granulomas and in NHL lesions. The role of C. burnetii in the pathogenesis of NHL is not fully understood. C. burnetii DNA was present in about 36% of cases with NHL vs. 8% of controls, but this difference was not statistically significant [118]. It has been speculated that the risk of NHL could be increased after exposure to C. burnetii. Research into the relationship between this infection and NHL has been conducted in the Netherlands, where an outbreak of Q fever was reported between 2007 and 2010. The median time between primary C. burnetii infection and NHL diagnosis was 8 months, and the most common form of lymphoma was CLL. Moreover, Melanotte et al. reported that Q fever most commonly preceded DLBCL and mantle cell lymphoma [118,119]. According to the latest data, 45 cases of NHL associated with persistent C. burnetii infection have been described so far [14]. It has been reported that persistent Q fever is associated with an altered Th1 response, with defective production of IFN-γ and overproduction of the cytokines IL-10 and IL-6 [120]. The authors demonstrated in murine models that the absence of T-bet, a transcription factor known to initiate and coordinate the gene expression program during Th1 differentiation and crucial for the clearance of intracellular pathogens, leads to defective bacterial control, persistent infection, and organ injury manifesting as an increased number of granulomas [120]. The same authors showed in another study that during the course of Q fever, soluble E-cadherin (sE-cad) can be detected in the sera of patients, indicating that sE-cad can be considered a marker of a metabolic disorder and/or bacterial invasion in the course of Q fever. The role of sE-cad as a tumorigenic co-factor was highlighted in infections caused by H. pylori, which trigger gastric adenocarcinoma. The association between sE-cad release and the induction of NHL is unknown so far [120]. In another study conducted by Melanotte et al., it was demonstrated that patients with C. burnetii lymphadenitis presented significantly elevated levels of BCL2 and ETS1 mRNAs, which may indicate the upregulation of antiapoptotic processes and the fact that lymphadenitis might constitute a critical step towards lymphomagenesis [121]. The association of Q fever with NHL requires further study.
Conclusions
The involvement of various pathogens in the development of NHL remains an open topic. We cannot forget the proven role of other pathogens such as HIV, HCV, HTLV-1 and EBV, which have a known oncogenic potential in the development of NHL (primary splenic MALT lymphoma, Burkitt lymphoma, DLBCL and others). Moreover, the parasite Plasmodium falciparum is considered a co-factor with EBV in the development of Burkitt lymphoma. Studies on the role of infectious agents in the lymphogenesis of certain B-cell lymphomas show that when these infections are diagnosed, particularly those caused by H. pylori, C. jejuni, Borrelia burgdorferi and Chlamydophila psittaci, among others, antibiotic therapy should be included in the diagnostic and treatment process (Table 1). On the other hand, it is also necessary to consider the role of other infectious agents, especially viruses, in lymphogenesis. In addition, host and tumor genomics and in vitro studies will be important to identify other factors in the pathogenesis of lymphoma development and progression, such as novel prognostic markers, microbiota-host relationships and genetic factors affecting the tumor microenvironment (Figure 3).
|
v3-fos-license
|
2022-12-11T14:40:22.138Z
|
2018-05-04T00:00:00.000
|
254500193
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10698-018-9314-y.pdf",
"pdf_hash": "8e71e42ca386b2c2beb85d3314d42f0f1bb78621",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45497",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "8e71e42ca386b2c2beb85d3314d42f0f1bb78621",
"year": 2018
}
|
pes2o/s2orc
|
Priestley’s views on the composition of water and related airs
In some views in the history, philosophy and social studies of chemistry, Joseph Priestley is at least as well known and cited for his objections to the new chemistry and his promotion of his own late version of the theory of phlogiston as for the early series of discoveries about types of air for which he had become famous. These citations are generally not associated with any detailed indication of his late work from 1788 onwards and his late phlogistic theory, of which there has been no detailed study. This paper undertakes a detailed study of Priestley's late work on water and related airs. He put forward a theory which, to be supported, would have required his apparatus and initial substances to exclude impurities altogether. His theory did not take into account the solutions to the difficulties with the experiment which had been comprehensively understood and published by the phlogistian Cavendish several years previously, and with which the Lavoisians were in agreement. Priestley readily and fundamentally changed his interpretations of experiments in order to support the theory he currently favoured, and he was highly selective in replying to the criticisms of any opponent. This detailed analysis shows many divergences between his own practices and aspects of his objections to the new chemistry, which has implications for those stances in the secondary literature which do not question his objections. Accordingly, this study has implications concerning the nature of chemistry and other sciences, how they do progress and how they should progress.
Introduction
There has been no detailed study of Priestley's late work in chemistry from 1788 onwards, including his late phlogistic theory. This paper undertakes a detailed study of Priestley's late work on water and related airs. 1 This detailed analysis shows many divergences between his own practices and aspects of his objections to the new chemistry. The following are the most noteworthy of his objections: He (1800, p. xi) said to the antiphlogistians that "no man ought to surrender his judgement to any mere authority", and that as "you would not, I am persuaded, have your reign to resemble that of Robespierre, few as we are who remain disaffected, we hope you would rather gain us by persuasion, than silence us by power" (1800, p. xi). He suggested to the antiphlogistians that "If you gain as much by your answer to me, as you did by that to Mr. Kirwan, your power will be universally established, and there will be no Vendée in your dominions". He argued that he had not seen "sufficient reason to change my opinion" (1800, p. 2) and that "I cannot help thinking that what I have observed in several of my former publications has not been sufficiently attended to, or well understood" (1800, p. 3). He claimed that "no person who has made near so many experiments as I have, has made so few mistakes" (1800, p. 4). He claimed his apparatus "was perfectly simple, so that nothing can be imagined to be less liable to be a source of error" (1800, p. 48) while the apparatus of the antiphlogistians "does not appear to me to admit of so much accuracy as the conclusion requires, and there is too much of correction, allowance and computation in deducing this result" (1800, p. 44).
The later commentators who have taken these objections literally, have included those who have not distinguished between the differing qualities of Priestley's early and late work in chemistry. His (e.g. 1775, pp. xxxv-xlii) early apparatus for releasing air from solids had indeed been simple, cheap and effective for some of the purposes for which Priestley used it. This apparatus had made possible his early series of discoveries about types of air, especially "dephlogisticated air", 2 due to which he had rightly become and remained famous, and which led to the involvement of many other participants in the field of the chemistry associated with types of air.
However, there were several fundamental differences between this early work and his work from 1788 onwards, in which the central experiment involved burning dephlogisticated air and light inflammable air 3 to form a liquid. A large amount of gas was needed to produce a small amount of liquid, and as a result the apparatus that was needed to test the main issues was fundamentally larger and more expensive than his apparatus for releasing air from solids, as was demonstrated even in the case of the apparatus used by Cavendish (1784, 1785). Yet much more importantly, Priestley (1788a) came up with a theory that effectively required his apparatus and initial substances to be free from impurities and that did not take into account Cavendish's already-published outstanding series of experiments.
Cavendish had found that, despite the presence of phlogisticated air [nitrogen] in the initial airs or apparatus, pure water was produced if there was an excess of inflammable air over that required to combine with the dephlogisticated air (1784, pp. 136-137), or if the experiment was conducted at relatively low temperature (1784, p. 134). If there was an excess of dephlogisticated air, then a little nitrous acid was produced. When the dephlogisticated air was very pure, introducing a little additional phlogisticated air made the resulting liquid more acid (1784, pp. 138-139), but when atmospheric air was used so that there was a very high proportion of phlogisticated air, less acid was formed (1784, pp. 133-134). Cavendish also found (1785) that nitrous acid was only formed when both dephlogisticated air and phlogisticated air were present, and was not formed when only phlogisticated air was present in an experiment (Blumenthal and Ladyman 2017a). By the end of May 1785, the Lavoisians knew and agreed with these experimental results. In contrast, Cavendish's theory that nitrous acid in the result was the result of decomposition of the phlogisticated air, was disputed by the Lavoisians via Berthollet's letter to Blagden of 17 June 1785. 4 Cavendish unofficially gave up phlogiston by January 1787. 5 His acceptance of the Lavoisians' view on the composition of nitrous acid was indicated by the title of his (1788) paper. So by the start of 1788, the relevant experiments were understood in great detail by both Cavendish and the Lavoisians, and the problems of Cavendish's interpretations had already been identified and accepted.
The first of Priestley's (1788a) late papers started from the theories that nitrous acid was always formed in the results of the combustion together of pure air and inflammable air, and that it resulted from the main gases in the experiment and not from impurities. These theories could only have been established by conducting experiments in which there were effectively no initial impurities. In effect, Priestley took on an experimental task that his targeted opponents understood in great detail and knew was impracticable, in support of theories which his opponents knew could not be accurate. Yet he promoted his own point of view with arguments of which the most noteworthy have been covered at the start of this introduction. As noted, this has become of much wider importance in that Priestley's work and arguments have become central to a number of subsequent stances concerning the nature of chemistry in particular, and science in general.
The purposes of this paper are to explore Priestley's late work on water and related airs in detail, to explore the issues of this work in relation to his arguments against the Lavoisians, and to explore some wider implications. The next section examines separate topics in Priestley's work on water and related airs in detail and in context. The following section discusses Priestley's general complaints against the Lavoisians in the light of his own theory and rhetorical strategies. The penultimate section gives a brief survey of selected secondary literature. The last section gives some conclusions.

The hypothesis that water is a necessary ingredient in inflammable air

Priestley took it that he was decomposing the inflammable air, and if he had weighed the incoming inflammable air and the resulting water, then he would presumably have found that his resulting water was greatly in excess of the weight of the inflammable air. Priestley (1786, p. 141) then followed Cavendish's interpretative leap, from his interpretation of the results of these two experiments to the conclusion that this experiment showed "that water is an essential ingredient in the constitution of inflammable air, at least as procured from iron". Priestley later gave the title "experiments which prove that water is a necessary ingredient in inflammable air" to a section in his "methodized" volumes (Priestley 1790a, p. 66). In this case, after his (1790a, pp. 274-275) conclusion that water was essential to the production of inflammable air from iron, he then rephrased (1790a, p. 277) this without any further evidence as "water is an essential ingredient in the constitution of inflammable air".
The hypothesis that dephlogisticated air consists of water plus the "principle of acidity"

Priestley published his new concept of the composition of dephlogisticated air around three years after his concept concerning inflammable air. On this occasion he differed from Cavendish's concept. Cavendish's (1784) view that this gas was "dephlogisticated water" was even weaker than his view on inflammable air, in that Cavendish had no evidence that dephlogisticated air contained water, the theory contained nothing that indicated how dephlogisticated air promoted or caused acidity, and the hypothesis that phlogiston was taken away from water weakened the implicit Stahlian case that water was the basis of fluidity.
The first unprecedented part of Priestley's late theory was his hypothesis that dephlogisticated air consisted of water and the "principle of acidity". 7 He advanced this even though he (1786, p. 54) had stated that there was no acid in dephlogisticated air. Priestley (1788a, p. 147) argued that he was taking the concept of the "principle of acidity" from Lavoisier, for whom dephlogisticated air was, or contained, this principle. However, Lavoisier's "principle of acidity" was oxygen, which formed an air when heated, rather than necessarily a constituent of oxygen, and this had very different compositional consequences from Priestley's new view of dephlogisticated air. Not the least of the differences was that in practical effect Lavoisier's (1778) "principle of acidity" was an experimentally-accessible substance, 8 whereas Priestley's was an experimentally-inaccessible constituent. 9 So in this case, the only part of Cavendish's view that Priestley followed was that dephlogisticated air contained water. It was this part of his new view that was to some extent experimentally assessable, and therefore it was this part of the view that was criticised at the time, and this is the topic of the rest of this section.

7 (1788a, p. 154; 1788b, p. 314; 1790c, p. 535). It is not clear in his (1788a) paper that he thought of these as the only constituents of water, but in his (1788b) paper he states "I am inclined to think, that not much more than one-twentieth part of the dephlogisticated air is the acidifying principle, and that nineteen parts are water".

8 This was clearly the case in the option in which caloric, the "cause of heat", was the motion of molecules. Also, in the option in which caloric was a material substance, it was imponderable or practically so, in which case oxygen could be experimentally tested by weight while caloric could be experimentally tested by temperature.

9 Nearly all Priestley's references are to a "principle of acidity" which was a constituent, the exception being (1794, p. 8), in which he "temporarily accepted Lavoisier's designation of dephlogisticated air as the acidifying principle" (Schofield 2004, p. 307).

Priestley (1788a, pp. 152-154) attempted to support his equivalent hypothesis solely by analogy. He argued "that a considerable quantity of water enters into the composition of dephlogisticated air, will not be thought improbable, when it is considered that, in my former experiments, this appeared to be the case with inflammable air. For without water this air cannot be produced". Priestley (1788a, p. 152) reported one set of experiments which he argued gave support by analogy to the view, and which involved the substance that Withering (1784) had identified and called terra ponderosa aerata (t.p.a.). 10 Priestley reported that this substance gave no fixed air by mere heat, 11 but that when steam was sent over it at a red heat, fixed air was produced, and in the same quantity as when t.p.a. was dissolved in spirit of salt. 12 Priestley reported that "making the experiment with the greatest care, I find that fixed air consists of about half its weight of water". Priestley (1788a, p. 152) argued that this supported his view that every air contained water. Rupp (1798, p. 152) rightly argued that Priestley's whole late theory depended on this "opinion". Maclean (1797, p. 50) and Rupp (1798, p. 153) identified that the first of Priestley's statements about t.p.a. was now known to be incorrect, in that Dr. Hope of Edinburgh had "discovered that the carbonic acid can be separated from the barytes, by exposing the compound to such a temperature as can be raised in a smith's forge". 13 Maclean argued that the disengagement of the carbonic acid took place at a lower temperature when water was used. Berthollet (1789) proposed that the impression that fixed air contained water arose because, at the heat of the experiment, the emitted carbonic acid contained a quantity of dissolved water which it would deposit on returning to atmospheric temperature. Maclean (1797, pp. 49-50) echoed this and stated that "every chemist knows that it has that property, and in a greater degree at a high than at a low temperature". Berthollet (1789, p. 82), followed by Maclean and by Rupp (1798, p. 153), argued that Priestley's calculation could not be relied on since he had not examined the loss of weight of the t.p.a. Maclean (1797, p. 51) concluded that these experiments "afford no support to the Doctor's principles". Rupp (1798, pp. 154-157) performed five different experiments to examine whether fixed air contained water, in each case documenting the actual experiment and calculating what the result would have been if fixed air contained water, and concluded that it did not.
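The substance of this dispute can be restated in modern terms. The following is an interpretive sketch, assuming only the standard identification of t.p.a. as barium carbonate:

$$\mathrm{BaCO_3} \xrightarrow{\ \text{very high heat}\ } \mathrm{BaO} + \mathrm{CO_2}$$

$$\mathrm{BaCO_3} + \mathrm{H_2O}\,(\text{steam}) \xrightarrow{\ \text{red heat}\ } \mathrm{Ba(OH)_2} + \mathrm{CO_2}$$

On this reading, Hope's smith's forge supplied the very high temperature needed for the first reaction; steam allowed the second to proceed at a lower temperature, as Maclean argued; and the water in Priestley's product would have been carried by the hot carbon dioxide and deposited on cooling, as Berthollet proposed, rather than being a constituent of the fixed air.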
Concerning weighing the t.p.a. after the experiment, Priestley (1800, p. 58) argued that after the process the resulting solid "adhered so closely to the earthen tube in which the experiment was made, that the loss of weight could not be ascertained with accuracy". However, it would have been practicable to weigh the t.p.a. and the container before and after the experiment, which would have shown the weight loss of the t.p.a. and the weight gain of the container. 14 Priestley did not deal with the objection that fixed air could be produced from t.p.a. without the use of water, which was the central objection to his view. Priestley (1800, p. 58) argued that finding the loss of weight of his solid was "not at all necessary…. It was quite sufficient… to find how much water was expended in producing any quantity of fixed air from this substance… as there was no other source of loss of water besides the fixed air, it could not but be concluded that it had entered into its composition, as a necessary part of it". However, this was inaccurate: the alternative, as Berthollet and Maclean had already noted, was that water was dissolved in the carbonic acid at its current high temperature, and the experimental result that fixed air could be produced from t.p.a. without the presence of water supported this view. Priestley (1800, p. 58) did note that Rupp had produced several experiments "made seemingly with great accuracy, to prove that fixed air contains no water", 15 to which Priestley did not reply other than by arguing that they were much more complex than his own and therefore that they did not "authorize so positive a conclusion". It is noteworthy that this type of allegation could be made about any experiment whose conclusions did not favour Priestley's theory. It was just one item in the arsenal of purely rhetorical responses that he was using by that date.
In addition, the view that water was present as an (unisolatable) integral constituent in some or all gases could not be definitely invalidated at that time by such experiments and inferences, and not only had it been put forward by Stahl, but it had also recently been supported in the cases of inflammable air and dephlogisticated air by Cavendish. From now on, Priestley (cf. 1800, p. 58) held the view that probably "water is the basis of all kinds of air". He also argued that water might even be the whole of the weight of some gases (1800, p. 46).
For these unfounded views on the composition of inflammable air and dephlogisticated air, Priestley could rely either on the authority of Cavendish or on his partial precedent. For the remainder of his late experiments and theories on water and related airs, he was on his own, and in all crucial respects he differed from both Cavendish and Lavoisier without properly taking into account either their and his own prior work or the criticisms he received.
The degree of purity of the airs entering the experiment

As already identified, Cavendish and Lavoisier each considered that fully excluding impurities from the initial substances and the apparatus was not practicable, and they recognised that the small amount of nitrous acid that was found under some conditions in the result was due to impurities, while Cavendish had shown how to eliminate nitrous acid from the result despite remaining impurities of phlogisticated air in the initial airs or the apparatus. By contrast, Priestley's (1788a) theory presented him with a totally different order of experimental problem: in order to substantiate his view that the small quantities of substances other than water present in the results had been integral to the reaction and not due to impurities, his apparatus and substances needed effectively to exclude substances other than dephlogisticated air and inflammable air from the initial conditions of the experiment.
Priestley started by arguing that his dephlogisticated air was "exceedingly pure" (1788a, pp. 151-152), without any supporting information concerning how he had dealt with or avoided the occurrence of impurities. He stated that his dephlogisticated air for these experiments was produced successively from manganese, minium, red precipitate, mercurius calcinatus, or finally the "yellow product". 16 Yet he (1775, p. 49) had stated that it was necessary to extract fixed air from calces before making his experiments; he (1779, p. 394) had recognised the issue of fixed air in calces; and on one occasion he (1786, p. 5) stated that he had extracted fixed air from his calces. He (1788a) did not state any such precaution, and in any case there was no equivalent known method for removing phlogisticated air from other airs, although the degree of purity of dephlogisticated air from calces could be improved by discarding the first produce of air from any calx.
This and his next (1788b) paper resulted in protests that Priestley had ignored Cavendish's work, which had concluded that the nitrous acid and the fixed air in the results of this type of experiment were due to impurities. 17 Priestley now made one of the temporary complete U-turns which, as will be seen in several sub-sections below, characterise his work on this topic. He (1789, p. 11) admitted that phlogisticated air "could not be excluded, whether it was by that which remained in the vessel after exhausting it by the air pump, or that with which the dephlogisticated air was more or less contaminated".
Entirely reasonably, the topic of impurities became a focus of the Lavoisians' published criticisms of Priestley's (1788a, b, 1789) papers, starting with those of Berthollet (1789). The latter (1789, p. 67) noted that Cavendish had clarified that all metallic oxides prepared in air contain carbonic acid. Berthollet (1789, p. 89) also noted that Priestley had not taken into account Cavendish's careful work on the effects of varying amounts of azote gas in the water experiment and their results. Berthollet (1789, pp. 68-73) noted that Monge had not used the first parts of the air that had been disengaged from his calx, and that the result had been much purer vital air. Berthollet (1789, p. 73) now noted that in an experiment using lead oxide, Priestley (1786, p. 5) had initially heated the oxide, thereby removing a quantity of air which was not all the air, as Priestley argued, but some of it, including the carbonic acid that had been absorbed into the oxide. Berthollet (1789, p. 74) stated that "by means of these precautions, which he entirely neglected afterwards, Priestley reduced metallic oxides by hydrogen gas without obtaining carbonic acid". By contrast, having left lead oxide, which had been prepared like the former sample, exposed to the air during several weeks, Priestley found a considerable quantity of azote gas in the residue of his experiment, and when putting the same oxide in the fire, it also gave a considerable quantity of azote gas. Accordingly, Berthollet accurately judged that the azote gas in Priestley's residue had been absorbed from the atmosphere into his calx. Berthollet (1789, pp. 90-94) discussed Priestley's (1789, pp. 11-12) experiment taking the dephlogisticated air from some mercuric oxide that Berthollet had sent him. Priestley had reported that this never gave carbonic acid via heating. Berthollet (1789, pp. 91-92) had retained half of the sample from which he had sent the other half to Priestley, and now heated it, receiving the resulting air over lime water. There was no change to the lime water for half an hour, but then a deposit started forming. Berthollet explained that some carbonic acid was suspended in the vital air, that the vital air had more affinity for carbonic acid than did the lime water, and that when air contained as much carbonic acid as it could in saturation, it was not easy to separate this with lime water, as Cavendish had previously pointed out. Berthollet (1789, p. 92) added that when vital air was disappearing rapidly, as when water was being formed, then the carbonic acid that was freed from the vital air was easily apparent. Berthollet (1789, pp. 93-94) also noted that the quantities of carbonic acid in Priestley's experiment did not exceed those that would have been contributed by the mercuric oxide. After having extracted the carbonic acid from the air from this sample of mercuric oxide, Berthollet (1789, pp. 94-95) then inserted a sulphur compound to take out the vital air, leaving a third of the air remaining as a residue, which was azote gas. On this basis Priestley's "very pure" dephlogisticated air had actually contained not only carbonic acid but also one-third of its volume of azote gas.
Concerning Berthollet's criticisms regarding impurities of phlogisticated air in his incoming dephlogisticated air, Priestley had told Wedgewood in October 1790 (Bolton 1891, p. 103) that "the air I use is not as pure as theirs", which contrasted with his previous argument that it was "exceedingly pure" (1788a, pp. 151-152). He now changed to obtaining his dephlogisticated air from the yellow oxide of mercury procured from nitrous acid, and therefore away from all the specific oxides which Berthollet had stated to include azote gas, but without giving any evidence that he was taking any more experimental precautions concerning impurities. Then on 16 February 1791 Priestley performed the opposite U-turn. He claimed to Wedgewood that "I now, with great certainty, make air so pure, that I am confident that it contains no mixed phlogisticated air whatsoever" (Bolton 1891, p. 105). On the basis of the change to the oxide he was using and one to his apparatus, he (1791, pp. 215-216) now claimed that there was nothing other than dephlogisticated air and inflammable air in his vessel. There was no way available at the time by which he could have achieved this. The 1791 paper marks the point in his work in chemistry when he began frequently to combat objections with purely rhetorical replies, as will be seen in several more instances below.
Berthollet's arguments were echoed by Maclean, who (1797, p. 47) noted that manganese oxide "ordinarily" contained a considerable quantity of azote gas and carbonic acid, that red lead absorbed azote gas from the atmosphere (1797, p. 47), and that the fixed air in Priestley's results was due to initial impurities (Maclean 1797, p. 28). In Priestley's (1797) reply he continued his rhetoric from 1791 without giving any evidence to support his claim that his own dephlogisticated air "was purer than any that I believe they have ever pretended to have made" (1797, p. 31), and arguing that he knew well how to test the purity of his airs "when that was required" (1797, p. 33).
Concerning Priestley's incoming inflammable air, he mostly produced it by passing steam over iron. 18 By contrast, Cavendish routinely produced his inflammable air from zinc with the agency of an acid, which gave relatively pure light inflammable air. Before Priestley's (1789) paper, he had already received objections that the fixed air in the experiments might have come from the plumbago in the iron (cf. Maclean 1797, p. 28). Priestley (1789, p. 12) argued that "since we ascertain the quantity of plumbago contained in the iron by what remains after its solution in acids, it is in the highest degree improbable, that… plumbago… should enter into the inflammable air procured from it". Priestley (1789, p. 13) made the further counter-arguments that he had had the same result when his inflammable air was obtained from tin, and that the fixed air far exceeded the weight of the plumbago, which he calculated using what he stated was Bergman's information on the amount of plumbago contained in iron. 19 Another issue was implicit in Priestley's (1788a, p. 149) statement that after the experiments his apparatus had the smell "of the most offensive kind of inflammable air from iron". 20 The source of this may be indicated by his (1786, pp. 159-160) description of finding that kind of air, when "I happened to take some iron, parts of which had been heated by a burning lens in vitriolic acid air", 21 if he was using some of that iron also in his new experiments.
Problems with the apparatus
Priestley initially tried to collect his airs in the same chamber in which he made the explosion. In that case, the result was disrupted by the effect of the explosion on the liquid over which the airs were stored, which initially was mercury and which became spattered over the chamber during the explosion, so that subsequently he used two separate chambers. In order to form or to store 22 the airs without disrupting the rest of the experiment, additional vessels were presumably needed, although Priestley did not state this.
For each container used in the experiment, there was the issue of how to exclude atmospheric air before the experiment. In some of his experiments, Priestley exhausted his chambers as far as possible with an air-pump, which therefore involved the problem, previously identified by Cavendish, that this process could not remove all the initial air. He (1788a, pp. 151-152) argued that in the cases of his experiments in which he had not used an air-pump he had excluded phlogisticated air, and that even when his apparatus did contain phlogisticated air "it is a satisfactory answer to this objection, from the presence of phlogisticated air in the tube, that this kind of air is not decomposed, or at all affected, by this process, as will be found by mixing any quantity of it with the other two kinds of air". This last point showed that he had not yet taken on board Cavendish's (1784, pp. 133-134, 138-139) results showing the two ways in which the quantity of phlogisticated air in the initial airs did affect the result of the experiment, nor Cavendish's (1784, pp. 134, 136-137) identifications of the ways that pure water could be produced despite the presence of phlogisticated air in the experiment. As has been seen in the previous section, he (1789, p. 11) subsequently admitted that it "could not be excluded, whether it was by that which remained in the vessel after exhausting it by the air pump, or that with which the dephlogisticated air was more or less contaminated".
For the last of Priestley's experiments, reported in 1791, he tried to exclude phlogisticated air from his apparatus by storing his airs in a single chamber, initially filled with water, out of which the water was pushed by the incoming airs. Yet this storage method introduced a different potential source of phlogisticated air, in addition to that which he actually could not exclude from his incoming air. He had already written a paper (1783) referring to the generation of air from water, including methods without heat, 23 and he had known since his (1786) work that collecting airs over water did not provide a permanent barrier to the exchange of airs between his vessel and the atmosphere (1786, pp. 385-386). He had also identified that water released air when the pressure was lowered. 24 So when his initial airs entered the chamber and thereby pushed some of the initial water out of the chamber, this would have effected some exchange between the air he was introducing and the air contained as impurities in his water. 25 Priestley did not take these factors into account when he (1791, pp. 215-216) now claimed that there was nothing other than dephlogisticated air and inflammable air in his apparatus. Rupp (1798, pp. 129-130) rightly objected that making experiments over water was generally "a method which always leaves some doubt of the exactness of the result, not only on account of the attraction, which the substances under operation have to moisture, with which they may combine or which they may decompose, and thereby produce errors; but also on account of the air which water contains, and which may be expelled by the heat of the operation or attracted by the bodies under examination". Berthollet had also stated that Priestley had taken no account of the air in the vessel in which he made the experiment, implying that this was a root of some of Priestley's interpretational errors.

19 Berthollet confirmed in his 1789 paper that Bergman had given a wide range of figures for the content of plumbago in iron, but Priestley had quoted a single figure (which was an average).

20 That is, H2S.

21 In effect, his "iron" probably included iron oxide and iron sulphide due to being heated in the vitriolic acid air, SO2.

22 For transit: Priestley (1788a, p. 150) reported that his dephlogisticated air was supplied to him by Keir.
Priestley (1800, p. 50) argued that he "did not overlook this circumstance, since I measured the capacity of the vessel by the quantity of air that had disappeared, by having been completely decomposed in the process, so that there was no occasion whatever to take an account of the air that was not affected by it". This did not actually answer Berthollet's objection. Nor did it deal with the problem that Priestley's theory could only be established via this single experiment if there were no impurities in his initial airs and apparatus.
Priestley's work had been criticised on the grounds that his experiments on the combination of dephlogisticated air and inflammable air used very small quantities and that he could not weigh his substances accurately. He (1800, p. 49) attempted to excuse this by arguing that "all the inside of my large vessel being, of course, wet with the liquor produced by the explosion, I could not pretend to weigh that which drained from it with much accuracy". 26 Yet irrespective of the issues of quantities, he could also have avoided his problems with weighing his water by firstly weighing his main vessel dry before the experiment and with its liquid contents afterwards, and simply calculating the weight difference. Priestley (1800, p. 49) attempted to justify the first of these points by arguing that "very little" depended on the quantity of water produced. However, this was simply inaccurate, in that a crucial part of his main case was his argument that he (1800, p. 46) "was never able to get the whole weight of the airs in water", which was part of his reason for why he had "produced" phlogisticated air, or nitrous acid, or fixed air, or nitrous air, during the experiment. He then performed a further change of argument by (1800, p. 46) also arguing that water was a constituent part of his initial gases "and for any thing that is certainly known is all that can be ascertained by weight". Nevertheless, in effect his apparatus and practices did not support any of his claims concerning the quantity of water produced in the experiment.

23 Priestley (1774, p. 59) had noticed prior to his 1772 paper an effect which was actually the absorption and emission of gases by water when storing inflammable air over water for long periods, but had not identified that this was what was happening.

24 (1793, p. 34). This was a little later than his series of experiments published in his 1788-1791 papers, but in the 1793 paper he argued that he had been doing this for a long time with the relevant apparatus. Later Priestley (1796, p. 25) recommended that distilled water should be used in those experiments on generating air from water, and he (1793, p. 26) stated that it was universal knowledge that distilled water had an eager attraction for air.

25 Later Henry (1803, p. 276) found that 100 cubic inches of water at 60 °C and standard pressure would absorb 3.55 cubic inches of oxygen gas, or 1.47 cubic inches of azotic gas, or 1.53 cubic inches of hydrogen gas. Under increased pressure, a quantity of hydrogen gas up to one-third that of the water could be absorbed (Murray 1819, v. 2, p. 107) and a quantity of oxygen gas up to one-half that of the water could be absorbed (Murray 1819, v. 2, p. 18). Davy (1807, p. 11) noted that "hydrogen, during its solution in water, seems to expel nitrogene, while nitrogene and oxygene are capable of co-existing dissolved in that fluid".
At one early stage Priestley had claimed that there was a high amount of acid in his results. Berthollet (1789, pp. 83-87) clarified that the supposedly high amount of acid in Priestley's (1788b) repeated experiment on water, which had given larger quantities of liquid, was due to errors which had been understood and analysed previously by Cavendish. Once Priestley's liquid was exposed to air, even without heat it deposited a green powder which was insoluble in water but soluble in acids. Keir, who analysed the liquid for Priestley (1788b, pp. 323-330), decided that the acid which had been dissolving this powder 'must' have disappeared from the experiment, and therefore calculated how much 'must' have done so, thereby roughly doubling the amount of acid that actually remained in the liquid. However, Cavendish had previously identified that the nitrous acid which was the product of the experiment, once exposed to air, gained oxygen and became nitric acid. This process was accompanied by the deposition of part of the base with which the acid had previously been saturated, since, as Bergman had identified, stronger acids are saturated by smaller amounts of bases. Accordingly, the liquid formed during Priestley's experiment actually contained acid which was about one forty-fourth of the quantity of the water (which was less in proportion than had been the case in Cavendish's equivalent experiment), instead of one-twentieth as Keir had calculated.
The hypothesis that nitrous acid is an integral product of the combustion of dephlogisticated air and inflammable air
As already noted, Priestley (1788a) initially developed his new theory on the basis of his view that the dense vapour or acidic liquid, later identified as nitrous acid, that formed in the experiment on the combination of dephlogisticated air and inflammable air was an integral result of this reaction. 27 This was a direct U-turn relative to his previous experimental interpretations, as will be noted in the sub-section on water below. He continued his recent policy of deferring to the authority of a fellow participant, in noting (1788a, p. 148) Keir's opinion "that some acid must be the produce of this experiment", and in stating that without Keir he might not have found the acid, even though in other cases he successfully used litmus to test for acid. Priestley (1788a, p. 149) stated that after every explosion "the vessel was filled with a dense vapour", that afterwards the vessel smelt of "the most offensive kind of inflammable air from iron", and, after testing with litmus, that an acid had been formed. 28 Interpreting his results, Priestley (1788a, p. 151) argued that the nitrous acid 29 in the experiment was formed from the phlogiston and principle of acidity released during the decomposition of the dephlogisticated air and the inflammable air.
As already noted, this and his next (1788b) paper resulted in protests that Priestley had ignored Cavendish's work. Before producing his (1789) paper, Priestley reported to Wedgewood that "I can now procure, either pure water or a dry and condensed vapour at pleasure". 30 This appears to demonstrate that he had now understood part of Cavendish's (1784, pp. 136-137) results. However, he did not make this further U-turn in public for the next three years. Despite the information in his letter to Wedgewood, in the paper Priestley (1789, p. 7) argued that "when the experiments were conducted with due attention" he "never failed… to produce some acid". He produced a hypothesis for why he had not found the acid previously, which was that "the acid wholly escaped… [which] may easily be accounted for, from the small proportion of the acid principle in proportion to the water, and the extreme volatility of it, owing, I presume, to its high phlogistication when formed in this manner" (Priestley 1789, p. 8). However, this did not explain why the acid was now retained, and still did not take into account Cavendish's explanations (1784, pp. 134, 136-137) for how to produce results only involving water. Priestley (1789, pp. 7-8) argued that his experiment differed from Cavendish's in that the latter's was "a very slow one by electricity, and mine is a very rapid one by simple ignition". However, Cavendish's experiments were either performed with explosion by electricity, or were slow and involved ignition by a candle. Moreover, this argument ignored Cavendish's recipe for producing pure water by using an excess of inflammable air.
Priestley (1789, p. 8) also argued that there was "no contradiction whatever" between his experiment and Cavendish's experiment. Priestley (1789, p. 8) proposed that "phlogisticated air may contain phlogiston, and by means of electricity this principle may be evolved, and unite with the dephlogisticated air (or with the acid principle contained in it) as in the process of simple ignition the same principle is evolved from inflammable air, in order to form the same union, in consequence of which, the water, which was a necessary ingredient in the composition of both types of air, is precipitated". Priestley's basic theory was that in the experiment, the water that was in each air was released, and the phlogiston and the "principle of acidity" combined to form the acid (cf. 1789, p. 8).
Priestley (1789, pp. 8-9) now reported that he had mixed phlogisticated air with his other airs, and found that it was not affected, as in his previous experiment; he then stated that he had tried the experiment "with atmospheric air instead of dephlogisticated air", and in this case he found that the "consequence was the production of much less acid than before, the liquor I produced being sometimes not to be distinguished from pure water". This now implicitly recognised one of Cavendish's (1784, pp. 133-134) results. 31 However, Priestley did not give recognition to Cavendish's (1784, pp. 138-139) finding that when the dephlogisticated air was very pure, adding some phlogisticated air to the experiment made the result more acid, which was fundamentally contrary to Priestley's whole theory.

… (Schofield 1966, p. 250), Priestley (1789, pp. 9-10).

28 The likely explanation of this has been given in a previous sub-section. If the smell was as Priestley reported, then his acid presumably included some sulphuric acid, as well as the nitrous acid that was identified in Keir's analysis (in Priestley 1788b).

29 Priestley was originally uncertain as to which acid this was; Priestley to Ingenhousz, 24 November 1787 (Schofield 1966, p. 249). This was confirmed as nitrous acid by Withering; Priestley to Wedgewood, 8 January 1788 (Schofield 1966, p. 249); also Priestley (1788b, p. 321).

30 Priestley to Wedgewood, 17 August 1788 (Schofield 1966, p. 250).
Despite his maintenance in 1789 and 1790 of the public position that he always produced an acid in the experiment, which will be detailed in the section on water below, and despite what he had reported to Wedgewood in August 1788, he (1791, p. 217) subsequently stated in public that "I am now able to procure, either nitrous acid or pure water, from the same materials". In his later polemics he (e.g. 1796, p. 51) continued to argue that "when dephlogisticated air and inflammable air, in the proportion of a little more than one measure of the former to two of the latter, both so pure as to contain no sensible quantity of phlogisticated air, and inclosed… and decomposed by taking an electric spark in it, a highly phlogisticated nitrous acid is instantly produced, and the purer the airs are, the stronger is the acid found to be". In effect, he continued not to recognise the part of Cavendish's (1784, pp. 138-139) findings that disconfirmed his theory, and he also continued not to recognise the smallness of the quantity of nitrous acid in the result.
The hypotheses concerning the production of other substances during the experiment
The first of Priestley's late papers set a precedent for the type of auxiliary hypothesis that he produced when other substances were found in the results of his experiment. He simply claimed that they had also been formed in the experiment. He (1788a, p. 154) argued that the small amount of acid that was in dephlogisticated air "may well be supposed to be employed in forming the fixed air, which is always found in this process".
In his (1789, pp. 11-12) repetition of the experiment, he produced the dephlogisticated air using some mercurius calcinatus which had originally been made by Cadet and of which he now sent the residue to Berthollet. He found that a considerable portion of the air that remained in the vessel was fixed air, and he now came to the conclusion that in this case it was this acid with which his liquid was impregnated, not the nitrous acid. 32 Priestley (1789, p. 13) also undertook an experiment reducing mercurius calcinatus (which had been sent to him by Berthollet) in inflammable air, until his inflammable air had been reduced to a residue which was a quarter of the original, and of which a small proportion was fixed air. This, he argued, was "abundantly more than the weight of the plumbago". This took into account the possible presence of impurities in his inflammable air but did not allow for the likelihood that his fixed air had been previously absorbed from the atmosphere into the calx. Priestley now concluded that the fixed air in the liquid was the product of the parts of dephlogisticated air and inflammable air that were not water.
His new overall argument from all these experiments was that "when either inflammable or dephlogisticated air is extracted from any substance in contact with the other kind of air… the result will be fixed air; but that if both of them be completely formed before their union, the result will be nitrous acid" (1789, p. 12; 1790c, p. 536). Priestley (1789, pp. 16-17) re-affirmed his views, arguing that his experiments "establish[ed] the doctrine of phlogiston", and that "I apprehend it will not be denied, that the produce of this decomposition is not mere water, but always some acid".
When he (1791, p. 217) finally stated publicly that the result was not always some acid, he (1791, p. 213) still did not deal with Cavendish's (1784) indications that fixed air was an impurity, nor with Berthollet's criticisms of his views on the formation of fixed air from dephlogisticated air and inflammable air, but merely argued without giving any evidence that this formation was "sufficiently evident". This was a direct echo of a rhetorical technique that Priestley had used in his ecclesio-political controversies: for example, he (1785b, p. 544) had claimed that "it is sufficiently evident that Unitarian principles are gaining ground every day" even though "for the present we see no great number of churches professedly Unitarian". It is noteworthy that using this technique, any inaccurate statement could be claimed to be accurate.
In the 1791 paper, when admitting that the result could be pure water "if there be a redundancy of inflammable air in the process", he now argued that the principle of acidity that was in the dephlogisticated air and the phlogiston that was in the inflammable air could form the phlogisticated air that was residual in the experiment (1791, p. 221; 1796, p. 52). His (1791, p. 220) overall argument was now that "it is very possible that the pure water that we find may be nothing more than the basis of the two kinds of air, and the principle of acidity in the dephlogisticated air and the phlogiston in the inflammable air, may combine to form a superfluous acid in the one case, and the phlogisticated air in the other".
Maclean in his Two lectures pointed out, among other objections, that Priestley was not taking into account Séguin's (1791, p. 48) identification that there was a temperature cutoff point below which nitrous acid was not formed (Maclean 1797, p. 42), 33 and the confirmation of this by Pelletier and Jacquin (Berthollet 1791, p. 140) and van Marum (1792, p. 139). Maclean also pointed out Séguin's (1791, p. 35) identification that the additional phlogisticated air in the French experiment was due to imperfect exhaustion of the vessels (Maclean 1797, p. 43), and that Priestley was not actually testing his gases, so that the azote in his experiment probably arrived in his oxygen gas (Maclean 1797, p. 47).
Priestley countered "that phlogisticated air can be produced from the same materials from which I get nitrous acid, viz. dephlogisticated and inflammable air, I have given various and sufficient proofs" (1797, p. 36). This was another version of his all-purpose rhetorical technique for claiming without evidence that an inaccurate statement was accurate. Priestley continued to argue that the Lavoisians "do not deny that they had a surplus" of phlogisticated air (1797, p. 34) and that the Lavoisians in their experiment had produced more phlogisticated air "than they could well account for. This quantity, therefore and perhaps something more (since the operators were interested to make this as small as possible) must have been formed in the process" (1800, p. 44). These claims continued not to take into account Séguin's actual explanation.
Priestley quoted Berthollet and Fourcroy's (1798, p. 306) identification that "the small quantity of acid which is commonly found in this process comes from the azote, which is mixed with the gas", but Priestley (1800, p. 44) argued that "if this was the case, they could never get water free from acid, because they can never wholly exclude azote". This did not take into account Priestley's own (1791, p. 217) admission that he could obtain pure water from the experiment, as well as continuing not to recognise the explanation produced by Séguin, following Cavendish, that the remaining phlogisticated air had been due to imperfect exhaustion of the vessels prior to use for the experiment or due to impurities in the original dephlogisticated air.

The question whether or not water is decomposed

Priestley (e.g. 1785a, pp. 284-289; 1786, pp. 126-127) had interpreted the experiments that he had done prior to 1787 on the combination of dephlogisticated air and inflammable air as showing that it was water that was produced. For example, he (1786, pp. 139-142) burned together dephlogisticated air and inflammable air (from iron and oil of vitriol) and he produced water that was "perfectly free from acid". 34 This implied that water was a compound. Priestley (1788a, p. 147) stated that he had never been able to find any acid in the liquid resulting in his previous experiments of this type. In making the U-turn that he now always found nitrous acid, which he used to support his view that water was not decomposed, he (1788a, p. 147) argued that previously he must not have taken sufficient precautions.
Yet as has already been noted, by 17 August 1788 he was able to produce either water or an acid vapour "at pleasure". By contrast, in public he (1789, p. 7) continued to argue that "when the experiments were conducted with due attention" he "never failed… to produce some acid", and in his subsequent letter to Wedgewood of October 1790 (Bolton 1891, p. 103), Priestley took this official line. In his 'methodized' volumes, he (1790c, p. 546) continued to argue that "in what manner soever dephlogisticated air and inflammable air be made to unite, they compose some acid and in no case pure water".
At this stage Priestley became aware of the experiments of Van Troostwyck and Deiman (1789) in which they produced dephlogisticated air and inflammable air separately from water by electricity. Priestley's (1790c, pp. 543-544) "general reply" to these was outstandingly noteworthy. He started by arguing that "it must be acknowledged that substances possessed of very different properties, may, as I have said, be composed of the same elements in different proportions, and different modes of combination". He then noted that "it cannot therefore be said to be absolutely impossible but that water may be composed of these two elements, or of any other", 35 but argued that "in what degree it contains [these principles], we cannot tell". Once again, this was an all-purpose rhetorical device that could be used to argue a case against any accurate evidence. Priestley went on to modify, if not to abandon, his argument that water was not composed or decomposed by arguing that "this is no argument against the doctrine of phlogiston, since it only proves that this principle is contained in water, more or less intimately combined, as well as in many other substances". In Priestley's view, he had now replied to Van Troostwyck and Deiman, until he could repeat their experiments more conclusively.
When he (1791, p. 217) reported that "I am now able to procure, either nitrous acid or pure water, from the same materials. I constantly observe, that if there be a surplus of dephlogisticated air, the result of the explosion is always the acid liquor, but if there be a surplus of inflammable air, the result is simply water", he (1791, p. 219) also stated "I claim no merit whatever in this observation". In effect Priestley was repeating Cavendish's (1784, pp. 136-137) observation concerning how water could be produced in the experiment. Whereas Cavendish had given reasons for the difference, Priestley (1791, p. 220) was "by no means able to assign any reason for this difference". Priestley (1791, pp. 213-214) repeated the back-up argument that he had already used concerning the experiments of van Troostwyck and Deiman, that the doctrine of phlogiston would "not be affected by the most decisive proof of the composition of water from dephlogisticated air and inflammable air, since this would only prove, that phlogiston is one constituent part of water, which is an opinion that I have advanced, and mentioned on several occasions". Priestley also departed from his water-as-element view, as well as his view that dephlogisticated air and inflammable air did not form water, by stating that he (1791, p. 219) "concluded that nitrous air, though consisting of the same elements with pure water, 36 contains a greater proportion of dephlogisticated air". In 1791, therefore, he abandoned his (1788a, p. 154; 1790c, p. 535) central argument that water was not composed or decomposed.

34 Priestley (1783, p. 427) also found water produced when his inflammable air came from charcoal: in that case the dephlogisticated air came from nitre and both gases were stored over mercury and fired by electricity: he found "a manifest deposition of water".

35 The last of these italics have been added for emphasis.
In Priestley's (1793) paper on the generation of air from water, he (1793, pp. 33 and 36) said that he "could not help concluding that the whole of any quantity of water is convertible into air by means of heat". 37 He made a wide inferential leap to the view that "the whole of the atmosphere may have been originally formed from water by means of heat" (1793, p. 36). A second inferential leap led him to conclude that "since the atmosphere consists of both dephlogisticated air and phlogisticated air, it is evident… that… water must contain both these elementary ingredients, which is an idea which neither myself nor the French chemists had formed of it, since, according to them, it consist of dephlogisticated air and inflammable air, and phlogisticated air (or, as they call it, azote) is a simple element not contained in water, while I and other chymists had considered water as a simple elementary substance" (1793, p. 37). He qualified his new conclusion by arguing that "what I have before advanced concerning water,… viz. that it is the proper basis of every kind of air, may be, and probably is, strictly true" (1793, p. 37). This resulted in the compositional circularity that dephlogisticated air and phlogisticated air were constituents of water which was a constituent of each of them.
However, when producing his later polemics, he once again ignored his (1791, p. 219; 1793, p. 37) arguments that water was a compound. To Mitchill's (1798) attempt to produce a compromise between the systems, Priestley (1798) replied that "in my opinion there can be no compromise between the two systems… water is either resolvable into two kinds of air, or it is not", supporting the latter position.
Concerning Van Troostwyck and Deiman's experiments using electricity to produce dephlogisticated air and inflammable air from water, Priestley (1800, p. 54) now stated that the two airs were produced from water in these experiments, "tho' with infinite labour". He now objected that the experiment was "very complex" and continued to argue that "several agents are concerned, and what, and how much, to ascribe to each of them it is not easy to say". He continued to state that in his own experiment the last air produced from water was "wholly phlogisticated air, 38 of the nature of which we know but little". He also argued that the combination of the two airs being "sometimes spontaneous, without the electric spark being taken in them, shews that at least part of the air produced is phosphoric; and it is well known that the electric spark is always accompanied by the smell of phosphorus". He also stated that he did not see how it was possible to conduct the experiments with only water involved. 39 Once again, given this repertoire of rhetorical techniques, he could argue that any accurate experimental evidence was inaccurate as well as the reverse.
Despite all the U-turns, inconsistent arguments and rhetorical techniques that have been outlined in this sub-section, he (1802, p. 154) still maintained that the modern hypothesis of the decomposition of water was "wholly chimerical".
All the details that have been given in this section now allow Priestley's polemical claims against Cavendish and the antiphlogistians to be assessed in the next section in the light of a detailed understanding of his own practices in his late work and polemics in chemistry.
Priestley's polemical claims in the light of his own practices
This section will assess each of Priestley's claims that have been summarised in the Introduction. Firstly, he argued that "I cannot help thinking that what I have observed in several of my former publications has not been sufficiently attended to, or well understood" (1800, p. 3). Yet in the cases of water and related airs, Cavendish and Lavoisier had achieved a very detailed understanding of the experiments, the needs of the apparatus, the methods of reducing the amount of impurities in the apparatus, and the methods of achieving a result of pure water despite the presence of impurities in the airs and apparatus. Very detailed attention had been paid to the problems of Priestley's experiments, especially by Berthollet, Rupp, Woodhouse, and Maclean, who had pointed out, among other matters, that Priestley had not sufficiently attended to his own previous work or to the work of Cavendish. Priestley never recognised the part of Cavendish's (1784, pp. 138-139) work which directly invalidated his (1788a) theory.
He argued that he had not seen "sufficient reason to change my opinion" (1800, p. 2). Yet in view of the array of rhetorical techniques which he could use to argue that any inaccurate result of his own was accurate and that any accurate result of the antiphlogistians was inaccurate, he would never have seen sufficient reason to change his opinion.
He claimed that his apparatus "was perfectly simple, so that nothing can be imagined to be less liable to be a source of error" (1800, p. 48), but the very numerous problems of his apparatus, which were pointed out to him by the antiphlogistians, have been explored in the previous section. He argued that the apparatus of the antiphlogistians "does not appear to me to admit of so much accuracy as the conclusion requires, and there is too much of correction, allowance and computation in deducing this result" (1800, p. 44). However, Priestley (1788b, pp. 323-330) had been happy to publish Keir's estimate of the quantity of acid present in Priestley's resulting liquid, which involved correction, allowance and computation, and which was inaccurate by a far greater percentage than the Lavoisians' result. Priestley (1800, p. 50) argued that his apparatus had the advantage that it was "less operose and expensive" than that of the French chemists. However, Cavendish's eudiometer, which he used for his experiments, and another apparatus designed by van Marum (1792, p. 114), were less difficult and expensive than Meusnier's (Lavoisier et al. 1783b). Van Marum's detailed report on his apparatus was sent to Berthollet and quickly published in the Lavoisians' journal, Annales de Chimie. 40 More generally, simplicity and lack of expense were advantages in experiments if and where the variables involved in the experiment were being adequately tested, but not otherwise.
Priestley claimed that "no person who has made near so many experiments as I have, has made so few mistakes" (1800, p. 4). The evidence concerning his experiments from 1783 onwards on water and related airs suggests that few persons have ever made so many experimental and interpretational errors, so many loose judgements, so many U-turns and so many purely rhetorical rejections of valid criticisms on a single type of experiment, as Priestley.
He (1800, p. 44) argued that the Lavoisians had only made one experiment in which the result was free of acid, and that this was an inadequate basis for generalisation. This did not take into account that Cavendish (1784, p. 133) had identified the general correlations that showed how to produce results involving pure water, that the Lavoisians' final experiment had taken these on board as well as their own previous experience, that Séguin (1791) had indicated that the Lavoisians could produce water that was free of acidity at will, and that Priestley (1791, p. 217) himself had stated that he could produce water that was free from acidity at will. 41

He said to the antiphlogistians that "no man ought to surrender his judgement to any mere authority", and that as "you would not, I am persuaded, have your reign to resemble that of Robespierre, few as we are who remain disaffected, we hope you would rather gain us by persuasion, than silence us by power" (1796, pp. i-ii [33-34]; 1800, p. xi; 1803, p. xiii). Several points need to be made about Priestley's challenge.
Firstly, there was no mechanism by which the antiphlogistians could have silenced Priestley or other phlogistians by power, even if they had wished to. Priestley's articles were welcomed by Samuel Mitchill and frequently appeared in the Medical Repository; they also frequently appeared in Nicholson's Journal; and they were given pride of place in the 1799 and 1802 editions of the Transactions of the American Philosophical Society. In addition, Crell's journal and Observations sur la Physique under de la Métherie continued to welcome articles by phlogistians, when such were forthcoming. By contrast, the reason it had been necessary for the antiphlogistians to establish Annales de Chimie had been the difficulty of getting their articles published by the hostile de la Métherie. Also, Priestley's works on science generally sold well, and he had always found a publisher or printer, even in the cases (e.g. 1788c, d) where his religious subject matter was of such limited interest to the book-buying public that he had to pay for the edition himself.
Secondly, although Priestley argued that "no man ought to surrender his own judgement to any mere authority, however respectable", this did not stop him (1800, p. vi) from citing the names of Crell, Westrumb, Gmelin and Meyer when stating that "no person needs to be ashamed of avowing an opinion which has the sanction of names such as these". Berthollet and Fourcroy (1798, p. 309) said that Priestley would undoubtedly be pleased to hear that in France, de la Métherie, Sage and Baumé were also phlogistians, as well as other chemists of lesser rank, but Priestley (1800, p. vi) argued that there were fewer remaining phlogistians in France than in England. Thirdly, Berthollet's (1789) and Berthollet and Fourcroy's (1798) articles argued that Priestley's experimental methods and his interpretations of his experiments had been flawed, not that he should not hold independent opinions.
Fourthly, in practice Priestley had an asymmetrical view of intellectual authority. On the one hand Priestley (1794, p. ix) rejected the intellectual authority of others to the extent of saying that educational institutions were not to be regarded as sources of information. However, on the other hand Priestley did not avoid asserting the rectitude of his own positions. For example, he asserted that his type of materialism "is that philosophy which alone suits the doctrine of the Scriptures,… every other system of philosophy is discordant with the Scriptures" (1777, p. 302), and he (1803, p. vii) took the triumphalist and illiberal position that he had produced "a demonstration of the doctrine of phlogiston and a complete refutation of the composition of water", and proposed that the Lavoisians' theory should be abandoned altogether (Priestley 1803, p. xviii). 42 In effect, Priestley protested at any restriction to his own way of thinking while sometimes proposing that others should follow his own opinions.
Fifthly, Priestley's own language could be remarkably unpersuasive, especially when he argued that he was being persuasive rather than using force, for example when he (1787, p. 17) argued with the Prime Minister, Pitt. He (1787, p. 41) referred to Pitt as a "youth" and (1787, p. 2) stated that Pitt had been "misled by your education and connections". Priestley (1787, p. 1) stated that he was entitled to gratitude in that he was suggesting ideas which appeared to him to be clearer than those that Pitt "seemed to be possessed of". Priestley (1787, p. 3) stated, from his position as an older man than Pitt, that "honesty is the best policy", and admonished Pitt to "keep this in view in all measures of policy". He also used the language of power in that he (1785b, p. 544) stated that the arguments of the Dissenters were like gunpowder and he (1790d, p. 311) predicted that this "gunpowder… which will certainly blow [the system of the hierarchy] up… and perhaps as suddenly… and as completely… as the overthrow of the late government of France". The antiphlogistic responses to Priestley (e.g. Berthollet and Fourcroy 1798) were far more polite and potentially "persuasive" to Priestley than Priestley could be to opponents. Sixthly, as has been seen, Priestley's late theory and rhetorical methods allowed him to be effectively immune to persuasion by the usual scientific means of the production of experimental evidence.
All the evidence that has been collected in this and the previous section shows that Priestley's complaints concerning inattention, authority, rule and so on, only seem at all plausible when his own practices in chemistry and in rhetoric are not taken into account in detail.
A brief survey of selected secondary literature
Some early chemists saw Priestley's late work on water and related airs with clarity. For example, Black stated in his lectures at Edinburgh that "It is… difficult to procure vital air perfectly pure, and, especially, free from azote. The red nitrate of mercury affords the best process I know for it. But I have generally found it tainted with nitrous air, and with azote. I call your attention to this circumstance, because many of Dr. Priestley's experiments, by which he still thinks that the theory of Stahl is supported, have had results which were certainly owing to such impurities. I have particularly in my eye at present those which he published in 1792". 43 Thomson (1830, v.2, p. 22) argued that Cavendish's (1784) facts concerning nitrous acid in the experiment "invalidate the reasoning of Priestley altogether; and had he possessed the skill, like Cavendish, to determine with sufficient accuracy the proportions of the different gases in his mixtures, and the relative quantities of nitric acid formed, he would have seen the inaccuracy of his own conclusions". More generally, Thomson (1830, v. 2, p. 137) judged that "Dr. Priestley… was so hasty in his decision, and so apt to form his opinions without duly considering the subject, that his chemical theories are almost all erroneous and sometimes quite absurd". This judgement is amply borne out by the evidence in the present paper. It is interesting that Thomson's view can be seen as in conformity with some of the types of judgement Priestley made concerning opponents' work. Partington (1962, pp. 270-271) stated about the late work that "Hartog says 'Priestley henceforth displays what seems to us a perverse ingenuity in adapting the phlogiston theory to fit every new fact' and it would be tedious to follow him through this labyrinth of error", and Partington (1962, p. 293) judged that "Priestley's later papers… are of little or no interest and are mostly inaccurate". The present paper has illustrated quite how accurate was the description "labyrinth of error" for Priestley's late work on this topic. Holmes (2000, pp. 91 and 93) noted the "typical pitfalls that Priestley's casually stated 'opinions' on the composition of the airs which he studied set for his followers. When enthusiasts such as Volta tried to fill in the details left unexplored in the lapidary formulations of their leader, they were easily led into contradictions hidden from them by their allegiance to Priestley's general 'doctrine of airs'". Holmes (2000, p. 99) pointed out that Priestley actually had a "casual attitude toward the 'speculations' that he allowed himself while insisting that it was really only the 'facts' that counted". Holmes (2000, p. 103) stated that "Historians as well as contemporaries have generally been sympathetic to the personal credo that Priestley stated so strongly… He appears open-minded and democratic, committed to a kind of science in which everyone can participate and no one has particular authority. But… Priestley was professing principles that he did not in fact fully practice….
Priestley was flexible only within the limits of the broad 'modern doctrine of airs' that he had initiated", 44 and this is amply justified in the case of Priestley's later theory on water.
An attempt to explain Priestley's defence of phlogiston was made by Verbruggen (1972), but this had numerous problems of which the following are arguably the most noteworthy. Verbruggen (1972, pp. 47-50) gave prominence to Priestley's (1791) paper and Lavoisier's 1785 experiment. This is a historical error since Priestley's paper came after the Lavoisians' later and corrected large-scale experiment (1790), to which Priestley (1791) specifically referred. Verbruggen also did not take fully into account Cavendish's (1784, 1785) work and did not take into account the rhetorical nature of several of Priestley's claims. Verbruggen (1972, p. 48) effectively altered the implication of Black's remark by suggesting that Black thought that Priestley's use of impure chemicals was the direct cause of his adherence to the theory of phlogiston. One of Verbruggen's (1972, p. 52) central arguments then was that the acidity in both the experiments of Lavoisier and Priestley was due to impurities. However, firstly this misconstrued Black's remark, which suggested that Priestley's argument that his experiments supported his theory was incorrect due to impurities, not that impurities caused Priestley's adherence to the theory of phlogiston. Secondly, Cavendish's (1784, 1785) findings solved the problem with Lavoisier's (1785) experiment 45 and the Lavoisians' (1790) experiment was free from acid, while one of Cavendish's (1784, pp. 138-139) findings was directly contrary to Priestley's theory, and Priestley never acknowledged or solved this problem. Verbruggen (1972, p. 48) also did not take into account that Priestley's (1788a, b, 1789) papers resulted in several objections which were based on Priestley's failure to take into account Cavendish's careful experimental work. Verbruggen (1972, p. 54) argued that the Lavoisians procured oxygen that was "quite pure and free from nitrogen". However, in the full report of the later version of their experiment they stated that they wanted to obtain pure vital air, and took several precautions including driving off the atmospheric air in the apparatus before filling with the vital air and also letting go the first products from the calx that they used (Fourcroy et al. 1791, pp. 267-268), but they also stated that they actually produced 97% pure vital air, with the remainder being azote gas (Séguin 1791, p. 35). Verbruggen (1972, p. 54) then noted that on conclusion, the container in which the combustion had taken place contained carbonic acid and azote gas as well. However, Séguin (1791, pp. 35-36) stated that the excess of measured total gas from the output over the measured input gas was very probably due to the small quantity of atmospheric air which remained in the gasometers, after exhaustion and before filling with the vital air and hydrogen gas, and that it was almost impossible to exclude impurities altogether. All this was in direct agreement with Cavendish's (1784) statements on the impossibility of full exhaustion of the chambers before filling with the intended airs and on the impracticability of obtaining totally pure input airs. Verbruggen (1972, p. 49) quoted Priestley's (1791, p. 215) argument that his dephlogisticated air contained "no sensible quantity of phlogisticated air". Verbruggen did not take into account the frequent drastic variations in Priestley's reports concerning the degree of purity of his dephlogisticated air,
including his (1789, p. 11) admission that phlogisticated air "could not be excluded, whether it was by that which remained in the vessel after exhausting it by the air pump, or that with which the dephlogisticated air was more or less contaminated", nor that there was no practicable way that he could have produced dephlogisticated air of the degree of purity that he claimed in 1791. Verbruggen (1972, pp. 48-49) then suggested that in order to "refute" the argument of the Lavoisians, Priestley argued that an increase in the quantity of the phlogisticated air in the experiment produced a decrease in the acid formed. However, firstly this did not take into account that it was Cavendish who had originated the experimental findings that were crucial to the case, and that he (1784, pp. 133-134) had identified that when there was much phlogisticated air present, an increase in its quantity produced a decrease in the acid formed, but (1784, pp. 138-139) when the dephlogisticated air was relatively pure the introduction of phlogisticated air increased the amount of acid, and Priestley never acknowledged the latter finding. Also, Cavendish and the Lavoisians were effectively in agreement on this and had been so since 1785, so that this was not a distinction between phlogistic and antiphlogistic theories, but one between Priestley's view and those both of a phlogistian and of the Lavoisians, who each conducted this particular type of experiment with apparatus that allowed for identification of more parameters and for greater accuracy. Verbruggen (1972, p. 48) also quoted Priestley's argument that if any quantity of nitrogen was combined with oxygen or hydrogen, the combustion proved to change neither the quantity nor the quality of the nitrogen involved. However, Priestley (1791) was producing his airs directly into the chamber in which he made the explosion, so that the apparatus did not involve any method of checking the quantity or quality of the phlogisticated air that was initially included or eventually present. His rhetoric continued not to take into account Cavendish's (1784, pp. 133-134, 138-139) careful prior findings as to the changes that were produced by differing amounts of initial nitrogen. Verbruggen (1972, p. 49) quoted Priestley's (1791) argument that in the case of a surplus of hydrogen, nitrogen was produced. This did not take into account his (1789, p. 11) admission that phlogisticated air "could not be excluded, whether it was by that which remained in the vessel after exhausting it by the air pump, or that with which the dephlogisticated air was more or less contaminated". Verbruggen did not recognise that since 1789 Priestley had changed the colour of the mercury calx from which he extracted his dephlogisticated air but apparently not any of his methods for doing so, that there was no way of extracting residual phlogisticated air from incoming dephlogisticated air, and that Priestley had changed his apparatus in such a way as to introduce a new way in which phlogisticated air could enter the chamber. Verbruggen (1972, p. 49) quoted Priestley's (1791) indication that if there was an excess of inflammable air, pure water was produced, without noting that Cavendish (1784, pp. 135-136) had previously published this, and that Priestley (1789, p. 7; 1790a, p. 546) had been continuing to maintain in published work that he never failed to produce an acid.
It was only Priestley's awareness of the new experiment by the Lavoisians (1790) that in effect impelled him to publish that he could form pure water in this experiment at will. Verbruggen (1972, p. 50) then stated that it would be obvious to the reader that Priestley's view of the formation of nitrogen and nitrous acid was due to his use of impure oxygen. Verbruggen appears to be referring to the modern reader, but the more important issue is that it was also obvious to a contemporary reader who had read Cavendish's (1784) paper with full attention, or read Berthollet's (1789) paper, or any of the papers about the Lavoisians' experiment (1790; Fourcroy et al. 1791; Séguin 1791). Kirwan, who did undertake a "reflective reading" of Berthollet's (1789) paper, cited it as one of his reasons for giving up phlogiston (in Lavoisier 1997, p. 227), and cited the Lavoisians' 1790 experiment as another reason (Kirwan 1791). Verbruggen (1972, p. 66) argued that Priestley's observations were no less accurate than those of Lavoisier and the other antiphlogistians, with regard to "the perception and description of all phenomena that make up a chemical reaction." This is a drastic misrepresentation of the situation, which was that Cavendish had produced the most accurate perception and description of the phenomena, and that the Lavoisians' (1790) experiment had followed Cavendish's findings, while Priestley had continued not to recognise Cavendish's (1784, pp. 138-139) finding that disconfirmed Priestley's theory, and had produced a "labyrinth of errors".
Schofield produced statements about this period of Priestley's output on several occasions. He (1964, p. 289) wrongly stated that "the experiments of Priestley, of Cavendish and of Lavoisier and his adherents report that an acid was obtained, not simple water", and argued that "Priestley, the brilliant experimenter, was totally unable to ignore this production of acid". Schofield (2004, p. 183) took at face value Priestley's repeated statements that the more phlogisticated air he added to the pure and inflammable airs, the less acid he obtained, and went on (2004, p. 192) to argue that "failure to take [Priestley's] experiments seriously reveals a major flaw in the new chemistry, for the experiments did not simply reveal an acid, they showed that the amount of the acid was to be controlled not by elimination of an impurity but by its deliberate introduction… or by slow combustion rather than explosion" and that "this solution was unavailable to the new chemistry". However, these arguments have all the problems that have already been identified concerning Verbruggen's version of the matter, and also did not take into account that the Lavoisians knew of Cavendish's findings concerning the experiments by the end of May 1785, and that these solutions were represented in the papers produced by the Lavoisians on their revised experiment in 1790-1791, 46 so that all this was indeed taken into account in the new chemistry. Furthermore, Schofield did not take into account Berthollet's (1789) arguments against Priestley's (1788a, b, 1789) papers. 47 Schofield (2004, p. 190) commented that Priestley's (1791) "paper had little influence on the growing acceptance of the 'new chemistry', nor did any paper by Priestley from now on". Schofield provided the explanation that the Nomenclature (Guyton de Morveau et al. 1787) and the Traité (Lavoisier 1789) had now been published, the Annales de Chimie had started to appear, and the new chemistry was now established. However, there are also crucial points that are directly related to Priestley's work. The Lavoisians could see that Priestley was several years behind Cavendish and the Lavoisians in terms of understanding the problems of the experiment, and when Berthollet (1789) told him this in detail, Priestley did not actually answer many of Berthollet's points, but began to parry criticisms with rhetoric, as has been illustrated in previous sections of the present paper, and this was not recognised by Schofield. Schofield (2004, p. 368) argued that Priestley "had no coherent system to substitute for the one he felt was inadequate", which was one main reason why his "attacks on individual experiments were futile". This was not fully correct: Priestley (1794, pp. 8-9) did have what he called a "theory, or system of principles", and Priestley continued to argue that his system was preferable to that of the Lavoisians. Priestley's attacks on individual experiments were indeed futile, as Schofield rightly pointed out, but this was because of the extreme problems of his late work which have been illustrated in the present paper. McEvoy (1990, p. 133) argued that Priestley's debate with the Lavoisians was influenced by a set of philosophical principles arising out of the synoptic unity and interaction of epistemological, metaphysical, methodological, theological and strictly scientific parameters in his thought. The inaccuracies of this are being demonstrated in a separate paper.
McEvoy (1990, p. 133) also argued that "the empirical adequacies of the competing theories have been virtually equivalent", but the inaccuracies of this have been demonstrated (Blumenthal and Ladyman 2017b) as well as in the present paper. He did not modify these claims in the light of his (1990, p. 139) admissions that Priestley's (1786, pp. 7-8) identification of phlogiston with inflammable air was "a simple experimental error", and that Priestley's (1788a, p. 156) claim that phlogiston had weight and perfectly corresponded to the definition of a substance was also untenable. Priestley later argued that phlogiston might or might not have weight depending on which of these suited his argument at the time. McEvoy (1990, pp. 141-142) supported Priestley's claim that both theories had difficulties, but the major differences between the new chemistry and the many phlogistic theories have been demonstrated (Blumenthal and Ladyman 2017b) as well as in the present paper. Conlin (1996, p. 129) claimed that "historians have generally found Priestley's defence of phlogiston not only to be rational but also to be meritorious in various ways" without giving references. Yet even writers such as Cooper (in Priestley 1806), Jeffrey (1806) and McEvoy (1990) have tended to include very little on his defences of phlogiston and have chosen to emphasise his attacks on the antiphlogistians' theory. Conlin (1996, p. 129) inaccurately claimed that Priestley "converted antiphlogistian James Woodhouse to phlogiston theory", which is disconfirmed in great detail by Woodhouse's (1799) actual work. Conlin made the lesser claim (1996, p. 130) that Woodhouse only "accepted a part of Priestley's phlogiston theory", but Woodhouse (1802) effectively abandoned only the specific hypothesis by Berthollet that heavy inflammable air was carbonated hydrogen, while noting Cruickshank's (1801) actually correct indication that it was an oxide of carbon containing half as much oxygen as carbonic acid (fixed air). Woodhouse (1799, pp. 465-466) had indicated that Priestley's hypotheses on finery cinder and heavy inflammable air were "very unsatisfactory", so it is incorrect to say that Woodhouse accepted part of Priestley's phlogiston theory. Even Priestley (1803, p. xviii) only stated that Woodhouse abandoned one part of the new theory. Conlin (1996, p. 129) gave the blanket argument that "Priestley was an inductive empiricist", but this does not apply to the basic views of the late theory. Conlin (1996, p. 129) argued that "Priestley graciously sent the antiphlogistians accounts of experiments which favoured phlogiston theory", but as the present paper illustrates, Priestley's experiments did not actually support his phlogistic theory. Chang (2012, p. 5) argues that Priestley "published well-informed and closely reasoned defences of phlogiston", but some of the many problems of this judgement are outlined in the present paper. Chang (2012, p. 7) argues that "historically well-informed philosophers have struggled to say exactly what was wrong with Priestley's stance", but the enormous number of problems with Priestley's stances and work in chemistry has been illustrated in the present paper and other papers (Blumenthal and Ladyman 2017a, b). All these matters are examples of why Chang's recommendation of normative pluralism for chemistry is not supported by late eighteenth-century chemistry.
There is a very large number of problems with Crosland's (1995) attempt at a social constructionist defence of Priestley. Crosland (1995, p. 110) argued that "as a plain Englishman [Priestley] often said that he was concerned only with the facts". However, another plain Englishman, 48 Cavendish (1784, pp. 133-134, 136-139), had determined facts which showed the errors of Priestley's (1788a, b, 1789, 1791) later views, of which Priestley did not recognise the one which was crucial. Priestley from 1783 onwards is more reasonably characterised as a controversialist who was primarily concerned to win an argument according to his own criteria, irrespective of how many "facts" he discarded in the process. Crosland (1995, p. 106) argued concerning Priestley that "it was the low cost of the basic apparatus of pneumatic chemistry which had been a major factor in attracting him to this field of science", (1995, p. 109) that Priestley "must have seen Lavoisier as very elitist, and in more ways than one" and (1995, p. 116) that "in any case, long drawn-out quantitative experiments were just not Priestley's style". Yet Priestley's own theory concerning the result of the experiment resulted in the need for apparatus and initial substances that would totally exclude impurities, which was far more impracticable than merely competing with Cavendish's and Lavoisier's apparatus. The numerous problems of his apparatus and initial substances were pointed out to him on numerous occasions, but he chose to parry these valid criticisms by rhetorically asserting the superiority of simple, cheap apparatus. It was and remains a mistake to assume that cheap apparatus and Priestley's own experimental style could be used satisfactorily, irrespective of the nature of the problems, and irrespective of the unique level of experimental difficulties which was caused by Priestley's own theory. Crosland (1995, p. 109) argued that "the two chemists belonged to contrasting traditions. They viewed the natural world and society from completely different standpoints" and (1995, p. 116) that "It was not so much that Priestley complained about the expense of Lavoisier's apparatus as that it belonged to a different world". However, it is more accurate to say that where experiments were concerned, Cavendish and Lavoisier belonged to the usual empirical tradition in which what mattered was finding out about the world, while Priestley belonged to a different "world" in which what actually mattered was not the ostensible aim to understand the chemistry but the underlying aim to "win" an argument in Priestley's style of controversy. Crosland (1995, p. 106) argued that "Priestley, like so many adherents of the phlogiston theory, thought of chemistry as a qualitative science". However, firstly, as Rodwell (1868, p. 30) pointed out, the phlogistians were aware of the weight issue but "generally omitted [it] from their handbooks". It was extremely difficult if not impossible to maintain a phlogistic theory and deal with weight considerations, so the simplest solution was not to deal with weight considerations. Secondly, the phlogistian Cavendish performed excellent experiments at this period using quantitative volumetric measurements, which resulted in the crucial understanding of several variables concerning this experiment.
Crosland (1995, p. 104) argued that "The French chemists seemed not to treat their opponents as equals but rather as misguided or even stupid colleagues, who failed to see the significance of the new evidence". Yet the "French chemists" as a whole were not united, as Berthollet and Fourcroy (1798) pointed out to Priestley, and the antiphlogistians treated their "opponents" on scientific merit, being very respectful of Cavendish's "beautiful experiments" 49 and treating Priestley with the respect due to his early work. Berthollet's (1789) prime arguments were that Priestley had failed to remember his own previous precautions, as well as failing to see the significance of his colleague Cavendish's work. Crosland (1995, pp. 109-110) argued that "One feature which united Priestley's career in religion, politics and science was his hostility to authority". However, in actuality Priestley was hostile to the supposed authority of others, even when they did not exercise any such authority, while being perfectly content to assert his own claims to authority, as has been illustrated in the previous section. Crosland (1995, p. 110) argued that already in 1790, reacting to the growing influence of the new theory of chemistry, Priestley advocated "putting an end to all undue and usurped authority in the business of religion as well as of science". Yet this involves a simple mistake, in that Priestley's (1790a, p. xxiii) quote was merely a transcription of his exactly similar (1774, p. xiv) remark, and so it was nothing to do with the new theory of chemistry. In addition, Crosland did not take into account all of Priestley's (1774, pp. xv-xvi; 1781a, b, pp. xv-xvi; 1790a, pp. x, xxvi, xxvii) politically non-partisan and non-nationalist remarks about science. Crosland (1995, p. 109) argued that "Priestley was modest in his claims and language". This does not take into account many statements by Priestley, for example (1775, p. ix) that "I may flatter myself, if it be any flattery, as to say, that there is no history of experiments more ingenious than mine", and that "no person who has made near so many experiments as I have, has made so few mistakes" (1800, p. 4). He also (1803, p. vii) claimed that he had "refuted" the "fallacious hypothesis" of the new chemistry, that this could only be "of great importance to the future progress of science", and removed "a great obstacle to the path of true knowledge". Crosland's (1995) paper falls into the general category of those which can aptly be characterised in the terms of Kusch (2015, p. 78) as adopting "lock, stock and barrel" Priestley's "actors' sociology".
A far better-supported socially-orientated analysis was produced by Golinski (1994), focussing on Lavoisier's and Meusnier's 1785 version of the experiment, so it is of particular interest to note what this did not take into account. Golinski (1994, p. 38) did not note that one reason why their experiment was not decisive was that a small amount of acidity was found. 50 However, the occurrence of acidity had been explained and two methods of avoiding the acidity had been given by Cavendish (1784). This paper had very recently been published in French, after the final preparations for Lavoisier's and Meusnier's experiment were under way. 51 It can be inferred that this was where the Lavoisians found the answer to the problem, in that Berthollet's letter to Blagden on 19 March 1785 (Duveen and Klickstein 1954, p. 60) stated that Lavoisier now wished to repeat the experiment by burning dephlogisticated air in inflammable air, in accordance with the "beautiful observations" of Cavendish. Golinski (1994, p. 38) suggests that Lavoisier's experiment convinced Berthollet, but this letter illustrates that Berthollet was influenced by two sets of experiments, those by Lavoisier and Cavendish. It can be inferred that it was also crucial to Berthollet that the results of these experiments added to the increasing difficulties that he had experienced from 1776 to 1782 in attempting to adapt a phlogistic theory to the increasing amounts of available experimental evidence, as will be demonstrated in a separate paper. Golinski did not cover the repetition by the Lavoisians of the large-scale experiment with crucial amendments in 1790, which solved the problem with acidity. As noted above, the 1790 experiment was cited by Kirwan in 1791 as one of the several major reasons why he had changed theory, the others being his own failure to demonstrate that the combination of dephlogisticated air and inflammable air could form fixed air, a "reflective reading" of Berthollet's criticisms of Priestley's experiments, and Guyton's article on Air in the Encyclopédie Méthodique (Lavoisier 1997, p. 227; Kirwan 1791).
50 It can be inferred that Lavoisier had not expected to find this, in that the Duc de Chaulnes asked Lavoisier by letter on 6 March to explain the acidity (Lavoisier 1986, pp. 77-78).
51 The first part of the French translation by Pelletier was published in December 1784 and the second part in January 1785 in Observations sur la Physique.
As Golinski (1994, p. 30) rightly noted, the accumulation of supposed "facts" caused confusion and the ramification of discussions into more and more areas. A lack of concern about linkage between views on different facts underpinned Cavendish's (1784) view that his set of phlogistic hypotheses explained the phenomena at least as well as the new chemistry (Blumenthal and Ladyman 2017a). Golinski did not note that this conviction began to fall apart after Cavendish's (1785) experimental work on the formation of nitrous acid using dephlogisticated and phlogisticated air, which showed that nitrous acid could not be produced when using only phlogisticated air, 52 as has been noted in the Introduction to the present paper.
Golinski (1994, p. 32) argued that "Priestley articulated a radically different form of scientific practice and condemned Lavoisier's supposed accuracy", but did not undertake any analysis of Priestley's apparatus and his claims to accuracy, which had the extraordinary problems that have been illustrated in the present paper. Also, Priestley's practices included rejecting any inconvenient evidence produced by opponents with the unsupported claim that it resulted from overly complicated apparatus, and supporting theories of his own without evidence as "sufficiently evident". This was indeed a radically different form of practice, but it is questionable whether it can reasonably be called "scientific". Golinski (1994, p. 32) rightly noted that "the controversy was eventually brought to a close, albeit in a prolonged and confused way that deserves further investigation". As had been the case with Berthollet, Kirwan's change followed more than one set of adverse experimental evidence combined with difficulties in developing a phlogistic theory satisfactorily to meet the totality of the new evidence, and it can be inferred that the same was the case with Cavendish's change. The present paper has illustrated that Priestley developed methods of retaining his own views and rejecting any evidence produced by others, and that he was freely able to go on publishing revised versions of his views and did so. He was generally treated with the respect that was due to him for his early discoveries in airs. However, his late views were treated on their merits. In the case of heavy inflammable air, his objection stimulated work which resolved that specific anomaly (Cruickshank 1801; Desormes and Clément 1802), while his late work on water and related airs and his late phlogistic theory were a "labyrinth of errors" of which a large number were pointed out to him by his opponents at the time. Undoubtedly there were intensely social aspects to the conduct of chemistry at this period, yet these were not constitutive of the chemistry itself.
Conclusions
The analysis in this paper has shown that in Priestley's late work on water and related airs, he put forward a theory which his apparatus and initial substances could only have supported if they had excluded impurities altogether. His theory did not take into account the solutions to the difficulties with the experiment which had been comprehensively understood and published by the phlogistian Cavendish several years previously, and with which the Lavoisians were in agreement. Priestley's interpretations were much looser than those of his selected opponents, he readily and fundamentally changed his interpretations of experiments in order to support the version of his theory that he favoured at the time, his basic compositional hypotheses were unfounded, and he was extremely selective about answering the criticisms of any opponent, especially those of Berthollet and Woodhouse. In replying, he used the arsenal of rhetorical techniques that he had honed in the very wide range of ecclesio-political controversies in which he engaged. 53 From 1791 onwards, when any objections were received, he produced a new defence of his position, utilising whatever arguments came to mind when writing, and increasingly not taking into account the actual value of the criticisms by his opponents. Nearly all his criticisms of the Lavoisians on these matters were unfounded, and this was why his criticisms had relatively little effect.
During this period, the new chemistry developed rapidly, so that textbooks at the time had to be frequently revised and expanded due to all the new discoveries. In contrast, Priestley spent his last fifteen years issuing variations of the same arguments, until finally he had apparently convinced himself that he had won a glorious battle and vanquished the new chemistry. Priestley was among several participants who continued to hold their own versions of a phlogistic theory: others who did so included Crell, Gmelin, Wiegleb, de la Métherie, Sage, Baumé, Cadet, Watt, and Keir. Of these others, de la Métherie was arguably the most prolific in continuing to issue public defences of phlogiston and attacks on the new chemistry. Yet no-one else combined the early prestige of experimental discoveries with well-developed rhetorical expertise and with apparent (and wholly inaccurate) belief in their own victory, in the way that Priestley did.
All this is of much wider importance as an example of how science progresses. In practice, it is not possible to determine scientific theories against all possible counter-theories and arguments. Participants who wished to defend a theory irrespective of its inconsistency and lack of testability could therefore do so, as Priestley did. However, this effectively came at the price of entering an infertile backwater in which he made no further progress in chemistry. By contrast, for participants in late eighteenth-century chemistry who were centrally concerned with developing a fertile way forward, after much work it became apparent that the best way of doing this was to identify the available coherent theory that was as experimentally testable as practicable, and this dealt with the issue of the very numerous ways in which theories could be compared (Blumenthal and Ladyman 2017b). All this has general implications for how experiments, apparatus and theories are chosen and defended, for how future directions of research are chosen, and for some of the problems with some stances in the history, philosophy and sociology of science.
Funding The funding was provided by Arts and Humanities Research Council (Grant No. 1225327).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
PREDICTION OF THE DURATION OF HOSPITAL TREATMENT OF PATIENTS WITH CATARACT
The aim of the paper was to evaluate and predict the duration of the hospital treatment of patients with cataract. Materials and methods of the investigation. 629 case histories were analyzed, together with the questionnaire forms of 60 patients. The assessment of case histories was done according to the scheme: sex, age, the stage of cataract, the duration of hospital treatment, the cause of treatment lasting more than 1 day (postoperative complication, treatment of concomitant eye diseases, patient's desire), the type of complication. 60 surveyed patients with cataract requiring surgical treatment were used to work out the model of the duration of hospital treatment. The questionnaire contained several questions whose answers allow evaluating the social status of patients, somatic and optic status, peculiarities of the main disease, and also the duration of the hospital stay after the operation. Results. It has been demonstrated that the patient's age, financial status, and the number of concomitant somatic and eye diseases are the main indices for predicting a long hospital stay after surgical treatment. Fuzzy logic was used to create the model predicting the duration of hospital treatment based on the results of the patients' questionnaires, and a clustering method was used to derive the fuzzy rules. Conclusions. The ophthalmologic status of the patient, the presence of concomitant eye diseases, such as myopia alta, glaucoma, diseases of the retina and optic nerve, and concomitant somatic disease play an important role in the choice of the type of treatment. The most informative indices for predicting the hospital stay of patients with cataract after surgery are the patient's age, financial status, and the number of concomitant somatic and eye diseases.
Introduction
Cataract is the most common cause of blindness in the world. According to modern data, cataract is the reason for blindness of over 18 million people in different countries all around the world [1]. The modern tendency of population aging increases the number of patients with cataract, so the number of blind people may exceed 40 million by 2025 [2]. The number of patients with cataract has increased during the last decades, and the most remarkable growth of morbidity is expected in the near future [3]. In countries such as the United States and Great Britain, cataract is still a common cause of visual loss, especially among African Americans and older adults [4]. As the proportion of persons aged 60 and older in the world's population increases, a shift in the burden of eye diseases to age-related causes will occur, resulting in cataract accounting for an even greater proportion of visual loss. By the year 2020, the projected number of persons with blinding cataract will exceed 40 million worldwide [5]. Over the past few years, the number of cataract patients has grown significantly in many countries as a result of population aging [6]. Despite substantial progress in cataract surgery, a considerable increase in the morbidity rate is expected in the near future. More than 60 % of surgeries performed in ophthalmological institutions concern cataract [7]. Quality cataract surgery has been shown to enhance visual function and quality of life.
The modern tendency in the organization of ophthalmological care for patients with cataract is out-patient or one-day surgery [8]. Nowadays, in many countries such an approach is the standard way of treating patients with cataract [9]. Out-patient treatment of cataract removes the need for inpatient beds, leads to savings of energy and financial resources, and also decreases the emotional, physical and financial costs to patients.
The prevalence rate of cataract in Ukraine ranges from 980 to 1200 per 100,000 population (based on people who need treatment). Recent statistics show that more than three million requests by citizens for medical care because of eye diseases are registered in Ukraine each year. In the structure of eye morbidity over the past 10 years, cataract takes the second place (11 %) after conjunctival diseases (30.7 %) [9, 10]. The present social and economic situation in the world is accompanied by a decrease in real incomes, which can reduce patients' visits to hospital and increase the cataract rate, especially of its mature form.
For some patients, hospital treatment of cataract remains the most appropriate and comfortable option. The factors favoring it include the patient's age, difficulties with independent movement, a distant place of residence (for example, in a region or another town), living without relatives, absence of care, poor financial status, and the presence of concomitant somatic pathology and eye disorders, which can cause the development of postoperative complications [11, 12].
The ophthalmological status of the patient, including the presence of concomitant eye diseases which can increase the risk of postoperative complications whose treatment requires a long hospital stay, plays an important role in the choice between out-patient and hospital treatment [13].
Aim
To evaluate and predict the duration of hospital treatment of patients with cataract.
Materials and methods of the investigation
629 case histories of patients with cataract, who were treated in Kharkiv Municipal Clinical Hospital No 14 in 2014−2015 with hospital stays of different durations, were involved in our investigation. The assessment of case histories was done according to the scheme: sex, age, the stage of cataract, the duration of hospital treatment, the cause of treatment lasting more than 1 day (postoperative complication, treatment of concomitant eye diseases, patient's desire), the type of complication.
60 surveyed patients with cataract requiring surgical treatment were used to work out the model of the duration of hospital treatment. The questionnaire contained several questions whose answers allow evaluating the social status of patients, somatic and optic status, peculiarities of the main disease, and also the duration of the hospital stay after the operation. A hospital stay of 0−1 days was considered a one-day operation.
The analysis of the patients' questionnaires gave the opportunity to take the anamnesis; evaluate the somatic, optic and social status, finances, and place of residence and its distance from the hospital; and determine the presence of chronic diseases such as ischemic heart disease, hypertensive disease (high blood pressure), diabetes mellitus, arthropathy, respiratory diseases, thyroid diseases, digestive system problems, kidney disorders, and other diseases.
The results obtained from the patients' case histories were analyzed and processed statistically using the method of expert assessment and the analysis of alternative characteristics [14].
Fuzzy logic was used to develop the model predicting the need for hospital treatment and its duration based on the results of the questionnaires [15]. The fuzzy rules of the prognosis model were derived using a clustering method [16]. The computer algebra system Scilab [17] with the sciFLT extension package was used for the clustering, optimization and fuzzy inference tasks [18, 19].
Before clustering, the corresponding indices characterizing patients with cataract and their somatic and optic status were calculated. Each disease of the patient was given 1 grade. The index of somatic status was calculated as the sum of grades for each concomitant disease present; a high index value indicates a poor somatic condition of the patient [20]. The optic index was calculated similarly, as the sum of grades for each concomitant eye disease.
Three variants of answers were proposed to patients to evaluate their financial status, with corresponding grades: good financial status was given 1 grade, satisfactory 2 grades, and unsatisfactory 3 grades.
The indices of age, financial status, the optic index and the index of somatic status summarize the prior information received from the questionnaires and case histories of patients with cataract. These indices were also the most informative for predicting the duration of hospital treatment after surgery, as illustrated in the sketch below.
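As a concrete illustration, the scoring just described can be written down in a few lines. This is a minimal sketch, not the authors' code; the disease lists and field names are hypothetical stand-ins for the questionnaire items.

```python
# Minimal sketch of the index scoring described above (hypothetical fields).

SOMATIC_DISEASES = ["ischemic heart disease", "hypertension", "diabetes",
                    "arthropathy", "respiratory", "thyroid", "digestive",
                    "kidney"]
EYE_DISEASES = ["myopia alta", "glaucoma", "retinal disease",
                "optic nerve disease"]
FINANCIAL_GRADE = {"good": 1, "satisfactory": 2, "unsatisfactory": 3}

def patient_features(age, financial_status, somatic, eye):
    """Return the four model inputs: age, financial grade,
    somatic index (1 grade per concomitant somatic disease),
    optic index (1 grade per concomitant eye disease)."""
    somatic_index = sum(1 for d in SOMATIC_DISEASES if d in somatic)
    optic_index = sum(1 for d in EYE_DISEASES if d in eye)
    return [age, FINANCIAL_GRADE[financial_status], somatic_index, optic_index]

# Example: a 72-year-old with satisfactory finances, two somatic and one
# eye comorbidity -> [72, 2, 2, 1]
print(patient_features(72, "satisfactory",
                       {"hypertension", "diabetes"}, {"glaucoma"}))
```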
Results
Table 1 presents the division of patients depending on the duration of the hospital stay. Based on the information presented in Table 1, it should be noted that more than 57 % of patients were treated in hospital for 4 to 7 days. 117 patients (19±1.5 %) took out-patient treatment, and 75 patients (11±1.2 %) were treated at the hospital for more than 7 days. Postoperative complications, cataract combined with glaucoma, and the treatment of vascular malformation were the main causes of long hospital treatment. Mature age was not in itself a cause of long hospital treatment, as each group contained from 50 to 70 % of people over 70 years old, and the average age of patients did not differ between groups. The received information was used to develop the model predicting the duration of hospital treatment. The diagnostic training sample was formed as a rectangular matrix, in which each row contains the data of one patient, and the columns contain age, financial status, the optic index, the index of somatic status, and the number of days of hospital treatment.
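The ± terms attached to the percentages appear to be standard errors of sample proportions; the paper does not state the formula, but the quoted values are consistent with the usual estimate. For the out-patient group, for example:

$$p = \frac{117}{629} \approx 0.186, \qquad SE = \sqrt{\frac{p(1-p)}{n}} = \sqrt{\frac{0.186 \times 0.814}{629}} \approx 0.0155,$$

which matches the reported 19±1.5 %; the same calculation for 75/629 gives 11±1.2 %.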
Good financial status was noted in only (3.3±2.2) % of patients (2 people), satisfactory in (70±5.9) % (42 people), and poor financial status in (26.7±5.7) % (16 people). Financial means determine whether a person can visit the hospital after out-patient treatment for control examinations or to continue treatment if necessary. As a rule, patients with poor financial means stay at the hospital, where they can receive qualified medical aid, nutrition and care twenty-four hours a day.
During the calculation of the index of somatic status it was established that 9 patients (16 %) did not have concomitant somatic pathology (correspondingly, the index was 0). One concomitant disease was present in 8 (13 %) patients, two diseases in 11 (18 %), three in 14 (23 %), four in 9 (16 %), five in 5 (8 %), and six and seven diseases each in 2 (3 %) patients. Consequently, two or more concomitant diseases were present in 71 % of the examined patients, which indicates a high possibility of postoperative complications and the necessity of hospital treatment.
After the diagnostic training sample had been formed, the synthesis of the prognosis model was performed with the initial parameters of the clustering algorithm: radii=0.5; accept ratio=0.5; reject ratio=0.15. In order to decrease the number of rules (by decreasing the number of clusters), the reject ratio parameter of the algorithm was tuned: it was increased from 0.15 to 0.49 with a step of 0.01, and the synthesis of the prognosis model was repeated at each step. The stopping conditions of the tuning were degradation of the prediction quality to a mean error of ∆>0.5, or reaching the minimal number of rules.
During this procedure, the model predicting the number of days in the hospital was obtained, and the optimal number of clusters and the mean prediction error ∆ were determined. The optimal parameters of the clustering algorithm were the following: the parameter determining the size of clusters (radii) = 0.5; the coefficient of suppression = 1.25; the coefficient of acceptance (accept ratio) = 0.5; the coefficient of rejection (reject ratio) = 0.4.
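The parameter names quoted here (radii, coefficient of suppression, accept ratio, reject ratio) are those of subtractive clustering, the rule-extraction method used by sciFLT-style fuzzy toolboxes. The sketch below is an illustrative re-implementation under that assumption, not the authors' Scilab code; each returned center corresponds to one fuzzy rule.

```python
import numpy as np

def subtractive_clustering(X, radii=0.5, squash=1.25,
                           accept_ratio=0.5, reject_ratio=0.4):
    """Chiu-style subtractive clustering on data scaled to [0, 1].
    Returns cluster centers; each center yields one fuzzy rule."""
    X = np.asarray(X, dtype=float)
    alpha = 4.0 / radii ** 2              # potential neighbourhood weight
    beta = 4.0 / (squash * radii) ** 2    # wider radius used for suppression
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    potential = np.exp(-alpha * d2).sum(axis=1)
    first_peak, centers = None, []
    while True:
        k = int(np.argmax(potential))
        peak = potential[k]
        if first_peak is None:
            first_peak = peak
        if peak > accept_ratio * first_peak:
            accept = True
        elif peak < reject_ratio * first_peak:
            break                         # remaining points too weak: stop
        else:                             # grey zone: require extra distance
            dmin = min(np.linalg.norm(X[k] - c) for c in centers)
            accept = dmin / radii + peak / first_peak >= 1.0
        if accept:
            centers.append(X[k].copy())
            potential -= peak * np.exp(-beta * ((X - X[k]) ** 2).sum(axis=1))
        else:
            potential[k] = 0.0            # discard as a candidate center
    return np.array(centers)

# Sweep of the reject ratio as described above: larger values reject more
# candidate centers and therefore give fewer rules (synthetic data here).
rng = np.random.default_rng(0)
X = rng.random((60, 4))                   # 60 patients, 4 scaled features
for rr in (0.15, 0.30, 0.40, 0.49):
    print(rr, len(subtractive_clustering(X, reject_ratio=rr)))
```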
The following approach was used to evaluate the adequacy of the developed prognosis model: one row corresponding to the data of a concrete patient was removed from the training sample, the model was synthesized again, and the expected number of days in the hospital was calculated for the removed set of indices. This control showed that the numbers of days that patients actually spent in the hospital agreed sufficiently well with the values obtained from the developed prognosis model.
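This validation scheme is leave-one-out cross-validation. A minimal sketch follows; the fuzzy model itself is stood in for by a simple distance-weighted average of the training patients' stays, since the point here is the evaluation loop rather than the model.

```python
import numpy as np

def weighted_average_predictor(X_train, y_train, x):
    """Stand-in for the fuzzy model: distance-weighted mean of stays."""
    w = np.exp(-((X_train - x) ** 2).sum(axis=1))
    return float((w * y_train).sum() / w.sum())

def loo_mean_abs_error(X, days):
    """Leave-one-out: drop one patient, refit on the rest, predict them."""
    X, days = np.asarray(X, float), np.asarray(days, float)
    errors = []
    for i in range(len(days)):
        mask = np.arange(len(days)) != i
        pred = weighted_average_predictor(X[mask], days[mask], X[i])
        errors.append(abs(pred - days[i]))
    return float(np.mean(errors))

# Synthetic example: 60 patients, 4 scaled features, stays of 0-10 days.
rng = np.random.default_rng(1)
X = rng.random((60, 4))
days = rng.integers(0, 11, size=60)
print(loo_mean_abs_error(X, days))
```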
For practical use of the resulting fuzzy model, the physician needs to install Scilab with the sciFLT extension package and save the file with the developed model of the duration of hospital treatment ("prediction_model.fls") on the hard disk. After that, the data of a patient requiring surgical treatment of cataract are entered into the installed program, and the expected duration of the hospital stay is calculated.
Discussion
Out-patient treatment of cataract removes the need for inpatient beds, leads to savings of energy and financial resources, and also decreases the emotional, physical and financial costs to patients. On the other hand, the peculiarities of the Ukrainian healthcare system, the financial funding of medical institutions, and the medical and social characteristics of patients with cataract (age, the presence of pathology, financial status, place of residence) mean that out-patient treatment is not always accessible. So the choice between out-patient and hospital treatment is determined by medical and social causes, which should be considered when first providing ophthalmological aid to patients with cataract.
The developed model for predicting the duration of hospital treatment can be used to support decision-making on the treatment management of a concrete patient with cataract. A doctor should propose hospital treatment to a patient whose predicted stay is more than 1 day; when such a patient chooses out-patient treatment, it is necessary to explain the possible risk of postoperative complications, which can be associated with complications of the optic and somatic status of the patient.
Table 1. Patients' division depending on the duration of hospital treatment
Proteomic analysis of purified protein derivative of Mycobacterium tuberculosis
Background Purified protein derivative (PPD) has been used for more than half a century as an antigen for the diagnosis of tuberculosis infection based on delayed type hypersensitivity. Although designated as “purified,” in reality, the composition of PPD is highly complex and remains ill-defined. In this report, high resolution mass spectrometry was applied to understand the complexity of its constituent components. A comparative proteomic analysis of various PPD preparations and their functional characterization is likely to help in short-listing the relevant antigens required to prepare a less complex and more potent reagent for diagnostic purposes. Results Proteomic analysis of Connaught Tuberculin 68 (PPD-CT68), a tuberculin preparation generated from M. tuberculosis, was carried out in this study. PPD-CT68 is the protein component of a commercially available tuberculin preparation, Tubersol, which is used for tuberculin skin testing. Using a high resolution LTQ-Orbitrap Velos mass spectrometer, we identified 265 different proteins. The identified proteins were compared with those identified from PPD M. bovis, PPD M. avium and PPD-S2 from previous mass spectrometry-based studies. In all, 142 proteins were found to be shared between PPD-CT68 and PPD-S2 preparations. Out of the 354 proteins from M. tuberculosis–derived PPDs (i.e. proteins in either PPD-CT68 or PPD-S2), 37 proteins were found to be shared with M. avium PPD and 80 were shared with M. bovis PPD. Alignment of PPD-CT68 proteins with proteins encoded by 24 lung infecting bacteria revealed a number of similar proteins (206 bacterial proteins shared epitopes with 47 PPD-CT68 proteins), which could potentially be involved in causing cross-reactivity. The data have been deposited to the ProteomeXchange with identifier PXD000377. Conclusions Proteomic and bioinformatics analysis of different PPD preparations revealed commonly and differentially represented proteins. This information could help in delineating the relevant antigens represented in various PPDs, which could further lead to development of a lesser complex and better defined skin test antigen with a higher specificity and sensitivity.
Background
Around 2 billion people in the world are infected with M. tuberculosis. According to the WHO world TB (tuberculosis) control report 2011, in 2010 alone, 9 million new TB cases were reported and 1.45 million deaths occurred worldwide. Tuberculosis is the second most common infectious killer disease after HIV. One in five of the 1.8 million AIDS-related deaths is estimated to be associated with TB. The tuberculin skin test (TST) is the standard test for the diagnosis of TB infection in the Western world [1]. The American Thoracic Society and the Centers for Disease Control and Prevention recommend targeted TST for deciding the treatment regimen among groups at increased risk for progression of latent tuberculosis infection (LTBI) to active TB [2]. Vaccination is an important preventive measure to control the community load of TB. An attenuated strain of M. bovis known as Bacillus Calmette-Guerin (BCG) is universally employed as a vaccine against TB. However, the efficacy of BCG is controversial, as it does not protect against adult forms of pulmonary tuberculosis [3,4]. Moreover, prior exposure of individuals to environmental mycobacteria and organisms sharing antigenic epitopes results in recall of the immune memory response on BCG administration [3]. After almost 12 decades of research, we still do not have a reliable diagnostic test for TB that can be used in primary health care centers with definitive results.
In 1890, Robert Koch introduced a boiled, crude extract of tubercle bacilli in glycerin (referred to as "old tuberculin") as a potential vaccine material against tuberculosis infection [5-7]. Although Koch's old tuberculin could not be used as therapy because of its toxicity, impurity and inadequate standardization, the concept of tuberculin was instrumental in laying the foundation of the modern TST [8]. TST, first introduced by Von Pirquet in 1909 [6], has been in use as a standard method for diagnosing TB infection over almost the last six decades [8,9]. It is based on measuring the extent of the induration formed because of the delayed type hypersensitivity reaction elicited by mycobacterial antigens present in PPD.
In addition to its role in detecting mycobacterial infection, TST has also been used as a standard tool to estimate the prevalence of LTBI [8]. The role of PPD in serodiagnosis of TB, with sensitivity as high as 92%, was reported in Warao and Creole populations [10]. Several studies reported the use of PPD in serodiagnosis of tuberculosis infection with high sensitivity [11,12]. PPD has also been used as a standard control in immunological assays [13]. It is reported that PPD improves the sensitivity of the interferon gamma release assay (IGRA). IGRA uses early secretory antigenic target-6 (ESAT-6) and culture filtrate antigen EsxB (CFP10), antigens present in M. tuberculosis and M. bovis but not in BCG. This can enable differentiation of TB-infected and BCG-vaccinated individuals [14,15]. However, Yassin et al. reported that the sensitivity of IGRA can be compromised in children with severe malnutrition and HIV coinfection. Concomitant use of TST, IGRA and interferon gamma induced protein 10 (IP-10) in children staying in contact with smear-positive adults has identified a higher number of children as positive [16]. In addition, IGRAs suffer from limitations including higher cost, variable sensitivity, poor reproducibility, limited interpretive criteria and unknown prognostic value [17]. Despite its important applications, PPD is not considered a reliable material, due to high rates of false-positive results and its inability to distinguish between tuberculous and non-tuberculous mycobacteria or to identify individuals vaccinated with BCG [18]. This can be attributed to the immune response elicited by antigens from BCG or environmental bacteria sharing antigenic epitopes [19,20]. Earlier studies by Borsuk et al. identified molecular chaperone DnaK (DnaK), molecular chaperone GroEL (GroEL2), elongation factor 2 (EF-Tu), cell surface lipoprotein Mpt83 (Mpt83) and acyl carrier protein as abundant proteins common to M. bovis and M. avium PPDs [21]. Moreover, a discrepancy of results has been observed between different PPD preparations [22,23]. Currently available PPD preparations used on human subjects include PPD-S2 [6], PPD-RT23 [24], PPD IC-65 [9,13] and PPD-CT68 [25].
Knowledge about the constituents of PPD could allow researchers to effectively address PPD-associated diagnostic complications. Earlier studies employed gel electrophoresis to identify constituents of PPD [26]. Kuwabara and Tsumita in 1974 first attempted to identify and characterize the components of PPD [27]. An analysis that employed gel electrophoresis for characterization of PPD antigens in whole cell lysate of M. bovis BCG resulted in four protein bands corresponding to PPD [28]. Kitaura et al. could distinctly identify only two ribosomal proteins, L7 and L12, in M. tuberculosis and M. bovis PPDs by gel electrophoresis [29]. With the advent of high resolution mass spectrometry, it is now possible to identify proteins from complex peptide mixtures. Borsuk et al. identified 171 proteins in an LC-MS/MS analysis of Brazilian and UK bovine and avium PPDs [19]. Cho et al. recently identified 240 proteins in PPD-S2 [26]. PPD-CT68, another standard reagent used for TST, has not been analyzed thus far. In the present report, we have analyzed and described the proteome profile of PPD-CT68 using high resolution mass spectrometry and compared it with those of other PPDs derived from M. tuberculosis, M. avium and M. bovis. The PPD-CT68 examined here was developed by Landi in 1963 from the "Johnston" strain of M. tuberculosis var. hominis [30].
Results and discussion
Identification of proteins present in PPD-CT68 from Mycobacterium tuberculosis

We carried out proteomic profiling of PPD-CT68, prepared from M. tuberculosis cultured in a protein-free medium, using high resolution Fourier transform mass spectrometry. Mass spectrometry-derived data were searched using the Sequest algorithm embedded in the Proteome Discoverer software against a protein database of M. tuberculosis from NCBI RefSeq. Searching 5,205 MS/MS spectra resulted in 1,146 peptide-spectrum matches, which corresponded to 695 unique peptides. The list of peptides identified in this study is provided in Additional file 1: Table S1. Representative MS/MS spectra are provided in Figure 1. Based on these 695 unique peptides, we identified 265 proteins (Additional file 2: Table S2) of M. tuberculosis in PPD-CT68.
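The spectra-to-peptides-to-proteins tallies above are simple aggregations over the search output. The Python sketch below illustrates the bookkeeping with a hypothetical three-row result table; the scan identifiers, peptide sequences and protein accessions are invented for illustration only.

    # Hypothetical search-engine output rows: (spectrum_id, peptide, protein).
    psm_rows = [
        ("scan_0001", "LTDEGAEMK", "Rv0440"),
        ("scan_0002", "LTDEGAEMK", "Rv0440"),   # same peptide matched by a second spectrum
        ("scan_0003", "VGEATETALTK", "Rv3418c"),
    ]

    n_psms = len(psm_rows)                           # peptide-spectrum matches
    unique_peptides = {pep for _, pep, _ in psm_rows}
    proteins = {prot for _, _, prot in psm_rows}     # inferred protein list
    print(n_psms, len(unique_peptides), len(proteins))  # 3 PSMs, 2 peptides, 2 proteins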
Cho and colleagues [26] recently reported the identification of 240 proteins from PPD-S2, which is the standard reagent for TST recommended by the U.S. Food and Drug Administration (FDA). Of the 240 proteins listed, 231 are non-redundant. We compared the proteomic results obtained in our study with the proteins identified from PPD-S2 (Figure 2A). Of the 265 proteins identified from PPD-CT68, 142 proteins were shared with PPD-S2, whereas 123 and 89 proteins were exclusively identified from PPD-CT68 and PPD-S2, respectively. Altogether, 354 proteins have been identified in M. tuberculosis PPDs.
For further understanding of the protein profiles of various PPD preparations, we compared the PPDs derived from M. tuberculosis (PPD-CT68 and PPD-S2) with PPDs of M. bovis and M. avium [19] (Figure 2B). Of the 354 proteins from M. tuberculosis PPDs, 37 proteins were shared with M. avium PPD and 80 with M. bovis PPD. We also found that 18 proteins were common to the PPDs obtained from M. tuberculosis, M. bovis and M. avium. Compared with the PPDs from M. tuberculosis, 35 and 19 proteins were exclusively found in M. avium and M. bovis PPDs, respectively. It is also interesting to note that 255 proteins were exclusively identified in PPDs from M. tuberculosis.
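The overlap counts above follow from plain set arithmetic over the protein accession lists. The sketch below shows the operations involved; the accession sets are small invented placeholders standing in for the real 265- and 231-protein lists.

    # Placeholder accession sets; the actual comparison used the full
    # PPD-CT68 (265 proteins) and non-redundant PPD-S2 (231 proteins) lists.
    ppd_ct68 = {"Rv0350", "Rv0440", "Rv1886c", "Rv2346"}
    ppd_s2   = {"Rv0440", "Rv1886c", "Rv3875"}

    shared    = ppd_ct68 & ppd_s2   # identified in both preparations
    ct68_only = ppd_ct68 - ppd_s2   # exclusive to PPD-CT68
    s2_only   = ppd_s2 - ppd_ct68   # exclusive to PPD-S2
    mtb_union = ppd_ct68 | ppd_s2   # analogous to the 354-protein union

    print(len(shared), len(ct68_only), len(s2_only), len(mtb_union))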
Functional analysis of proteins common among all PPDs
We carried out functional classification of the proteins identified in M. tuberculosis PPDs (PPD-CT68 and PPD-S2) that are also present in M. avium and M. bovis PPDs. Most of these proteins are implicated in causing infection or in protecting the pathogen against various metabolic stresses. Five of the eighteen proteins, namely secreted antigen 85A (FbpA), thiol peroxidase (Tpx), bacterioferritin (BfrA), thioredoxin (TrxC) and lipoprotein LprG (LprG), offer protection against oxidative and nitrosative stress. On the other hand, co-chaperonin GroES (GroES), DnaK, serine protease PepA (PepA), alanine and proline rich secreted protein Apa (Apa) and hypothetical protein Mpt64 are involved in causing infection. A detailed functional classification of each protein is given in Table 1.
Bioinformatics analysis of PPD-CT68 proteins showing homology to lung infecting bacteria
Raman et al. performed a comprehensive analysis of M. tuberculosis genes homologous to those of 228 different pathogenic bacteria [50]. We further analyzed the 265 proteins represented in PPD-CT68 against the proteins encoded by 24 lung-infecting bacteria selected from the list of 228 pathogens. Protein BLAST was performed to locate regions of PPD-CT68 proteins sharing stretches of 10 or more identical amino acids with bacterial proteins. In all, 3,446 peptides from 24 pathogens, corresponding to 1,048 proteins, shared 10 or more identical amino acids with 117 proteins from PPD-CT68 (Additional file 3: Table S3a and S3b). Since a peptide of 20 or more amino acid residues can be a potential epitope, we further shortlisted the proteins showing identity over a continuous stretch of 20 or more amino acids (Additional file 3: Table S3c). Two hundred and six of the 1,048 bacterial proteins showed such identity with 47 PPD-CT68 proteins (Figure 3A). Functional analysis of the 47 mycobacterial proteins sharing identical regions revealed that 41% of the proteins are associated with intermediary metabolism and respiration, 34% with information pathways, 9% with virulence and detoxification, 4% with cell wall and cell processes, 4% with lipid metabolism and 4% with regulatory proteins (Figure 3B).
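The published comparison used protein BLAST, but the core criterion (an identical stretch of at least 10, or 20, residues) can be illustrated with a simple exact k-mer scan, as sketched below. The two sequences are invented placeholders sharing a 20-residue identical region, not real mycobacterial or bacterial proteins.

    def shared_stretches(seq_a, seq_b, k):
        """Return the identical substrings of length k shared by two sequences."""
        kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
        kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
        return sorted(kmers_a & kmers_b)

    # Placeholder sequences with a 20-residue identical region.
    ppd_protein = "MAKLSTDELLDAFKEMTLLELSDFVKKFEETFEVTAAAPVAVA"
    lung_bug    = "MSQLSTDELLDAFKEMTLLELSDNVKKAEETF"

    print(shared_stretches(ppd_protein, lung_bug, 10))  # several >=10-residue hits
    print(shared_stretches(ppd_protein, lung_bug, 20))  # the single shared 20-residue stretch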
To further study the role of these 47 proteins in DTH and serodiagnosis, we compared our data with a reported immunoproteome comprising 484 mycobacterial proteins recognized by human sera collected from TB suspects worldwide [51]. Thirteen proteins (DNA-directed RNA polymerase subunit beta, DNA-directed RNA polymerase subunit alpha, GroEL, 30S ribosomal protein S1, fumarate hydratase, elongation factor G, DnaK, aconitate hydratase, isocitrate dehydrogenase, S-adenosyl-L-homocysteine hydrolase, malate synthase G, D-3-phosphoglycerate dehydrogenase and enoyl-CoA hydratase) were recognized by antibodies in serum. Of these, only seven proteins (isocitrate dehydrogenase, malate synthase G, succinyl-CoA synthetase subunit alpha, malate dehydrogenase, succinyl-CoA synthetase subunit beta, aconitate hydratase and type II citrate synthase) were listed in the immunoproteome and were identified in our mass spectrometry analysis of PPD-CT68 [51]. These proteins are associated with a host immune response in cases of active tuberculosis.
PPD proteins as candidate biomarkers
Available knowledge of M. tuberculosis genes provides the opportunity to express and synthesize recombinant purified antigens, and thereby to test new biomarkers for TB infection. These antigens can be used to detect antibodies in serum and have the potential to improve diagnosis. Several studies have explored the use of recombinantly expressed antigens and evaluated their immunodiagnostic potential. A systematic review on the diagnostic accuracy of commercial serological tests for pulmonary and extrapulmonary tuberculosis was updated in May 2010, and WHO performed a bivariate meta-analysis that jointly modeled test sensitivity and specificity (http://www.who.int/tb/laboratory/policy_statements/en/index.html). It concluded that commercially available serological tests provide inconsistent and imprecise findings, and that the sensitivity and specificity of the tests were highly variable.

Table 1. Functional classification of proteins common to M. tuberculosis, M. bovis and M. avium PPDs.
2. FbpA (secreted antigen 85-A): participates in cell wall biosynthesis and interacts with the host macrophage as a fibronectin-binding protein; also involved in establishment and maintenance of a persistent tuberculosis infection [32,33].
3. GroES (co-chaperonin): a dominant immunogenic protein [34].
4. DnaK (molecular chaperone): highly antigenic; acts as a co-repressor for the heat shock protein transcriptional repressor (HspR) [35].
5. Tpx (thiol peroxidase): protects M. tuberculosis against oxidative and nitrosative stress [36].
6. RplL (50S ribosomal protein L7/L12): involved in interaction with translation factors [37].
7. BfrA (bacterioferritin): an intracellular iron storage protein that protects the bacterium from oxidative stress mediated by excess iron [38].
8. SahH (S-adenosyl-L-homocysteine hydrolase): a ubiquitous enzyme with a central role in methylation-based processes, maintaining the intracellular balance between S-adenosylhomocysteine (SAH) and S-adenosylmethionine [39].
9. TrxC (thioredoxin): involved in redox homeostasis; protects the pathogen against oxidative intermediates generated by macrophages [40].
10. FixB (electron transfer flavoprotein subunit alpha): functions in nitrogen fixation in some bacteria, but its function in M. tuberculosis is not clear [41].
11. PepA (serine protease): a cell membrane-associated serine protease that stimulates peripheral blood mononuclear cells from healthy PPD-positive donors to proliferate and secrete gamma interferon [42].
12. Wag31 (hypothetical protein): a cell division initiation protein involved in regulation of genes including virulence factors and antigens [43].
13. Mpt64 (hypothetical protein): an immunogenic protein that elicits a delayed type hypersensitivity skin response [44].
14. Apa (alanine- and proline-rich secreted protein): a cell surface glycoprotein that binds host lectins and deceives the innate immune system [45].
15. LprG (lipoprotein LprG): plays a role in M. tuberculosis infection by inducing increased suppression of the immune response through elevated nitric oxide production [46].
16. Rv1893 (hypothetical protein): function unknown [47].
17. Rv1855c (oxidoreductase): probable monooxygenase [48].
18. Gap (glyceraldehyde-3-phosphate dehydrogenase): has glyceraldehyde-3-phosphate dehydrogenase activity [49].

Our approach presented in this study has not only identified a large number of proteins unique to M. tuberculosis, but in parallel provided information on protein coverage. The higher the coverage, the higher the abundance of the protein in the PPD sample analyzed. Our results correlate with earlier publications, and many of the proteins identified in the PPD of M. tuberculosis have already been analyzed for their potential as diagnostic markers. Some of the hits we have found are GroES [52,53], GroEL [54,55], protein EsxB (EsxB) [56], heat shock protein HspX (HspX) [57], hypothetical protein TB15.3, hypothetical protein TB16.3 [58], 50S ribosomal protein L7/L12 (RplL) [59], hypothetical protein EsxA [60], immunogenic protein Mpt63 (Mpt63) [61], Mpt64 [62], ESAT-6 like protein EsxJ (EsxJ), and ESAT-6 like protein EsxO (EsxO) [63]. Based on these observations, many more M. tuberculosis PPDs can be analyzed and the abundant antigens evaluated for their potential as diagnostic biomarkers.
Clinical applications of the study
Based on our findings, 142 of the 265 proteins identified in PPD-CT68 were shared with PPD-S2. These common proteins can be further evaluated for their potential as skin test antigens. Proteins identified in our analysis that are absent in M. avium and M. bovis and do not show significant identity with proteins from lung-infecting bacteria can be shortlisted, on the basis of their seroreactivity and abundance, for developing immunological assays to identify M. tuberculosis. For example, Rv2346, Rv0379 and Rv1388 were absent from M. bovis and M. avium PPDs and showed the least identity with proteins from lung-infecting pathogens. As discussed earlier, one of the major issues with the use of PPD as a skin test antigen is false-positive results in individuals with BCG vaccination; use of antigens absent in M. bovis may help overcome this. Global profiling of antigens in PPD may help to identify M. tuberculosis-specific antigens that are not present in BCG. Such antigens would be useful in differentiating infected from vaccinated individuals. Because PPD can be prepared from M. tuberculosis, M. avium and M. bovis, species-specific antigens could in principle allow the infecting mycobacterial species to be diagnosed with a simple skin test. The subset of antigens mainly responsible for activating the immune response could be used in conjunction with BCG or in booster doses to enhance the immune response. Knowing the antigens involved in the test response would allow a minimal essential amount of PPD to be used for TST. Use of specific antigens in TST would make it more specific and reduce false-positive results due to antigen cross-reactivity.
Conclusions
Despite the identification of almost a dozen antigens for developing a next generation PPD, it is challenging to replace the classical PPD preparation. ESAT-6, Mpt64, recombinant antigen DPPD, CFP10, recombinant truncated 38 kDa protein (TPA38), DnaK, GroEL and RplL are currently under evaluation as next generation PPD candidates [9,29,62,64-72]. PPD is a crude extract obtained after several steps of filtration, purification and precipitation with trichloroacetic acid [30]. Sufficient knowledge of PPD composition and of the contribution of individual antigens to the TST would give better insight into the underlying molecular mechanism and would also allow researchers to select a combination of proteins specific to M. tuberculosis. Our analysis further revealed mycobacterial proteins in PPD-CT68 sharing identical amino acid sequences with lung-infecting bacteria. Detailed epitope analysis of these proteins may help researchers to understand the mechanism behind cross-reactivity in TST. Mass spectrometry is an efficient tool for proteomic analysis owing to its high mass accuracy, sensitivity and ability to handle complex sample mixtures. In this study, we used a high resolution Fourier transform mass spectrometer for LC-MS/MS analysis of PPD-CT68. Many of the proteins identified in the PPD of M. tuberculosis have already been analyzed for their potential as diagnostic markers. The complete protein profile of PPD-CT68 uncovered in this study can be used to analyze the immune response and antibody production patterns against different PPD antigens.
Data availability
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository [73] with the dataset identifier PXD000377.
Mass spectrometry
We have carried out the LC-MS/MS analysis on an LTQ-Orbitrap Velos ETD mass spectrometer (Thermo Scientific, Bremen, Germany) interfaced with an Agilent 1100 HPLC system (Agilent Technologies, Santa Clara, CA). Trypsin-digested PPD peptides were analyzed by reversed-phase liquid chromatography. The RP-LC system was equipped with a pre-column (2 cm, 5 µm, 100 Å) and an analytical column (10 cm, 5 µm, 100 Å), both packed in-house with Magic AQ C18 material (PM5/61100/00; Bruker-Michrom Inc., Auburn, CA). The peptides were sprayed using an 8 µm electrospray emitter tip (New Objective, Woburn, MA) fixed to a nanospray ionization source. The peptides were loaded on the pre-column in 97% solvent A (0.1% aqueous formic acid) and resolved on the analytical column using a gradient of 10-30% solvent B (90% acetonitrile, 0.1% formic acid) over 60 min at a constant flow rate of 0.35 µl/min. The spray voltage and heated capillary temperature were set to 2.0 kV and 220 °C, respectively. Data acquisition was performed in a data-dependent manner: from each MS survey scan, the 10 most intense precursor ions were selected for fragmentation. MS and MS/MS scans were acquired in the Orbitrap mass analyzer and the peptides were fragmented by higher-energy collision dissociation with a normalized collision energy of 39%. MS scans were acquired at a resolution of 60,000 at 400 m/z, and MS/MS scans at a resolution of 15,000. The automatic gain control was set to 0.5 million ions for full FT MS and to 0.1 million ions for FT MS/MS, with maximum accumulation times of 750 ms and 100 ms, respectively. The raw data obtained were submitted to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org).
Data analysis
We searched the MS/MS data using the Sequest search algorithm in Proteome Discoverer (version 1.3.0.339, Thermo Scientific, Bremen, Germany) against a protein database of the M. tuberculosis H37Rv strain downloaded from NCBI RefSeq (updated December 29, 2011). The search parameters were: a) precursor mass range 350 to 8,000 Da; b) minimum peak count of 5; c) signal-to-noise threshold of 1.5; d) trypsin as the proteolytic enzyme, allowing up to one missed cleavage; e) precursor mass tolerance of 20 ppm and fragment tolerance of 0.1 Da; f) oxidation of methionine as a variable modification and carbamidomethylation of cysteine as a fixed modification; and g) 1% false discovery rate (FDR).
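Proteome Discoverer applies the 1% FDR threshold internally; the sketch below is only an illustration of the target-decoy principle commonly used for PSM-level FDR control, not the software's actual implementation. The scores and decoy flags are invented.

    # Illustrative target-decoy FDR filtering at the PSM level.
    def filter_psms(psms, max_fdr=0.01):
        """Keep the highest-scoring PSMs while decoys/targets stays <= max_fdr."""
        accepted, decoys, targets = [], 0, 0
        for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
            decoys += is_decoy
            targets += not is_decoy
            if targets and decoys / targets > max_fdr:
                break
            accepted.append((score, is_decoy))
        return [p for p in accepted if not p[1]]  # report target PSMs only

    psms = [(98.2, False), (95.1, False), (91.7, True), (88.0, False)]
    print(filter_psms(psms))  # the two targets scoring above the first decoy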
Additional files
Additional file 1: Table S1. List of peptides identified from M. tuberculosis PPD (CT68).
Additional file 2: Table S2. List of proteins identified from M. tuberculosis PPD (CT68).
Additional file 3: Table S3a. List of Mycobacterium tuberculosis PPD-CT68 peptides sharing 10 or more identical amino acids with proteins of 24 lung-infecting bacteria. Table S3b. List of Mycobacterium tuberculosis proteins corresponding to the peptides sharing 10 or more identical amino acids with 24 lung-infecting bacteria. Table S3c. List of Mycobacterium tuberculosis proteins sharing identical stretches of 20 or more amino acids with 24 lung-infecting bacteria.
|
v3-fos-license
|
2024-06-06T06:17:19.619Z
|
2024-06-04T00:00:00.000
|
270256518
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-024-62784-8.pdf",
"pdf_hash": "d2086191c6865538db5a52ac455858d29ac2266d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45502",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "8acd64e3a020dd70445c58e6b505be7d384437a2",
"year": 2024
}
|
pes2o/s2orc
|
Parallel detection of multiple biomarkers in a point-of-care-competent device for the prediction of exacerbations in chronic inflammatory lung disease
Sudden aggravations of chronic inflammatory airway diseases are difficult-to-foresee, life-threatening episodes for which advanced prognosis systems are highly desirable. Here we present an experimental chip-based fluidic system designed for the rapid and sensitive measurement of biomarkers prognostic for potentially imminent asthma or COPD exacerbations. As model biomarkers we chose three cytokines (interleukin-6, interleukin-8, tumor necrosis factor alpha), the bacterial infection marker C-reactive protein and the bacterial pathogen Streptococcus pneumoniae, all relevant factors in exacerbation episodes. Assay protocols established in laboratory environments were adapted to 3D-printed fluidic devices with emphasis on short processing times, low reagent consumption and a low limit of detection in order to enable the fluidic system to be used in point-of-care settings. The final device demonstrator was validated with patient sample material for its capability to detect endogenous as well as exogenous biomarkers in parallel.
Continuous and reliable monitoring of both endogenous biomarkers and exogenous triggers, preferably in point-of-care (POC) or even home-test diagnostic devices for daily use, can improve diagnostic accuracy: it will establish an individual baseline under stable conditions, deviations from which could reveal developing exacerbations and indicate the necessity for therapeutic intervention at an early point in time. Such regular monitoring requires an easy-to-sample analyte that reflects both the load of exogenous triggers and the level of endogenous biomarkers. We therefore aimed to develop a prospectively POC- and hometest-competent fluidic device that is capable of analyzing saliva from asthma or COPD patients for exacerbation-relevant markers and triggers. We ruled out a dip-stick test format, as this would have difficulties dealing with particulate analytes such as bacteria 16, and PCR-based methods, as they are not suitable for analyzing endogenous biomarkers in saliva. Instead, we decided to utilize antibody-based assays for our test system, as those should be able to detect all types of analytes in a saliva sample, and developed robust assay protocols capable of being conducted in a fluidic cartridge system, with emphasis on quick results in POC or hometest environments.
The long-term goal of developing a POC or home test diagnostic device demands the transferability of the fluidic device into cost-effective mass production, employing plastic materials and consisting of the least possible number of functional units. These components must be kept simple and standardized and should be easily implementable in existing fluid management systems. The fluid management in general should be robust, functionally reliable and safe to operate in POC or hometest settings. Only when a cost-effective, rapid and easy-to-handle solution is established will patient compliance be high, allowing continuous monitoring to establish a personal baseline.
Saliva sampling
Prospective anonymized collection of saliva samples from voluntary donors was carried out by the BioMaterialBank Nord at the Research Center Borstel, Borstel, Germany. All donors provided informed consent for participation. The study was approved by the ethics committee of the University of Luebeck (approval no. 16-167). All methods involving human material were carried out in accordance with relevant guidelines and regulations. Probands were divided into four groups according to their lung health status: asthma patients, COPD patients, subjects with an acute respiratory infection, and healthy controls. For the sampling procedure, all subjects were requested to refrain from food for 30 min and were asked afterwards to chew on paraffin chewing gum (GC Corp., Leuven, Belgium) for a minimum of 1 min. Saliva (total of 5 ml) was transferred to a polypropylene sampling vial. Samples were aliquoted, snap frozen in liquid nitrogen and stored at −80 °C. Saliva was used for analysis either as individual samples or as a saliva pool obtained by mixing 1 ml of saliva each from 30 randomly selected healthy controls. Cytokine levels in a subset of individual samples were measured using an enhanced cytometric bead array as a reference, following the manufacturer's guidelines (eCBA, Flex Set Kits; BD Biosciences, Franklin Lakes, NJ).
Analyte detection on slide arrays in open well format using incubation chambers
For detection of endogenous analytes, arrays (2 rows of 8 arrays, each array 4 × 3 spots) of capture reagents were generated by depositing 20-nl droplets of the respective antibody solution (50 µg/ml, i.e. 1 ng/spot, of anti-IL-8, anti-IL-6 or anti-TNF-α in borate buffer (12.5 mM Na2B4O7, pH 8.5)) onto 3D-Epoxy glass slides (PolyAn GmbH, Berlin, Germany) using a nanoplotter (GeSiM GmbH, Dresden, Germany). As setup/coupling control, 50 µg/ml anti-FLAG was spotted in an analogous way. Slides were kept overnight (o.n.) in a humidified atmosphere at 4 °C. On the next day, Grace Bio-Labs FlexWell™ grids (Merck) were attached to the slides, thereby creating 2 × 8 well incubation chambers. The cavities were washed 3 × with 100 µl of PBST (0.1% Tween 20 (v/v) in 2.7 mM KCl, 1.5 mM KH2PO4, 136 mM NaCl, 8.1 mM Na2HPO4, pH 7.4). Slides were incubated for 2 h at room temperature (RT) with pooled saliva samples that had been spiked with the analytes IL-8, IL-6 and TNF-α at concentrations of 10, 2 or 1 ng/ml each, washed 3 × with 100 µl of PBST and incubated with 50 µl of biotinylated detection antibodies for 90 min at RT. Slides were washed 5 × with 100 µl of PBST and incubated with 50 µl of 1 µg/ml streptavidin Alexa 680 (Thermo Fisher Scientific, Waltham, MA, USA) in PBST for 1 h at RT. After washing 5 × with PBST, the slides were dried and the fluorescent signal was recorded at 700 nm using a microarray imager (Odyssey CLx, Li-Cor Biosciences, Lincoln, NE, USA).
For detection of S. pneumoniae on slide surfaces, arrays were generated on 3D-Epoxy glass slides by depositing 20-nl droplets of S. pneumoniae capture antibody in patterns of 4 × 3 spots (5-20 ng/spot). Slides were kept o.n. at 4 °C and equipped with Grace Bio-Labs incubation chambers as above, wetted for 30 min with 50 µl/cavity of PBST and washed 3 × with PBST (75 µl/cavity). Slides were incubated for 1 h at RT with 50 µl of PBST or pooled saliva samples spiked with S. pneumoniae at varying concentrations (1 × 10^5 CFU/ml to 1 × 10^8 CFU/ml), or saliva devoid of bacteria. Slides were washed 3 × with PBST, incubated with 50 µl of biotinylated detection antibody for 90 min at RT, washed 5 × with PBST and incubated with 50 µl of 1 µg/ml streptavidin-Alexa 680 in PBST for 1 h at RT. After washing 5 × with PBST, the slides were dried and the fluorescent signal was measured as above.
Analyte detection on slide arrays using fluidic devices
The main fluidic components of the detection devices as well as the incubation chambers were manufactured in an Agilista 3200W 3D printer (Keyence Deutschland GmbH, Neu-Isenburg, Germany) using AR-M2 printing material. The dimensions of each analysis chamber were 14 mm × 3 mm × 0.2 mm, giving a chamber volume of 8.4 µl; the total volume of the complete fluidic system was ~20 µl. For the parallel detection of analytes in version 1 of the fluidic device (quadriplex design), four arrays of capture reagents were generated on 3D-Epoxy glass slides by depositing 20-nl droplets of the respective capture antibody solutions in duplicate spots, with 1.5 mm spot-to-spot distance. Depending on the experimental set-up, a varying combination of four different capture antibodies was used (anti-IL-6, anti-IL-8, anti-TNF-α and anti-CRP each at 1.5 ng/spot, anti-phosphorylcholine IgA κ at 5 ng/spot). Slides were incubated o.n. in a humidified atmosphere at 4 °C, dried and stored at 4 °C under argon atmosphere for a maximum of 72 h. For application, slides were mounted onto the 3D-printed incubation chambers using a dual adhesive foil. All subsequent washing steps were carried out by slowly pushing the respective washing buffer via a luer-fitted 1-ml syringe over each individual analysis chamber. Likewise, samples and detection reagents were applied through the luer ports with the same syringe. Chambers were washed 3 × by flushing with 200 µl of PBST, filled bubble-free with 150 µl of saliva sample and incubated for 1 h at RT. After a second washing step (3 × 200 µl of PBST), the respective biotinylated detection antibodies were applied (one individual detection antibody for each array) and incubated for 1 h at RT. Chambers were washed again (5 × 200 µl of PBST) and subsequently incubated with 150 µl of 1 µg/ml streptavidin-Alexa 680 in PBST for 1 h at RT. Chambers were flushed again (5 × 200 µl of PBST), liquids were removed and the chambers were imaged at 680 nm excitation and 700 nm emission wavelength in a microarray imager (Odyssey CLx, Li-Cor Biosciences, Lincoln, NE) at 21 µm resolution, using the built-in solid state diode laser for excitation and silicon avalanche photodiodes for detection. Under these conditions, autofluorescence and background signals from the microfluidic device were negligible.
For detection of IL-8 in version 2 of the fluidic device (parallel design), four capture reagent arrays of 8 spots each were generated on slides using anti-IL-8 capture antibody at 1.5 ng/spot. Slides were mounted onto the 3D-printed channel incubation chamber (200 µm depth), and all chambers were washed as described above. The chambers were then filled bubble-free with 400 µl of pooled saliva samples spiked with 10 ng/ml IL-8 and incubated for 1 h at 34 °C. After a second washing step (3 × 500 µl of PBST), biotinylated anti-IL-8 detection antibody was applied to all chambers and incubated for 1 h at 34 °C. Chambers were washed 5 × by passing 500 µl of PBST through the chambers and incubated with 400 µl of 1 µg/ml streptavidin-Alexa 680 in PBST for 1 h at RT. Chambers were again washed 5 × with 500 µl of PBST, liquids were removed and fluorescence was measured as above.
Results and discussion
The desired user-friendly test system requires an assay format that allows high sensitivity and specificity along with fully automated sample handling, analyte detection and data interpretation. Consequently, our focus was laid on the design of the fluidic system and the translation of classical bioassay protocols to a unit into which all relevant process steps of the analytical protocol can be implemented. This includes facile sample supply with biofluids that contain the relevant biomarkers, fluid exchanges to remove sticky biological compounds, and the fluorescence-based detection of bound analytes. Handling of sample materials, reagents and washing fluids was kept as simple as possible in all processing steps.
Hardware design
The initial version of the fluidic device was a quadriplex system, comprising four separate analytical chambers, each equipped with an individual inlet port and outlet channel connected via separate fluid conduits (Fig. 1A). The 3D-manufactured top piece was designed with a footprint of 25 mm × 75.5 mm and a height of 14 mm. This top piece can be mounted onto standard glass slides, which then close the fluid coves and create the analysis chambers as well as a waste reservoir with a capacity of up to 9 ml. Syringe (luer) ports were integrated as loading connectors to allow manual operation. A pivotal issue in choosing the dimensions of the analysis chambers was the consideration of shear stress caused by the fluid streaming over solid surfaces where analyte-capturing ligands are immobilized. On the one hand, smaller fluid layers will cause larger shear forces, since fluids move faster with decreasing conduit diameter, in particular when the parabolic flow profile of laminar fluid movement switches into the turbulent state. In such a case, larger analytes such as viruses or bacteria might detach from the surface due to fluid friction, as shear forces are large compared to the binding forces of the small contact spots between analyte and surface-bound capture reagent. On the other hand, a slow fluid movement in the chamber facilitates air bubble formation, which will interfere with the readout. Moreover, a switch from turbulent to laminar flow increases diffusion times of the analytes to their respective ligands. In light of those interdependent variables, we sought an empirical solution to the problem. Top pieces that allowed chamber heights from 0.2 to 2 mm were 3D-printed and investigated in different analytical settings. We found that a chamber height of 200 µm performed best with the analyte solutions tested. Consequently, we chose the following dimensions for the analysis chamber: length 14 mm, width 3 mm and height 0.2 mm, resulting in a total volume of 8.4 µl (Fig. 1B).
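The quoted chamber volume follows directly from these dimensions, since 1 mm^3 equals 1 µl; the short check below makes the arithmetic explicit.

    # Chamber volume from the final dimensions (mm^3 is equivalent to microliters).
    length_mm, width_mm, height_mm = 14.0, 3.0, 0.2
    chamber_ul = length_mm * width_mm * height_mm
    print(chamber_ul)  # 8.4 microliters per analysis chamber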
To ensure robust operation, tube diameters were set to 1 mm. As can be seen in Fig. 1B, the interface between the adhesion foil and the glass surface is pervaded by small gas-filled cavities, but these were found not to be critical for the biochemical assays performed in the chambers. Apparently, flow rates were low enough to prevent high pressure in the chambers, and no fluid leaked into the offline cavities, such that no background fluorescence caused by entrapped detection reagents was observed. Air bubbles could easily be driven out from all parts of the chamber, and detrimental surface effects such as trapping of air did not occur. As the autofluorescence of the AR-M2 material was low in the far-red spectral range at 700 nm, Alexa 680-labeled streptavidin was readily detectable without perceptible background noise.
As an alternative to the quadriplex design, the fluidic device was laid out for parallel operation (Fig. 1C). This device has a reduced footprint of 46 × 32 mm and a height of 16 mm. While the geometry of each analysis chamber remained unchanged, the arrangement of the chambers was modified together with the channel structures. The aim of these modifications was to achieve uniform filling and emptying of all chambers. For the planned integration into an automated cartridge system, fluidic uniformity is an essential requirement. Figure 1D shows demonstrators of both versions of the analytic device, fitted for manual operation. Based on these designs, secondary functional elements, such as fluid reservoirs and a suitable fluid management system, can be integrated in a later step.
Analyte and biomarker selection
For complex disorders such as chronic airway inflammation, a battery of biomarkers must be monitored in order to gain the desired diagnostic power. In addition, use in POC or hometest settings requires noninvasive, rapid acquisition of sample material that is available in sufficient amounts. Saliva meets these requirements, and it has been shown that a wide variety of endogenous and exogenous biomarkers can be present in this sample material 18. Importantly for diagnostic measures in inflammatory lung diseases, the analyte is released into the oral cavity, which is anatomically linked to the site of inflammation and thus can pick up biomarkers liberated onto the mucosa at inflamed sites. Indicative of inflamed tissues are proinflammatory cytokines such as IL-6, IL-8 and TNF-α 11-14. In case of a microbial infection, CRP is a known endogenous biomarker 19,20. As an analyte representing the exogenous biomarkers, we selected the bacterium Streptococcus pneumoniae as a prototypic airway pathogen.
Assay design
For the selected biomarkers, detection reagents are commercially available and laboratory methods for their detection have been described. We improved typical protocols for, e.g., microtiter plate-based biological assays to shorten incubation times and to allow rapid and sensitive readout. As assay principle we chose the sandwich immunoassay format, with analyses carried out in parallel for the different markers (for details, see Fig. S1 in Supporting Information).
The assays were performed on slide arrays, for which glass slides functionalized with epoxy groups on their surface were commercially available. These slides are also equipped with a thin layer of hydrophilic polymers, which minimizes unwanted attachment of (hydrophobic) biomolecules onto the surface. Functionally proven compounds were employed as capture, detection and signal amplification reagents. In all set-up variations, arrays of the respective capture antibodies were first spotted onto the slide, forming covalent links with the functionalized surface. Captured analytes were detected with specific, biotinylated antibodies, followed by fluorophore-labeled streptavidin, and fluorescence signals on the processed slides were recorded in a microarray imager.
Detection of disease markers in an open well micro-device
We first designed an open well slide system for performing the immunoassays. Array dimensions on the slides were chosen to accommodate commercially available grids, which were fixed on the slide surface, thereby providing specified reaction cavities that were loaded and processed manually. This set-up was used to optimize reagent concentrations and ratios, and to investigate the effect of saliva as biological matrix on the traceability of the different analytes. For detection of endogenous analytes, saliva samples were spiked with the different cytokines at varying concentrations, with undiluted saliva and saliva diluted 1:3 in PBS buffer being used as sample matrix. While IL-6 and IL-8 were easily detectable at an analyte concentration of 10 ng/ml, only a very weak signal was obtainable for TNF-α (Fig. 2).
The presence of saliva matrix had no effect on the detectability of IL-6, as signal intensities were identical in undiluted and diluted saliva. For IL-8, the signal was even higher in the spiked undiluted saliva than in the diluted matrix. This may be due to the presence of endogenous IL-8 in the saliva pool used, which may contribute to the higher signal in the undiluted sample (see below). Despite those differences between the individual cytokines, detectability of the respective analytes was considered sufficient to be verified in the fluidic chamber system after adaptation of the procedure.
To include exogenous triggers of exacerbations in our system, we selected a capture antibody that should be able to react with a broad variety of microorganisms: an antibody directed against phosphorylcholine, which is an essential component of cellular membranes in most bacterial species 21-23. The specificity of the system would be provided by the detection antibody, in our case an antibody which recognizes numerous S. pneumoniae serotypes. In order to evaluate the assay setup for the detectability of S. pneumoniae on glass surfaces, arrays were generated with different amounts of capture antibody and probed with bacteria at different concentrations (Fig. 3). The bacteria were well detectable in our set-up, but detection quality varied depending on the test conditions. Regression analysis showed good correlations between analyte concentration and signal intensity, as long as the highest bacterial concentrations were omitted from the calculations (Fig. 3, right panel). Hence, up to a concentration of 1 × 10^7 CFU/ml, the increasing amounts of bacteria were well reflected by the signal intensities obtained in our detection system, but higher concentrations tended to be underestimated. Likewise, the detectability of the bacteria varied depending on the amount of capture antibody per spot, although the differences were generally not statistically significant (one-way ANOVA followed by Friedman multiple comparison test). Overall, the highest signals were obtained with 1 × 10^7 CFU/ml of bacteria at 10 ng of capture antibody. Neither higher amounts of capture antibody nor of bacteria led to an increase in signal intensity. We attribute this "detectability optimum" to the fact that S. pneumoniae is a huge antigen and binding to it is limited by sterical constraints.
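The regression step described above can be sketched as a straight-line fit of signal against log10 bacterial concentration, with the highest concentration excluded; the RFU values below are invented placeholders, not measured data.

    import numpy as np

    cfu_per_ml = np.array([1e5, 1e6, 1e7])        # 1e8 CFU/ml omitted (underestimated)
    rfu        = np.array([120.0, 480.0, 950.0])  # hypothetical fluorescence readouts

    slope, intercept = np.polyfit(np.log10(cfu_per_ml), rfu, 1)
    predicted = slope * np.log10(cfu_per_ml) + intercept
    r = np.corrcoef(rfu, predicted)[0, 1]
    print(f"slope = {slope:.1f} RFU per log10(CFU/ml), r^2 = {r**2:.3f}")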
In contrast to the detection of cytokines, the sample matrix did influence the detectability of S. pneumoniae, as signals were reduced when the analyte was applied in saliva in comparison to buffer as matrix. These differences were statistically significant for nearly all bacterial concentrations measured in combination with capture antibody amounts of 15 ng/spot and 20 ng/spot, and for the lower bacterial amounts captured with 5 ng antibody/spot (p < 0.05, unpaired t-test with Welch's correction). However, signals did not differ significantly between saliva and PBST at any of the bacterial concentrations when 10 ng capture antibody per spot was used, which confirms this coating concentration as optimal for the assay set-up.
Detection of disease markers in a quadriplex chamber microfluidic device
The next steps aimed to transfer the assays from the open-well format to the microfluidic device. The first experiments were conducted with the quadriplex chamber system (Fig. 1A). For parallel detection of four analytes in this device, two different reaction set-ups were tested. In each case, four arrays of capture antibodies (duplicate spots for each analyte to be tested, resulting in eight spots per reaction area) were immobilized on the glass surface in the predefined reaction areas (Fig. 4A). After mounting the slides into the reaction device, two different set-ups were chosen to gain maximum information on potential cross-reactivities and interferences between the different analytes and detection reagents: either each reaction chamber received a saliva sample spiked with one individual analyte only, and all four chambers were subsequently treated identically for analyte detection with a mixture of all four anti-analyte antibodies (Fig. 4B), or all four reaction chambers were incubated with saliva samples spiked with a mixture of the four different analytes, and analyte detection was achieved with one specific anti-analyte antibody in each of the four reaction chambers (Fig. 4C). All endogenous analytes, which had been added to the saliva matrix at a concentration of 10 ng/ml or 25 ng/ml (TNF-α in set-up 2), were readily detectable in either setup, with signals well above background and signal-to-noise ratios around 5. Thus, even lower spiking concentrations such as 1 ng/ml should be feasible. This would translate into limits of detection (LODs) of about 40-100 pM analyte, which is in accordance with previous reports where LODs in the picomolar to low nanomolar range were observed for fluoroimmunoassays (FIA) conducted on plain surfaces 24,25.
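The stated pM range follows from converting the 1 ng/ml mass concentration into a molar one using the approximate monomer masses of the three cytokines; the molecular weights below are rounded literature values used purely for this back-of-the-envelope check.

    # Convert 1 ng/ml to molarity: c[mol/L] = (g/L) / (g/mol).
    approx_mw_kda = {"IL-8": 8.4, "TNF-alpha": 17.4, "IL-6": 21.0}  # rounded monomer masses

    grams_per_liter = 1e-9 * 1000  # 1 ng/ml equals 1e-6 g/L
    for cytokine, mw_kda in approx_mw_kda.items():
        molar = grams_per_liter / (mw_kda * 1000.0)
        print(f"{cytokine}: {molar * 1e12:.0f} pM")  # roughly 50-120 pM at 1 ng/ml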
For S. pneumoniae, the signal was rather poor in saliva when compared to phosphate-buffered saline, perhaps due to masking of capture and detection antibody binding sites by saliva-derived anti-S. pneumoniae immunoglobulins. Cross-reactivities of the detection antibodies with the different analytes were minimal, and despite the high spiking concentrations, no false-positive signals for other analytes were observed.
Quality control I: fluidic and detection reagent robustness
The sturdiness of assays in a branched channel system was investigated using the advanced 3D printed fluidic device (Fig. 1C) with four chamber structures connected to a single sample inlet port (Fig. 5A).
Measurement results in the four chambers were comparable; differences between the mean signal intensities obtained in the different chambers were not significant (Fig. 5B; ANOVA, Kruskal-Wallis test, Dunn's multiple comparisons test), indicating that our design ensures simultaneous and uniform filling of the chambers from one fluid inlet port onwards. This is of special concern with regard to a later automatization of the fluidic system and an important requirement for further development of the device towards POC or home use.
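A comparison of this kind can be reproduced with a standard Kruskal-Wallis test, as sketched below; the per-chamber intensity readings are invented placeholders, and a Dunn's post-hoc test would require an additional package such as scikit-posthocs.

    from scipy import stats

    # Hypothetical replicate RFU readings from the four parallel chambers.
    chamber_1 = [510, 498, 523]
    chamber_2 = [505, 517, 495]
    chamber_3 = [489, 512, 508]
    chamber_4 = [501, 520, 497]

    h, p = stats.kruskal(chamber_1, chamber_2, chamber_3, chamber_4)
    print(f"H = {h:.2f}, p = {p:.3f}")  # p > 0.05 would indicate uniform filling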
Quality control II: detection limits for biomarkers in patient samples
To validate the biomarker detection system under real-life conditions, we analyzed individual human saliva samples for the content of the endogenous biomarkers in our microfluidic device. Samples were obtained from a study cohort that enrolled patients with either asthma or COPD, healthy individuals and people suffering from an acute respiratory infection. None of the patients with asthma or COPD exhibited an acute disease exacerbation at the time of sampling. Nine saliva samples were arbitrarily selected from each group and not treated further. To assess matrix effects, a tenth sample of each group was spiked with the recombinant cytokines used before, at a nominal concentration of 1 ng/ml each. In order to obtain reference values for the cytokines IL-6, IL-8 and TNF-α in human saliva, all samples were analyzed using an enhanced cytometric bead array (eCBA) (see Supporting Information for a listing of individual values for all samples).
In the spiked saliva samples, the concentration of IL-8 determined by eCBA was in all cases higher than the added amount of 1 ng/ml, most likely due to the presence of endogenous IL-8 in the sample. In contrast, the measured concentrations of IL-6 and TNF-α were much lower, displaying values between 12 and 40% of the added amount. Diminished detectability in saliva has been reported for these two cytokines in previous studies, with recovery rates of 50-80% for IL-6 and of about 44% for TNF-α 26,27. While this might indicate that the recombinant proteins used for spiking are not as traceable in the samples as their human counterparts, it can also signify that they are degraded or masked in saliva.
In the non-spiked saliva samples, the concentrations of endogenous IL-6 and TNF-α were very low (in nearly all samples < 50 pg/ml), well below the levels detectable by the fluoroimmunoassay setup in our experimental devices. IL-8 concentrations in non-spiked saliva samples were considerably higher, ranging from 0.1 ng/ml to nearly 20 ng/ml. The highest amounts of IL-8 were in general present in saliva from COPD patients (Table 1).
Notably, significant differences were not detectable between healthy and diseased groups for any of the investigated salivary cytokines; in fact, a very high variability in cytokine levels existed between individuals within each group, which is illustrated by the high coefficient of variation (CV). This underscores the necessity to determine individual baselines for each patient, which translates into the need for routine use of such a point-of-care device by patients with asthma and COPD. In order to find out whether our current setup provides sufficient sensitivity, four patient samples containing high cytokine levels (one from the asthma group, two from the COPD group and one spiked sample) were analyzed in the quadriplex fluidic device. IL-6 and TNF-α were detectable in the spiked sample only, whereas IL-8 was detected in all samples (Fig. 6).
Overall, our device did not reach eCBA sensitivity. The better performance of the bead-based assay may be explained by the larger surface area of a spherical carrier compared to a flat spot, and by the higher quantum yield and extinction coefficient of the eCBA fluorophore (R-phycoerythrin) compared to the Alexa 680 which we used. This might result in a lower sensitivity of our setting compared to the eCBA, and might explain why only IL-8, whose concentration in saliva is more than tenfold higher than that of IL-6 and TNF-α, was detectable in non-spiked samples. It is therefore mandatory to improve the sensitivity of our system so that it can be used in future POC devices for the detection of IL-6 and TNF-α in saliva as well. While the incorporation of a signal amplification step, employing e.g. catalyzed reporter deposition 28 or cation exchange reactions in ionic nanocrystals 29,30 for fluorescence enhancement, or the use of high quantum yield fluorophores such as quantum dots 31, is a worthwhile consideration, these changes would require substantial modifications of the assay set-up and the detection reagents used. Ultimately, the choice of the readout system is likely the most crucial step. In the recent past, advances in both sensitive optical and electronic detection systems have been reported. On the optical side, femtomolar concentrations (1 pg/ml) of IL-6 were detectable by combining drop-coating deposition Raman spectroscopy and graphene-enhanced Raman spectroscopy 32, and even lower, zeptomolar concentrations of serum IL-6 were detected by evanescent field-enhanced fluorescence imaging 33. Salivary IL-8 could be sensed fluorescently with a confocal optics based sensor at low femtomolar concentration 34. Yet, although of impressive sensitivity, such optical sensorics still require extensive readout hardware, which is not practical for implementation in a POC system.
As yet, none of the reported high-end read-out designs, whether optical or electronic, combines such sensitivity with multiplexing and the use of saliva as sample matrix. Once moderately sized and priced detection hardware becomes available, it can be merged with the immunochemical layout and a miniaturized fluidic design as described here by us. Such sensor systems shall allow the creation of easy-to-use point-of-care and point-of-need analytical devices for the detection of upcoming exacerbations in patients with asthma and COPD.
Conclusions
This work provides a proof-of-concept for multiplex saliva cytokine diagnostics that shall be suitable for the detection and prediction of upcoming exacerbations of stable asthma and COPD. It reveals the constraints and demands for next generation analytical systems suited for salivary cytokine monitoring. The clinical finding that individual variation in salivary cytokine levels is higher than expected, preventing group-based comparisons of different health conditions, calls for personal cytokine baselines and thus for translation of such devices into cheap and easy-to-use point-of-need test systems which a patient can use routinely at home.
Figure 1. Design of the microfluidic reaction device. (A) Device version 1: four chambers loaded/processed individually; (B) scheme of fluid flow and microscopic top view of an individual analysis chamber (grayscale image); (C) device version 2: four chambers loaded/processed in parallel; (D) demonstrator units of fluidic devices version 1 and 2, equipped with syringes for manual operation.
Figure 2. Detection of endogenous biomarkers by sandwich immunoassay. (A) Slide arrays processed in open well format. (B) Measurement of IL-6, IL-8 and TNF-α at different concentrations in undiluted or diluted saliva (N = 3).
Figure 3. Detection of model pathogen S. pneumoniae. (A) Slide arrays processed in open well format. (B, C) Measurement of increasing numbers of bacteria, captured by different amounts of antibody in individual spots. Analyte was applied either in PBST (B) or in saliva (C) (each N = 3). Graphs in the right panel depict results of linear regression analysis assessing the relationship between fluorescence read-out (relative fluorescence units, RFU) and analyte concentration in the samples. Note that the highest concentration of bacteria (1 × 10^8 CFU/ml) has been omitted from all regression analyses.
Figure 4. Detection of markers on glass slides using a microfluidic device. (A) Slide with four reaction arrays carrying duplicate spots of capture antibodies for the detection of four different analytes. (B) Detection of analytes in individually spiked saliva samples (analyte concentration: 10 ng/ml) using a mixture of biotinylated detection antibodies. (C) Detection of analytes in mixed spiked saliva samples (IL-6 & IL-8: 10 ng/ml, TNF-α: 25 ng/ml, S. pneumoniae: 2 × 10^5 CFU/ml) using individual biotinylated detection antibodies in each reaction chamber.
Figure 5. Measurement of IL-8 in a fluidic device with 4 parallel channels attached to a single sample inlet structure (A). The measurements of IL-8 are not significantly different (one-way ANOVA, Kruskal-Wallis test, Dunn's multiple comparisons test) in the four channels (B).
Figure 6. Detection of endogenous biomarkers IL-6, IL-8 and TNF-α in non-spiked saliva samples of selected asthma and COPD patients, using the quadriplex fluidic device. One saliva sample spiked with IL-6, IL-8 and TNF-α, at 10 ng/ml each, was used as control.
Table 1. Salivary cytokine levels in humans with inflammatory airway disorders versus healthy individuals. For each cytokine, minimum and maximum values as well as the coefficient of variation (CV) in each group are given. No statistically significant differences were found between the health status groups (p > 0.05; ANOVA, Kruskal-Wallis tests); SD, standard deviation; CV, coefficient of variation.
|
v3-fos-license
|
2019-06-27T16:22:14.978Z
|
2019-06-27T00:00:00.000
|
195656096
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3389/fimmu.2019.01475",
"pdf_hash": "d95a07116912ffff8bf8a38187a626d2a193db0e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45503",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "d95a07116912ffff8bf8a38187a626d2a193db0e",
"year": 2019
}
|
pes2o/s2orc
|
Plasma MicroRNAs in Established Rheumatoid Arthritis Relate to Adiposity and Altered Plasma and Skeletal Muscle Cytokine and Metabolic Profiles
Background: MicroRNAs have been implicated in the pathogenesis of rheumatoid arthritis (RA), obesity, and altered metabolism. Although RA is associated with both obesity and altered metabolism, expression of RA-related microRNA in the setting of these cardiometabolic comorbidities is unclear. Our objective was to determine relationships between six RA-related microRNAs and RA disease activity, inflammation, body composition, and metabolic function. Methods: Expression of plasma miR-21, miR-23b, miR-27a, miR-143, miR-146a, and miR-223 was measured in 48 persons with seropositive and/or erosive RA (mean DAS-28-ESR 3.0, SD 1.4) and 23 age-, sex-, and BMI-matched healthy controls. Disease activity in RA was assessed by DAS-28-ESR. Plasma cytokine concentrations were determined by ELISA. Body composition was assessed using CT scan to determine central and muscle adipose and thigh muscle tissue size and tissue density. Plasma and skeletal muscle acylcarnitine, amino acid, and organic acid metabolites were measured via mass-spectroscopy. Plasma lipoproteins were measured via nuclear magnetic resonance (NMR) spectroscopy. Spearman correlations were used to assess relationships for microRNA with inflammation and cardiometabolic measures. RA and control associations were compared using Fisher transformations. Results: Among RA subjects, plasma miR-143 was associated with plasma IL-6 and IL-8. No other RA microRNA was positively associated with disease activity or inflammatory markers. In RA, microRNA expression was associated with adiposity, both visceral adiposity (miR-146a, miR-21, miR-23b, and miR-27a) and thigh intra-muscular adiposity (miR-146a and miR-223). RA miR-146a was associated with greater concentrations of cardiometabolic risk markers (plasma short-chain dicarboxyl/hydroxyl acylcarnitines, triglycerides, large VLDL particles, and small HDL particles) and lower concentrations of muscle energy substrates (long-chain acylcarnitines and pyruvate). Despite RA and controls having similar microRNA levels, RA and controls differed in magnitude and direction for several associations with cytokines and plasma and skeletal muscle metabolic intermediates. Conclusion: Most microRNAs thought to be associated with RA disease activity and inflammation were more reflective of RA adiposity and impaired metabolism. These associations show that microRNAs in RA may serve as an epigenetic link between RA inflammation and cardiometabolic comorbidities.
INTRODUCTION
MicroRNAs (miRNAs) are small, non-coding RNAs, ∼22 nucleotides long, that regulate post-transcriptional gene expression (1). miRNAs are synthesized by multiple cells and tissues. While miRNA can be passively released upon injury, active release of miRNA in vesicles or exosomes allows miRNA to communicate in autocrine and paracrine fashions. Upon cellular uptake, miRNAs repress protein synthesis by cleaving or blocking translation of target mRNA (2). Individual miRNAs can have one hundred or more mRNA targets across multiple cells and organ systems, while an individual mRNA can be bound and repressed by many miRNAs (3). Altered miRNA expression is associated with many disease states (4) and has been implicated in the pathogenesis of autoimmune disease, including rheumatoid arthritis (RA) (5,6). Thus, miRNAs have been proposed as both RA biomarkers and therapeutic targets (7,8).
In addition to autoimmune disease, miRNAs contribute to the pathogenesis of sarcopenia (9) and obesity (10). However, evaluation of miRNAs in co-morbid disease states, including RA and its associated comorbidities, has received less attention. Despite revolutionary progress in the management of RA inflammation over the past few decades, patients with RA are still at high risk for sarcopenic obesity (decreased skeletal muscle mass with increased fat mass), which contributes to increased risks of disability, cardiovascular disease (CVD), and mortality (11,12). RA development, severity, and poor treatment responses are tied to obesity (13). Also, RA is associated with sarcopenia, altered skeletal muscle remodeling, and impaired oxidative metabolism (14,15). While these metabolic impairments in RA are likely driven in part by epigenetic dysregulation (16), it is unclear whether RA-related miRNAs contribute to the RA comorbidities of obesity and altered metabolism. Additionally, it is unclear how complex associations between miRNAs, RA inflammation, obesity, and metabolism impact the potential for miRNAs to be used as biomarkers and therapeutic targets in RA.
In the present study, we measured plasma expression of six miRNAs proposed to be biomarkers of RA inflammation: miR-21 (17), miR-23b (18), miR-27a (19), miR-143 (20), miR-146a (21), and miR-223 (22). Here, we evaluated relationships between miRNA expression and measures of inflammation, adiposity, and altered metabolism. To better understand RA miRNA specific effects, we then compared RA plasma expression of each miRNA to age-, gender-, and race-matched healthy controls. We hypothesized that some miRNAs would reflect RA disease activity, while others would better reflect obesity and metabolic alterations.
Design and Participants
In a cross-sectional design, patients with RA and matched controls were recruited to participate as previously reported (15). RA subjects (n = 48) met American College of Rheumatology 1987 criteria (23); were seropositive (positive rheumatoid factor and/or anti-cyclic citrullinated peptide antibody) or had evidence of erosions on hand or foot imaging; had no medication changes within 3 months of enrollment; and were using ≤5 mg prednisone daily. Healthy control subjects (n = 23) without a previous diagnosis of inflammatory arthritis or current joint pain were matched to RA subjects by gender, race, age within 3 years, and body mass index (BMI) within 3 kg/m². Subjects with pregnancy, type 2 diabetes mellitus, or known coronary artery disease were excluded. This study complied with the Helsinki Declaration and was approved by the Duke University Institutional Review Board.
Outcome Measures
All subjects underwent assessments as previously reported and described (15,24), which included questionnaires, rheumatologic physical exam, fasting phlebotomy, computed tomography (CT) imaging of abdomen and thigh, and vastus lateralis muscle biopsies. RA disease activity was measured by the Disease Activity Score in 28 joints (DAS28) with erythrocyte sedimentation rate (ESR) (25). Plasma inflammatory marker and cytokine concentrations were determined by immunoassay (24). CT scan analyses were performed to determine central and muscle adipose and thigh muscle tissue size and tissue density (greater tissue density is indicative of less inter-muscular adipose tissue) (24). Standard Bergstrom needle muscle biopsies were performed on the vastus lateralis (26). All plasma and muscle tissue samples were stored at −80 °C until analyses.
Statistical Analysis
Participant characteristics (Table 1) and plasma miRNAs (Figure 1) were compared in RA vs. control subjects using two-sample t-tests or Wilcoxon rank-sum tests, depending on whether the data conformed to a normal distribution. Analyses of the metabolomic data of the combined RA and control subjects were performed separately for plasma (Table 2) and skeletal muscle (Table 3). Briefly, metabolic intermediates were standardized and reduced using principal components analysis (PCA) with varimax rotation to five factors, each with an eigenvalue >1.0. For each factor, individual metabolites with a factor load >0.4 were reported as factor components. Factor scores were computed for each individual, and correlations between factor scores and clinical assessments were evaluated using Spearman's rho. Strengths of associations for the two groups were compared with Fisher r-to-z transformations (32). All statistical analyses, besides Fisher transformations, were performed using SAS 9.4 (SAS, Cary, NC). Statistical significance was set at P-value <0.05. Data are available from the corresponding author upon a reasonable request.
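For readers who wish to reproduce the comparison of correlation strengths between groups, the following is a minimal sketch of the Fisher r-to-z procedure described above. It is a hypothetical illustration, not the study analysis: the variable names and toy data are invented, and Python with NumPy/SciPy is used in place of SAS.

```python
# Illustrative sketch of the Fisher r-to-z comparison of correlation
# strengths described above. Toy data only; the study used SAS 9.4.
import numpy as np
from scipy.stats import spearmanr, norm

def fisher_r_to_z_test(r1, n1, r2, n2):
    """Two-sided test comparing two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical miR-143 vs. IL-8 data: RA (n = 48) and controls (n = 23)
rng = np.random.default_rng(0)
ra_mir, ra_il8 = rng.normal(size=48), rng.normal(size=48)
ct_mir, ct_il8 = rng.normal(size=23), rng.normal(size=23)

r_ra, _ = spearmanr(ra_mir, ra_il8)
r_ct, _ = spearmanr(ct_mir, ct_il8)
z, p = fisher_r_to_z_test(r_ra, 48, r_ct, 23)
print(f"RA rho = {r_ra:.2f}, control rho = {r_ct:.2f}, z = {z:.2f}, p = {p:.3f}")
```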
Associations Between MicroRNAs and Markers of RA Disease Activity, Inflammation, Adiposity, and Altered Metabolism in RA
In RA, plasma miR-143 was positively related to RA-related systemic inflammation (plasma IL-6 and IL-8) (Table 4); however, no miRNA was significantly associated with disease activity (DAS-28; range 0.6-6.4), ESR, or plasma high-sensitivity C-reactive protein. miR-21 was associated with less systemic inflammation, greater adiposity, and an altered, pro-atherogenic plasma lipoprotein profile. miR-146a was associated with greater adiposity, pro-atherogenic lipoproteins, and altered plasma (Table 2) and skeletal muscle metabolic intermediates (Table 3). miR-23b and miR-27a were predominantly associated with adiposity, while miR-223 was predominantly associated with increased thigh muscle fat and altered plasma metabolite and lipoprotein profiles, namely small HDL particles, plasma short- and medium-chain acylcarnitines, and non-branched-chain amino acids.
MicroRNA Associations With Markers of Inflammation and Altered Metabolism Are Different in RA Compared to Controls
There were no differences in RA and control plasma miRNA expression levels (Figure 1), but RA and controls differed in miRNA associations. In controls, miR-143 was negatively correlated with plasma IL-8 (Supplemental Table 1; r = −0.34), which significantly differed from the positive association in RA (r = 0.33; Fisher r-to-z P < 0.05). In controls, miR-146a was positively correlated with plasma branched-chain amino acids (Table 2) (Supplemental Table 1; r = 0.45) and skeletal muscle long-chain acylcarnitines and pyruvate (Table 3) (Supplemental Table 1; r = 0.08), differing significantly from the negative associations in RA (r = −0.09 and −0.42, respectively; Fisher r-to-z P < 0.05 for both).
DISCUSSION
In this cohort of established RA, several plasma miRNAs showed unique patterns of association with systemic pro-inflammatory cytokines, adiposity, and impaired metabolism. Among six miRNAs selected based on prior associations with RA disease activity and/or inflammation, only miR-143 was reflective of RA systemic inflammation. Rather, plasma miRNAs in our RA cohort associated with measures of adiposity and metabolic alteration. These unique and unexpected associations, along with multiple associations that significantly differed from those of matched controls, highlight the complexity of miRNA functions, especially as they contribute to RA and associated comorbidities. In contrast to previous reports in the literature (7), we did not find significant associations between inflammation and miR-146a. Inflammation is expected to induce miR-146a expression as part of a feedback mechanism to down-regulate the inflammatory response, including acute inflammation as well as Th1-mediated interferon responses (33-35). We hypothesize that our findings reflect differences in the inflammatory signatures of the RA patients with long-standing disease in our cohort as opposed to those with early, acute inflammatory disease. In our cohort of established RA, miR-146a was instead associated with multiple measures of adiposity as well as plasma short-chain dicarboxyl/hydroxyl acylcarnitines, strong markers of myocardial infarction and coronary artery disease risk (36). Plasma miR-146a was also differentially associated with plasma amino acids, as well as skeletal muscle long-chain acylcarnitines and pyruvate, both key substrates for muscle energy generation, compared to control subjects. These findings are supportive of miR-146a's role in modulating systemic metabolic function, which appears to be altered in RA. miR-146a mechanistically down-regulates TNF-α-induced adipogenesis (37,38) and oxidative metabolism (39). Our findings suggest that miR-146a expression is appropriate (i.e., increased in response to limit adipogenesis), but is unable to adequately regulate specific metabolic processes due to altered interaction with target mRNAs. Consistent with this hypothesis, miR-146a polymorphisms are not more common in RA (40,41); rather, RA susceptibility is associated with polymorphisms in known target mRNA binding sites (41).
Alterations likely occur in other RA miRNAs, including miR-143 and miR-223. For example, miR-143 functions to induce inflammation through activation of NF-κB, and other studies show cellular expression of miR-143 is increased in RA synovial tissue (20,42). In our cohort, miR-143 was positively associated with the plasma inflammatory cytokines IL-6 and IL-8 in RA but negatively associated with plasma IL-8 in controls, suggesting that in RA the pro-inflammatory response stimulated by miR-143 is overactive. In contrast, while miR-223 down-regulates inflammation through multiple mechanisms (43,44), miR-223 expression is, counterintuitively, increased in RA at multiple sites, including PBMCs, synovial tissue, and plasma (7). In our RA cohort, plasma miR-223 expression did not differ from controls or associate with RA disease activity or inflammation.
miR-223 is also associated with obesity (45) and with HDL molecules, which transport miR-223 for lipid metabolic regulatory functions (46). We found RA miR-223 associated with thigh intramuscular fat, small HDL particles, plasma short- and medium-chain acylcarnitines, and non-branched-chain amino acids. Thus, miR-223 alterations may contribute to incomplete systemic beta-oxidation and amino acid catabolism, reflecting an obesity-related mitochondrial lipid overload state (47). Further study regarding the effects of these miRNAs on metabolic pathways is warranted.
While this study helps to inform how miRNAs likely exert effects on multiple biologic pathways crucial to RA-associated autoimmunity and impaired metabolism, the findings should be viewed in the context of a few key limitations. First, we did not find any significant differences in expression of candidate plasma miRNAs between established RA and control subjects; in contrast, in larger cross-sectional cohorts, there was differential plasma expression of miR-146a and miR-223 between cohorts (7,48). Whether these findings are the result of the heterogeneity of the RA subjects participating at different investigational sites, low overall RA disease activity, small sample size, or a lack of younger, early RA subjects in our cohort is unclear. Interestingly, previous research shows that epigenetic signatures differ in early vs. longstanding RA (49). Second, the focus of this exploratory study was to identify possible associations between plasma miRNAs and clinical, inflammatory, and metabolic factors to guide further in-depth research; thus, no direct causal pathways were evaluated. Third, we did not choose miRNAs based on microarray studies, and thus our analyses were limited to the individually measured miRNAs in this study. Finally, we measured miRNA expression only in plasma, and not in tissue-specific sites, such as immune cells, adipose tissue, or skeletal muscle. We note that quantification of circulating miRNAs may not accurately reflect expression in the tissue (i.e., skeletal muscle) (50). We did not measure the mode of miRNA packaging, whether into microvesicles or exosomes, or attached to lipoproteins or other circulating proteins. Knowing the site of miRNA expression, mode of packaging, and target tissue or cellular site of mRNA regulatory function may be helpful in order to better understand these complex miRNA effector pathways.
In conclusion, multiple miRNAs in RA were associated with inflammatory and metabolic pathways in a direction opposite to both their expected function and the comparative findings in controls. In contrast to previous studies, we found that RA and matched controls had similar amounts of plasma miR-21, miR-23b, miR-27a, miR-143, miR-146a, and miR-223; and only miR-143 was positively associated with inflammation in RA. Conversely, in RA, miR-146a and miR-223 were prominently associated with age, obesity, plasma and muscle metabolic intermediates, and plasma lipoproteins. Taken together, these findings show that, in the context of RA, miRNAs influence multiple inflammatory and metabolic pathways across a variety of cells and organ systems. These RA miRNA associations with adipose tissue and metabolic alterations may provide insight into epigenetic connections whereby chronic inflammation leads to the common RA comorbidities of obesity and altered metabolism. Further research is needed to clarify the multitude of effects that miRNAs exert in order to utilize miRNAs as diagnostic tools and disease-modifying therapies in RA.
DATA AVAILABILITY
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Duke University Medical Center Institutional Review Board with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Duke University Medical Center Institutional Review Board (IRB no. Pro00064057).
AUTHOR CONTRIBUTIONS
BA, CC, VK, WK, and KH conceived and designed the study and experimental approach. KH performed the skeletal muscle biopsies. CC completed the miRNA analyses. OI and TK completed the metabolomic analyses. MC completed the lipoprotein analyses. BA and KH performed statistical analyses. BA wrote the manuscript. All authors contributed to the writing and approval of the final manuscript.
FUNDING
This work was supported by NIH/NIAMS K23AR054904 and the Rauch Family Scholarship.
Harnessing the Endogenous 2μ Plasmid of Saccharomyces cerevisiae for Pathway Construction
pRS episomal plasmids are widely used in Saccharomyces cerevisiae, owing to their easy genetic manipulations and high plasmid copy numbers (PCNs). Nevertheless, their broader application is hampered by the instability of the pRS plasmids. In this study, we designed an episomal plasmid based on the endogenous 2μ plasmid with both improved stability and increased PCN, naming it p2μM, a 2μ-modified plasmid. In the p2μM plasmid, an insertion site between the REP1 promoter and RAF1 promoter was identified, where the replication origin (ori) of Escherichia coli and a selection marker gene of S. cerevisiae were inserted. As a proof of concept, the tyrosol biosynthetic pathway was constructed in the p2μM plasmid and in a pRS plasmid (pRS423). As a result, the p2μM plasmid presented a lower plasmid loss rate than that of pRS423. Furthermore, higher tyrosol titers were achieved in S. cerevisiae harboring the p2μM plasmid carrying the tyrosol pathway-related genes. Our study provided an improved genetic manipulation tool in S. cerevisiae for metabolic engineering applications, which may be widely applied for valuable product biosynthesis in yeast.
(2) Yeast centromere plasmid (YCp) contains an autonomously replicating sequence (ARS) and a yeast centromere (CEN) (Chlebowicz-Śledziewska and Śledziewski, 1985;Lee et al., 2016), which has high mitotic stability but low copy number. (3) Yeast episomal plasmid (YEp) harbors a 2µ plasmid replication origin and a partitioning locus (STB or REP3) (Murray and Cesareni, 1986), which has high copy numbers but low stability (Hohnholz et al., 2017). In summary, plasmids with stable expression usually cannot provide high copy number, while plasmids with high copy number are easily lost after long-term fermentation in nutrient medium. Therefore, a stable plasmid system with high copy number is urgently needed.
Yeast endogenous 2µ plasmid is a cryptic nuclear plasmid (Stevens and Moustacchi, 1971;Petes and Williamson, 1975), which confers no phenotype beyond the ability to maintain itself at a high copy number of 60-330 copies per cell with the help of FLP-mediated recombination (Gerbaud et al., 1979;Murray and Cesareni, 1986;Reider Apel et al., 2017). The 2µ plasmid is a circular DNA plasmid with a size of 6,318 bp and a circumference of about 2 µm (Hartley and Donelson, 1980).
In the 2µ plasmid, an ∼600-bp DNA sequence named STB (Murray and Cesareni, 1986) is, together with the trans-acting ORFs REP1 and REP2 (Kikuchi, 1983), essential for the faithful partitioning of the 2µ plasmid. In the absence of STB, 2µ-based plasmids are rapidly lost due to extreme mother bias during mitosis. In addition, the 2µ plasmid codes for four proteins (REP1, REP2, RAF1, and FLP) that are vital for its own survival. REP1 and REP2 are the primary factors responsible for 2µ plasmid stability (Jayaram et al., 1983). RAF1 interacts with both REP1 and REP2 independently and blocks their interaction, thus reducing the cellular concentration of the REP1-REP2 complex, which acts as a repressor of the REP1, FLP, and RAF1 genes. This blockage results in reduced plasmid stability and increased plasmid copy number (PCN). Both the deletion and the overexpression of RAF1 have a similar effect on plasmid stability and copy number, resulting in an increased PCN and decreased plasmid stability (Rizvi et al., 2018). FLP is a conservative site-specific recombinase (Sadowski, 1995). The flip of one half of the 2µ plasmid with respect to the other is predominantly FLP dependent (Gerbaud et al., 1979;Broach and Hicks, 1980). FLP-mediated recombination is also believed to be responsible for the interconversion of plasmid replication between the theta and rolling-circle modes of replication.
Many researchers have taken advantage of the high PCN and stable inheritance of the 2µ plasmid to directly transform the 2µ plasmid as an expression tool. Ludwig et al. selected the HpaI restriction site of STB as the insertion site (Ludwig and Bruschi, 1991), but the loss of STB led to a high loss rate of the plasmid (Murray and Szostak, 1983;McQuaid et al., 2019). Misumi et al. (2018) inserted a yeast promoter, terminator, and the nutritional deficiency marker gene leu2 between RAF1 and STB and called this plasmid YHp. The application of YHp was restricted to [cir⁰] strains (Misumi et al., 2018). Zeng et al. (2021) chose two sites as the targets for insertion of a heterogeneous DNA fragment: one is downstream of RAF1, while the other is at the end of REP2. The derivative plasmids generated by inserting the same target gene at these two sites had lower plasmid loss rates and better expression levels than the conventional 2µ-based plasmid pRS425 (Zeng et al., 2021). To our knowledge, no commonly used methods have been developed for laboratory strains carrying the wild-type (WT) 2µ plasmid (Supplementary Figure 1A).
Based on the previous studies described above (Hartley and Donelson, 1980;Jayaram et al., 1983;Rizvi et al., 2018;McQuaid et al., 2019), we identified a new insertion site between the REP1 promoter and RAF1 promoter (Supplementary Figure 1B). The pBR322ori, the KanMX selection marker gene, and three endonuclease sites XhoI/PmeI/NotI were inserted at this site. The 2µ-modified plasmid was named p2µM. In plasmid stability measurements, the p2µM plasmid system was more stable than the pRS423 plasmid system. To test the application of p2µM in the biosynthesis of natural products, the tyrosol [a phenethyl alcohol derivative that has antioxidant and anti-inflammatory effects (Choe et al., 2012)] pathway-related genes were introduced into p2µM. The results confirmed that the stability and expression properties of p2µM were better than those of the pRS423M plasmid. Our study provided an improved genetic manipulation tool in S. cerevisiae for metabolic engineering applications, and it may be widely applied in valuable natural product biosynthesis in yeast.
DESIGN AND CONSTRUCTION OF ENDOGENOUS 2µ-BASED PLASMIDS IN VITRO
In order to construct a stable endogenous 2µ-based plasmid and apply it to DNA expression and pathway construction, a proper insertion site had to be selected for inserting essential elements and heterogeneous DNA fragments. Besides the known genes and sequences, there are still uncharacterized transcripts transcribed from the 2µ plasmid (Rizvi et al., 2017). By analyzing the elements related to stability, we found that the promoters of RAF1 and REP1 on the endogenous 2µ plasmid are adjacent, with no other element between them. Thus, this site was selected as the insertion site (Supplementary Figure 1A). To edit the endogenous 2µ plasmid into a better genetic manipulation tool, the replication origin of Escherichia coli, combined with a G418 resistance marker, was inserted to construct p2µM (Supplementary Figure 1B).
To characterize the properties of the p2µM plasmid, plasmid pRS423 equipped with G418 resistance was chosen as a control, generating plasmid pRS423M (Supplementary Figure 1C). Plasmid pRS423 is also commonly used in yeast among the YEp pRS42 series plasmids due to its relatively high stability and copy number (Christianson et al., 1992).
Tyrosol is mainly extracted from olive oil, wine, and plant tissues. It has proven to be an effective cellular antioxidant and is widely used in the food and medicine industries (Benedetto et al., 2007;Karković Marković et al., 2019). Taking into account the impact of the size of the inserted fragment on the p2µM plasmid, we constructed three modules of different sizes using genes of the tyrosol biosynthetic pathway (Supplementary Figure 1D). The small module (mutation module, 3.8 kb) of ARO4K229L and ARO7G141S could efficiently relieve feedback inhibition and increase the production of tyrosol in S. cerevisiae (Liu H. et al., 2020); it was introduced to generate plasmid p2µM-ARO4K229L-ARO7G141S (p2µM-small-module). The rewiring module containing the pentose phosphate pathway genes TKL1 and RKI1 could tune the flux of the precursor pathway (Walfridsson et al., 1996;Kondo et al., 2004;Bera et al., 2011). The adjustment module containing ARO2 and ARO10 could adjust the shikimate pathway and the L-tyrosine branch by catalyzing the conversion of EPSP to chorismate and the decarboxylation of 4-HPP to 4-HPPA (Liu H. et al., 2020), respectively. The medium module (9.8 kb), composed of the rewiring module and the adjustment module, was overexpressed from the p2µM plasmid, resulting in plasmid p2µM-TKL1-RKI1-ARO10-ARO2 (p2µM-medium-module). Finally, the medium module was introduced into plasmid p2µM-small-module, resulting in plasmid p2µM-TKL1-RKI1-ARO10-ARO2-ARO4K229L-ARO7G141S (p2µM-large-module; the size of the large module was 13.6 kb). Then, these three modules were also inserted into the multiple cloning sites of plasmid pRS423M to generate pRS423M-small-module, pRS423M-medium-module, and pRS423M-large-module, collectively called pRS423M-based plasmids (Supplementary Figure 1F). The structures of the three modules are shown in Supplementary Figure 2.
DETERMINATION OF PLASMID STABILITY
Since the yeast endogenous 2µ plasmid shows high stability and copy number, we hypothesized that our p2µM plasmid would be more stable than the pRS423M plasmid. To test this hypothesis, the influence of the size of the inserted fragment on the stability of the p2µM plasmid was explored by measuring the plasmid loss rate. As shown in Supplementary Tables 1 and 2, the stabilities of the p2µM-based plasmids were significantly higher than those of the pRS423M-based plasmids. First, plasmids p2µM and pRS423M were transformed into S. cerevisiae strain CEN.PK2-1C, respectively. Then, the plasmid loss rates of the 10th, 20th, 40th, and 50th generation strains were tested in YPD without G418 and in YPD + G418 medium (Figures 1A,B). When the size of the inserted fragment was 0, the plasmid loss rates of plasmid p2µM in non-selective medium were 36.3 ± 6.0% for the 10th generation, 62.4 ± 3.3% for the 20th generation, 72.5 ± 7.9% for the 40th generation, and 85.7 ± 1.4% for the 50th generation, lower than those of the pRS423M plasmid (90.4 ± 2.9, 98.8 ± 0.9, 99.3 ± 0.2, and 99.9 ± 0.2%). Plasmid loss rates of p2µM in selective medium were 5.7 ± 1.3, 7.2 ± 0.7, 12.4 ± 0.8, and 27.1 ± 1.4% for each generation, which were much lower than those of pRS423M (17.8 ± 1.1, 31.4 ± 1.8, 74.8 ± 0.9, and 85.1 ± 2.2%).
Furthermore, the three p2µM-based plasmids of the experimental group and the three pRS423M-based plasmids of the control group mentioned above were transformed into strain CEN.PK2-1C, respectively. The results showed that the stabilities of p2µM-based plasmids were higher than those of pRS423M-based plasmids both in non-selective medium and in selective medium (Figures 1A,B). For non-selective medium, when the sizes of the inserted fragments were 3,842 and 9,821 bp, the plasmid loss rates of p2µM-based plasmids were 54.3 ± 8.5 and 71.4 ± 5.6% (the 10th generation), 87.9 ± 2.4 and 95.8 ± 1.3% (the 20th generation), 91.9 ± 1.0 and 96.9 ± 0.8% (the 40th generation), and 96.4 ± 0.9 and 98.7 ± 0.3% (the 50th generation), while the plasmid loss rates of pRS423M-based plasmids were 98.1 ± 1.3 and 97.2 ± 1.1% for the 10th generation, and the plasmids were all lost at the 20th generation (99.7 ± 0.4 and 100.0%). When the size of the inserted fragment increased to about 14 kb, the plasmid loss rate of the experimental group was 94.3 ± 1.2% for the 10th generation, but plasmids of the control group were almost all lost. For cultures grown in selective medium, when the fragment of 9,821 bp was introduced, 49.3 ± 2.5% of strains lost plasmid p2µM-medium-module, but almost all strains lost plasmid pRS423M-medium-module after fermentation for 50 generations (94.9 ± 1.3%). All strains lost plasmid pRS423M-large-module at the 40th generation (98.6 ± 1.2%); however, the plasmid loss rate of p2µM-large-module was merely 57.0 ± 1.9%. Plasmid loss in YPD + G418 medium was less than that in YPD medium without G418.
As shown in Figure 1C, supplementing antibiotics to the YPD + G418 medium every 10 generations could maintain lower plasmid loss rates. Plasmid loss rates of the 40th generation were greatly decreased after G418 was supplemented at the 38th generation (Figure 1D). The plasmid loss rates of the 40th generation were lower than those of the 20th generation, and the plasmid loss rates of p2µM-derived plasmids were still much lower than those of pRS423-derived plasmids.
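As a simple illustration of the replica-plating arithmetic behind the loss rates reported above, the sketch below assumes the conventional definition: colonies that grow on non-selective YPD but not on YPD + G418 are counted as having lost the marker-bearing plasmid. The colony counts are hypothetical, not data from this study.

```python
# Minimal sketch of the plasmid-loss-rate arithmetic, assuming the usual
# replica-plating definition; the colony counts below are invented.
def plasmid_loss_rate(colonies_nonselective: int, colonies_selective: int) -> float:
    """Percentage of cells that no longer carry the marker-bearing plasmid."""
    retained = colonies_selective / colonies_nonselective
    return (1.0 - retained) * 100.0

# Hypothetical counts for one culture sampled at several generations
for gen, (ypd, g418) in {10: (200, 127), 20: (200, 75), 40: (200, 55)}.items():
    print(f"generation {gen}: loss rate = {plasmid_loss_rate(ypd, g418):.1f}%")
```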
PLASMID p2µM APPLIED IN TYROSOL PRODUCTION
To demonstrate that p2µM could be applied for the optimization of natural product biosynthesis, the tyrosol biosynthetic pathway was chosen as an example. The WT strain CEN.PK2-1C was fermented in YPD medium. Engineered strains containing individual p2µM-based plasmids and pRS423M-based plasmids with different sizes of tyrosol biosynthesis-related modules were simultaneously fermented in both non-selective medium and selective medium.
As demonstrated in Figure 2A, after fermentation in YPD medium, tyrosol production of the WT strain was 45.11 ± 0.85 mg/L at the 20th generation and 48.53 ± 0.98 mg/L at the 40th generation. In non-selective YPD medium, strain CEN.PK2-1C with p2µM produced 39.39 ± 0.97 mg/L tyrosol after 20 generations and 44.78 ± 0.64 mg/L tyrosol after 40 generations (Figure 2B), which were lower than those of the WT strain. When plasmid p2µM-small-module was transformed into strain CEN.PK2-1C, the tyrosol production was 47.79 ± 0.64 mg/L at the 20th generation and 54.46 ± 0.21 mg/L at the 40th generation, 12.2% greater than that of the WT strain and 9.7% greater than that of the strain with pRS423M-small-module. Strain CEN.PK2-1C carrying plasmid p2µM-medium-module accumulated 50.59 ± 1.12 mg/L tyrosol after 40 generations of fermentation. In strain CEN.PK2-1C with p2µM-large-module, a tyrosol titer of 48.03 ± 0.45 mg/L was obtained, which was not as good as the WT strain but 7.3% higher than that of CEN.PK2-1C carrying p2µM. CEN.PK2-1C carrying plasmid pRS423M produced 35.99 ± 0.35 mg/L tyrosol at the 20th generation and 43.41 ± 0.94 mg/L tyrosol at the 40th generation, which were lower than those of the strain with p2µM and the WT strain. Tyrosol production in strain CEN.PK2-1C with pRS423M-medium-module and pRS423M-large-module at each generation was much lower than that of the strains carrying p2µM-based plasmids.
According to Figure 2C, after shake flask cultivation in YPD + G418 medium, the strain harboring p2µM generated a tyrosol titer of 44.75 ± 0.83 mg/L at the 20th generation. At the 40th generation, tyrosol production was 49.05 ± 0.90 mg/L, which was higher than that of the WT strain and of CEN.PK2-1C with p2µM fermented in non-selective medium; 71.11 ± 0.71 and 98.39 ± 0.41 mg/L tyrosol were produced by the strain containing p2µM-small-module after fermentation for 20 and 40 generations, respectively, which were much higher than that of CEN.PK2-1C with pRS423M-small-module (59.55 ± 0.16 mg/L). Tyrosol accumulated by the strain with p2µM-medium-module (47.71 ± 0.72 and 54.95 ± 0.50 mg/L) and p2µM-large-module (46.44 ± 0.65 and 50.20 ± 0.34 mg/L) after fermentation for 20 and 40 generations in selective medium was lower than that of the strain containing p2µM-small-module, but higher than that of CEN.PK2-1C with pRS423M-based plasmids. Strains carrying pRS423M produced 47.72 ± 0.18 mg/L tyrosol at the 40th generation, 2.8% lower than that of the strain with p2µM and 1.7% lower than that of the WT strain. The tyrosol yields of the strains containing plasmids pRS423M-small-module (59.55 ± 0.13 mg/L), pRS423M-medium-module (44.65 ± 1.46 mg/L), and pRS423M-large-module (25.64 ± 0.80 mg/L) at the 40th generation were all lower than those of the strains with p2µM-based plasmids carrying modules of the same size.
All results showed that the tyrosol yields of the strains with p2µM-based plasmids were higher than those of the strains with pRS423M-based plasmids both in non-selective medium and selective medium, which could be due to the instability of plasmid pRS423.
DISCUSSION
In this study, an endogenous 2µ-based expression vector with enhanced stability was developed in S. cerevisiae. The site between the RAF1 promoter and REP1 promoter on this plasmid was chosen as the insertion site for the gene of interest, which would not affect the functional elements and stability of the plasmid.
The plasmid loss rates were calculated for strains harboring plasmids with inserted fragments of different sizes by culturing in non-selective YPD medium and in YPD medium with selective pressure. After culturing without selective pressure for 40 generations, the loss rates of p2µM and pRS423M were about 73 and 100%, respectively. For plasmids containing modules of about 4 kb, the plasmid loss rates of p2µM-small-module and pRS423M-small-module in non-selective YPD medium were about 90 and 100%, respectively. All strains lost their plasmids after fermentation in YPD medium for 50 generations. After culturing in YPD + G418 medium for 50 generations, the plasmid loss rate of p2µM was about 27% and that of pRS423M was about 85%. Plasmid pRS423M-large-module was completely lost after 40 generations of cultivation, while merely 57% of plasmid p2µM-large-module was lost. Continuous supplementation of G418 in YPD + G418 medium could help maintain the stability of plasmids, especially for p2µM-based plasmids. The plasmid loss rate of p2µM-large-module after 40 generations of cultivation was about 31%, which was much lower than that of pRS423M-large-module (about 82%). Although selection pressure was conducive to the stable existence and inheritance of plasmids, a large number of pRS423M-based plasmids were lost during long-term fermentation. The results showed that the stabilities of the p2µM-based plasmids were higher than those of the pRS423M-based plasmids. It is estimated that an inserted fragment of 10 kb is acceptable for p2µM when there is no selection in the medium, and an inserted fragment of 14 kb is acceptable for p2µM under conditions with selection. Zeng et al. (2021) moved the essential gene TPI1 from the chromosome to the 2µ plasmid. With auxotrophic complementation of TPI1, the resulting plasmid pE2µRT could undergo cultivation for 90 generations without loss under non-selective conditions.
The tyrosol biosynthetic pathway was introduced to demonstrate that the expression level of the p2µM-based plasmids was superior to that of the controls. After 40 generations of shake flask cultivation in YPD medium, the tyrosol yield of strain CEN.PK2-1C carrying plasmid p2µM-small-module was 54.46 ± 0.21 mg/L, about 9.7% higher than that of CEN.PK2-1C with pRS423M-small-module (49.64 ± 0.71 mg/L). The tyrosol titer of CEN.PK2-1C with p2µM-medium-module was 82.0% higher than that of strains carrying pRS423M-medium-module. The yield of tyrosol harvested from strains with p2µM-large-module was about threefold higher than that from strains with pRS423M-large-module. However, strains containing the large module accumulated less tyrosol than strains containing the small module and medium module, which was probably due to the instability of p2µM containing the large module. Tyrosol production of the strain with p2µM-small-module at the 40th generation was 98.39 ± 0.41 mg/L under selective pressure, which was 80.7% greater than that of the strain with p2µM-small-module in non-selective medium and 65.2% higher than that of the strain with pRS423M-small-module in selective medium. The tyrosol yields of the strains containing plasmids pRS423M-medium-module and pRS423M-large-module at the 40th generation were all lower than those of the strains with p2µM-based plasmids carrying modules of the same size.
Taking these results into account, in order to improve the stability of an endogenous 2µ-based expression vector in yeast, an essential gene could be introduced into the plasmid while knocking out the same essential gene in the genome to ensure the persistence of the engineered endogenous 2µ plasmid in yeast (Zeng et al., 2021). In the future, researchers could apply the CRISPR/Cas9 system to directly integrate metabolic pathways into the endogenous 2µ plasmid with an essential gene in vivo (Dean-Johnson and Henry, 1989;Zheng et al., 1993;Wang et al., 2020;Yang et al., 2021). In summary, our endogenous 2µ-based expression vector p2µM has improved stability compared with the commonly used YEp pRS423, so it can be applied in S. cerevisiae for genetic manipulations.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
Long-term clinical effectiveness of a drug-coated balloon for in-stent restenosis in Femoropopliteal lesions
Background The short-term efficacy of paclitaxel-coated balloons (PCBs) has been established in femoropopliteal in-stent restenosis (ISR) lesions. The aim of this study was to compare 5-year clinical outcomes of patients with femoropopliteal ISR lesions undergoing percutaneous transluminal angioplasty (PTA) with and without PCB. Methods After 1:1 propensity score matching, we extracted 50 patients with femoropopliteal ISR lesions undergoing PTA with (n = 25) and without (n = 25) IN.PACT PCB (Medtronic, Minneapolis, MN) from 106 consecutive ISR patients treated in our hospital between 2009 and 2015. We compared the 5-year outcomes between PCB and non-PCB groups. The primary endpoint was the cumulative 5-year incidence of recurrent restenosis. All-cause mortality, target lesion revascularization (TLR) and unplanned major amputation were also assessed. Results The primary patency after PCB treatment at 5 years was significantly higher than the patency after non-PCB treatment (65.7% vs. 18.7%; hazard ratio [HR]: 6.11; 95% confidence intervals [CI]: 2.57–16.82; p < 0.001), as well as freedom from TLR (77.6% vs. 53.8%; HR: 3.55; 95% CI: 1.21–12.83; p = 0.020). All-cause mortality and unplanned major amputation rates did not significantly differ between the two groups. The Cox proportional hazard multivariate analysis showed that PCB was independently associated with preventing recurrent restenosis (HR: 0.17; 95% CI: 0.06–0.41; p < 0.001). Conclusions At 5 years, patients with femoropopliteal ISR lesions undergoing PCB treatment showed significantly lower recurrent restenosis than those who underwent non-PCB treatment. Evidence-based medicine: Level of Evidence 2b, non-randomized controlled cohort/follow-up study.
Introduction
Endovascular therapy (EVT) represents an established practice in the treatment of lower extremity peripheral arterial disease (PAD). The use of bare-nitinol stents (BNS) has led to good acute luminal gains in the past 2 decades; however, their primary patency has remained unsatisfactory (Schillinger et al., 2006;Soga et al., 2010;Iida et al., 2014;Tosaka et al., 2012). Several paclitaxel-based devices have been developed to overcome the shortcomings of BNS (Gray et al., 2018;Laird et al., 2019). The new devices are associated with higher primary patency in de novo femoropopliteal lesions than plain balloon angioplasty and BNS, and the recent guideline recommends a primary EVT strategy in complex femoropopliteal lesions (Aboyans et al., 2017).
The one-year efficacy of paclitaxel-coated balloons (PCB) has also been proven in the treatment of femoropopliteal in-stent restenosis (ISR) after BNS implantation by randomized controlled trials (RCTs) comparing PCB with plain balloon angioplasty; therefore, PCB is one of the best solutions for ISR lesions (Krankenberg et al., 2015;Kinstner et al., 2016;Ott et al., 2017;Cassese et al., 2018). On the other hand, few studies have investigated the long-term patency after PCB treatment in ISR lesions of femoropopliteal segments, and the longest follow-up outcomes reported in a previous study extend only to 3 years (Grotti et al., 2016). In this study, we retrospectively compared the 5-year clinical outcomes of patients with femoropopliteal ISR undergoing percutaneous transluminal angioplasty (PTA) with and without PCB.
Study population
This retrospective, single-center, non-randomized study was performed to compare the immediate and 5-year outcomes of consecutive patients with femoropopliteal ISR lesions who underwent PTA using PCB (PCB group) and plain balloons (non-PCB group). Given that PCB has been commercially available only since December 2018 in our country, we privately imported the IN.PACT Pacific PCB (Medtronic, Minneapolis, MN) from 2008 to 2014 after approval by the institutional review board of our hospital. We analyzed 106 consecutive Asian patients (mean age: 72.1 ± 8.7 years; 68 males) with symptomatic PAD who underwent PTA for femoropopliteal ISR lesions at our hospital from 2009 to 2015 (Fig. 1). The key inclusion criteria were age > 50 years, symptomatic PAD (Rutherford category 2 to 5), and ISR > 70% at the stented site in femoropopliteal segments (Kinstner et al., 2016). We excluded patients with acute limb ischemia and/or short life expectancy, as in previous studies (Cassese et al., 2018). The study protocol was developed in accordance with the Declaration of Helsinki and was approved by the institutional review board of our hospital (approval no. 27-33). Informed consent was obtained from all patients.
Procedures
After local anesthesia with 2.0% xylocaine, a 6.0- or 7.0-French guiding sheath was inserted via the ipsi- or contralateral common femoral artery. Unfractionated heparin (5000 IU) was injected initially from the sheath, with an additional 2000 IU given intravenously every hour. In cases of in-stent occlusion, we initially attempted lumen crossing using 0.018- or 0.014-in. guidewires and microcatheters. If that was unsuccessful, the loop-wire technique was applied, using a 0.035- or 0.018-in. hydrophilic guidewire. If necessary, a retrograde approach was implemented via either the popliteal or tibial arteries. At first, PTA was performed using plain balloons with nominal diameters equal to the reference vessel diameter and lengths matched to the lesion, evaluated by visual estimation. Balloon dilatation was continued for at least 60 s. The IN.PACT PCB was available in nominal diameters of 4 and 6 mm and nominal lengths of 60, 80, and 120 mm in our hospital. When the operators adjudicated that lesions could be covered by one or two PCBs of the above sizes after successful balloon dilatation, the IN.PACT PCB was dilated for at least 60 s. Inflation time was based on previous studies of PCB treatment (Krankenberg et al., 2015;Fanelli et al., 2014). PCB treatment was used after successful pre-dilatation; therefore, bail-out stenting was not performed in the PCB group.
Definition and study endpoints
All lesions were characterized according to the Trans-Atlantic Inter-Society Consensus (TASC) II and Tosaka classifications (Tosaka et al., 2012;Eur J Vasc Endovasc Surg, 2007). The immediate success of PTA was defined as achieving residual stenosis of < 30% of the reference diameter, adjudicated by visual estimation. The Proposed Peripheral Arterial Calcium Scoring System was used to categorize the degree of native femoropopliteal lesion calcification (Rocha-Singh et al., 2014). All angiograms were independently evaluated by two experienced operators for baseline lesion morphology and procedural success. In outpatient follow-up, color Doppler ultrasound assessment was performed routinely every 12 months after EVT to evaluate the patency of the vessel. Restenosis was defined as a peak systolic velocity ratio over 2.4 on duplex ultrasonography, which was considered to indicate > 50% narrowing (Fanelli et al., 2014). The primary study endpoint was recurrent restenosis within 5 years after PTA for ISR lesions. The secondary endpoints were all-cause mortality, target lesion revascularization (TLR), and unplanned major amputation within 5 years.
Statistical analysis
In this study, a propensity score (PS) matching analysis was performed to adjust for the differences in baseline clinical characteristics between the two groups. The PS was estimated by a logistic regression model that included the patient and lesion characteristics listed in Tables 1 and 2 as explanatory variables. The matching was performed using the nearest-neighbor method, with a caliper of 0.20. Categorical variables were presented as counts (percentages) and compared using the Chi-squared or Fisher's exact tests. Continuous variables were expressed as means ± standard deviations and compared using the Student's t-test or the Mann-Whitney U test based on their distributions. In the matched population, the cumulative incidence of study endpoints was estimated using the Kaplan-Meier method. Hazard ratios (HRs) for recurrent restenosis, all-cause mortality, and TLR were compared between the PCB and non-PCB groups. To identify possible risk factors for recurrent restenosis, HRs for clinically selected patient, lesion, and procedural variables were estimated by univariate and multivariate Cox models. All statistical analyses were performed by two physicians using JMP version 14 (SAS Institute, Cary, NC). Values of P < 0.05 were considered statistically significant.
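To make the matching step concrete, the sketch below implements greedy 1:1 nearest-neighbor propensity-score matching with a 0.20 caliper, as described above. It is an illustrative reconstruction rather than the JMP analysis actually used: the covariates are random toy data, and the caliper is assumed to apply on the propensity-score scale.

```python
# Hypothetical 1:1 nearest-neighbor PS matching sketch; toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_match(ps, treated, caliper=0.20):
    """Greedy nearest-neighbor matching without replacement; returns index pairs."""
    t_idx = np.flatnonzero(treated)
    available = set(np.flatnonzero(~treated).tolist())
    pairs = []
    for t in t_idx:
        if not available:
            break
        cands = np.array(sorted(available))
        j = cands[np.argmin(np.abs(ps[cands] - ps[t]))]
        if abs(ps[j] - ps[t]) <= caliper:   # accept only within the caliper
            pairs.append((t, j))
            available.remove(j)
    return pairs

rng = np.random.default_rng(1)
X = rng.normal(size=(106, 5))               # baseline covariates (toy)
treated = rng.random(106) < 0.5             # PCB vs. non-PCB (toy labels)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
print(f"matched pairs: {len(ps_match(ps, treated))}")
```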
Study population
After PS matching, the final study population consisted of 25 matched patients in each group (Fig. 1).
Baseline clinical characteristics
Baseline patient and lesion characteristics before and after PS matching are displayed in Tables 1 and 2. Before PS matching, no significant differences in baseline clinical characteristics were observed between PCB and non-PCB groups, except for ISR patterns. After PS matching, baseline patient and lesion characteristics were well balanced between the two groups. Table 3 shows the comparison of procedural characteristics and results. Before and after PS matching, the balloon sizes and pressures used before PCB dilatation were similar between the two groups. Since PCB was used after successful balloon pre-dilatation, the patients in the PCB group did not undergo bail-out stenting. Bail-out stenting was performed in 20 patients of the original non-PCB population, and 12 of these patients had Tosaka III lesions (60.0%). PCB dilatation was successful in all patients of the PCB group. There was no significant difference in the rates of complications between the two groups.
Clinical outcomes
Five-year follow-up information was obtained for 19 (76.0%) PCB patients and 20 (80.0%) non-PCB patients (p = 1.000) after PS matching. At 5 years, the rate of freedom from recurrent restenosis was significantly higher in the PCB group (65.7%) than in the non-PCB group (18.7%), with an HR of 6.11 and a 95% confidence interval (CI) of 2.57-16.82 (p < 0.001; Table 4 and Fig. 2), as was the rate of freedom from TLR (77.6% vs. 53.8%; HR: 3.55; 95% CI: 1.21-12.83; p = 0.020). The Rutherford category improved similarly in both groups after the procedure, whereas the rate of patients with Rutherford category 0 and 1 was significantly higher in the PCB group at 5 years (p = 0.014; Fig. 3). There was no significant difference between the two groups in terms of all-cause mortality (16.0% vs. 12.0%; HR: 1.34; 95% CI: 0.30-6.81; p = 0.699) (Table 4). No unplanned major amputation was observed in any patient of the two groups. The Cox proportional hazard multivariate analysis revealed that the use of PCB was independently associated with a lower incidence of recurrent restenosis (HR 0.17, 95% CI: 0.06-0.41; p < 0.001) (Table 5).
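As an illustration of how the freedom-from-restenosis estimates above are typically derived, the sketch below fits a Kaplan-Meier estimator to toy follow-up data using the lifelines Python library. The durations and event indicators are invented for demonstration; the original analysis was performed in JMP.

```python
# Hypothetical Kaplan-Meier sketch for freedom from recurrent restenosis;
# follow-up times (months) and event flags below are invented toy data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
months = rng.uniform(6, 60, size=25)      # follow-up duration per patient
restenosis = rng.random(25) < 0.3         # True = recurrent restenosis observed

kmf = KaplanMeierFitter(label="PCB group")
kmf.fit(durations=months, event_observed=restenosis)
print(kmf.survival_function_.tail(1))     # freedom from restenosis at last time
```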
Discussion
The main findings of the present study are as follows: (1) the recurrent restenosis rate at 5 years after PCB treatment was significantly lower than that after non-PCB treatment; (2) the Cox multivariate analysis revealed that the use of PCB significantly reduced the incidence of recurrent restenosis; and (3) the cumulative rates of procedural complications, all-cause mortality, and major amputation were not different between the two groups. Balloon angioplasty can potentially injure the vessel through overstretching of the wall, denudation of endothelium, rupture of the internal elastic lamina, and medial tear leading to the stimulation of smooth muscle cells; therefore, paclitaxel plays an important role in suppressing the stenotic processes (Yahagi et al., 2014). Restenotic and de novo femoropopliteal plaques differ in cellular composition and cell proliferation, the former being highly cellular and composed primarily of smooth muscle cells (Johnson et al., 1990;Edlin et al., 2009). The suppression of neointimal growth through the antiproliferative effect of paclitaxel led to a consistently lower risk of repeat revascularization and recurrent restenosis (Krankenberg et al., 2015;Kinstner et al., 2016;Ott et al., 2017;Cassese et al., 2018). Thus, the recent guidelines recommend PCB treatment rather than plain balloon angioplasty in ISR lesions, as class IIb (Aboyans et al., 2017).
RCTs reported that the primary patency rates at 6 months after PCB treatment were significantly higher than those after plain balloon angioplasty. However, the 6-month patency after PCB treatment ranged from 58.8 to 84.6%, according to ISR complexity such as lesion length and total occlusion, whereas the 6-month patency in the control group treated by plain balloon angioplasty was reported to range from 41.3 to 55.3% and did not differ with lesion complexity (Krankenberg et al., 2015;Kinstner et al., 2016;Ott et al., 2017;Cassese et al., 2018). The difference between the patency rates might be due to the difficulty of vessel preparation before PCB treatment. This study included patients with mainly Tosaka I and II lesions (84.0%) of relatively short length (126 ± 57.4 mm); therefore, the patency rate after PCB treatment was high, at 96.0% within one year, compared with that in previous reports (Krankenberg et al., 2015;Kinstner et al., 2016;Ott et al., 2017;Cassese et al., 2018).
In the present study, the rate of Tosaka I and III lesions was significantly higher in the original non-PCB group before the matching. A possible reason for the discrepancy was the difference in procedural success and patency rates according to the severity of ISR patterns. The freedom from recurrent occlusion after plain PTA in Tosaka I ISR was reported to be 84.1% at 3 years; therefore, we may have tended not to use PCB for focal ISR (Tosaka et al., 2012). On the other hand, successful pre-dilatation was generally more difficult to achieve in in-stent occlusion (Grotti et al., 2016), and bail-out stenting was performed mainly in Tosaka III in-stent occlusions in the present study. Consequently, the rate of patients with Tosaka III lesions was significantly lower in both the original and matched PCB groups. Moreover, the high rate of patients with stent-in-stent treatment might have contributed to the poor patency rate in the non-PCB group.
As a previous report demonstrated that the patency after PCB was lower in Tosaka III lesions than in the others (Grotti et al., 2016), it might be better to observe patients carefully after stenting in femoropopliteal lesions and to treat early-stage ISR lesions using PCB. On the other hand, a recent meta-analysis demonstrated that debulking devices improved the patency after PTA in patients with complex ISR lesions (Li et al., 2020). In particular, a combination of laser atherectomy (LA) and PCB was reported to be more effective in reducing the TLR rate within 2 years in Tosaka II and III ISR lesions than LA and plain balloon angioplasty (Kokkinidis et al., 2020;van den Berg et al., 2014). In PCB treatment, lesion modification might be effective in overcoming complex femoropopliteal ISR lesions. Liistro et al. demonstrated that PCB reduced the rates of recurrent restenosis and TLR within one year significantly more than plain balloon angioplasty in ISR patients with diabetes and a high prevalence of critical limb ischemia (Liistro et al., 2014); however, they showed that the benefit provided by PCB was no longer evident at 3 years (Grotti et al., 2016). On the other hand, although the patency after PCB treatment decreased gradually from 96.0% at 2 years to 65.7% at 5 years in this study, the patency rate at 5 years remained significantly superior to that after plain PTA treatment. The low rate of this phenomenon, known as "late catch-up," might be due to differences in patient characteristics. As mentioned above, the PCB group of this study included mainly claudicants and Tosaka II ISR lesions, in contrast to the previous study, which enrolled patients with complex characteristics such as diabetes (100.0%), critical limb ischemia (75.0%), and Tosaka III lesions (51.0%) (Grotti et al., 2016;Liistro et al., 2014). These characteristics were associated with poor long-term patency after PCB treatment in patients with de novo femoropopliteal lesions (Laird et al., 2019); therefore, the adverse conditions of PAD patients might also influence patency in ISR lesions.
The results of a recent meta-analysis raised concern about an increased risk of death associated with the use of paclitaxel-based devices in lower-limb EVT for PAD (Katsanos et al., 2018). In contrast, the latest large-scale RCT comparing paclitaxel-coated and uncoated devices demonstrated that use of coated devices did not increase the mortality of PAD patients within 4 years (Nordanstig et al., 2020). The concern about the safety of paclitaxel-coated devices thus remains unresolved. Although this study was retrospective and included quite a small number of PAD patients, there was no difference in mortality within 5 years between patients treated with and without PCB.
Study limitations
This study has several limitations. First, it was a single-center trial with a small sample size. Therefore, a shortcoming of this study was the small number of patients, which limits the ability to support meaningful statistics, and especially a valid multivariable analysis. Moreover, it was retrospective; therefore, the follow-up rate was not high. Accordingly, the outcomes should be read with extreme caution. Second, because the study duration was 6 years, there was a possibility of bias over time in deciding the EVT procedure. Third, the endpoints were adjudicated by independent observers but not by an external core laboratory. Finally, although we performed PS matching to adjust for the differences in baseline clinical and procedural characteristics between the two groups, potential bias could not be excluded in this study and might have affected the conclusions.
Conclusions
At 5 years, patients with femoropopliteal ISR lesions treated with PCB showed significantly lower recurrent restenosis and TLR rates than those who underwent non-PCB treatment.
Secret Image Sharing Revisited: Forbidden Type, Support Type, and Their Two Approaches
In this paper, we introduce two new image-sharing types to extend the applicability of sharing. Type 1 is our so-called forbidden type. In its sharing system, any t of the n shares can recover the secret image, unless the t shares form a forbidden group listed in a forbidden list. Type 2 is our so-called cross-department support type. If a government has 3 departments {DEP_H, DEP_M, DEP_L}, then 3 thresholds (t_H, t_M, and t_L) exist. Any t_H officers from department DEP_H can unveil the secret image, and likewise for any t_M and t_L officers from departments DEP_M and DEP_L, respectively. Type 2 image sharing allows a secret to be disclosed not only in an intra-department meeting but also in a cross-department meeting. In this study, both types are implemented through two approaches: the polynomial and linear-equations approaches. Hackers can be confused when the two approaches are mixed. As for applications, Type 1 can be used to protect sensitive information in medical or military images or legal documents, and Type 2 can support cross-department crime investigation, industrial production, etc.
Introduction
Secret sharing was introduced by Shamir [1] and Blakley [2] in 1979. In secret sharing, a given secret is encoded and divided into n shares, and two requirements must be met: (a) any t of the n shares can cooperate to unveil the given secret, and (b) fewer than t shares cannot unveil it. Secret image sharing has at least two main streams. The first is visual cryptography (VC) [3], which is used for black-and-white (2-level) images, and the second is polynomial-based sharing [4], which is used for gray-value or color images. Introduced by Naor and Shamir [3] in 1994, visual cryptography uses several transparencies as media to share the secret image, where each transparency is "larger" than the secret image in size. Building on Shamir's (t, n) scheme, in 2002, Thien and Lin [4] proposed a (t, n) threshold scheme for sharing 256-level secret images. In [4], each share was t times "smaller" than the given secret image. For a (t, n) threshold scheme, t ≤ n always holds. The security level is controlled by the ratio t/n, where a larger t/n ratio means that more participants are required for the secret to be deciphered, and hence prevents betrayal by a small group of participants. If every participant trusts no one else, then t = n can be set. Smaller t/n ratios are used in unstable environments (e.g., during a war or if the storage medium or Internet connection is unreliable) in which many participants may lose contact. Thus, a (t, n) system not only addresses security concerns arising from betrayal but also tolerates missing shares in unstable environments. Because sharing is useful, all four aforementioned papers [1-4] have been frequently cited, particularly Shamir [1], which has had more than 10,000 citations from 1979 to 2019. Many recent studies have also focused on sharing [5-19]. Finally, as for image sharing, some readers may wonder why one cannot simply use a key to encrypt the secret image and then share the key among authorized personnel. This is because the resultant protection of the image will be extremely weak, although the n shares of the key will be smaller in size than the n shares of the secret image. In the case of a disk failure or a hacker attack on the computer storing the encrypted version of the secret image, the entire secret image will be lost "forever".
In the present study, we attempt to extend our previous foundational study [4] on sharing to introduce two other types of sharing: the forbidden type and the cross-department support type. We implement these two types first by using a polynomial approach and then by using a linear-equations approach. Notably, the polynomial approach has been widely used by researchers, but the linear-equations approach has not. We present both approaches here to increase security against hackers. For example, the secret image can be divided into several parts, where the odd-numbered parts use the polynomial approach and the even-numbered parts use the linear-equations approach.
The paper is organized as follows: Section 2 outlines the basics of polynomial-based sharing. Section 3 introduces the two proposed sharing types, including our two approaches to implementing each type. Section 4 discusses some details of the design, such as how to obtain independent equations for the linear-equations approach and how to mix the two approaches to increase security. Section 5 describes practical applications and implementation results. Section 6 concludes this study.
Review of Previous Work
Thien and Lin [4] proposed a (t, n) secret image sharing scheme in 2002, and their work has been widely cited. In the scheme, the input image is partitioned and distributed into shares by using a polynomial. The size of every share is only 1/t the size of the original secret. Any t shares can reconstruct the image, whereas fewer than t shares cannot. The (t, n) secret image sharing scheme of Thien and Lin [4] proceeds as follows.
Polynomial-based (t, n) secret image sharing
Step 1. Input a secret image; assume it has m pixels. Permute all the pixels according to a key to obtain a noisy-looking image Q, which still has m pixels.
Step 2. Divide Q into (m/t) nonoverlapping segments so that every segment has t pixels.
Step 3. For each t-pixel segment j = 1, 2,..., m/t, use the gray values of its t pixels as the t coefficients {a_0,..., a_{t-1}} in the segment-dependent polynomial

p(x) = (a_0 + a_1·x + a_2·x^2 + ... + a_{t-1}·x^{t-1}) mod Z.  (1)

Step 4. Then, for each segment, the share S_i receives the value p(i), for each i = 1,…, n. As the segments are processed sequentially, the data size of each of the n shares grows. Finally, when all m/t segments are processed, there are n shares. For each i = 1,…, n, the share S_i receives one value from each of the m/t segments of image Q; thus, each share S_i has m/t values. (Therefore, each share is t times smaller than the m-pixel secret image.) In these steps, the value of Z in Equation (1) can be set to 256 if all arithmetic operations in Equation (1) are performed in the Galois Field GF(256). Readers who are not familiar with the Galois Field can simply set Z to a prime number near 256. For example, a study used Z = 251 [4], with auxiliary preprocessing that splits one pixel into two pixels for each pixel whose gray value is greater than 250.
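To make these steps concrete, the following minimal Python sketch implements the sharing phase under the stated assumptions: Z = 251 as in [4], pixel values already preprocessed into the range 0-250, and the pixel permutation of Step 1 already applied. All function and variable names are ours, for illustration only.

```python
Z = 251  # prime modulus, as in [4]; pixels assumed preprocessed into 0..250

def share_segment(segment, n, Z=Z):
    """Share one t-pixel segment: its pixel values are the coefficients
    a_0..a_{t-1} of p(x) in Equation (1); share S_i receives p(i)."""
    return [sum(a * pow(x, k, Z) for k, a in enumerate(segment)) % Z
            for x in range(1, n + 1)]

def share_image(pixels, t, n):
    """Cut the (already permuted) pixel list into t-pixel segments and
    append one value per segment to each of the n shares (Steps 2-4)."""
    shares = [[] for _ in range(n)]
    for j in range(0, len(pixels), t):
        for i, v in enumerate(share_segment(pixels[j:j + t], n)):
            shares[i].append(v)
    return shares  # each share holds m/t values

# Toy example: a (t, n) = (3, 6) scheme on a 6-pixel "image".
shares = share_image([10, 200, 33, 7, 99, 150], t=3, n=6)
```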
The steps to recover the secret image are also executed segment by segment. For each segment, the t coefficients {a_0,..., a_{t-1}} of the segment-dependent polynomial (1) are recovered by Lagrange interpolation. We omit a discussion of this method because it can be learned through Internet resources or from any textbook on numerical analysis, such as [20].
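For completeness, here is a hedged sketch of the recovery step in the same illustrative Python setting: given any t pairs (i, p(i)), Lagrange interpolation over GF(Z) rebuilds the coefficient list, with modular inverses computed via Fermat's little theorem.

```python
def poly_mul_linear(poly, r, Z=251):
    """Multiply a polynomial (coefficient list, lowest degree first)
    by the linear factor (x - r), modulo the prime Z."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - r * c) % Z
        out[k + 1] = (out[k + 1] + c) % Z
    return out

def recover_segment(points, Z=251):
    """Recover {a_0..a_{t-1}} of p(x) from t pairs (x, p(x)) using
    Lagrange interpolation: p(x) = sum_i y_i * L_i(x) over GF(Z)."""
    t = len(points)
    coeffs = [0] * t
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1               # numerator/denominator of L_i
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj, Z)
                denom = denom * (xi - xj) % Z
        scale = yi * pow(denom, Z - 2, Z)   # modular inverse via Fermat
        for k in range(t):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % Z
    return coeffs

# Using the toy shares above: any 3 of the 6 shares recover a segment,
# e.g. S2, S4, S5 recover the first segment [10, 200, 33]:
seg = recover_segment([(2, shares[1][0]), (4, shares[3][0]), (5, shares[4][0])])
```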
Proposed Types and Approaches
In this section, we introduce two extended types (Types 1 and 2) of image sharing; we also demonstrate how they can be implemented by using either the polynomial approach or the linear-equations approach.
Type 1 (Sharing with forbidden combinations): To understand what Type 1 image sharing is, without loss of generality, consider 6 people {P1,..., P6} who together share a secret of the company. The secret is such that 3 or more people must gather to unveil it. Hence, the secret can be recovered by 6!/(3!·(6−3)!) = 20 possible combinations of personnel if no combination is forbidden. However, according to a security check, P1, P3, and P5 worked for a rival company before working in ours, making them less trustworthy. The combination of these 3 employees should thus be excluded as a combination that allows access to the secret. This is an example of Type 1 image sharing with a single forbidden combination, namely, {P1, P3, P5}. Of course, if the boss of our company wishes to be more careful, he can forbid even more combinations, such as {P1, P3, x}, {P1, P5, x}, and/or {P3, P5, x}, where x can be any of the other employees (i.e., any other Pi). In the preceding example, some combinations cannot be used to gain access to the secret. We call this "sharing with forbidden combinations" and denote it as (t_participants, n_participants, f) sharing. Here, n_participants: the number of participants who share the secret. Each participant holds a so-called "shadow" file.
t_participants: the minimum number of people required to recover the secret. Any t_participants of the n_participants participants can recover the secret by using their shadow files, unless the t_participants people constitute a forbidden combination.
f: the number of forbidden combinations. For example, the preceding example is a (3, 6, 1) scheme if there is only one forbidden combination {P1, P3, P5}, and a (3, 6, 4) scheme if all {P1, P3, x} combinations are forbidden. Traditional sharing, which has no forbidden combinations, can be denoted as (t_participants, n_participants, f) = (3, 6, 0), and it is thus treated as a special case of Type 1 image sharing.
Type 2 (Sharing with cross-department support): Without loss of generality, assume there are 3 departments, namely {DEP_H, DEP_M, DEP_L}. For officers in department DEP_H, assume any 3 of them can recover the secret. For officers in department DEP_M, assume any 4 of them can recover the same secret. For officers in department DEP_L, assume any 5 of them can recover the same secret. Notice that this one secret has 3 thresholds: t_H = 3, t_M = 4, t_L = 5. Hence, each department has its own threshold. This system can be easily implemented by repeating the traditional sharing system thrice: first, use a (t, n) = (3, n_H) sharing system to share the secret among the n_H officers of department DEP_H; then use a (t, n) = (4, n_M) sharing system to share the "same" secret among the n_M officers of department DEP_M; and then use a (t, n) = (5, n_L) sharing system to share the "same" secret among the n_L officers of department DEP_L. However, due to illness, a terrorist attack, or business trips abroad, not every officer of the same department comes to the office every day. It is thus quite possible that on some workdays, the department has insufficient personnel to unveil the secret. For the company or government to still function, an auxiliary system must be designed to allow the secret to be unveiled in such a circumstance. Our so-called "cross-department support system" is one such system, where the secret can be unveiled through the cooperation of multiple departments.
In the following section, we detail our design for these two types. Section 3.1 and Section 3.2 describe the use of the polynomial and linear-equations approaches, respectively, in implementing the two types.
Using the Polynomial Approach to Design Types 1 and 2
Type 1 (Sharing with forbidden combinations): Without loss of generality, consider 6 participants {P1,..., P6} in the company, of which any 3 can unveil the secret, unless the 3 participants constitute a forbidden combination. Examples 1 and 2 illustrate the steps in creating a sharing scheme with one and two forbidden combinations, respectively, and the appendix at the end of the paper illustrates the cases in which the number of forbidden combinations is 3 or 4. In general, the design of Type 1 proceeds case by case; the design of Type 2 is easier.
Example 1 (one forbidden combination): A sharing scheme with one forbidden combination is easy to design. Without loss of generality, let the only forbidden combination be {P4, P5, P6}. Because there are only 6 participants, we shall create 6 "shadows" for the 6 participants per the following steps 1 and 2. Step 1: Use traditional (t_artificial, n_artificial) = (9, 17) sharing to create 17 shares {S1,..., S17} such that any 9 of them can recover the secret. Step 2: Give 3 consecutive shares to each of the first 5 participants (P1 gets {S1, S2, S3}, and so on, so that P4 gets {S10, S11, S12} and P5 gets {S13, S14, S15}), and give P6 the 4 shares {S16, S17, S12, S15}. This results in the creation of a (t_participants = 3, n_participants = 6, f = 1) threshold scheme, where {P4, P5, P6} is the only forbidden combination.
Note that we deliberately let the final two components of participant P6 be the already-used shares S12 and S15, which had already appeared in P4 and P5. By doing this, the total number of different shares in the combination {P4, P5, P6} is only 3 + 3 + 2 = 8, which is less than the threshold of 9, meaning that the secret cannot be revealed. Conversely, any other combination {Pi, Pj, Pk} yields at least 9 distinct shares and, hence, can reveal the secret. Notably, for participants P1 to P5, every participant holds a shadow (the data held by a participant) constituted by 3 shares, and each share is 9 times smaller in size than the original secret in the (9, 17) threshold scheme. Therefore, in our (t_participants = 3, n_participants = 6, f = 1) scheme, the data size of each of participants P1 to P5 is 3 × 1/9 = 1/3 of the size of the original secret. By contrast, because P6 holds 4 shares, his data size is 4 × 1/9 = 4/9 of the size of the original secret.
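The forbidden-combination property can be verified mechanically by counting distinct shares over all triples; the short Python sketch below does this for the assignment assumed above (the share labels are our illustrative convention).

```python
from itertools import combinations

# Shadow assignment of Example 1 (labels follow the sequential
# convention assumed above); P6 reuses S12 and S15.
shadows = {
    "P1": {1, 2, 3},    "P2": {4, 5, 6},    "P3": {7, 8, 9},
    "P4": {10, 11, 12}, "P5": {13, 14, 15}, "P6": {16, 17, 12, 15},
}
t_artificial = 9  # threshold of the underlying (9, 17) scheme

for group in combinations(shadows, 3):
    distinct = set().union(*(shadows[p] for p in group))
    if len(distinct) < t_artificial:
        print("forbidden:", group, "holds only", len(distinct), "shares")
# Prints exactly one line: ('P4', 'P5', 'P6') holds only 8 shares.
```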
Notably, because more constraints must be considered, the design is likely to become more difficult as the number of forbidden combinations increases. Example 3. (3, n_participants, f) sharing whose f forbidden combinations always contain a trouble-making couple P1 and P2: see the appendix for the design.
Type 2 (Sharing with cross-department support): Let each participant belong to one of several departments, for example, 3 departments {DEP_H, DEP_M, DEP_L}. Without loss of generality, assume the parameter pairs (t = threshold, n = number of people in this department) for the 3 departments are, respectively, (t_H = 3, n_H = 4) for DEP_H, (t_M = 5, n_M = 6) for DEP_M, and (t_L = 7, n_L = 7) for DEP_L. As the first step in the design, we assign a value to an artificial threshold t_artificial, which is larger than any given local threshold. Subsequently, we use this artificial threshold t_artificial to share the secret and thus obtain several shares such that any t_artificial shares can recover the secret. We then distribute these shares to each person of each department. In this cross-department support type, we assume that shares are not repeatedly distributed. Let each person in DEP_H, DEP_M, and DEP_L obtain, respectively, Q_H, Q_M, and Q_L shares. Then, to satisfy each department's rule for unveiling the secret, we must have:

7·Q_L ≥ t_artificial > 6·Q_L (for DEP_L),  (2)
5·Q_M ≥ t_artificial > 4·Q_M (for DEP_M),  (3)
3·Q_H ≥ t_artificial > 2·Q_H (for DEP_H).  (4)

Because t_artificial, Q_H, Q_M, and Q_L are all positive integers, if Q_L is 1, then Equation (2) implies 7 ≥ t_artificial > 6; hence, t_artificial = 7. Furthermore, the right-hand side of Equation (3) implies t_artificial = 7 > 4·Q_M, which means the positive integer Q_M is 1, thus contradicting the left-hand side of Equation (3) because 5·Q_M ≥ t_artificial then becomes 5·1 ≥ 7. Hence, we cannot use Q_L = 1. Thus, we try the next smallest value, 2, for Q_L. Equation (2) then implies 14 ≥ t_artificial > 12. Hence, t_artificial is 13 or 14, proven as follows. If t_artificial = 13, Equation (3) becomes 5·Q_M ≥ 13 > 4·Q_M, which can be satisfied by Q_M = ⌈13/5⌉ = 3, and Equation (4) becomes 3·Q_H ≥ 13 > 2·Q_H, which can be satisfied by Q_H = ⌈13/3⌉ = 5; the case t_artificial = 14 can be verified in the same way. We now examine the system further. For example, let the artificial threshold be 14. We can then use (t_artificial, n_artificial) = (14, 52) in the traditional polynomial-based sharing scheme to create 52 shares. Each participant in DEP_H, DEP_M, and DEP_L gets ⌈14/3⌉ = 5, ⌈14/5⌉ = 3, and 14/7 = 2 shares, respectively. Moreover, for any 2 participants (whether from the same department or from different departments), their sets of shares do not intersect. Notably, because no intersection is allowed, any 3 DEP_H participants can hand in 3 × 5 = 15 > 14 distinct shares; any 5 DEP_M participants can hand in 5 × 3 = 15 > 14 distinct shares; and any 7 DEP_L participants can hand in 7 × 2 = 14 distinct shares. However, 2 DEP_H participants hold only 2 × 5 = 10 < 14 distinct shares; 4 DEP_M participants hold only 4 × 3 = 12 < 14 distinct shares; and 6 DEP_L participants hold only 6 × 2 = 12 < 14 distinct shares. Hence, the intra-department thresholds to unveil the secret are 3 for DEP_H, 5 for DEP_M, and 7 for DEP_L. We create 52 shares because 4 × 5 + 6 × 3 + 7 × 2 = 52. In the deciphering meeting, the secret is revealed once the total number of available shares is equal to or greater than the threshold number 14, regardless of whether the participants are from the same department. Table 2 lists some of the many possible combinations that can be used to reveal the secret. Table 2. Solutions to unveil the secret using within-department or cross-department support. The (t, n) instances are (3, 4), (5, 6), and (7, 7), respectively, for the 3 departments. For the 3 departments, the size of each shadow file held by a participant is, respectively, 5/14, 3/14, and 2/14 of the original secret image size.
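The feasibility check encoded by Equations (2)-(4) is easy to automate. The sketch below (our illustrative code, not from the original paper) scans candidate values of t_artificial and reports the per-person share counts Q for each department whenever all three constraints hold.

```python
import math

def q_counts(t_art, thresholds):
    """Return per-department share counts Q satisfying
    t*Q >= t_art > (t-1)*Q for every threshold t, or None if infeasible."""
    qs = []
    for t in thresholds:
        q = math.ceil(t_art / t)      # smallest Q with t*Q >= t_art
        if (t - 1) * q >= t_art:      # then t-1 attendees would suffice;
            return None               # larger Q only makes this worse
        qs.append(q)
    return qs

for t_art in range(7, 22):
    qs = q_counts(t_art, [3, 5, 7])   # t_H, t_M, t_L
    if qs:
        print(t_art, qs)  # 13 [5, 3, 2], 14 [5, 3, 2], 20 [7, 4, 3], 21 [7, 5, 3]
```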
Moreover, in the aforementioned design for cross-department sharing, the artificial threshold t_artificial is not unique, and designers have the freedom to choose their own artificial threshold. For example, rather than using the aforementioned t_artificial = 14, we may also use, say, t_artificial = 20 to create shares. Then each participant in DEP_H, DEP_M, and DEP_L holds ⌈20/3⌉ = 7, 20/5 = 4, and ⌈20/7⌉ = 3 shares, respectively. For each of the 3 departments, the shadow size is then, respectively, 7/20, 4/20, and 3/20 of the size of the original secret. In addition to following the preceding steps to obtain valid values of t_artificial by checking whether Equations (2)-(4) are satisfied, another method for obtaining t_artificial is to use the least common multiple (LCM) of the threshold values of all departments. If the LCM is used, Equations (2)-(4) are automatically satisfied: positive integer solutions for Q_H, Q_M, and Q_L necessarily exist because we can let Q_H = t_artificial/t_H, Q_M = t_artificial/t_M, and Q_L = t_artificial/t_L. Algorithm 1 shows how to use the LCM to create shadows for cross-department support. As in [4], each share is LCM times smaller than the original secret when t_artificial = LCM is used to create shares.
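A hedged sketch of the LCM-based allocation of Algorithm 1 follows; dept_params, q_counts-style naming, and the rest are ours, and the share-creation step itself would reuse a routine such as share_image above.

```python
from math import lcm  # Python 3.9+

def lcm_allocation(dept_params):
    """dept_params: list of (t, n) pairs, one per department. Returns
    t_artificial = LCM of all thresholds, the total number of shares to
    create, and the exact per-person share count Q for each department."""
    t_art = lcm(*(t for t, _ in dept_params))
    qs = [t_art // t for t, _ in dept_params]  # exact: every t divides the LCM
    n_art = sum(q * n for q, (_, n) in zip(qs, dept_params))
    return t_art, n_art, qs

# (t_H, n_H) = (3, 4), (t_M, n_M) = (5, 6), (t_L, n_L) = (7, 7):
t_art, n_art, qs = lcm_allocation([(3, 4), (5, 6), (7, 7)])
print(t_art, n_art, qs)  # 105 371 [35, 21, 15] -> shadow sizes 1/3, 1/5, 1/7
```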
Remark 2: Many values can be used for t_artificial, but they must satisfy the necessary but insufficient condition t_artificial ≥ Max{thresholds of all departments}. We may also use multiple thresholds to confuse hackers: for example, t_artificial = 19, 14, 13, 20, 21, 25, 26, 28, 27, 31, 32, 35, 34, 33, .... For instance, a small part of the secret is shared using t_artificial = 19; then a small part of the secret is shared using t_artificial = 14; then certain parts of the secret are shared using t_artificial = 13; and so on. Because there are so many possible combinations, hacking becomes far more difficult. Figure 2 illustrates the design (Figure 2a-c) and the secret disclosure (Figure 2d) for a cross-department support system involving 3 departments, as described in Section 3.1. We use (t_artificial, n_artificial) = (14, 52) in traditional polynomial-based sharing to create 52 shares. Then, we distribute these 52 shares to all participants; each participant of department DEP_H gets more shares than participants of the other departments to form their shadows.
(Figure 2 panels: DEP_H is a (3, 4) sharing scheme, so the 4 shadows for its 4 participants are created from the shares in (b); DEP_M is a (5, 6) sharing scheme, so 6 shadows for its 6 people are created from the shares in (b); DEP_L is treated likewise.)
Using the Linear-Equations Approach to Design Types 1 and 2
The linear-equations approach can be mapped from the polynomial approach. Because we have already introduced the polynomial approach, we only need to know how to map from it to the linear-equations approach. In the polynomial approach, every participant has a shadow file comprising several shares, whereas in the linear-equations approach, every participant has a shadow file comprising several equations. For both approaches, the threshold t_artificial must be met to unveil the secret. Specifically, in the polynomial approach, participants attending a meeting must have at least t_artificial distinct shares, whereas in the linear-equations approach, the attendees must have at least t_artificial independent equations. For example, a participant Pa may hold the equations {Eq_j} with j = 1, 2, 5, 7, and a participant Pb the equations {Eq_j} with j = 3, 4, 6, 8. Moreover, if the equations' coefficient rows are created using a specified process or algorithm, as discussed in Section 4.1, then each participant does not need to store the rows; for example, Pa only stores {IP_j} for j = 1, 2, 5, 7 and Pb only stores {IP_j} for j = 3, 4, 6, 8. Types 1 and 2 are detailed as follows. Example 1* is derived from Example 1 of Section 3.1, where Example 1*'s steps are such that Step 1* is derived from Step 1 of Section 3.1, Step 2* is derived from Step 2 of Section 3.1, and so on.
Type 1: Sharing with forbidden combinations. Because the linear-equations approach can be mapped from the polynomial approach, we can, without loss of generality, simply use the examples in Section 3.1 to detail such a mapping. Hence, as in Section 3.1, we still consider 6 participants (P1,…, P6), where any 3 of the 6 participants can unveil the secret, unless the 3 participants form a forbidden combination. Examples 1* and 2* illustrate the steps required to create the sharing scheme with one and two forbidden combinations, respectively. Every other example of Section 3.1 also has a linear-equations counterpart in this section, Section 3.2.
Step 1*: As in Step 1 of Example 1 for the polynomial-based approach, t_artificial = 9 and n_artificial = 17. First, create a matrix with 17 rows, where each row is 9-dimensional and any t_artificial of the n_artificial rows are independent. Subsequently, grab the next t_artificial = 9 not-yet-shared numbers from the secret; these secret numbers are termed D_secret. Then, for i = 1 to i = n_artificial (1 to 17 in this case), let Eq_i be the equation (Row_i) · (D_secret) = IP_i, where IP_i is the value of the inner product of the two vectors Row_i and D_secret. Notably, because any t_artificial = 9 of the n_artificial rows are independent, any t_artificial = 9 of the n_artificial equations can uniquely solve for D_secret, which has t_artificial = 9 secret numbers.
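The following Python sketch illustrates Step 1* on a smaller (t_artificial, n_artificial) = (4, 8) instance, using Vandermonde rows (the rule a_{i,j} = i^{j-1} mentioned in Remark 3 below) so that any t_artificial rows are automatically independent. For the paper's (9, 17) case the same code applies, though exact integer or modular arithmetic is then preferable to floating point, which becomes ill-conditioned.

```python
import numpy as np

t_art, n_art = 4, 8  # small stand-in for the (9, 17) case of Example 1*

# Row_i = (1, i, i^2, i^3): any t_art rows form a square Vandermonde
# matrix with distinct nodes and are therefore linearly independent.
A = np.vander(np.arange(1, n_art + 1), t_art, increasing=True).astype(float)

D_secret = np.array([10.0, 200.0, 33.0, 7.0])  # next t_art secret numbers
IP = A @ D_secret                              # IP_i = Row_i . D_secret

# Any t_art of the n_art equations uniquely solve for D_secret,
# e.g. the equations indexed 2, 3, 5, and 8:
idx = [1, 2, 4, 7]
assert np.allclose(np.linalg.solve(A[idx], IP[idx]), D_secret)
```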
Remark 3: Assume that the rule for creating the independent 17-by-9 matrix A in Step 1* comes from an algorithm (for example, each element a_{i,j} is i^{j−1}); then there is no need to store the matrix. In this case, only the right-hand side of each equation, that is, the inner product value IP_i of Eq_i, needs to be stored. Hence, when a 9-number D_secret is shared, every participant from P1 to P5 holds 3 inner product values, and P6 holds 4 inner product values. Example 2* (two forbidden combinations): Similarly, assume that t_participants = 3 and n_participants = 6, meaning that any 3 participants can unveil the secret unless they constitute a forbidden combination. As in Example 2 of the polynomial approach, only the two combinations {P1, P2, P3} and {P4, P5, P6} are forbidden; hence, f = 2. Because we have 6 participants, we create 6 shadow files for these 6 participants, each shadow comprising some equations. As in the first step of Example 1*, we use the (t_artificial = 9, n_artificial = 16) scheme to share the 9-number secret section D_secret and obtain 16 equations; these are distributed so that each of the two forbidden combinations jointly holds fewer than 9 distinct equations, whereas for any other combination the secret can be revealed because at least 9 equations are available. The proofs of the preceding statements are routine and are thus not presented.
Type 2: Cross-department support system. We now demonstrate how the polynomial approach can be extended to the linear-equations approach. Without loss of generality, we only demonstrate such an extension for the example with t_artificial = 14 in the cross-department support algorithm (Algorithm 1) of Section 3.1; other examples of Type 2 in Section 3.1 can be analogously extended. Algorithm 3 still uses the parameter values of Figure 2, namely, (t_H = 3, n_H = 4), (t_M = 5, n_M = 6), (t_L = 7, n_L = 7), and t_artificial = 14. Its sharing loop and distribution steps are as follows:

7: Grab a t_artificial-value not-yet-shared segment, D_secret, of the secret image.
8: for k = 1 to n_artificial do
9: Calculate the inner product value IP_k = Row_k · D_secret for this segment.
10: end for
11: end while
12: for each participant in DEP_H do
13: Grab ⌈t_artificial/t_H⌉ = ⌈14/3⌉ = 5 of the not-yet-assigned rows of A. The participant stores these 5 rows and the 5 × nSEG inner product values created above using these 5 rows.
14: end for
15: for each participant in DEP_M do
16: Grab ⌈t_artificial/t_M⌉ = ⌈14/5⌉ = 3 of the not-yet-assigned rows of A. The participant stores these 3 rows and the 3 × nSEG inner product values created above using these 3 rows.
17: end for
18: for each participant in DEP_L do
19: Grab 14/7 = 2 of the not-yet-assigned rows of A. The participant stores these 2 rows and the 2 × nSEG inner product values created above using these 2 rows.
20: end for
21: return the equation shadows, which are the stored rows and corresponding stored inner product values, of each participant of each department.
Discussion: Other Details of the Design
Section 4.1 introduces some methods to generate linearly independent equations for the linear-equations approach. Section 4.2 discusses the mixed use of the polynomial and linear-equations approaches to improve security. Section 4.3 introduces one more application type.
Mixed Use of the Two Approaches
In this paper, by introducing two distinct approaches to sharing, we can confuse hackers as follows. We may divide the given secret image into several regions and then apply the polynomial approach to some regions and the linear-equations approach to others. Figure 3 illustrates one of many examples. In general, there are a large number of possible ways to partition an image into regions. This is because either of the two approaches can be chosen for each region, and region-specific parameters (for either the polynomial or linear-equations approach) can be used. This makes hacking much more difficult. For example, if we partition the image into 100 blocks, then, even if the blocks are of uniform size, and even if the hackers know that we have used 100 blocks of the same size, there are still 2^100 ≈ 10^30 possible choices that the hackers have to sieve through before they can arrive at the correct local sharing parameters. This difficulty is compounded if other shapes, such as triangular blocks, or irregular shapes are used.
Figure 3. Example partition: the linear-equations approach is used in the white regions, and the polynomial approach is used in the gray regions.
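One simple way to realize such a mixed design is to derive the per-block choice of approach from a dealer-held key, so that the assignment need not be stored explicitly; the sketch below is an illustrative design of ours, not a procedure from the original paper.

```python
import hashlib

def approach_for_block(key: bytes, block_index: int) -> str:
    """Key-driven choice of sharing approach for one image block.
    Without the key, a hacker must guess one bit per block, i.e.
    2^B possibilities for B blocks, as noted above."""
    digest = hashlib.sha256(key + block_index.to_bytes(4, "big")).digest()
    return "polynomial" if digest[0] & 1 else "linear-equations"

key = b"dealer-secret-key"                     # hypothetical dealer key
plan = [approach_for_block(key, b) for b in range(100)]
```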
Sharing that Requires All Departments
In some circumstances, representatives from every department must be present. One example is a labor union meeting, where every department of the union must have at least one attendee to disclose a secret. We thus apply our method to this type of sharing, which we call "all-department" sharing. Without loss of generality, we still consider 3 departments: DEP_H, DEP_M, and DEP_L. For simplicity, we also assume that every department has 4 people (i.e., 4 participants). As with the preceding examples, every participant owns a shadow. We now demonstrate how shadows can be created for these participants.
Step 1: Use traditional (t_artificial, n_artificial) = (12, 12) sharing to create 12 shares {S1,..., S12}, so that all 12 shares are required to recover the secret.

Step 2: Partition these 12 shares into 3 equal parts of 4 shares each. Next, assign shares S1-S4 to DEP_L, S5-S8 to DEP_M, and S9-S12 to DEP_H. Note that each share appears in exactly one department.
Step 3: Every department uses its 4 shares to create the shadows of its participants; for example, DEP_H uses S9-S12, and the other departments proceed likewise. Any 2 departments together hold only 4 + 4 = 8 different shares, which is still less than t_artificial = 12 shares. Therefore, the secret can only be revealed when all 3 departments have attendees, because at least 4 × 3 = t_artificial = 12 different shares are then present.
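Under the reading above (12 shares, 4 per department, and every participant of a department holding all 4 of its shares), the all-department property can be checked exhaustively; the snippet below is illustrative.

```python
from itertools import combinations

dept_shares = {                      # share labels from Steps 1-2 above
    "DEP_L": {1, 2, 3, 4},
    "DEP_M": {5, 6, 7, 8},
    "DEP_H": {9, 10, 11, 12},
}
t_artificial = 12

for r in (1, 2, 3):
    for depts in combinations(dept_shares, r):
        pooled = set().union(*(dept_shares[d] for d in depts))
        print(depts, "can unveil:", len(pooled) >= t_artificial)
# Only the full set ('DEP_L', 'DEP_M', 'DEP_H') pools 12 >= 12 shares.
```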
Practical Applications and Implementation Results
Practical application examples of forbidden-type sharing include the concealment of sensitive information in medical or military images or legal documents. For instance, for a given x-ray image, the corresponding patient's name, age, gender, and medical history are very sensitive and must be protected if this information is to be attached to the image for the convenience of the hospital's treatment team. To avoid legal action from the patient, no doctor of the treatment team should be allowed to see any personally identifiable information of the patient (except the x-ray image), unless a sufficient number of the members of the treatment team agree to unveil the hidden information simultaneously in the treatment meeting. Figure 4 gives an example of this kind of application. Figure 4a is the original 512 × 1024 lung x-ray and medical chart image of a patient. We may use (t_participants, n_participants) = (2, 4) sharing to share the image among 4 doctors {D1-D4}. The left half (the x-ray image) can be viewed through the cooperation of any two doctors. However, for the right half of the image, due to the sensitivity of the medical history, and also because doctors D1 and D2 are brothers, we may particularly forbid the disclosure of the medical history if only these two brothers attend the disclosure meeting. Therefore, the left half image is shared using traditional sharing without any forbidding, i.e., (t_participants, n_participants, f) = (t_artificial, n_artificial, f) = (2, 4, 0), whereas the right half image is shared using (t_participants, n_participants, f) = (2, 4, 1) with only one forbidden combination, namely {D1, D2}. The experiment is designed as follows. Split the original 512 × 1024 image in Figure 4a into two 512 × 512 images: one is the lung image, and the other is the medical history image. To show the mixed use of the two approaches, the lung image and the medical history image are shared using distinct approaches. The 512 × 512 lung image is shared using the linear-equations approach with (t_participants, n_participants, f) = (2, 4, 0). To achieve this, we need a 4-by-2 matrix A such that each row has 2 elements and any 2 of the 4 rows are independent. There are infinitely many ways to design this matrix; one such way is to let Row_i = (i, i+1) for i = 1,…, 4. Then, for each two-pixel pair of the x-ray image, doctor D_i stores an integer IP_i, which is the inner product of Row_i and the 2-dimensional vector formed of the two pixel values. Therefore, for the 512 × 512 x-ray image, each doctor stores 512 × 512/2 = 512 × 256 integers.
On the other hand, the right half of Figure 4a, i.e., the 512 × 512 medical history image, is shared using the polynomial approach with (t_participants, n_participants, f) = (2, 4, 1). To achieve this, we use (t_artificial, n_artificial) = (4, 7) in traditional polynomial-based sharing to share the 512 × 512 medical history image and obtain n_artificial = 7 shares {S1,..., S7} such that any t_artificial = 4 of them can recover the medical history. Then we distribute these 7 shares to the 4 doctors: D1 gets {S1, S2}, D2 gets {S2, S3}, D3 gets {S4, S5}, and D4 gets {S6, S7}. Note that we deliberately let the share S2 appear in both D1 and D2. By doing this, the total number of different shares in the combination {D1, D2} is only 2 + 1 = 3, which is less than the threshold value 4, meaning that the medical history cannot be revealed. Conversely, any other combination {Di, Dj} yields 4 distinct shares and, hence, can reveal the medical history. Notably, for the 512 × 512 medical history image, every doctor holds a record constituted by 2 shares, and each share is 4 times smaller in size than the original 512 × 512 medical history image in the (4, 7) threshold scheme. Therefore, the medical history data held by each doctor occupy 512 × 256 bytes, which is 2 × (1/4) = 1/2 of the size of the 512 × 512 medical history image. Now, combining the two results, we can see that each doctor holds 512 × 512/2 = 512 × 256 integers for the x-ray image and also holds 512 × 256 bytes for the medical history image. Since each integer is an inner product of a 2-element row and a vector formed of two pixel values, the integer is between 0 and 2295 = (4 × 255) + (5 × 255), where 255 is the largest possible gray value and 5 is the largest element of the matrix A whose Row_i = (i, i+1) for i = 1,…, 4. Hence, each integer needs ⌈(log 2296)/(log 2)⌉ = 12 bits, or equivalently, 12/8 bytes. Hence each doctor holds 512 × 256 × (12/8 + 1) = 512 × 256 × (5/2) bytes as his shadow data. Therefore, each shadow is (512 × 1024)/(512 × 256 × 2.5) = 1.6 times smaller than the size of the image in Figure 4a.
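The size bookkeeping in this example is easy to double-check programmatically; the following few lines reproduce the 12-bit and 1.6× figures from the numbers given above.

```python
import math

max_ip = 4 * 255 + 5 * 255            # largest inner product value: 2295
bits = math.ceil(math.log2(max_ip + 1))          # 12 bits per integer
xray_bytes = (512 * 512 // 2) * bits / 8         # linear-equations half
history_bytes = 2 * (512 * 512 // 4)             # two shares, each 1/4 size
shadow_bytes = xray_bytes + history_bytes
print(bits, (512 * 1024) / shadow_bytes)         # -> 12, 1.6
```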
For military use, the image in Figure 4a can be replaced by a military image such as Figure 5 and then shared likewise, so that certain combinations of participants are forbidden from viewing the dynamics data shown in the right half of Figure 5. The above are practical applications of sharing with forbidden combinations. Below, we discuss practical applications of sharing with cross-department support. Quite often, several departments are involved in the same event simultaneously. For example, after a terrorist attack in a city, many departments of the government will repeatedly check the same encrypted items, such as photos of the suspects, the weapons used, or the protection program for eyewitnesses. In this case, the national security department, the provincial police department, and the city police department form the three departments {DEP_H, DEP_M, DEP_L} mentioned in Section 3. Let each participant (security agent or police officer) belong to one of the three departments. Without loss of generality, assume the parameter pairs (t = threshold, n = number of people in this department) for the three departments are, respectively, (t_H = 3, n_H = 4) for DEP_H, (t_M = 5, n_M = 6) for DEP_M, and (t_L = 7, n_L = 7) for DEP_L. For the 4 officers in department DEP_H, any 3 of them can recover Figure 6. For the 6 officers in department DEP_M, any 5 of them can recover Figure 6. For the 7 officers in department DEP_L, all 7 of them must gather in order to recover Figure 6. However, due to illness, a road accident, or business trips abroad, not every officer of the same department comes to the office every day. It is thus quite possible that on some working days, a department has insufficient personnel to unveil the secret. For the government to still function, our cross-department support system can help unveil Figure 6 through the cooperation of multiple departments.
Using the design in Section 3.1, we conducted the experiments and found that the image in Figure 6 could indeed be unveiled as described in Table 2, either within a department or with cross-department support. Another application example concerns mobile-phone/automobile/airplane/ship factories, or any company that uses blueprints to build machines or products (see Figure 7). We can treat the blueprint as a secret image. The blueprint is then shared among the engineers of the production department; it can also be shared among the managers of the administration departments, or among the co-owners of the company. Hence, the cross-department support system is also helpful for the same reasons mentioned in the last paragraph. Figure 7. A blueprint image for a company that designs, builds, and sells houses.
Conclusions
In this paper, two types of secret image sharing, which are extensions of traditional image sharing, are proposed. Type 1 is the forbidden type, and Type 2 is the cross-department support type. Both the polynomial approach and the linear-equations approach can be applied to each type. By using the concept of redundant shares when assigning traditional shares to participants, and by mapping between the polynomial and linear-equations approaches, we obtain designs that achieve the requirements of both types. Notably, hacking becomes more difficult if both approaches are used in the same image. Furthermore, as stated in Remark 2 of Section 3, the value of the threshold t_artificial can be chosen freely; hence, we may also use a predetermined sequence of multiple thresholds, such as t_artificial = 14, 7, 11, 20, 43, 31, ..., to share the same secret in order to confuse hackers. Hacking becomes difficult because of this multiplicity of possible combinations.
We now compare the approaches and types. First, because Type 1 systems are designed on a case-by-case basis, systems of the forbidden type are harder to design than those of the cross-department support type. Second, the linear-equations approach is harder to apply than the polynomial approach, although Algorithm 2 provides a general procedure for obtaining the corresponding instance of the linear-equations approach from an instance of the polynomial approach. The difficulty arises because Step 1 of Algorithm 2 requires creating an n_artificial-by-t_artificial matrix for the conversion to the linear-equations approach, in which any t_artificial of the n_artificial rows are linearly independent. This makes the linear-equations approach slightly harder to apply, but such nontrivial complexity also makes systems designed using it more difficult to hack. As for storage, if the n_artificial-by-t_artificial matrix A needs to be stored, then the required storage space of each participant in the linear-equations approach is approximately t_artificial times larger than the corresponding space in the polynomial approach. However, if matrix A can be automatically generated by an algorithm or a preassigned method, then it does not need to be stored, making the storage space of the two approaches approximately equal; we analyze this claim as follows.
Each participant i in the linear-equations approach needs to store a value IP_i, which is the inner product of a row of A and a vector of t_artificial pixel values, that is, IP_i = (Row_i) · (D_secret). Because Row_i has t_artificial elements and D_secret also has t_artificial pixel values, the inner product value cannot exceed t_artificial × (Max A) × 256, where (Max A) is the maximal value of the elements of matrix A, 256 is the pixel value range, and the factor t_artificial appears because each inner product is the sum of t_artificial integers, each less than (Max A) × 256. To share a vector constituted by t_artificial pixels, each participant under the polynomial approach stores one byte, whereas each participant under the linear-equations approach stores approximately log_256[t_artificial × (Max A) × 256] = 1 + log_256(Max A) + log_256(t_artificial) bytes. Notably, 1 + log_256(Max A) + log_256(t_artificial) ≤ 1 + 1 + 1 = 3 if the maximal absolute value of the elements of matrix A is < 256 and t_artificial ≤ 256. If Max A becomes 65,535, the size amplification factor between the two approaches is still only 1 + 2 + 1 = 4. In the preceding analysis, if A contains negative elements, then Max A is replaced by the maximal absolute value of the elements of A. In summary, the use of the linear-equations approach (either singly or in conjunction with the polynomial approach, as described in Section 4.2) is meant to increase security against hackers, not to decrease the complexity of the user's work.
Third, we analyze the use of the LCM of all department thresholds as t_artificial in the cross-department support type, specifically regarding why the LCM yields greater economy in storage space than other candidate values. The shadow file sizes of participants in the three departments are, respectively, (t_artificial/t_H)/t_artificial = (LCM/t_H)/LCM = 1/t_H, (t_artificial/t_M)/t_artificial = (LCM/t_M)/LCM = 1/t_M, and (t_artificial/t_L)/t_artificial = (LCM/t_L)/LCM = 1/t_L of the size of the original secret. To understand why, note that each share is t_artificial times smaller than the original secret when we use t_artificial as the threshold in traditional sharing to create shares, and that each DEP_x participant uses t_artificial/t_x shares to create their shadow, where x denotes the department. However, if t_artificial ≠ LCM, then t_artificial/t_H, t_artificial/t_M, or t_artificial/t_L can be non-integers, which implies the possibility of ⌈t_artificial/t_H⌉/t_artificial > 1/t_H, ⌈t_artificial/t_M⌉/t_artificial > 1/t_M, or ⌈t_artificial/t_L⌉/t_artificial > 1/t_L. If the preceding ">" relation holds for some departments, then the shadow files in those departments are larger than the shadow files created by using t_artificial = LCM. However, although the LCM yields a more economical shadow size, its disadvantage is that because n_artificial shares must be created, and n_artificial > t_artificial, n_artificial may be too large if t_artificial is very large. In fact, in the linear-equations approach, we need to create an n_artificial-by-t_artificial matrix in which any t_artificial of the n_artificial rows are independent, and such independence becomes harder to achieve for increasing values of t_artificial.
Fourth, cross-department-supported sharing differs from progressive sharing [7,8,19]. In progressive sharing, the unveiled secret decreases in error until it finally becomes error free. In cross-department-supported sharing, the unveiling of the secret is either error free or yields nothing at all (100% vs. 0%). In other words, the secret disclosure of progressive sharing crosses several resolutions of image quality, whereas the secret disclosure of the cross-department support type crosses participants from several departments.
Notation used in this paper:

(t_participants, n_participants): The goal of Type 1 is that any t_participants of the n_participants given people can recover the secret together, unless these t_participants people form a forbidden combination.

(t_artificial, n_artificial): To achieve the goal specified in Type 1 (or Type 2), we use the traditional method to create n_artificial shares so that any t_artificial shares can recover the secret. These n_artificial shares are then distributed to the participating people; each person gets a so-called "shadow", which is formed of several "shares".

P_i: The i-th participant.

S_j: The j-th share.

Shadow: Each person holds a "shadow"; each shadow is formed of several shares.

IP_i: The inner product value held by person P_i in the linear-equations approach.
Author Contributions: Conceptualization, methodology, and draft preparation, C.Y.C.; Review & Editing, supervision and validation, J.C.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflicts of interest.
VSP data inversion for vertical velocity gradient and elliptical anisotropy model
Inversion of velocity parameters for the walkaway VSP data in a multilayered medium can be impeded by velocity gradients and anisotropy in some layers. A problem occurs if we compare velocities obtained from borehole seismic profiling which are equal to their vertical components with the velocities calculated with paths coming from far offsets where the horizontal component plays an important role, especially when the vertical gradient exists and the ray paths are curve-shaped. In this contribution we present the results of velocity model inversion for VSP data considering velocity gradient and elliptical anisotropy. The algorithm consists of two steps, optimization of velocity parameters and optimization of ray paths for the given model. Both procedures use the Nelder-Mead simplex method which finds local minima. Due to the character of optimization we performed also multistart analysis which can provide information about possible equivalences between parameters. Analysis was conducted for different parameterizations, in some cases allowing introduction of additional parameters: vertical gradient and elliptical anisotropy coefficient. The optimal model for a specific set of data is chosen with the help of Bayesian Information Criterion to balance complexity of model with quality of approximation of traveltimes.
Introduction
Correct inversion of the velocity model is one of the most important problems in seismic data analysis. In the simplest version, the geological model is composed of several parallel layers, each characterized by one parameter: velocity. Commonly, however, a layer does not have a single homogeneous velocity; velocity varies with depth or with the direction of wave propagation. In practical investigations, simplifications of the real geological conditions are used. One of them is a vertical velocity gradient, which is especially useful in sedimentary rocks, where velocity increases with depth due to compaction. Another factor influencing the recorded seismic data is elastic anisotropy. Changes of velocity with the direction of wave propagation are common but do not always play a significant role. Especially important is the model of vertical transverse isotropy, where velocity changes only with respect to the vertical angle of propagation. One useful simplification describing this phenomenon is the model of elliptical anisotropy, utilized in many recent studies, e.g. [1].
In this paper, we used the model of elliptical anisotropy combined with a vertical velocity gradient to invert the velocity model from walk-away VSP data from the Newfoundland Shelf. Since the model was composed of three layers, ray tracing based on Fermat's principle was used. The optimal number of parameters was chosen with the Bayesian Information Criterion to achieve a balance between data fitting and model complexity.
Elliptical anisotropy
We consider a medium described by 3 parameters: the vertical velocity, its gradient, and the elliptical anisotropy coefficient. The last of these is commonly used to simplify the phenomenon of anisotropy and express it as a single value. It is especially useful in horizontally layered models, where the velocity of wave propagation in the horizontal direction (along the laminas) is slightly larger than in the vertical direction. If the layers are dipping at small angles, the same approximation can be used. The elliptical anisotropy coefficient χ connects the horizontal (v_h) and vertical (v_v) velocities with the formula [1]:

v_h = v_v √(1 + 2χ).  (1)

This model is a simplification of Thomsen's model with two anisotropy coefficients, δ and ε, fulfilling the condition that both have the same value [2]. In a medium characterized as VTI, this approximation of the real situation is acceptable.
The value of the P-wave velocity depends on the vertical angle θ of wave propagation. With the vertical velocity v_0 (equivalent to v(0)), the formula for the actual wave velocity is given by Thomsen [3]:

v(θ) ≈ v_0 (1 + δ sin²θ cos²θ + ε sin⁴θ),  (2)

where δ is a coefficient connecting elements of the elastic stiffness tensor c_ij:

δ = [(c_13 + c_44)² − (c_33 − c_44)²] / [2 c_33 (c_33 − c_44)].  (3)

Rogister and Slawinski [4] formulated an analytical formula for the traveltime in such a medium, with the source at the point (0, 0) and the receiver at the point (x, z); the resulting closed-form expression, Equation (4), involves the atanh function. The mentioned formulae are valid for one layer only. If one has to calculate traveltimes in a multi-layered medium, ray path optimization has to be performed (for a two-layer model an analytical solution is given in [4]). The algorithm for finding the correct ray path is described in the next section.
Inversion algorithm
Since the ray paths in a medium characterized by a vertical velocity gradient and elliptical anisotropy are not straight lines, we decided to use a ray tracing method not based on Snell's law. The real wave path through the boundaries can be retrieved with Fermat's principle applied directly. This means that for every calculation of the traveltime, one has to find the path giving the least time between a source and a receiver located at specified coordinates.
The base software algorithm consists of a two-step optimization. First, a starting velocity model is prepared. For the current model, optimization of the ray path is performed, with the target function equal to the traveltime and the X-coordinates of the crossings through the boundaries being the subject of optimization. When the traveltimes for all pairs of sources and receivers are correctly calculated, the target function value for the velocity model optimization is computed as the sum of squared differences between modelled and measured traveltimes. Then, the velocity parameters are changed according to the target function value, and new ray paths are computed for the new velocity model.
For both of these optimizations, the Nelder-Mead simplex method is used. The method has a local character and is not gradient-based, so it can also be used for non-differentiable target functions. In the course of optimization, a polyhedron called a simplex is created in the space of solutions. Through modifications of its vertices, the simplex converges to a point with a minimum of the target function. For a detailed description see [5].
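As an illustration of this nested scheme, the sketch below implements the two-level optimization with SciPy's Nelder-Mead solver on a toy two-layer model. For brevity it uses straight-ray traveltimes within each layer instead of the gradient/anisotropy traveltime formula of [4], and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def traveltime(src, rec, interfaces, velocities):
    """Inner optimization (Fermat's principle): the x-coordinates where
    the ray crosses the layer interfaces are optimized to minimize the
    total traveltime (straight segments stand in for curved rays)."""
    zs = [src[1], *interfaces, rec[1]]
    def total(xcross):
        xs = [src[0], *xcross, rec[0]]
        return sum(np.hypot(xs[i + 1] - xs[i], zs[i + 1] - zs[i]) / velocities[i]
                   for i in range(len(velocities)))
    x0 = np.linspace(src[0], rec[0], len(interfaces) + 2)[1:-1]
    return minimize(total, x0, method="Nelder-Mead").fun

def misfit(vels, data, interfaces):
    """Outer target function: sum of squared traveltime residuals."""
    return sum((traveltime(s, r, interfaces, vels) - t_obs) ** 2
               for s, r, t_obs in data)

# Synthetic walk-away VSP geometry: surface sources, borehole receiver.
interfaces, true_v = [1300.0], [2000.0, 2600.0]
rec = (0.0, 2000.0)
data = [((off, 0.0), rec, traveltime((off, 0.0), rec, interfaces, true_v))
        for off in (500.0, 1500.0, 3000.0)]
res = minimize(misfit, x0=[1800.0, 2400.0], args=(data, interfaces),
               method="Nelder-Mead")  # res.x approaches true_v
```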
Bayesian Information Criterion
Since geophysical data are always burdened with measurement errors, one should not use a model that describes reality in too much detail. Excessive simplification of the model is also undesirable. In order to balance these two restrictions, some objective criterion is necessary. A commonly used tool is the Bayesian Information Criterion (BIC) [6]. The BIC value grows either with the number of parameters or with the growth of the target function, so it ensures an optimal number of model parameters (it is more restrictive than other information criteria, e.g., Akaike's). The original formula for the BIC value is:

BIC = k ln(M) − 2 ln(L),  (6)

where L is the maximized likelihood, M is the number of observations, and k is the number of parameters in the model. According to [7], we can replace the search for the maximum likelihood in (6) with finding the minimum of the following formula:

BIC = M ln(σ²) + k ln(M),  (7)

where σ² is the error variance. The method is also used in other similar research, e.g. [8].
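In code, Equation (7) amounts to a few lines; the sketch below (with illustrative names) scores a fit by its residuals and parameter count, after which the parameterization with the smallest BIC is selected.

```python
import numpy as np

def bic(residuals, k):
    """Least-squares BIC, Equation (7): M*ln(sigma^2) + k*ln(M),
    with sigma^2 estimated as the mean squared residual."""
    m = len(residuals)
    sigma2 = np.mean(np.square(residuals))
    return m * np.log(sigma2) + k * np.log(m)

# Hypothetical residual vectors for 6-, 7-, and 8-parameter models:
# models = {6: r6, 7: r7, 8: r8}
# best_k = min(models, key=lambda k: bic(models[k], k))
```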
Data description
In this paper, we used data from VSP measurements on the Newfoundland Shelf presented in [9]. The core of the dataset is walk-away VSP data recorded with 5 receivers inside the borehole at depths between 1980 m and 2020 m. An airgun was the wave source; shots were fired at intervals of about 25 meters, with maximal offsets reaching 1000 meters on the "short side" and 4000 meters on the "long side".
The zero-offset VSP data (Fig. 1) were used to create a starting model for the inversion. According to the smoothed interval velocity curve interpreted from the traveltimes, the velocity model should be divided into three layers (in the part significant for the walk-away VSP data inversion). The boundaries are visible at depths of about 1300 meters and 1750 meters, and in the starting model of the inversion these values are used as fixed depths.
The results of inversion
The optimization of the velocity model was performed for many starting models differing in the number of parameters, starting values, and starting steps of the optimization (an element of the Nelder-Mead algorithm). In the course of the tests, details of the software were adjusted; e.g., in order to avoid problems with vertical rays, the near-offset data were removed from the dataset used to calculate the target function. For near-vertical segments of a seismic ray, numerical errors occur, since the precision of the argument of the atanh function in equation (4) is limited. Similar conclusions were reached in [9]; thus, the author suggests not including near-offset data in the calculations. Excluding the cases disturbed by those numerical errors, all remaining optimizations, despite the many different starting models, led to traveltimes consistent with the measured ones (Fig. 2). The seismic ray paths obtained as a result of the optimization based on Fermat's principle also seem correct: the angles of wave propagation for neighbouring rays are consistent and follow the rules connecting velocities and ray angles (Fig. 3).
The multistart analysis
Since the optimization has a local character, a multistart analysis was performed to ensure that the obtained results are indeed the best models. The whole procedure of velocity model inversion was repeated 1000 times with differing starting values of the parameters, for each type of model with respect to the number of parameters. Depending on the starting model, different values were obtained; however, some dependencies between parameter values can be noticed. In Fig. 4, the values of the parameters obtained as a result of the multistart analysis for the 7-parameter model are presented as crossplots. The data exhibit a strong correlation between the a and b parameters of the first layer (correlation coefficient about -0.966). The negative correlation means that a lower velocity at zero depth is compensated by an increased vertical gradient. This dependence is not related to the number of model parameters or to the value of the elliptical anisotropy coefficient, if it is considered. Similarly, in the second layer, the correlation between the a and χ parameters has a coefficient of about -0.679. In this case, the gradient does not compensate changes of velocity at the top of the layer.
The comparison of the target function values achieved with different values of specific parameters also provides interesting insights (Fig. 5). The values of the parameters a0 and χ1 in the 8-parameter model, plotted against the target function value, create curve-shaped optimization fronts. Some of the results demonstrate that, for a given value of some parameter, the target function cannot fall below a specific value even when the other parameters are changed.
BIC analysis
For the best result of each parameterization, the BIC value was calculated with equation (7) to choose the most suitable model (see section 2.3). The obtained values show that the 7-parameter model is the best one, as its BIC has the lowest value. For both more and less complicated models, the BIC value grows with the growth of the target function or of the model complexity. This means that only in the middle layer is the anisotropy significant enough to justify introducing the elliptical anisotropy coefficient. In the other layers, the impact of possible anisotropy on the traveltimes may be either absent or hidden under the measurement errors. The dependence between the level of measurement errors and the optimal number of parameters chosen with the BIC criterion is analysed in [10] with the example of synthetic datasets.
Conclusions
An analysis of the results of velocity model optimization under the elliptical anisotropy assumption was conducted. The algorithm worked correctly, finding local minima that allow the measured and modelled traveltimes to differ by no more than the measurement precision. Using the Nelder-Mead optimization method brings good results, but because of its local character it is important to provide a starting model close to the real one. The created tool can be successfully used to find velocity parameters in a multi-layered medium, with a starting model created on the basis of zero-offset VSP or other a priori data.
The multistart analysis simulates the use of global optimization in finding the velocity parameters, simultaneously showing the regions where local minima occur. Comparing the target function value with the values of specific parameters can lead to conclusions about the effectiveness of the optimization algorithm in heading toward the global minimum. Comparison of velocity parameter values in pairs or groups can suggest the existence of correlations between parameters, e.g., a0-b0 and a1-χ1 (indices refer to the number of the layer, as described in Fig. 4). This may result in difficulties with finding the proper parameter values, as they can compensate the traveltimes in pairs or larger groups.
The BIC analysis demonstrates that in this case the 7-parameter model is the best one; the optimal values are presented in Table 1. The existence of significant anisotropy, described with the elliptical anisotropy coefficient, helps to obtain a model that fits the measured data better without complicating the model too much. The presented algorithm, composed of finding the minima of the target function for different parameterizations and then choosing the optimal parameterization with the BIC criterion, provides the user with good capabilities for solving the problem of velocity model inversion in a multi-layered anisotropic medium.
Oligodendroglioma: A Review of Management and Pathways
Anaplastic oligodendrogliomas are a type of glioma that occurs primarily in adults but is also found in children. These tumors are genetically defined according to the mutations they harbor. Grade II and grade III tumors can be differentiated most of the time by the presence of anaplastic features. The earliest regimen used for the treatment of these tumors was procarbazine, lomustine, and vincristine. The treatment modalities have shifted over time, and recent studies are considering immunotherapy as an option as well. This review assesses the latest management modalities along with the pathways involved in the pathogenesis of these malignancies.
INTRODUCTION
Oligodendroglial tumors are rare tumors that constitute part of the neuroepithelial tumors of the central nervous system. Accounting for up to 5% of all neuroepithelial tumors (Ostrom et al., 2017), oligodendroglial tumors have an incidence rate of around 1,000 new cases per year in the United States. Oligodendroglial tumors can be divided into two groups based on the classification of the World Health Organization (WHO): grade II oligodendroglioma and grade III (anaplastic) oligodendroglioma. Most commonly occurring between 25 and 45 years of age, grade III oligodendrogliomas tend to present 10 years later than grade II tumors and can rarely develop in younger and older populations. Oligodendroglioma is genetically defined as a tumor confirmed to harbor either an IDH1 or IDH2 mutation along with codeletion of chromosome arms 1p and 19q. Histologically, oligodendroglial tumors show sheets of isomorphic round nuclei with a clear cytoplasm, the classic "fried egg" appearance. Grade III oligodendrogliomas show a worse prognosis than grade II tumors due to the presence of anaplastic features such as nuclear atypia, necrosis, microvascular proliferation, high cell density, and a high number of mitotic figures. It is believed that anaplastic oligodendroglioma (AO) can progress from a lower-grade oligodendroglioma after the acquisition of specific genetic alterations (Youssef and Miller, 2020). However, a clear distinction between the two grades is not always possible. Increasing interest has been directed toward the favorable molecular markers that oligodendrogliomas harbor. In this review article, we describe the clinical management of AO and summarize the different molecular pathways that drive the development, maintenance, and treatment response of these tumors.
CURRENT MANAGEMENT GUIDELINES
To establish the diagnosis of AO, a pathological sample is crucial. Hence, surgeons should biopsy patients suspected to have AO and attempt tumor resection, as with all other high-grade gliomas. In a study by Shin et al. (2020), gross tumor resection (GTR) was done in 43 of 88 patients. Upon multivariate analysis, median progression-free survival (PFS) was 41.1 vs. 23.9 months, with a hazard ratio (HR) of 0.58 and a 95% CI of 0.35-0.97 (p = 0.038), compared to patients who had no GTR (Shin et al., 2020). However, upon multivariate analysis there was no significant difference in overall survival (OS). Similarly, in a retrospective study by Fujii et al. (2017), patients with anaplastic astrocytoma or anaplastic oligoastrocytoma, but not AO, had a significant survival advantage when resection of at least 53% of the preoperative T2-weighted high-signal-intensity volume was done. Alattar et al. (2018) conducted a Surveillance, Epidemiology, and End Results (SEER)-based analysis in 2017 and showed that GTR was not associated with improved survival in patients with WHO grade II and grade III oligodendrogliomas, in contrast to patients with anaplastic astrocytomas and glioblastomas. This was attributed to the sensitivity of oligodendrogliomas to chemotherapy compared to astrocytomas (Alattar et al., 2018).
Although surgery can help relieve symptoms by decreasing the mass effect of the tumor, the tumor's predilection to the frontal lobe hinders its maximal resection. This comes with a risk of sacrificing important brain centers and hence compromising functionality and quality of life. Retrospective studies have revealed that the post-operative seizure-free rate is 67-80% (Luyken et al., 2003;Zaatreh et al., 2003;Benifla et al., 2006;Chang et al., 2008;Englot et al., 2011). Despite utilizing a multimodal approach in nearly all patients, refractory seizures can still be seen in patients suffering from epilepsy in 50% of the cases before the initial surgery and 15-40% of cases following surgery and anticonvulsant therapy (Smits and Duffau, 2011;You et al., 2011;Calatozzolo et al., 2012). Two plausible hypotheses to explain treatment resistance in oligodendrogliomas exist. The first is the presence of alterations in drug targets affecting antiepileptic drugs' binding. The second is diminished intracellular drug transport through the overexpression of ATP-binding cassette transporter proteins such as P-gp (MDR1), MRP1, and MRP5 (Calatozzolo et al., 2012;Alms et al., 2014).
Postoperative radiotherapy (XRT) to a total dose of roughly 60 Gy over 30 fractions is recommended (Blakeley and Grossman, 2008). Although one survey showed that 34% of neuro-oncologists suggested delaying XRT in patients with 1p19q co-deletions (Abrey et al., 2007), clinical trials addressing the efficacy of delayed XRT in this subset of patients are needed.
The earliest results of the chemotherapy regimen of procarbazine, lomustine (CCNU), and vincristine (PCV) in AO were reported by Cairncross et al. (1994) and showed that the median time to progression was at least 25.2 months for complete responders, 14.2 months for partial responders, and 6.8 months for stable patients. Afterward, in 2001, Chinot et al. (2001) reported that 16.7% of patients experienced a complete response and 27.1% experienced a partial response when receiving temozolomide (TMZ) after previous PCV. The Radiation Therapy Oncology Group (RTOG) also explored the use of pre-irradiation TMZ followed by concurrent TMZ and radiotherapy in a phase 2 study (RTOG BR0131) (Vogelbaum et al., 2009).
Accordingly, the treatment approach is tailored to the presence of the 1p19q co-deletion, which characterizes oligodendrogliomas. Patients harboring co-deleted tumors can receive either PCV or TMZ. The European Organization for Research and Treatment of Cancer study 26951 (EORTC26951) and RTOG9402 showed an increase in OS and PFS when PCV is added to radiotherapy (RT) in patients with 1p19q co-deleted oligodendrogliomas (Cairncross et al., 2013;van den Bent et al., 2013a). With almost 12 years of follow-up, patients harboring tumors with 1p19q co-deletions showed improved survival when treated with PCV and RT as compared to RT alone (EORTC26951: 157 vs. 50 months; RTOG9402: 14.7 vs. 7.3 years). In both trials, treatment of these patients with PCV plus RT demonstrated an improved OS compared to RT alone. In patients with astrocytic tumors, only PFS was prolonged in patients treated with XRT who received up-front PCV vs. PCV at the time of recurrence (Pan-Weisz, 2019;Tork and Atkinson, 2020). In terms of quality of life (QOL), however, the EORTC study showed no difference between the two groups, and PCV toxicity contributed to a decreased QOL for a prolonged period.
The response of tumors harboring IDH mutations to PCV therapy has also been described in a subset analysis and follow-up study of the RTOG9402 trial. As expected, patients with an IDH mutation and 1p19q co-deletion showed a significant OS benefit. While IDH-WT tumors retained a poor prognosis and showed no benefit from PCV treatment, improved OS was seen in IDH-mutant non-co-deleted tumors and astrocytic tumors when treated with PCV plus RT.
A subset analysis of patients with other methylation profiles, such as CpG island methylator phenotype (CIMP) and MGMT promoter methylation (MGMT-STP27) status, was also conducted by van den Bent et al. (2013b). It was found that CIMP+ or MGMT-STP27 methylated tumors had a superior OS (6.46 vs. 1.05 years by CIMP status and 3.8 vs. 1.06 years by MGMT-STP27 status; both P < 0.0001). CIMP+ and MGMT-STP27 methylated tumors also had a clear benefit from adjuvant PCV; the median OS in the RT and RT-PCV arms was 3.27 vs. 9.51 years (P = 0.0033) for CIMP+ tumors and 1.98 vs. 8.65 years (P < 0.0001) for MGMT-STP27 methylated tumors (van den Bent et al., 2013b). There was, however, no such benefit for CIMP- or MGMT-STP27 unmethylated tumors. Adjuvant TMZ has also been shown to be effective, with better tolerability and less toxicity (van den Bent et al., 2003;Brandes et al., 2006). A randomized clinical trial is currently in progress to compare the efficacy of PCV or TMZ when combined with RT in 1p19q co-deleted tumors (CODEL: NCT00887146); preliminary results are mentioned toward the end of this manuscript. For patients with astrocytic tumors, EORTC26951 and RTOG9402 did not show any benefit of PCV with RT. A trial of adjuvant TMZ with RT in patients harboring this tumor subtype showed significantly improved PFS and OS (van den Bent et al., 2017). While increasing the risk of toxicity, concurrent TMZ is currently being assessed in comparison to adjuvant treatment in astrocytic tumors (van den Bent et al., 2017). The interim report from the RTOG0131 trial suggests that combination therapy with TMZ and XRT is well tolerated in patients with AO treated with neoadjuvant TMZ for 6 months, followed by TMZ and concurrent XRT (Tork and Atkinson, 2020). Apart from RTOG9402 and EORTC26951, Wick et al. (2016) conducted NOA-04, a randomized phase 3 trial of sequential RT followed by chemotherapy with PCV or TMZ in anaplastic glioma (Vogelbaum et al., 2009). In this trial, MGMT hypermethylation was associated with prolonged PFS in both arms (Wick et al., 2009;Tork and Atkinson, 2020).
All in all, patients with 1p19q co-deleted tumors should be treated with RT and adjuvant PCV, while those lacking this co-deletion should receive adjuvant TMZ. PCV and TMZ are also used in cases of recurrence but result in lower response rates and disease-free survival. Other agents have also been investigated for recurring disease, including paclitaxel, irinotecan, carboplatin, etoposide, and cisplatin (Poisson et al., 1991;Yung et al., 1991;Warnick et al., 1994;Kormanik, 1995, 1999;Fulton et al., 1996;Macdonald et al., 1996;Friedman et al., 1999;Chang et al., 2001;Cloughesy et al., 2003;Batchelor et al., 2004;Ascierto et al., 2016). However, none has shown sufficient benefit for treating patients with recurrent AO. Table 1 outlines some information related to the major drugs used in treatment.
Recently, immunotherapy has been explored as a potential treatment modality. Elens et al. (2012) reported the survival benefit of immunotherapy in patients with relapsed AO enrolled in the HGG-IMMUNO-2003 trial; the PFS and OS were 3.4 and 18.8 months, respectively. In a recent case report by Yu et al. (2021), a patient with multiple tumor recurrences following several regimens was eventually started on nivolumab. On magnetic resonance imaging (MRI), he was considered to have disease progression. Upon surgical debulking and pathological diagnosis, he was found to have recurrent disease. However, tumor samples collected from enhancing and non-enhancing areas for single-cell RNA sequencing (scRNA-seq) analysis revealed an abundance of immune cells, and infiltration of these cells might have been perceived as the increased mass on MRI. The patient sustained a disease-free response to nivolumab for at least 12 months after surgery. This highlights the importance of incorporating novel techniques to better understand the tumor microenvironment (Yu et al., 2021).
PATHWAYS IN ANAPLASTIC OLIGODENDROGLIOMA
Isocitrate Dehydrogenase 1/2 Mutation (IDH1/2)

The main function of the IDH1 and IDH2 enzymes is the oxidative decarboxylation of isocitrate to alpha-ketoglutarate (alpha-KG). This reaction promotes the formation of NADPH, the reduced form of NADP+, which helps protect the cell from oxidative radicals that can damage DNA (Soffietti et al., 1998;van den Bent et al., 1998). In the cytosol, alpha-KG, the product of the reaction catalyzed by IDH1, has been reported to be involved in multiple cellular pathways including hypoxia sensing, lipogenesis, and epigenetic modification through its action on alpha-KG-dependent dioxygenases such as TET and JmjC and other enzymes (Mason et al., 1996;Buckner et al., 2003;Abrey et al., 2006;Taliansky-Aronov et al., 2006). The role of IDH2, on the other hand, is limited to the mitochondria, where it catalyzes the same reaction as part of the tricarboxylic acid (TCA) cycle.
IDH mutations identified in gliomas tend to occur at the active site of the enzyme, at arginine 132 in IDH1 and arginine 172 in IDH2. When heterozygous, IDH mutations can dominantly inhibit WT-IDH through the formation of enzymatically inactive heterodimers (Zhao et al., 2009). It was shown by Uhm (2010) that IDH mutations lead to the acquisition of a new enzymatic function that catalyzes the formation of D-2HG from alpha-KG. 2-HG can inhibit alpha-KG-dependent dioxygenases and cause epigenetic alterations (Xu et al., 2011). It can also stimulate the activity of EGLN, leading to decreased HIF levels, which in turn allows tumor proliferation in low oxygen conditions (Zhao et al., 2009;Koivunen et al., 2012). It has also been reported that 2-HG can inhibit p53 via microRNA activated by HIF-2α, driving tumorigenesis (Jiang et al., 2018). Because the mutation disrupts IDH's normal enzymatic function, the NADP+/NADPH balance is tipped, increasing the production of ROS and leading to DNA damage and tumor formation (Latini et al., 2003;Rinaldi et al., 2016). JmjC demethylases are one of the many dioxygenase families regulated by α-KG and inhibited by 2-HG; they are responsible for histone demethylation on lysine residues. It has been observed that in IDH-mutant cell lines, repressive histone methylation precedes global DNA hypermethylation. This provides evidence that IDH mutations could allow cells to remain in a vulnerable state, prone to additional DNA alterations. Mutant IDH1 has also been shown to inhibit the AlkB family of DNA repair enzymes, further contributing to erroneous DNA replication (Wang et al., 2015;Rinaldi et al., 2016).
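For orientation, the wild-type and neomorphic (mutant) reactions described above can be summarized schematically as follows (standard biochemistry, written here for reference rather than taken from the cited articles):

$$\text{isocitrate} + \text{NADP}^+ \;\xrightarrow{\;\text{wild-type IDH1/2}\;}\; \alpha\text{-KG} + \text{CO}_2 + \text{NADPH}$$
$$\alpha\text{-KG} + \text{NADPH} \;\xrightarrow{\;\text{mutant IDH1/2}\;}\; \text{D-2HG} + \text{NADP}^+$$

The net effect of the mutation is thus twofold: NADPH is consumed rather than produced, and the oncometabolite D-2HG accumulates.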
High mutant allele fractions have been found in patient samples at diagnosis and recurrence in tumor evolution studies, and IDH1 mutations seem to be at the core of this tumorigenesis (Johnson et al., 2014). IDH-mutated enzymes can promote proliferation and colony formation through the end metabolite 2-HG (Koivunen et al., 2012;Bittinger et al., 2013). Turcan et al. (2012) showed that an IDH1 mutation can induce a methylation profile known as the G-CIMP signature, a glioma-specific methylation pattern at CpG islands. Interestingly, in vitro treatment of cells with D-2HG also induced a similar methylation pattern, which further supports the vital role of this metabolite in epigenetic alteration and tumor formation. Additionally, hypermethylation caused by IDH1 mutations was shown to occur at CTCF-binding sites that normally insulate and prevent the interaction between different parts of the genome; methylation of these sites promotes the interaction of enhancers with new genes (Flavahan et al., 2016). IDH mutations have also been implicated in the regulation of the recruitment of inflammatory cells to tumor sites, specifically through D-2HG. Evidence from in vivo models has demonstrated reduced levels of STAT1 and CXCL10 in IDH-mutant gliomas, and infiltration of immune cells, specifically T cells, was also reduced in these tumors (Amankulor et al., 2017;Kohanbash et al., 2017). Additionally, the mTOR pathway has been identified as a potential target for treatment due to its activation in IDH-mutant gliomas. This occurs via 2-HG's inhibition of KDM4A, an α-KG-dependent dioxygenase, and destabilization of DEPTOR, a negative regulator of mTORC1/2, resulting in mTOR pathway activation (Carbonneau et al., 2016). This activation is of special interest since mTOR and its downstream effectors have been implicated in tumorigenesis in brain malignancies (Fan and Weiss, 2010;Ryskalin et al., 2017).
The RTOG 9802 trial, which included non-molecularly stratified patients harboring grade II gliomas, demonstrated a 5.5-year survival benefit of PCV administration (Shaw et al., 2008). Results of this trial raise the possibility that the chemosensitivity seen in these tumors might be due to the IDH mutation that is common to both oligodendroglial and low-grade astrocytic gliomas. Upon reanalysis of RTOG 9802 after molecular classification, AO patients with IDH-mutated tumors indeed showed a survival benefit when treated with PCV chemotherapy (Cairncross et al., 2014). However, analysis of other trials such as EORTC 26951 did not reveal a correlation between IDH mutations and survival in patients with astrocytic tumors (grade II) (van den Bent et al., 2010, 2013a). Appropriate design of future clinical trials can help in determining better correlations with molecular subclasses.
Chromosome 1p19q Co-deletion: Role of CIC, FUBP1, and NOTCH1 in Promoting Carcinogenesis
The unbalanced translocation of the centromeric regions of chromosomes 1p and 19q results in the loss of the whole arm of each chromosome. This co-deletion, along with the IDH mutation, enables a tumor to be classified as an oligodendroglioma according to the WHO 2016 criteria (Louis et al., 2016). Patients with co-deleted tumors demonstrate favorable prognoses (Smith et al., 2000a;Ino et al., 2001;Cairncross et al., 2006;Kaloshi et al., 2007;Cairncross et al., 2013). The mechanism by which this co-deletion leads to chemosensitivity remains unclear, and data implicating other genes in this chemosensitivity are emerging.
Following the stratification of AO according to 1p/19q co-deletion status, an in-depth genetic analysis of 1p/19q co-deleted tumors revealed inactivating mutations affecting the FUBP1 gene on chromosome 1p and the CIC gene on chromosome 19q (Bettegowda, 2000;Sahm et al., 2012;Yip et al., 2012). CIC normally functions as a reversible repressor by binding to the DNA regulatory elements downstream of growth factor signaling pathways (Ajuria et al., 2011). CIC acts as a tumor-suppressor gene, and the missense mutations affecting it are mostly found within the DNA-binding domain, thereby inhibiting its binding to regulatory elements. In a population of patients with oligodendroglial tumors, four cases exhibited absent CIC expression with no detectable mutations, suggesting that alterations affecting CIC could occur through other, unidentified mechanisms (Chan et al., 2014). Another DNA-binding protein found mutated in AO is FUBP1. The Far Upstream Element (FUSE) Binding Protein 1 (FUBP1) is known to regulate several cell cycle regulators such as MYC and p21. While often upregulated in many tumors, FUBP1 acts as a tumor suppressor gene in oligodendroglial tumors, where inactivating mutations are reported in around 15% of cases (Baumgarten et al., 2014). As for the clinical relevance of these molecular markers, inactivating mutations affecting FUBP1 have correlated with a shorter time to recurrence, and CIC mutations have been associated with a worse prognosis, especially in patients with 1p/19q co-deleted oligodendrogliomas (Chan et al., 2014;Michaud et al., 2018). Nevertheless, further studies are needed to elucidate the role of CIC/FUBP1 alterations in the pathogenesis of AO and oligodendrogliomas in general.
CDKN2A Homozygous Deletion/9p LOH: Role in Progression to Anaplastic Oligodendroglioma
In addition to the aforementioned pathways, homozygous and the less common hemizygous losses of 9p21 have been reported with high frequencies in gliomas, and in up to 55% of AO (Maruno et al., 1996;Perry et al., 1999;Rasheed et al., 2002;Ohgaki and Kleihues, 2009;Michaud et al., 2018). These alterations have correlated with a shorter event-free survival (EFS; 29 vs. 53 months, p < 0.0001) and OS (48 vs. 83 months, p < 0.0001). At the molecular level, 9p losses result in the loss of the cyclin-dependent kinase inhibitor gene CDKN2A, which normally inhibits cellular division. CDKN2A inhibits the interaction between the cyclin-dependent kinases CDK4 or CDK6 and D-type cyclins, preventing both the phosphorylation of the retinoblastoma (RB1) protein and the release of the E2F transcription factor (Weinberg, 1995;Sherr and Roberts, 1999). Hence, cellular proliferation and dysregulation of proapoptotic pathways ensue (Ruas and Peters, 1998). Although 9p losses can be found in many gliomas, they more commonly occur in higher grade tumors (grades 3 and 4), making the CDKN2A gene and its p16 protein product potential players in the malignant progression and anaplastic transformation of low-grade gliomas into higher grades (He et al., 1995;Ueki et al., 1996;Watanabe et al., 2001). Interestingly, some tumors exhibited p16 hyperexpression without any chromosome 9p alterations, and this was associated with a shorter EFS and OS. Cyclin D1 expression was also significantly higher in AO and was associated with a shorter EFS (Michaud et al., 2018).
TCF12 Transcription Factor Mutations
The TCF12 protein is a transcription factor and member of the basic helix-loop-helix (bHLH) E-protein family. Through the formation of homo- and heterodimers with other bHLH transcription factors, TCF12 modulates the transcription of specific genes that are intrinsic to the oligodendrocyte lineage (Fu et al., 2009) and are involved in neural development (Uittenbogaard and Chiaramello, 2002). Two main alterations affecting the TCF12 protein have been reported in AO: absence of the bHLH DNA-binding domain and single amino acid substitutions such as R602M within the bHLH domain. Both types of alterations have been shown to drastically impact the ability of TCF12 to function as a transcription factor and interact with other bHLH proteins, eventually leading to mutant protein accumulation (Labreche et al., 2015). Patients harboring TCF12 mutations or LOH exhibited a shorter median OS, and the frequency of these alterations was much higher in grade III AO than in grade II oligodendroglioma. This suggests that TCF12 alterations play a role in dictating an aggressive phenotype in AO. One analysis of the downstream effects of TCF12 alterations showed a downregulation of the TCF21, EZH2, and BMI1 pathways and especially of CDH1 (E-cadherin), which has been implicated in tumor characteristics and metastasis. Interestingly, it has been reported that TCF12 may have a haploinsufficient tumor suppressor role, which increases the risk of developing AO in patients harboring a TCF12 germline mutation.
TERT Promoter Mutations
Human telomerase reverse transcriptase (TERT) mutations have been found to be present in 77% of grades II and III oligodendrogliomas and 82% of tumors with 1p19q co-deletion (Koelsche et al., 2013). Telomerase reverse transcriptase is a subunit of the enzyme telomerase that protects the overall integrity and length of telomeres. Telomerase normally functions to regenerate chromosomal ends (telomeres) thereby allowing DNA replication and mitosis. While usually unexpressed in mature cells, cancer cells make use of this enzyme to promote their survival and increase proliferation. TERT mutations in glioma are often found within the promoter region. This results in the opening of a binding site for the E26 transformation-specific transcription factors (Killela et al., 2013). TERT reactivation then takes place when GA-binding protein (GABP) transcription factor binds to the mutant TERT promoter (Dahlin et al., 2016).
In addition to being a surrogate for oligodendroglial lineage, TERT mutations seem to have some prognostic significance (Dahlin et al., 2016). Pekmezci et al. (2017) studied the status of both TERT and ATRX mutations along with their prognostic values in cohorts including grade II/III astrocytomas. The wildtype (WT) TERT group was associated with good prognosis only in IDH1/IDH2 WT (IDH-WT) grade II/III astrocytomas. However, in those groups with IDH mutations, including AO, TERT promoter mutation status was not a statistically significant prognostic factor (Dahlin et al., 2016). Thus, prognostic markers should be assessed while accounting for other genetic alterations.
Lately, IDH1 and IDH2, which are known to generate nicotinamide adenine dinucleotide phosphate (NADPH), have been studied intensively. Their predictive value stems from their close relationship to human gliomas. Zou et al. (2013) conducted the first meta-analysis on PFS and OS in gliomas based on IDH mutation status. A better outcome was associated with IDH mutations, with combined HR estimates of 0.33 (95% CI: 0.25-0.42) for OS and 0.38 (95% CI: 0.21-0.68) for PFS in patients with gliomas harboring an IDH mutation (Zou et al., 2013). A study by Kaminska et al. (2019) depicted how mutant IDH1 (R132H) blocks cellular differentiation and contributes to antitumor immunity. Although mutated IDH1 cannot generate NADPH, since it has lost its normal catalytic activity, it gains the function of producing D-(R)-2-hydroxyglutarate. When the latter is overproduced in cancer cells, it inhibits histone and DNA demethylases and interferes with cellular metabolism. The end result is DNA hypermethylation and thus the blockage of cellular differentiation (Kaminska et al., 2019).
Other
The platelet-derived growth factor (PDGF) signaling system has been associated with the development and malignant progression of AO. Overexpression of PDGF system components, particularly the α-subtype receptor (PDGFRα), was detected by Southern blot and fluorescence in situ hybridization (FISH) analyses in 4/41 AO. Although these tumors were not examined for correspondence between PDGFRα expression and PDGFRα gene amplification, application of the same methodology in studies involving EGFR indicates that a high level of protein expression is to be expected (Smith et al., 2000b).
Finally, even though PTEN gene alterations have an unclear association with AO, their function in the control of cellular proliferation could explain a role in the pathogenesis of AO. Sasaki et al. (2001) showed that 7/72 AO had PTEN gene alterations; two had homozygous DMBT1 deletions, but at least one reflected unmasking of a germline DMBT1 deletion. Moreover, no mutations were found in ERCC6 exon 2, and only two patients had a chemotherapeutic response, but with unexpectedly short survival times. Therefore, PTEN is a target of 10q loss, and PTEN alterations are associated with aggressive tumor phenotypes regardless of chemosensitivity.
Clinical Trials and the CODEL Trial
There are 11 ongoing clinical trials recruiting patients with AO. NCT03971734 aims to determine the optimal dose of regadenoson, which alters the integrity of the blood-brain barrier, in patients with high grade gliomas. Another phase 2 clinical trial (NCT04623931) is assessing chemotherapy and RT for the treatment of IDH-wildtype gliomas or non-histological glioblastomas in approximately 40 patients. In an ongoing phase 3 study (NCT00887146), patients with AO or low-grade gliomas were split into two arms: patients in arm A received RT with concomitant TMZ followed by adjuvant TMZ, while patients in arm B received RT first, followed by PCV chemotherapy. Another clinical trial is a pediatric long-term follow-up and rollover phase 4 study (NCT03975829), in which approximately 250 participants will be treated with dabrafenib and/or trametinib. NCT03434262 is a phase 1 study assessing the efficacy of different drugs in children and young adults. Each stratum has different combination treatments and targeted patient populations; while ribociclib is included as a treatment regimen across all strata, gemcitabine, trametinib, and sonidegib are included in strata A, B, and C, respectively. Elsewhere, another phase 1 study (NCT02644291) is assessing the use of mebendazole in recurrent/progressive pediatric brain tumors in 21 participants. Preclinical laboratory models have shown the efficacy of mebendazole against high grade gliomas and medulloblastomas.
NCT01849952 is another clinical trial that will evaluate the expression levels of microRNA-10b in patients with AO, although it will not involve any new therapeutic regimens; investigators of this trial will be testing the in vitro sensitivity of individual primary tumors to anti-miR-10b treatment. Another currently ongoing phase 1 study (NCT04135807) is assessing the efficacy of an implantable microdevice placed in the brain before tumor resection is initiated. This microdevice will be used to deliver eight drugs intratumorally: TMZ, lomustine, irinotecan, carboplatin, lapatinib, osimertinib, abemaciclib, and everolimus. NCT04708548 is an ongoing European cross-sectional study that is looking at health-related quality-of-life parameters and outcomes in survivors after treatment with surgery, chemotherapy, and/or RT; the estimated completion date is August 2022.
NCT04541082 is an ongoing phase 1 study aiming to determine the maximum tolerated dose of the oral drug ONC206, a member of the imipridone class of anti-cancer small molecules, which target G protein-coupled receptors. The efficacy and safety of other novel therapeutic drugs such as rQNestin34.5v.2 (an oncolytic viral vector) are also being assessed. As part of an ongoing phase 1 trial to treat recurrent malignant gliomas (NCT03152318), investigators hope that the rQNestin34.5v.2 agent will spread to a glioma cell, kill it, and then make copies of itself that spread further. With approximately 108 participants included in this study, the estimated completion date is July 2022.
The CODEL study is a phase 3 study in which 36 patients with newly diagnosed grade III oligodendrogliomas were randomized to receive RT alone (Arm A), RT with concomitant and adjuvant TMZ (Arm B), or TMZ alone (Arm C) (Jaeckle et al., 2021). At a median follow-up of 7.5 years, around 80% of patients (n = 10) in Arm C progressed vs. approximately 40% (n = 9) in the other arms; the HR was 3.12 with a 95% CI of 1.26-7.19 (P = 0.014) (Jaeckle et al., 2021). Even though there was no difference in OS, PFS remained shorter for patients not receiving any RT, even after adjusting for IDH status and RT treatment status. The PFS HR was 3.33 with a 95% CI of 1.31-8.45 (P = 0.011), while the OS HR was 2.78 with a 95% CI of 0.58-13.22 (P = 0.20) (Jaeckle et al., 2021).
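As a side note on how such interval estimates fit together, a Wald-type 95% CI on the hazard-ratio scale is symmetric in log space, so the log-scale standard error can be recovered from the published bounds. The snippet below is a back-of-the-envelope consistency check (our own arithmetic, not part of the CODEL analysis); small mismatches are expected from rounding.

```python
import math

def wald_check(hr: float, lo: float, hi: float) -> None:
    """Recover SE(log HR) from a 95% Wald CI and re-derive the bounds around HR."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    lo_re = math.exp(math.log(hr) - 1.96 * se)
    hi_re = math.exp(math.log(hr) + 1.96 * se)
    print(f"HR={hr}: SE(log HR)={se:.3f}, reconstructed CI=({lo_re:.2f}, {hi_re:.2f})")

wald_check(3.33, 1.31, 8.45)   # CODEL PFS comparison
wald_check(2.78, 0.58, 13.22)  # CODEL OS comparison
```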
Lastly, it is worth noting that there are approximately 230 other clinical trials which involve oligodendrogliomas but are not actively recruiting patients.
CONCLUSION
AO remains an understudied tumor with several unclear pathogenic pathways. Several genetic and protein alterations have been identified in AO. Some of these alterations have correlated with prognosis and response to treatment. Re-analysis of some trials prior to the 2016 WHO brain tumor classification has given further insight into some molecular pathways that were previously poorly defined or investigated. More studies, however, are needed to explore molecular pathways in oligodendroglioma and AO specifically after the 2016 classification.
AUTHOR CONTRIBUTIONS
MB drafted the manuscript and contributed to the discussion section. HA conceived the idea for the manuscript. Both authors have read and approved the final manuscript.
Operator thermalisation in $d>2$: Huygens or resurgence
Correlation functions of most composite operators decay exponentially with time at non-zero temperature, even in free field theories. This insight was recently codified in the operator thermalisation hypothesis (OTH). We reconsider an early example, with large $N$ free fields subjected to a singlet constraint. This study in dimensions $d>2$ motivates technical modifications of the original OTH to allow for generalised free fields. Furthermore, Huygens' principle, valid for wave equations only in even dimensions, leads to differences in thermalisation. Thermalisation works straightforwardly when Huygens' principle applies, but is more elusive if it does not. Instead, in odd dimensions we find a link to resurgence theory by noting that exponential relaxation is analogous to non-perturbative corrections to an asymptotic perturbation expansion. Without applying the full power of resurgence technology we still find support for thermalisation in odd dimensions, although these arguments are incomplete.
Introduction
Free field theories are the simplest and most prominent examples of (super-)integrable quantum field theories (QFTs), rendered exactly solvable by the existence of an infinite set of conserved charges. A direct consequence of the presence of such charges is a severely constrained time evolution even in thermal backgrounds. In particular, simple operators in free QFTs fail to satisfy the requirements of the eigenstate thermalisation hypothesis [1,2] and their late time behaviour is therefore unlikely to approach ensemble averages, tantamount to the absence of thermalisation.
Nonetheless, it is known that nontrivial interference effects can effectively mimic equilibration. For example, after quantum quenches [3-6], correlation functions in free QFTs approach those of a generalised Gibbs ensemble [4, 7-9], characterised by chemical potentials for all conserved charges, which in free QFTs is equivalent to a momentum-dependent temperature. Similarly, nontrivial time dependence arises when considering composite operators. Such operators can in fact interact with the thermal bath and as such exhibit a range of phenomena that are usually attributed to their interacting counterparts. For instance, their correlation functions can exhibit exponential decay at late times [10] and their spectral densities have support in the deeply off-shell regime [10,11], reminiscent of collision-less Landau damping [12].
Clearly, the composite nature of an operator is a necessary condition for its effective thermalisation, since only then does it couple to a thermal bath, indicated by a temperature dependence of its response functions. On the other hand, to which extent it is also a sufficient condition is less understood. In recent work [13,14], a simple criterion has been formulated that guarantees the absence of thermalisation of a given operator, characterised by a lack of exponentially decaying contributions to its linear response function. In addition, it was conjectured that a converse statement can be made and any operator that fails this non-thermalisation condition in fact thermalises. This conjecture was introduced as the Operator Thermalisation Hypothesis (OTH).
This note aims to shed light on several remaining puzzles. First, in singlet models [10], the calculated correlation functions were observed to display exponential decay in even dimensions d > 2. This decay is directly related to the thermalisation later extracted in [13] by arguments which, however, do not resolve a difference between even and odd dimensions. Since the odd-dimensional singlet model correlation functions do not decay exponentially, the results appear to be in tension with each other. A closer study thus has something to teach about both thermalisation and singlet models, and the requisite conceptual developments indeed lead to a more precise formulation of the OTH. Second, the singlet model study demonstrated how phases below or above a critical temperature exhibit different relaxation properties, most plainly for auto-correlators. Since response functions diagnose thermalisation, previous singlet model studies should be extended with results on response functions in different phases.
Thus, we will study the OTH in a particular class of free field theories, namely those with a large-N singlet constraint. These theories have received widespread attention in the context of gauge/gravity duality as the holographic duals of gravitational theories with an infinite tower of massless fields of higher spin. They exhibit an interesting thermal structure on compact spaces, with a large N confinement/deconfinement phase transition. In ordinary AdS/CFT, this transition is also present and can be mapped to the Hawking-Page transition from thermal AdS to the large AdS black hole in the bulk. In the deconfined phase, thermalisation in holographic gauge theories is in direct correspondence with black hole formation and equilibration in the bulk. Understanding thermal properties of free singlet models thus provides insight into putative black holes in higher spin gravity. More generally, however, they allow one to disentangle generic properties of composite operators from those particular to strong coupling, thereby teaching valuable lessons about the inner workings of gauge/gravity duality.
In the low temperature phase we observe the absence of thermalisation to leading order in 1/N, in complete accordance with the OTH. Below the phase transition, a composite operator tr(Φ(x)Φ(x)), built of N adjoint scalars, plays the role of a generalised free field with interaction strength of order 1/N. To leading order, it obeys our generalisation of the non-thermalisation condition, confirmed by the absence of temperature dependent contributions to its response functions and in particular the lack of exponential damping. This is generic to all QFTs that admit a description in terms of generalised free fields, and the thermal version of this concept will be presented below. At high temperatures, the time dependence becomes significantly richer. Response functions become temperature dependent and are characterised by non-analyticities off the real axis in the complex frequency plane. They describe a damped response to sources, with a power law tail and sub-leading exponentially decaying contributions. The latter contribution is the exponential damping predicted by the OTH. The presence of the power law tail implies that information about the source is retained to a larger degree than in standard thermalisation, although parts are effectively lost in exponentially damped terms.
The general lessons from our study concern details of the formulation of the OTH, and the difference between even and odd dimensions. Indeed, it is well known that the interior of the light cone plays a fundamentally different role in wave propagation in even and odd dimensions (cf. Huygens' principle and Hadamard's problem [15]). By explicitly focusing on evaluating correlators close to the light cone we reduce the difference between odd and even dimensions, and identify the damped quantities that continue analytically between different dimensions, putting the d > 2 OTH on a firmer footing. This light cone limit notwithstanding, crucial differences between even and odd dimensions remain. While the OTH can be confirmed straightforwardly in even dimensions, we observe that subtleties involved in isolating exponentially decaying terms in the response functions become critical in odd dimensions. We describe the difficulties and find some support for thermalisation, but also indications that the resolution requires more powerful tools from the theory of resurgence [16-19]. That cautionary observation aside, our scrutiny of the OTH permits us to give a more precise formulation of both the hypothesis and the converse non-thermalisation condition in all d > 2.
Our paper is organised as follows. In section 2.1, we introduce the concepts of operator thermalisation and non-thermalisation and the role of stable thermal quasi-particles and generalised free fields. Before going into the basics of singlet models in section 2.2, we also introduce the potential relation of thermalisation to resurgence. In section 3.1, we then deduce and discuss the absence of exponential relaxation in singlet model response functions in the low temperature phase. The high temperature phase, which displays relaxation in even dimensions and appears to allow for it in odd dimensions, is analysed in section 3.2, and a discussion in section 4 leads up to our conclusions in section 5.
Operator thermalisation
Sabella-Garnier et al formulated the operator thermalisation hypothesis in [13] and considered the thermalisation properties of operator correlation functions in a fixed background, rather than operator expectation values under assumptions on the energy spectrum, as done by the eigenstate thermalisation hypothesis, ETH [1,2]. They discuss thermalisation in terms of an exponentially fast return to equilibrium of operator expectation values in response to a perturbation by the operator in question. More precisely, in [13] the retarded Green's function of the operator in question is taken to define thermalisation of a perturbation when it decays exponentially, in line with the retarded Green's function encoding the linearised response of the operator O induced by a perturbation by the same operator O. The requirement of exponential decay for a perturbation to thermalise corresponds to the intuition that a thermalising perturbation is "forgotten" by the system at late times. The latter means that exponential precision would be required in order to fully reconstruct the source from the response of the medium.
A motivation behind the operator thermalisation hypothesis, and one of its strengths, is that it can be used to study surprising similarities between ordinary interacting systems and free or integrable systems [6,10,11,13]. While pure exponential decay occurs in free systems in contrast to naive expectations, it is generally masked by leading power law decay for d > 2, as well as multiplied by inverse powers of time. We will provide such examples below. In reviewing the operator thermalisation hypothesis, we will therefore introduce new terminology which precisely captures these features. In effect, we demonstrate an operator non-thermalisation condition which excludes this kind of partial thermalisation, and state a converse partial operator thermalisation hypothesis. Our arguments are essentially copied from [13], and the "partial" qualifier only indicates a slight shift of definitions. The new definitions are important for consistency with the examples we discuss, but the idea is approximately the same.
Partial operator thermalisation
We define partial thermalisation of an operator O to mean that: The retarded Green's function of O contains terms with exponentially damped factors at late times. This definition allows for leading power-law decay, and for exponential terms which are only sub-leading. In such cases, time evolution still "forgets" part of the initial perturbation, but not all of it. Clearly, partial thermalisation includes the thermalisation notion discussed in [13] and the more conventional notion of approach to a thermal ensemble, but it is a broader concept.
Crucially, partial operator thermalisation captures the observation that conservation laws prevent some operators in free or integrable theories to thermalise, but that almost all other operators thermalise partially. A special class of non-thermalising operators was characterised by Sabella-Garnier et al [13]. We will see that these operators do not even thermalise partially. In essence, these non-thermalising operators are generalisations of free fields which satisfy a sharp dispersion relation relating energy to momentum. Formally, the conditions on the operators are given by the mathematical descriptions below. Physically, they correspond to stable thermal quasi-particle fields having clear-cut dispersion relations, which are permitted to differ from those of free relativistic particles. One may invoke the Narnhofer-Requardt-Thirring theorem [21] to argue that they describe a sector of the thermal system which is completely free from interactions, except for modified dispersion relations. The theorem permits other sectors, but they are completely decoupled from the quasi-particles.
We interpret the operator thermalisation hypothesis proposed in [13] to state that any other local operator, not representing a stable quasi-particle field, will thermalise. This is the converse of the above non-thermalisation condition. For it to hold, the notion of thermalisation has to be weakened to partial thermalisation. Thus, we propose a more precise partial operator thermalisation hypothesis: Any local operator not representing what we call a thermal generalised free field, or a generalised quasi-particle field, thermalises partially. Note that we still have not proven this hypothesis, though we find it reasonable. All the plausibility arguments in [13] still apply.
Non-thermalisation and the thermalisation hypothesis
We now proceed to essentially repeat the arguments of [13], expressed in our terminology.
Consider a stable thermal quasi-particle operator, which we denote Q(t, x) to distinguish it from more general local operators O(t, x). By definition it has a definite dispersion relation. In finite volume, and in a basis which simultaneously diagonalises energy and momentum, this means that the transitions that Q can mediate between momentum states determine the simultaneous transitions between energy eigenvalues. We do not need to know if there is a single functional relation between momentum and energy for the operator Q, or if there are several branches of solutions to the dispersion relations coupling to Q. To reproduce branch cuts which can be found, for example, in singlet models, it will turn out to be important to allow the number of solutions to the dispersion relations to grow with volume.
The retarded thermal Green's function is
$$G_R(t, \mathbf{x}) = -\,i\,\Theta(t)\, \big\langle\, [\,Q(t,\mathbf{x}),\, Q(0,\mathbf{0})\,]\, \big\rangle_\beta\,, \qquad (2.1)$$
which can be expanded in a sum of expectation values over energy eigenstates weighted by $e^{-\beta E_m}/Z$ (2.2), where $s$ is the spin of the operator $Q$. Making use of translations and inserting a complete set of states yields a spectral sum over matrix elements (2.3). In Fourier space each term contributes a singularity at the corresponding transition frequency, shifted into the lower half plane (2.4)-(2.5), where $\epsilon > 0$ is infinitesimal.

Now, the special properties of the quasi-particle operator $Q$ lead to a proof of non-thermalisation. Denoting by $M$ the number of different branches of solutions, labeled $j = 1, \ldots, M$, to the dispersion relations
$$\omega = \Omega^Q_j(\mathbf{k})\,, \qquad (2.6)$$
for $Q$, and defining the residue functions $R^Q_j(\mathbf{k})$ (2.7), we find
$$G_R(\omega, \mathbf{k}) = \sum_{j=1}^{M} \frac{R^Q_j(\mathbf{k})}{\omega - \Omega^Q_j(\mathbf{k}) + i\epsilon}\,. \qquad (2.8)$$
The frequencies and wave numbers above are related to the matrix elements $\langle m|\, Q\, |n\rangle$ by
$$E_n - E_m = \Omega^Q_j(\mathbf{k}_n - \mathbf{k}_m)\,, \qquad (2.9)$$
signifying that all contributions from the operator $Q$ are due to transitions between states whose energies and momenta differ by amounts related by the allowed dispersion relations in eq. (2.6). For a more detailed analysis of thermodynamic and large $N$ limits, it may become useful to allow an effective temperature dependence in the dispersion relations contributing to eq. (2.8). Noting that $\Omega^Q_j(\mathbf{k})$ has to be real by definition, the retarded Green's function only has singularities on the real axis, which is tantamount to non-thermalisation of the operator $Q$. For finite $M$, the singularities are manifestly poles. If $M$ grows without bound in the thermodynamic or large $N$ limit, branch cuts may also arise, but they will be on the real axis. There will still not even be partial thermalisation, since only singularities off the real axis can produce exponentially decaying terms.
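To see explicitly why real-axis poles preclude even partial thermalisation, one can transform a pole sum of the form (2.8) back to the time domain; the following one-line contour computation is a schematic check (assuming simple poles and finite $M$):

$$\int \frac{d\omega}{2\pi}\, \frac{e^{-i\omega t}}{\omega - \Omega^Q_j(\mathbf{k}) + i\epsilon} = -\,i\,\Theta(t)\, e^{-i\Omega^Q_j(\mathbf{k})\, t} \quad\Longrightarrow\quad G_R(t, \mathbf{k}) = -\,i\,\Theta(t) \sum_{j=1}^{M} R^Q_j(\mathbf{k})\, e^{-i\Omega^Q_j(\mathbf{k})\, t}\,.$$

Since every $\Omega^Q_j(\mathbf{k})$ is real, each term has constant modulus: the late-time behaviour is purely oscillatory and nothing relaxes.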
The converse of the original non-thermalisation result would be that only stable quasiparticle operators are non-thermalising. Allowing for partial thermalisation, which includes power law fall-offs related to branch cuts on the real axis, it seems judicious to consider branch cuts also in the non-thermalisation results. Thus we are led to allow unbounded M . This generalisation replaces quasi-particles with generalised quasi-particles or thermal generalised free fields.
The OTH in our version becomes: All local operators which are not generalised quasiparticles thermalise partially. The original plausibility arguments of [13] remain, and this adjusted version survives all tests we have considered.
Thermalisation and resurgence
Below we will introduce examples of retarded Green's functions with asymptotic late time expansions containing both inverse powers and damped exponentials of time. In free systems, they force us to consider the partial, and more general, version of operator thermalisation, which allows for the possibility that exponential damping terms are sub-leading. Unfortunately, the price for the generalisation is another level of mathematical sophistication. It is required for a physical reason: Only under very special circumstances, e.g. when an asymptotic series of inverse powers terminates, is it possible to operationally separate sub-leading exponentials from more important inverse powers. Only under these special circumstances can we have a chance to resolve and observe the damped exponentials, even in principle.
This discussion is parallel to the potentially more familiar discussion about prescription dependence of non-perturbative terms in quantum mechanics and in quantum field theory. There, one encounters non-perturbative exponentials $e^{-1/g^2}$ complementing power series in a coupling $g$. Substituting $1/g^2 \to t/\beta$, where $t$ is time and $\beta$ is inverse temperature, we are alerted to the possibility that thermalisation, signalled by exponential damping at late times, can be analogous to non-perturbative effects. The analogy indeed holds for standard Green's functions: Their late time expansion in inverse powers of $t/\beta$ is typically asymptotic rather than convergent, and exponential terms can sometimes be extracted from integral representations of the Green's functions. Cases with a terminating, or at least convergent, (inverse) power series would be useful in practice, and would allow unambiguous identification of exponentials, but are exceptional.
The beautiful idea that there is a relation between the form of non-perturbative terms and the divergence of asymptotic series [22] can be systematised in non-perturbative techniques like Borel resummation, but does not always yield a unique answer for the series. To be clear, for thermal Green's functions in free field theory, the integral representations are unambiguous. A series representation does not improve the already complete encoding of a response function. However, a well-defined representation of the result of the integral as a double series expansion with inverse powers and exponentials as above, a trans-series in the framework of resurgence theory [17-19], would lend itself nicely to an extended definition of partial thermalisation. The response function would be said to thermalise partially if the series contained exponentials.
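For orientation, a trans-series representation of a late-time response function would take the schematic form (a generic ansatz for illustration; the rate $\gamma$ and the coefficients are placeholders, not computed here):

$$G_R(t) \;\sim\; \sum_{n \geq 0} a_n \left(\frac{\beta}{t}\right)^{\! n} \;+\; e^{-\gamma\, t/\beta} \sum_{n \geq 0} b_n \left(\frac{\beta}{t}\right)^{\! n} \;+\; \ldots\,,$$

and the extended definition would declare partial thermalisation whenever the exponential sector is non-empty, $b_n \neq 0$, even though every individual exponential term is smaller than the truncation error of the leading power series.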
Thermal singlet models
In order to distinguish low and high temperatures, we consider free field theories on $R \times S^{d-1}$, leading to a characteristic temperature scaling as $1/R$, the inverse of the radius $R$ of the sphere $S^{d-1}$. To make the distinction sharper we consider a large number $N$ of fields. A large $N$ will then allow for qualitatively different limits for physics below and above the characteristic temperature. We consider a scalar field transforming in a representation, usually fundamental or adjoint, of some large $N$ symmetry group, for example U(N) or O(N). Projection onto the singlet sector is achieved by weakly gauging the symmetry, i.e. introducing a gauge field $A_\mu$ in the limit of vanishing gauge coupling, where only the zero mode $\alpha \sim \int_{S^{d-1}} A_0$ that imposes the Gauss' law constraint remains. We will focus attention on correlation functions on scales much smaller than $R$, corresponding to times and distances $t \ll R$, $|x| \approx R\theta \ll R$, where $\theta$ is the polar angle on the sphere. The entire difference between low and high temperature physics in effectively flat space can then be encoded completely in functions $\rho(\lambda)$, which appear as eigenvalue densities in the more detailed description in the next two paragraphs.
At finite temperature, the integral over the gauge field can be recast into a unitary matrix model, where the projection onto singlets results from the integral over unitary matrices of the gauge group [24], corresponding to the Polyakov loop operator, $P \sim e^{\,i \oint_{S^1} d\tau\, \alpha}$, or gauge holonomy around the thermal circle [25]. The distribution of the large $N$ number of matrix eigenvalues then controls the thermal behaviour.
At large $N$, the model can be solved in a saddle point approximation [24]. This is readily achieved by introducing the eigenvalue density $\rho(\lambda)$. At low temperatures, $T < O(1)$, the dominant saddle corresponds to a constant eigenvalue distribution, $\rho(\lambda) = \frac{1}{2\pi}$. This is the confined phase, with a free energy of order $N^0$. At intermediate temperatures, whose $N$ scaling depends on the representation under consideration, there is a transition to a deconfined phase, characterised by a free energy that is extensive in $N$. At very high temperatures, the eigenvalue distribution becomes a delta-function, $\rho(\lambda) \to \delta(\lambda)$.
Correlation functions of singlet operators can be constructed through finite temperature Wick contractions. For simplicity, we focus here on the scalar singlet primary, $O(t, \mathbf{x}) = \frac{1}{N}\, \mathrm{tr}(\Phi^2(t, \mathbf{x}))$ for scalars in the adjoint representation, whose time-ordered two-point function is given in eq. (2.10), following [10]; there we have used rotational and time translational invariance to set one of the insertion points to zero. The pre-factor has been chosen to simplify the expression, while the operator is normalised such that its two-point function is of order $N^0$. Eq. (2.10) can formally be derived using the aforementioned Wick contraction, as well as the fact that the unitary matrix is represented in the scalar kinetic term like a temporal gauge field. The retarded Green's function can be extracted using its definition, $G_R(t, \mathbf{x}) = \Theta(t)\, \mathrm{Im}\, G(t, \mathbf{x})$. It is simple to see that the purely thermal contributions to eq. (2.10) are real. An imaginary part can thus arise only from the vacuum piece, and the mixed thermal-vacuum term.
Explicitly, one finds [10] the representation (2.11): the retarded Green's function is given by $\Theta(t)\,\mathrm{Im}$ acting on a vacuum piece, $(\cos t - \cos\theta)^{-(d-2)}$, plus an infinite series of thermal terms weighted by the Fourier cosine coefficients of the eigenvalue distribution,
$$\rho_k = \int d\lambda\; \rho(\lambda)\, \cos(k\lambda)\,.$$
We note that the infinite series in the second term captures all temperature dependence, and in fact is precisely that of the thermal Feynman propagator of the fundamental scalar field when all $\rho_m$ become equal, which is the case in the high temperature limit.
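For the two limiting eigenvalue distributions that appear below, these coefficients follow in one line from the definition:

$$\rho(\lambda) = \frac{1}{2\pi}: \quad \rho_k = \int_{-\pi}^{\pi} \frac{d\lambda}{2\pi}\, \cos(k\lambda) = \delta_{k,0}\,; \qquad\qquad \rho(\lambda) = \delta(\lambda): \quad \rho_k = \cos(k \cdot 0) = 1 \;\; \text{for all } k\,.$$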
Thermalisation in singlet models
A number of challenges to the OTH may be tested in thermal singlet models in d > 2.
In this section, we describe our technical results, which support the hypothesis in even dimensions, given the adjustments we have introduced in section 2.1. In odd dimensions the interpretation of the results is intricate, and will be deferred to the discussion in section 4. The low and high temperature phases of the singlet models are qualitatively different and are discussed separately below, with equations specialised to scalars in the adjoint representation. In both cases, the concrete operator under study is the lowest dimension singlet operator $O(t, x) = \frac{1}{N}\, \mathrm{tr}(\Phi^2(t, x))$.
Low temperatures: $T < T_H$
As noted in the thermal singlet model section 2.2, the eigenvalue distribution at low temperatures is constant, $\rho(\lambda) = \frac{1}{2\pi}$, and thus $\rho_0 = 1$ and $\rho_{k \neq 0} = 0$. For the retarded Green's function (2.11), this implies that only the vacuum piece survives in the large $N$ limit,
$$G_R(t, \mathbf{x}) = \Theta(t)\, \mathrm{Im}\, \frac{1}{(\cos t - \cos\theta)^{d-2}}\,. \qquad (3.1)$$
From the explicit lack of exponentials we see that $O(t, \mathbf{x})$ fails to thermalise at low temperatures. For completeness, let us take the "thermodynamic limit" of large $R$, corresponding to $t, \theta \ll 1$, and Fourier transform, thus for example obtaining the Lorentz invariant expression (3.2), where $\mu$ is a renormalisation scale. In this expression there is only one branch cut, located at $\omega^2 > k^2$ on the real line, representing a continuum of physical excitations on top of the vacuum state. That (3.2) is analytic everywhere off the real line corresponds one-to-one with the fact that the corresponding expression in configuration space lacks exponentially decaying contributions. We see explicitly that it is useful to extend the notions of the non-thermalisation condition beyond poles in the frequency plane to cuts, as long as they are on the real axis. In position space, the corresponding thermodynamic limit of (3.1) involves power-law fall-off, and we will find similar fall-offs below to be general consequences of free field dynamics in $d > 2$, even for response functions of operators that display relaxation at long times in exponentially decaying terms.

The physical origin of non-thermalisation is clear from large $N$ considerations. At low $T$, one finds for the connected components of $n$-point functions a suppression by powers of $1/N$, recorded in eqs. (3.3)-(3.4). Here, the conservation of individual momentum modes is explicit to zeroth order in $1/N$. In consequence, quasi-particles remain intact to this order. Of course, this argument is rather superficial, but can be made more precise by properly constructing the effective action, for example using collective field theory [27]. By the above argument, taking into account $1/N$ corrections will reveal nontrivial features in the response functions even below the phase transition. While this requires a finite $N$ analysis, and is therefore beyond the scope of this work, even the leading order behaviour can change drastically once occupation numbers in the thermal background are of the order of the inverse coupling. Indeed, as we will show now, this is what happens in the high temperature phase.
High temperatures: $T \gg T_H$
At very high temperature, the eigenvalue distribution can be approximated by a delta-function. One thus obtains for the Fourier cosine coefficients
$$\rho_m = \int d\lambda\; \delta(\lambda)\, \cos(m\lambda) = 1 \quad \text{for all } m\,. \qquad (3.5)$$
Note that one should only really expect effective thermalisation in the "thermodynamic limit" of large $R$, here corresponding to $t \ll 1$, $\theta \ll 1$ and $\beta \ll 1$. In this regime the retarded Green's function (2.11) becomes, upon insertion of (3.5), the expression (3.6). Clearly, the operator now responds to the thermal bath, which may induce thermalisation. In fact, the second term of (3.6) represents the cross term between vacuum and thermal propagation contributing to the response function of the quadratic composite operator $O(t, \mathbf{x})$.
d = 4
To get a better understanding of the precise dynamics, we will confine ourselves to $d = 4$, since generalisation to higher even dimensions is simple once the basic ingredients are understood. There, the high temperature response function (3.6) takes a closed form (3.7), which can be simplified to (3.8). Evidently, the Green's function falls off as a power law, with a power that is smaller than in vacuum. This is in fact a manifestation of the effective dimensional reduction that is prevalent in generic thermal systems in the high temperature limit (see e.g. [28]). However, judging by the coth term in (3.8), there are sub-leading exponentially decaying contributions. This may be further illuminated by Fourier transforming (3.8), yielding the momentum space expression (3.9) of [10]. This expression allows us to map the late-time dominant behaviour of (3.8) to the branch cut in the $\omega$ plane located between $-k$ and $k$ on the real line, and the subdominant exponential decay to the branch cuts located off the real line. Similar analytic structures are discussed in [29]. The analytic structure can be contrasted with that of eq. (3.2).

Let us now return to how thermalisation could be consistent with the effective action arguments presented in the low temperature discussion of section 3.1. Only large $N$ counting, which is the same at high temperature, seemed to be important. The large $N$ suppression of interactions is indeed the same as at low temperature, but the action (3.4) assumes the vanishing of thermal one-point functions $\langle O \rangle_\beta$. Above the phase transition, the equilibrium background expectation value is non-zero and of order $N$, which invalidates the argument that individual $O$ momentum modes are conserved in the large $N$ limit, due to order $N^0$ interactions with the background. Generalised quasi-particles are then not intact in the large $N$ expansion, although their response functions are well-defined. The thermalisation of $O$ ensures that $O$ does not represent a generalised quasi-particle, by the arguments of subsection 2.1.2. As explained above, this is consistent with large $N$ counting, thanks to the thermal condensate of $O$ above the critical temperature, which eq. (3.6) thus probes indirectly.
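The role of the coth term can be made explicit with a standard expansion; writing it as $\coth(\pi t/\beta)$ for definiteness (the precise argument of the coth follows from (3.8)), one has

$$\coth\!\left(\frac{\pi t}{\beta}\right) = \frac{1 + e^{-2\pi t/\beta}}{1 - e^{-2\pi t/\beta}} = 1 + 2\sum_{m=1}^{\infty} e^{-2\pi m t/\beta}\,,$$

so the late-time response is a power law corrected by exponentials $e^{-2\pi m t/\beta}$ decaying on the thermal time scale, consistent with singularities spaced by the Matsubara frequency $2\pi/\beta$ off the real axis in the complex frequency plane.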
General d > 2.
The above retarded Green's function of an operator quadratic in free fields clearly separates into a vacuum-vacuum term and a mixed vacuum-thermal term. Higher powers of free fields also decompose analogously. (Purely thermal terms will not contribute to the retarded propagator.) Now, the vacuum factors differ significantly in behaviour between odd and even dimensions. The imaginary part of $(x^2 - t^2)^{-(n+1)}$ for integer $n \geq 0$ is proportional to the delta function derivative $\delta^{(n)}(x^2 - t^2)$, which demonstrates that the support of the retarded Green's function is confined to the light cone for even $d$, while square root branch cuts ensure support also inside the light cone for odd $d$. This is a known property of the wave equation, which evidently is inherited by thermal systems probed by composite operators built of powers of free fields. While the behaviour in the interior of the light cone is interesting, both for correlation functions other than the retarded Green's function and for odd $d$, the temperature dependent term of the response function is entirely determined by the factor which multiplies a simple light cone divergence. We thus factor out its singular light cone behaviour and study the behaviour of what might be called the position space "residue" of the singularity, by abuse of terminology. The light-cone factor isolated from eq. (3.6) is the function $\tilde{S}_d(t, \mathbf{x})$ of eq. (3.11). This expression lends itself to a comparatively uniform treatment independently of dimension, and it measures the effect of the heat bath on the light cone in position space. The function $\tilde{S}_d(t, \mathbf{x})$ and its light cone limit $S_d(t)$ are discussed in the appendix A.
In even dimensions the calculation confirms thermalisation on the light cone, essentially by expressing $S_d(t)$ as an expansion in modified Bessel functions, each of which, in even dimensions (i.e. for half-integer orders of the Bessel function), equals a decaying exponential times a terminating sum of inverse powers. Details concerning the finiteness of the expressions are given in appendix A. For all even dimensions, the positive-m terms in the series above explicitly yield exponentially decaying terms of a form that cannot cancel against other exponentials, while the negative-m terms produce power-law fall-off. Thus, the response functions signal partial thermalisation on the light cone in even dimensions. The odd-dimensional case is significantly more subtle: a similar treatment of the Bessel function terms in the series (3.12) now leads, for each m, to an asymptotic expansion in inverse powers which does not terminate. Hence, it is far from clear what significance to attach to exponentially small terms: if the asymptotic series is truncated, the error terms will be larger than the exponentials we have extracted, even if exponentially improved expansions are used [30].
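The terminating structure in even dimensions can be traced to the elementary closed form of modified Bessel functions of half-integer order (a standard formula, quoted here for orientation):

$$ K_{m+\frac{1}{2}}(z) \;=\; \sqrt{\frac{\pi}{2z}}\; e^{-z} \sum_{j=0}^{m} \frac{(m+j)!}{j!\,(m-j)!\,(2z)^{j}}, $$

i.e. a decaying exponential times a finite polynomial in $1/z$; with $z \propto t/\beta$ this is the source of the damped exponentials multiplying terminating sums described above.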
Discussion
Our description of an important class of non-thermalising operators as generalised free field operators in section 3 connects to the intuition that such operators should not thermalise. Generally, however, naive intuition is treacherous, and our study is founded on the observation that free field equations of motion do not in general guarantee the absence of relaxation for operators which are non-linear in free fields. Composite operators are regularised operators belonging to this class and, as described in the introduction, they have in many instances been shown to display the decay which we take to define thermalisation. The idea of the OTH is that the implication could go in the other direction: operators that do not thermalise, even partially, would have to be generalised quasi-particle operators. Equivalently, all other operators thermalise at least partially.
Known thermal behaviour of singlet models motivated a closer study of response functions in order to compare with the OTH in dimensions d > 2. The d > 2 treatment of [13] is somewhat less detailed than the d = 2 discussion, and we were able to resolve new even/odd dimension differences in eqs. (3.11)-(3.12) from the high temperature phase of the singlet model response functions. To get expressions which depend analytically on d, it proved important to focus on the light cone. Thereby, the qualitative difference in the support of the Green's functions inside the light cone, related to whether or not Huygens' principle holds, was factored out.
In the large N limit, where there is a phase transition, and below the critical temperature, the response function (3.1) lacks exponentially decaying terms and the individual momentum modes are independently conserved. This means non-thermalisation, and also the presence of generalised quasi-particles of definite momenta, as expected from the general non-thermalisation results. Clearly, we can only expect a precise match to the general operator thermalisation theory, described in section 2.1.2, at leading order in the 1/N expansion. To the extent that the generalised quasi-particle picture holds, we can rely on non-thermalisation. Indeed, the idea that the general theory applies parametrically close to ideal cases makes the results much more powerful, and this example shows how it works.
Above the critical temperature, the response functions develop exponentially decaying terms, as for example in the d = 4 expressions (3.8) and (A.3). These examples clearly show how power law tails and damped exponentials combine non-trivially. Indeed, such terms which generally appear in d > 2 motivate us to consider partial operator thermalisation. This partial thermalisation concept also simplifies the non-thermalisation results for stable thermal quasi-particles, by allowing branch cuts in the thermodynamic limit as in the paragraph after eq. (2.9).
To conclude the match of the OTH and singlet model response functions we should now argue that the operators we consider fail to be generalised quasi-particle operators above the critical temperature. Without going deeply into the physics of singlet models, we have found a suitable mechanism, namely that the background condensate in the high temperature phase modifies the propagation of perturbations at order N 0 which is too much for a generalised free field, unless there is extreme fine-tuning.
Partial thermalisation diffuses the dichotomy between thermalising and non-thermalising operators to some degree, but in even dimensions calculations like (A.9) and (A.12) demonstrate the general structure arising from modified Bessel functions of order (d−3)/2. At half-integer order the resulting functions are simple polynomials in the exponential exp(−4πt/β) and in powers of β/t. The further sums in the thermal response functions primarily give rise to an infinite series of higher order terms exp(−4πnt/β), where n are integers, but the expansion in β/t terminates and the damped exponential terms can be distinguished from the resulting polynomial. Thermalisation can be confirmed, although with a bit more work than if power law tails had not been present. This comparatively simple procedure works in even dimensions, when the whole effect of the induced thermalisation is confined to the light cone by Huygens' principle.
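Schematically, and in our own notation rather than the paper's, the even-dimensional structure just described can be summarised as

$$ S_d(t) \;\sim\; Q\!\left(\frac{\beta}{t}\right) \;+\; \sum_{n=1}^{\infty} e^{-4\pi n t/\beta}\, P_n\!\left(\frac{\beta}{t}\right), $$

where $Q$ and each $P_n$ are terminating polynomials: the first term carries the power-law tail, and the sum carries the unambiguously identifiable damped exponentials.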
In odd dimensions Huygens' principle does not apply and some of the induced thermalisation diffuses into the interior of the light cone. The series encoding the thermalisation on the light cone, which corresponds to the polynomial in β/t, now fails to terminate. Instead it produces an infinite asymptotic expansion controlled by the asymptotic expansion of modified Bessel functions of integer order. The resulting series in β/t is divergent and the task to identify sub-leading exponentials becomes quite subtle. The potential meaning of exponential terms can only be ascertained within a larger framework, such as the study of resurgence of asymptotic series. In such a framework one should be able to assign a meaning to partial thermalisation of composite operators in odd-dimensional free field theories, but a firm conclusion is beyond the scope of the present work.
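For integer order, by contrast, only the divergent asymptotic expansion is available (again a standard formula):

$$ K_{m}(z) \;\sim\; \sqrt{\frac{\pi}{2z}}\; e^{-z} \sum_{j=0}^{\infty} \frac{a_j(m)}{z^{j}}, \qquad a_j(m) \;=\; \frac{\left(4m^2 - 1^2\right)\left(4m^2 - 3^2\right)\cdots\left(4m^2 - (2j-1)^2\right)}{j!\, 8^{j}}. $$

The sum does not terminate, and truncation at the least term leaves an error that is itself exponentially small, roughly of the same order as the sub-leading exponentials one is trying to isolate; this is the precise sense in which those exponentials are masked.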
Tentatively, the Borel summability of the asymptotic expansion of the modified Bessel functions indicates that there are no exponential correction terms hidden in the expansion, which would suggest that the exponential terms we actually find in the appendix are not masked. On the other hand, the error terms of even doubly improved asymptotic series are of the same order as the sub-leading exponential terms.
Conclusions
We have refined the operator thermalisation concept and the OTH, and related them to generalised free fields. Except in the special case d = 2, operator thermalisation is generally incomplete and partial, since power law tails dominate the exponentially decaying terms at late times. This finding establishes the intermediate nature of thermalisation in free field theories: while exponential relaxation is ubiquitous, it typically coexists with the more unyielding time dependence expected from the presence of conservation laws. In our model system, large N singlet models, we have found both non-thermalising and thermalising behaviour of the same operator: generalised quasi-particle behaviour without exponential relaxation below the critical temperature, and thermalising exponential behaviour above the critical temperature. Importantly, the operator thermalisation concepts turn out to be applicable to operators which only satisfy the theoretical conditions in a limit, in this case when 1/N vanishes. This enhances the scope of our analysis.
The analysis is comparatively straightforward in even dimensions, where Huygens' principle holds and ensures that the thermalised responses induced by a heat bath are localised to the light cone. In contrast, the thermalised responses in odd dimensions are quite intricate due to their distribution over the forward light cone and the whole of its interior. We refrain from formulating a definite conclusion in odd dimensions, since we believe a deeper conceptual analysis is required. The simultaneous appearance of infinite expansions in inverse powers of time and in decaying exponentials suggests resurgent analysis. A connection between thermalisation of integrable systems and resurgence may find further applications.
Some properties of singlet models that are highlighted by our study generate further questions. For example, an efficient description of the high temperature phase remains elusive. We expect that all standard composite operators will thermalise and no longer represent generalised quasi-particles. The fundamental free fields Φ describe the thermodynamics of the high temperature "deconfined" limit well, but they do not represent physical singlet states. Do they provide the best description, or are there better alternatives? There are also holographic gravity duals to these questions, since singlet models are limits of large N gauge theories, some of which are conformal.
Finally, we find it inspiring to contemplate other conformal or integrable systems, in particular in odd dimensions, where resurgence appears to be fundamental.
Appendix A

Since in even d the retarded Green's function only has support on the light cone, we evaluate the sum there. The sum (A.1) can be rewritten using the Mellin-type representation

$$ \frac{1}{y^{\alpha}} \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{\infty} dr\, r^{\alpha-1}\, e^{-r y}, $$

which expresses it as an integral over the third Jacobi theta function $\vartheta(z, q)$, with $\tilde t = t/\beta$. Using the modular transformation property of $\vartheta$,

$$ \vartheta\!\left(r\tilde t,\, e^{-r}\right) \;=\; \sqrt{\frac{\pi}{r}}\; e^{-r\tilde t^{2}}\; \vartheta\!\left(i\pi\tilde t,\, e^{-\pi^{2}/r}\right), $$

the sum is traded for one with an exponentially suppressed nome. The last term in the resulting expression is divergent, but the divergence is canceled by one stemming from the sum. To allow for this cancellation we regularise the integral by introducing an exponential suppression $e^{-r/R}$, with $R$ taken to infinity after performing the integral. The resulting expression is valid for d > 3. As we are interested in the large-t behaviour, we employ the asymptotic expansion of the modified Bessel functions. As a check, we set d = 4; the result agrees with (A.3).
For odd dimensions we investigate (A.11) for d = 3, even though the expression is technically only valid for d > 3, because of the logarithmic divergence in the first term. As can be seen, the asymptotic power series does not terminate and the sub-leading exponentials are masked, suggesting that more powerful asymptotic or resurgence methods should be employed.
Role of HMGB1 in Cutaneous Melanoma: State of the Art
High-mobility Group Box 1 (HMGB1) is a nuclear protein that plays a key role in acute and chronic inflammation. It has already been studied in several diseases, among them melanoma. Indeed, HMGB1 is closely associated with cell survival and proliferation and may be directly involved in the development of tumor cell metastasis thanks to its ability to promote cell migration. This research aims to assess the role of this molecule in the pathogenesis of human melanoma and its potential therapeutic role. The research was conducted on the PubMed database, and the resulting articles are sorted by year of publication, showing increasing interest in the last five years. The results showed that HMGB1 plays a crucial role in the pathogenesis of skin cancer, in prognosis, and in the response to therapy. Traditional therapies target this molecule indirectly, but future perspectives could include the development of new targeted therapies against HMGB1, adding a new approach to a disease that has often shown primary and secondary resistance to treatment. This could add a new therapeutic arm, which would have to be prolonged and specific for each patient.
Cutaneous Melanoma Clinical Types and Pathogenesis
Melanoma is considered one of the most aggressive forms of skin tumor, consisting of the abnormal growth of melanocytic cells. Its global incidence has increased steadily over the past five decades [1]. Despite being a relatively rare skin cancer compared to the others (<5%), melanoma is the leading cause of skin cancer-related mortality [2]. Superficial spreading melanoma, lentigo maligna melanoma, nodular melanoma, and acral melanoma are the most common clinical types [3]. The histological classification of melanoma is useful in its diagnosis and is an important feature in defining cancer-related survival; however, its molecular subtypes are often determined by various somatic mutations [4]. Melanoma pathogenesis, termed melanomagenesis, is based on the acquisition of sequential alterations in various genes and pathways controlling metabolic or molecular mechanisms which regulate crucial cell functions, survival, and replication rate [1]. When mutated, the genes involved in cancer can cause dysregulation of molecular processes with subsequent phenotypic manifestations. The main pathways leading to melanomagenesis include, but are not limited to, the mitogen-activated protein kinase (MAPK) pathway, which includes neuroblastoma RAS viral oncogene homolog (NRAS) and V-RAF murine sarcoma viral oncogene homolog B1 (BRAF), and the cyclin-dependent kinase inhibitor 2A (CDKN2A) pathway.
Mechanism of Release and Receptors
Two HMGB1 release mechanisms are known: passive release and active secretion. HMGB1 is passively released from damaged or necrotic cells triggering an immediate inflammatory response via pro-inflammatory cytokines such as TNFα. Active secretion of HMGB1 occurs via immune cells, endothelial cells, platelets, neurons, astrocytes, and tumor cells during stress or secondary to other DAMP signals as reinforcement [11].
HMGB1 can bind to several extracellular receptors, such as the receptor for advanced glycation end products (RAGE), Toll-like receptor (TLR) 9, TLR4, TLR2, integrins, T-cell immunoglobulin and mucin domain-3 (TIM-3), and C-X-C chemokine receptor type 4 (CXCR4) [9]. Given the plethora of possible target receptors for HMGB1, each domain can interact with different molecules; in particular, residues 150-183 bind to RAGE, a major actor in the initiation and maintenance of the inflammatory process. RAGE, belonging to the immunoglobulin superfamily, is a transmembrane receptor with an extracellular domain, a transmembrane domain, and a cytoplasmic tail of 43 amino acids. While the extracellular domain is responsible for ligand binding, the cytoplasmic tail is involved in intracellular signal transduction [2]. First described as a receptor for advanced glycation end products (AGE), RAGE is now recognized as a multi-ligand receptor whose ligands include HMGB1. RAGE is involved in HMGB1-induced cell inflammation, proliferation, migration, and immunity. Furthermore, extracellular HMGB1 stimulates RAGE expression in several cytotypes [12].
Function
HMGB1 plays a key role in both acute and chronic inflammation. In physiological conditions, this protein resides inside the nucleus of quiescent macrophages/monocytes. Cell stress or death are the main mechanisms that lead to the release of HMGB1 outside the cell membrane; thus, HMGB1 functions as an alarmin, meaning a molecule that triggers an inflammatory response in combination with other cytokines, DAMPs, and pathogen-associated molecular patterns (PAMPs) [13]. HMGB1 plays a central role in autophagy induction, an endogenous survival mechanism against cell stress [14]. In the nucleus, HMGB1 induces autophagy by upregulating the expression of heat shock protein (HSP) 27, while in the extracellular space, once released by cancer cells, it binds RAGE, in turn inducing autophagic activity in nearby cells [9]. It has been demonstrated that cancer can upregulate autophagy, which leads to drug resistance, thwarting chemotherapy [11]. HMGB1-induced autophagy can also be regulated by several miRNAs, a family of small non-coding RNAs which, through epigenetic mechanisms, play an important role in the maintenance of immune homeostasis, cancer progression, and inflammation. Recently, it was demonstrated that several miRNAs modulate HMGB1 expression and its functions [9]. Moreover, miRNAs can affect HMGB1 gene expression, modulating cancer progression. HMGB1 is a direct target of miR-548b, and the expression level of this miRNA can suppress melanoma cell growth by targeting the HMGB1 pathway [15].
This narrative review aims to assess the role of this molecule in the pathogenesis of human melanoma and its potential therapeutic role. This review has been conducted by researching the PubMed database, and the results are sorted by year of publication.
HMGB1-Related Melanoma Growth
HMGB1, after being released from damaged or necrotic cells, is used by the immune system to recognize tissue damage in order to initiate repair responses and to promote lymphocyte maturation [16]. From this, a role in oncological diseases can be deduced [16]. One of the important stages of disease progression is neoangiogenesis, which is fundamental for sustaining the metabolic and oxygen demands of cancer cells. It is known that solid tumors exhibit large hypoxic areas because of an imbalance between their oxygen supply and consumption. This hypoxic environment results in focal areas of tumor cell necrosis with consequent release of DAMPs and alarmins, including HMGB1 [17]. Extracellular HMGB1, released from tumor cells under hypoxia, mediates communication between cells in the tumor microenvironment by binding several receptors, especially RAGE and TLR4, which contribute to tumor growth via the sustenance of long-term inflammation [16]. Once secreted by melanoma cells, HMGB1 binds to RAGE, activating the endothelial cells with consequent increased expression of the adhesion molecules VCAM-1, ICAM-1, and E-selectin [16]. Within the tumor necrotic zones, the release of HMGB1 leads to the activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), which in turn upregulates leukocyte adhesion molecules and the production of pro-inflammatory cytokines and angiogenic factors, including vascular endothelial growth factor (VEGF) [16].

Tumor-Associated Macrophages (TAMs) are recruited into the tumoral milieu, which is characterized by the presence of chemokines and proinflammatory factors [18-20]. After tumor infiltration, macrophages can acquire two different phenotypes: the M1 phenotype exerts a cytotoxic effect on tumor cells, with increased production of nitric oxide (NO) and reactive oxygen species (ROS), which mediate the apoptosis of neoplastic cells [21]. Tumor cells create a microenvironment that promotes the acquisition of an M2 phenotype, which possesses protumor characteristics and, in contrast to the M1 phenotype, exhibits low cytotoxic properties with defective production of NO and ROS, promoting the growth and vascularization of tumor cells [21,22]. HMGB1, released by melanoma cells, promotes the accumulation of M2 macrophages, which enrich the tumoral microenvironment with IL-10, dampening tumor killing [22]. Further evidence that HMGB1/RAGE-dependent production of IL-10 by macrophages promotes melanoma growth has come from a human melanoma tissue study, in which high infiltration of IL-10-producing macrophages was detected in melanoma tissue with high expression of HMGB1 [17]. Since HMGB1 directly induces IL-10 production in TAMs, blocking IL-10 with a neutralizing antibody led to delayed tumor growth in a B16 mouse melanoma model [17]. The RAGE/HMGB1 axis also activates T lymphocytes, as demonstrated in an animal study showing that the HMGB1/RAGE axis influences melanoma growth via the expression of IL-23 and IL-17 from a subpopulation of T cells (γδ T cells) [23]. Growth of the melanoma cell line B16-F10 was significantly inhibited, and expression of IL-23 and IL-17 markedly reduced, in RAGE−/− mice compared with wild-type mice.
The same study also showed that HMGB1 stimulates the production of IL-23 in a RAGE-dependent manner, which in turn promotes the expression of IL-17; subsequently, IL-17 promotes tumor growth through IL-6 induction, with consequent activation of the signal transducer and activator of transcription 3 (STAT3) [23]. In addition to promoting carcinogenesis, interleukin production, and a shift in T cell subpopulations, the HMGB1/RAGE axis has been shown to play a key role in suppressing cytotoxic T cell activity by increasing PD-L1 expression levels [24,25]. Moreover, HMGB1 levels were higher in patients who did not respond to the immune checkpoint inhibitor ipilimumab than in responding patients, supporting the hypothesis that the HMGB1/RAGE axis also leads to a tumor-promoting microenvironment [26]. In addition, RAGE was detected in the cytoplasm of human melanoma cells (G361 and A375), and treatment with AGEs induced the proliferation and migration of human melanoma cells. Consistently, treatment with anti-RAGE antibodies inhibited tumor formation and invasion and increased the survival rate in an in vivo animal model [27]. AGEs and RAGE could be a valuable target for the treatment of melanoma in the near future, since their levels are markedly lower in healthy skin, suggesting that this type of therapy could have very few side effects [28]. The main pathways involved in melanoma growth are represented in Figure 1.

Figure 1. Tumor microenvironment and hypoxia, caused by cancer cells' metabolic and oxygen demands, enhance HMGB1 release. HMGB1 binds to RAGE and activates several pathways: γδ T cells, pro-inflammatory cytokine release, M2 macrophages, VEGF activation, and endothelial cell activation. All these lead to melanoma growth through the maintenance of an inflammatory microenvironment. Created with BioRender.com.
HMGB1 and UVB
Among the risk factors for melanoma, exposure to ultraviolet (UV) rays is certainly the best known. By suppressing skin immunity, UV rays ease the initiation of skin lesions and establish tumoral evasion mechanisms. In the setting of UVB-induced DNA damage, a time-dependent increase in the release of damage-associated molecular patterns such as HMGB1 has been detected [25,29]. The expression of PD-L1, an immune checkpoint molecule which can inhibit effector T cell activity and reduce anti-tumor immunity, was shown to increase significantly in melanoma cells after UV exposure. HMGB1, secreted by melanocytes and keratinocytes after UV irradiation, binds to RAGE, thus promoting the downstream NF-κB- and interferon regulatory factor 3 (IRF3)-dependent transcription of PD-L1 in melanocytes, furthering survival mechanisms. UV exposure significantly reduced the susceptibility of melanoma cells to CD8+ T cell-dependent cytotoxicity through activation of the HMGB1/TBK1/IRF3/NF-κB cascade, which in turn triggers the PD-1/PD-L1 checkpoint. We can deduce that increased UV-induced PD-L1 levels lead to the suppression of immunity in the cutaneous microenvironment, promoting immune evasion of cancer cells and inducing the onset and progression of melanoma [25]. Targeting the UVB-induced HMGB1/RAGE axis could inhibit PD-L1 induction in UVB-exposed melanocytes and melanoma cells, which may serve as a potential drug target to mitigate immune escape of malignant and premalignant melanocytes; this could play a role in the therapeutic setting, for instance when treating patients with marked photodamage [25]. Furthermore, another key mechanism linked to acute and chronic UV exposure is TLR4-dependent inflammatory dysregulation. HMGB1 secreted by keratinocytes in response to UV induces the activation of TLR4 signaling, which enhances the migration of melanoma cells, furthering the hypothesis that TLR4 plays a pivotal role in UV-driven progression and metastasis of this tumor and helping to explain one of the main features of melanoma, namely lymphovascular metastatization [30]. TLR4 was found to be highly expressed in melanoma tissue. MiR-145-5p, a TLR4-expression antagonist that inhibits carcinogenesis and metastasis via the NF-κB signaling pathway, is downregulated in the tumor tissue of patients with melanoma [30]. Targeting the TLR4-signaling pathway has shown promising results in preclinical and clinical investigations using small molecule modulators of natural and synthetic origin [30]. Moreover, the expression of the HMGB1 receptor RAGE, in its surface form, increases over time with a positive trend even after a single UVB dose, suggesting a positive feedback mechanism that could sustain HMGB1 production. Finally, lower levels of RAGE act upon UVB-induced resistance to apoptosis and the response to UV damage, with overall increased tumor resistance to oxidative damage and subsequent cellular death, and may have implications for the early stages of melanoma development or serve as a predictor of disease progression [29].
HMGB1 as a Marker
Currently, no consensus exists on the use of blood tests for monitoring melanoma recurrence. A plethora of molecules have been evaluated for their potential clinical value as melanoma biomarkers, such as lactate dehydrogenase (LDH), tyrosinase, and PD-L1 [2]. Nevertheless, despite progress in the prevention and early detection of melanoma, the biomarkers available to date present several limitations and, for this reason, there is currently no ideal biomarker for melanoma. Among the molecules studied as possible future markers of disease activity is RAGE, a receptor for HMGB1, which perpetuates local inflammation and has been evaluated as a marker of disease activity. Since RAGE levels are increased both in the environment surrounding melanoma cells and in the cancer cells themselves, it is safe to assume that targeted therapies against RAGE signaling may represent a new strategy, although further studies are needed before any firm statements can be made [27]. Increasing attention is being paid to several of its ligands, including HMGB1, a sophisticated danger signal with pleiotropic functions, which could serve as a biomarker of disease and a prognostic marker of therapeutic response. On this topic, Li et al. demonstrated that HMGB1 was overexpressed in melanoma samples when compared to normal skin and nevus tissues. It was also noted that higher levels of HMGB1 correlate with more severe disease stages and with worse survival rates in melanoma patients [31]. Interestingly, elevated levels of HMGB1 positively correlated with several clinicopathological features of melanoma, among them tumor thickness, mitotic index, and metastases [31]. To further explore the possible role of HMGB1 as a prognostic marker, the connection between this molecule and the proliferation status of melanoma cells was studied by measuring the mitotic index. Higher HMGB1 levels showed a positive correlation with the mitotic index, which in turn is linked to advanced stages of melanoma [31,32]. Wang et al. demonstrated the dual role of HMGB1 in cancer [33]. Excessive production of HMGB1 causes chronic inflammatory responses, mediated by the release of cytokines such as IL-6 and IL-8, which in turn stimulate carcinogenesis through tumor cell proliferation, angiogenesis, EMT, invasion, and metastatization [33]. On the other hand, nuclear HMGB1 plays a protective role in tumor suppression and in tumor chemoradiotherapy and immunotherapy, reducing potential side effects occurring after systemic therapies that target not only tumor cells but also actively replicating healthy cells, such as those of the bone marrow and the basal layer [33]. Nucleus-located HMGB1 promotes the regulation of telomeres and the maintenance of genome stability [9]. Therefore, the roles of HMGB1 in the regulation of DNA damage repair and in carcinogenesis suggest that targeting HMGB1 could provide a new therapeutic perspective, opening a possible new line of research [33]. The role of HMGB1 has also been evaluated in association with other molecules, such as interferon-inducible protein 1 (IRGM). Tian et al. investigated the function of IRGM in human melanoma, demonstrating that overexpression of IRGM was related to melanomagenesis. By blocking the translocation of HMGB1 from the nucleus to the cytoplasm, IRGM1-mediated cellular autophagy is inhibited, thereby reducing cell survival. This evidence confirmed that IRGM is an independent risk factor that promotes melanoma progression and is associated with poor patient survival. It is safe to assume that IRGM may serve as a prognostic marker as much as a therapeutic target [34].
Despite the immunogenicity demonstrated in several studies, malignant melanoma is characterized by rapid progression and by primary or secondary resistance to treatment. Ipilimumab (Ipi), a monoclonal antibody against human CTLA-4, has proven to be one of the most effective immunotherapy drugs for melanoma therapy, albeit with a clinical response rate of only 10%. A study conducted to evaluate the differences in the sera of responder and non-responder patients demonstrated an early increase in eosinophil counts, as well as a decrease in S100A8/A9 and HMGB1, in responding melanoma patients. Conversely, higher baseline neutrophil and monocyte counts, as well as higher serum levels of S100A8/A9 and HMGB1, indicated a lack of response to Ipi therapy. These data represent further evidence of the possible use of HMGB1 both as a prognostic marker and as a marker to predict therapeutic response [26]. HMGB1 was also investigated as a biomarker of response to boron neutron capture therapy (BNCT), a non-invasive therapeutic technique for treating malignant tumors but, as this is a newly developed technique, results are too scarce for any final considerations [35]. Beyond the aforementioned molecules, recent studies have also evaluated the role of another RAGE ligand, namely S100B, as a possible biomarker. S100B, a small EF-hand calcium-binding protein, interacts in the intracellular space with the transcription factor p53, inhibiting its transcriptional activity and thus decreasing p53-dependent apoptosis, with a consequent increase in melanoma cell survival. S100B is used as a prognostic factor and predictor of overall survival [2,36]. S100B was also found to play a role as a prognostic biomarker of treatment response: when secreted by tumors, higher levels of S100B are predictive of poorer outcomes [37]. Moreover, it is used in the management of melanoma to predict response to therapy [37,38]. Therefore, from our research we can affirm that there is an urgent need to identify suitable biomarkers to improve early diagnosis, precise staging, and prognosis; above all, therapy-selection and monitoring biomarkers are needed to choose the appropriate therapy and to follow up patients with non-invasive methods.
Melanoma Metastases and HMGB1-Based Possible Future Therapies
HMGB1 is closely associated with cell survival and proliferation and may be directly involved in the development of tumor cell metastasis thanks to its ability to promote cell migration, enhance the adhesive properties of cells, and rearrange components of the extracellular matrix [9]. Serum HMGB1 interacts with the cell-surface receptor RAGE, a primary signaling pathway triggering the onset of various diseases and, most importantly, the maintenance of chronic inflammation. HMGB1 binds to RAGE, which then activates several signaling molecules, including NF-κB, extracellular signal-regulated kinase (ERK1/2), and p38. HMGB1 can also bind TLR2 and TLR4 which, through myeloid differentiation primary response 88 (MyD88), activate the expression and release of pro-inflammatory cytokines such as TNF and IL-6 [2]. HMGB1 also activates the NF-κB pathway through interaction with CXCL12/CXCR4, thus inducing the chemotaxis and recruitment of inflammatory cells [39]. HMGB1 combined with TIM-3 induces the secretion of VEGF, promoting tumor angiogenesis, which is the first step toward the metastatic process [39]. Among the inflammatory cells that play a role in the metastasis process, M2-polarized TAMs can help the tumor overcome a hypoxic environment in order to support its progression. Hypoxia-induced HMGB1 acts on M2-TAMs, which secrete IL-10. IL-10, via regulatory T cells, suppresses CD8+ T lymphocytes. Moreover, IL-10 induces the downregulation of molecules involved in antigen presentation to CD8+ T lymphocytes, thus promoting immunoregulatory responses, inducing T cell regulation, and suppressing the production of pro-inflammatory cytokines [40]. Given these premises, it can be hypothesized that inhibiting HMGB1 activity during treatment may positively affect antitumor therapy. It has been hypothesized that HMGB1 knockout in melanoma cells may suppress tumor growth in vivo via CD8+ T cells and accelerate the infiltration of CD8+ T cells and macrophages and the activation of dendritic cells resident in tumor tissues. On this topic, a study conducted by Yokomizo et al. showed that knockout of HMGB1 in tumor cells converted tumors from scarcely immunogenic phenotypes to inflammation-prone ones, de facto inhibiting in vivo tumor growth. Thus, manipulation of tumor-derived HMGB1 might be applicable to improve the clinical outcome of cancer therapies, including immune checkpoint blockades and cancer vaccine therapies [41]. Highly metastatic tumor cells preferentially enter senescence and adopt survival mechanisms, while apoptosis predominates in weakly metastatic tumor cells. This has been related to HMGB1 levels, suggesting that HMGB1 modulation in tumors with different metastatic states could be useful in disease containment; however, advanced stages of metastasis may represent a limitation of this strategy [42]. Recently, agents targeting the MEK-ERK1/2 pathway or immune checkpoints have emerged as effective treatments to improve progression-free survival and overall survival for patients with stage III and stage IV melanomas. It must be pointed out, though, that they have limitations: with the former, resistance to therapy arises within 13 months, while the latter reverse dysfunctional antitumor T-cell states and induce durable antitumor responses only in a fraction of patients [43]. However, BRAFi + MEKi induce lasting regression of melanoma through immune-mediated mechanisms.
BRAFi + MEKi treatment promotes the cleavage of gasdermin E (GSDME) and the consequent release of HMGB1, a marker of pyroptotic cell death. Unfortunately, BRAFi + MEKi-resistant melanoma cells lack pyroptosis markers, thus dampening the original inflammatory processes, and show decreased intratumoral T-cell infiltration, but they remain sensitive to pyroptosis-inducing chemotherapy. These data implicate BRAFi + MEKi-induced pyroptosis in antitumor immune responses, validate it as a therapeutic strategy, and highlight possible new therapeutic approaches for resistant melanoma [43].
Growing interest is emerging in the role of miRNAs, small single-stranded non-coding RNA molecules, as therapeutic agents which can stop the progression of malignancies through the reintroduction of miRNAs into a population of cancer cells or through the use of miRNA antagonists. One of the most relevant miRNAs in melanoma, miR-548b, was significantly downregulated in tumor samples when compared to adjacent normal tissues, and its downregulation relates to worse overall survival in patients with melanoma. Overexpression of miR-548b suppresses the growth and metastasis-linked traits of melanoma cells. HMGB1 is a target of miR-548b, and its expression level is negatively regulated by miR-548b, while the reintroduction of HMGB1 abolishes the inhibitory effects of miR-548b on melanoma cells. All these findings demonstrate that miR-548b might act as a cancer-suppressive miRNA in human melanoma by inhibiting HMGB1, suggesting potential systemic and local usefulness [15]. On a side note, miRNAs have also been evaluated for their role in modulating drug response, in particular for dabrafenib. Namely, miR-26a is involved in the upregulation of dabrafenib efficacy via an HMGB1-dependent autophagy pathway in melanoma. According to one study, treatment with a miR-26a mimic and HMGB1 shRNA increased the efficacy of dabrafenib in melanoma cells [44]. These results shed light on a novel complement to conventional dabrafenib-based chemotherapy for melanoma and on its potential mechanism.
Peptides also play a relevant role in cell biology and in many diseases, including cancer. Peptide Rb4, derived from proteolipid protein 2 (PLP2), acts directly on tumor cell multiplication, inducing the expression of two DAMP molecules, HMGB1 and calreticulin, which trigger immunoprotective effects in vivo against melanoma cells. Overexpression of PLP2 increased tumor metastasis, while suppression of PLP2 inhibited the growth and metastasis of melanoma cells. This evidence suggests that peptide Rb4 could act as a promising adjuvant to be developed as an anticancer drug [45]. Finally, other compounds, such as glycyrrhizin, have been studied as coadjuvants in melanoma therapy. It has been reported that this product inhibits pulmonary metastasis in mice inoculated with B16 melanoma (a murine tumor cell line used in research as a model for human skin cancers) by regulating the HMGB1/RAGE and HMGB1/TLR4 signal transduction pathways. The inhibition of the HMGB1/RAGE pathway reduces NF-κB expression, phosphorylation, and nuclear translocation, which altogether induce cellular invasion. Moreover, RAGE/NF-κB signaling stimulates TGF-beta secretion which, in turn, increases migration and invasiveness, thus starting the development of metastasis [46]. Aloin (ALO), the major anthraquinone glycoside extracted from Aloe species, whose anti-tumoral effects are well known but not fully understood, has also been studied. ALO was demonstrated to exert protective effects by promoting melanoma cell apoptosis via the inhibition of HMGB1 release in melanoma cells. HMGB1 was demonstrated to facilitate ALO-mediated apoptosis by binding to its receptor, TLR4, and activating extracellular regulated protein kinase (ERK) signaling pathways. Although ALO cannot be expected to eradicate melanoma, this remedy may be combined with more conventional cytotoxic chemotherapy or other methods to interfere with cancer progression [47]. The main findings about the role of HMGB1 in melanoma metastasis and possible future therapies are summarized in Table 1. The molecules and pathways involved in melanoma metastatization are represented in Figure 2.
Figure 2. HMGB1 is released into the extracellular environment through a passive mechanism by melanoma cells. HMGB1 binds to RAGE, TLR2, and TLR4 and transduces cellular signals through a common pathway that induces the NF-κB pathway. The activated NF-κB translocates to the nucleus. HMGB1 also interacts with CXCR4 to activate the NF-κB pathway and induce chemotaxis and recruitment of inflammatory cells. The interaction of HMGB1 and TIM-3 induces the secretion of VEGF to promote tumor angiogenesis. All these pathways promote cell survival, cell proliferation and, finally, melanoma progression. Created with BioRender.com.

Table 1. The role of HMGB1 in the metastasis process and possible future therapies.
Topic | Author, Reference | Study Characteristics
Metastases | Yokomizo, K et al. [41] | In a mouse model, knockout of HMGB1 in tumor cells converted tumors from scarcely immunogenic phenotypes to inflammation-prone ones, inhibiting in vivo tumor progression.
Metastases | Lee, Y-Y et al. [42] | HMGB1 plays a role in the senescence or apoptosis of cancer cells: highly metastatic tumor cells preferentially enter senescence and adopt survival mechanisms, while apoptosis predominates in weakly metastatic tumor cells.
Possible future therapies | Yu, Y et al. [44] | miR-26a is involved in the upregulation of dabrafenib efficacy via an HMGB1-dependent autophagy pathway in melanoma; treatment with a miR-26a mimic and HMGB1 shRNA increased the efficacy of dabrafenib in melanoma cells.
Possible future therapies | Maia, V.S.C. et al. [45] | Peptide Rb4 acts directly on tumor cells, inducing the expression of HMGB1, which triggers an immunoprotective effect in vivo against melanoma cells.
Possible future therapies | Li, P et al. [47] | HMGB1 facilitates ALO-mediated apoptosis by binding to its receptor, TLR4, and activating ERK signaling pathways.
HMGB1 and Immunogenic Cell Death
As already stated, DAMPs are molecules that, once secreted, released, or exposed on the cell surface by dying or injured cells, provide adjuvant or danger signals to the immune system. Among the DAMPs, the main actors are surface-exposed calreticulin (CRT), secreted adenosine triphosphate (ATP), and passively released HMGB1: they represent the main hallmarks of immunogenic cell death (ICD) of cancer cells [48]. Although extracellular HMGB1 is essential for the development of ICD-mediated immunogenicity, it is also associated with tumor progression. In recent years, ICD has emerged as a possible approach for the development of novel therapeutics for the treatment of tumors, in which cytotoxic compounds promote both cancer cell death and the release of DAMPs from dying cells [49]. These DAMP molecules recruit and activate dendritic cells (DCs), which present tumor-specific antigens to T cells that clear out the neoplastic cells. The ability of some anticancer therapies to induce ICD depends on their ability to induce endoplasmic reticulum (ER) stress and reactive oxygen species (ROS) production, both of which are essential components that activate danger-associated intracellular signaling pathways [50]. Interestingly, the expression of DAMP molecules occurs in a stress-dependent manner in the ER. To date, only a limited number of chemotherapy drugs can activate ICD of cancer cells; these may be classified into two groups. Group I ICD inducers target DNA and repair-machinery proteins, cytosolic proteins, the plasma membrane, or nuclear proteins; examples are chemotherapeutic agents including anthracyclines, oxaliplatin (OXP), and ultraviolet C irradiation. The endoplasmic reticulum is the target of group II ICD inducers, which include photodynamic therapy and Coxsackievirus B3 [51]. Imiquimod (IMQ) is a synthetic ligand of Toll-like receptor 7 that exerts antitumor activity and is already used topically in non-melanoma skin cancer and in lentigo maligna melanoma in hard-to-treat areas, such as the face. IMQ stimulates cell-mediated immunity or directly induces apoptosis. In a mouse model of B16F10 melanoma, IMQ reduced tumor growth, either by direct injection in situ or by vaccinating mice with IMQ. IMQ-related tumor-specific T cell proliferation promoted tumor-specific cytotoxic killing by CD8+ T lymphocytes, thus increasing the infiltration of immune cells into the tumor [52]. In a study conducted by Giglio et al., mitoxantrone and doxorubicin, two pro-ICD agents, stimulated the release of high levels of HMGB1 in melanoma cell lines, confirming that both agents can induce cell death. HMGB1 was released passively into the extracellular space by dying cells, both in wild-type and in BRAF-mutated melanoma cells [53].
The RT53 peptide was also demonstrated to mediate anticancer effects by selectively inducing cancer cell death in vitro and in vivo. In treated cells, plasma membrane exposure of CRT, the release of ATP, and the exodus of HMGB1 from dying cancer cells through membranolytic action were detected [54].
Photodynamic therapy (PDT) is a minimally invasive anti-cancer treatment widely used in clinical practice. It usually uses 5-aminolaevulinic acid (ALA) or its methylated ester (MAL), two light-responsive prodrugs which, after irradiation with red light of ~630 nm, stimulate cytotoxic ROS production, causing tumor-selective destruction with few side effects [55]. PDT-induced oxidative stress can also trigger immune responses against cancer cells through the induction of ICD [56]. In a mouse model, PDT treatment with ML19B01 and ML19B02, two structurally related ruthenium photosensitizers, induced death of melanoma cells displaying hallmarks of ICD, including HMGB1 release, which in turn activated antigen-presenting cells, resulting in efficient phagocytosis by dendritic cells [57]. Another protocol investigated the effects of PDT using different concentrations of aluminum-phthalocyanine (AlPcNE) in mouse models. The exposure of DAMPs, namely HMGB1, CRT, and ATP, all hallmarks of ICD, and the presence of apoptotic and necrotic cells were assessed; PDT induced ICD in B16F10 cells proportionally to the concentration of AlPcNE [58]. This evidence suggests the potential of PDT to induce ICD, even with different types of photosensitizers. PDT as an ICD-inducing approach can be particularly interesting for the in situ immunotherapy of superficial tumors, such as melanoma. This local approach is further justified by the relatively low availability of treatment modalities for this cancer type.
Moving to systemic therapies, radiotherapy and chemotherapy are standard cancer treatments, although cancer cells often develop resistance to them [59]. Hyperthermia, through radiosensitization, reduces the resistance of cancer cells and induces enhanced immune responses. The combination of hyperthermia and radiotherapy induces ICD in irradiated B16F10 melanoma cells through an ICD-related mechanism: indeed, HSP70, HMGB1/DNA complexes, and ATP, all markers of ICD, were detected. Chronic inflammation is linked to tumorigenesis, and extracellular HMGB1 behaves like a pro-inflammatory cytokine that induces the expression of further inflammatory factors, favoring the persistence of inflammation, which in turn manipulates the immune system [60].
Immunepotent CRP (ICRP) is a mixture of substances shown to have cytotoxic activity on different tumor cell lines in vitro and to modulate the immune response. ICRP was demonstrated to be cytotoxic in B16F10 melanoma cells, increasing the rate of cell death when combined with oxaliplatin (OXP). When administered alone, OXP treatment only induced CRT exposure and ATP and HMGB1 release, while the combination of ICRP + OXP increased the levels of DAMPs, the exposure of CRT, and the release of ATP, HSP70, HSP90, and HMGB1. This suggests that ICRP may enhance the release of ICD molecules with antitumor abilities, preventing and blocking the growth of melanoma like an antitumor drug [61].
In recent years, increasing scientific attention has been paid to the role of oncolytic viruses, both as single agents and in combined therapy, for the treatment of unresectable melanoma, representing a leading field of research. Studies have shown that the mechanism through which these viruses act is the ICD model, through the release of mediators such as HMGB1, CRT, HSP70, HSP90, and ATP. Oncolytic viruses have emerged as promising vectors for treating cancer, since they can selectively enter, replicate in, and lyse tumor cells [62]. Currently, only one oncolytic virus has been approved by the Food and Drug Administration for the treatment of melanoma: talimogene laherparepvec (T-VEC), a herpes simplex virus 1 (HSV-1). It is a gene-modified variant obtained through deletion of γ34.5 and ICP47 and insertion of GM-CSF to enhance therapeutic activity and attenuate pathogenicity [63]. Phase I, II, and III clinical trials have concluded with promising results for the use of T-VEC in the treatment of melanoma [64-66]. T-VEC contacts tumor cells directly, entering the tumor environment, normally by local injection, and starting replication, which leads to lysis of the infected tumor cell and release of tumor antigens, thus stimulating the local immune response [67]. In addition, GM-CSF expression stimulates the migration and maturation of dendritic cells and the subsequent antigen presentation to CD4+ and CD8+ T cells, which reach distant metastases [63]. Figure 3 summarizes the mechanism through which T-VEC acts in melanoma.
A 2017 randomized, open-label phase 2 trial compared Ipi alone (n = 100) with T-VEC plus Ipi (n = 98) in the treatment of 198 patients with unresectable, stage IIIB-IV melanoma. In the combination group, 38 (39%) of 98 patients achieved an objective response, the primary endpoint, versus 18 (18%) of 100 in the Ipi group. T-VEC plus Ipi showed an almost doubled response rate and a higher degree of regression in visceral metastases, with no unexpected increase in toxicity. This study suggested that oncolytic viruses and checkpoint inhibitors appear to have a more favorable therapeutic window than other immunotherapy combinations [68]. Another study sampled twenty-nine clinical strains of HSV-1 to identify the strain with the most potent oncolytic ability, in order to further optimize HSV-1-based immunotherapy. Viral strains were reworked through deletion of the genes encoding ICP34.5 and ICP47 and the insertion of a gene encoding a form of the envelope glycoprotein of gibbon ape leukemia virus (GALV-GP-R-), which provides tumor selectivity and enhances the immunogenicity of cell death. The expression of GALV-GP-R- improved the oncolytic ability in several tumor cell lines in vitro and in mouse xenograft models. The increased immunogenic cell death in vitro was confirmed by the release of HMGB1 and ATP and by high levels of CRT on the cell surface. This new HSV-1-based platform may allow the development of new oncolytic immunotherapies, which are believed to be more effective in combination with other anticancer agents, in particular blockade of the immune checkpoint targeting PD1/PD-L1 [69]. Additionally, HF10, a spontaneously mutated strain of HSV-1 with a deletion mutation in some viral genes, was used in an in vitro study revealing relevant cytolytic effects in murine and human melanoma tumor cells injected with the virus [70].
In addition to HSVs, other oncolytic viruses are also currently being studied, among them Coxsackievirus A21 [71] and the oncolytic Newcastle disease virus (NDV) [72]. A study investigated whether the oncolytic NDV strain NDV/FMW induces ICD in melanoma cells by assaying the expression and release of ICD markers in melanoma cell-derived tumors in intratumorally injected mice. CRT exposure, the release of HMGB1, HSP70, and HSP90, and the secretion of ATP in melanoma cells were detected, suggesting that oncolytic NDV/FMW might be a potent inducer of ICD in melanoma cells, one that cooperates with several other forms of cell death [72]. As already mentioned, the main therapeutic concern in melanoma is primary and secondary resistance to therapy, connected to the immune evasion mechanisms of the tumor. To improve patients' clinical condition, stimulation of the host's immune system or direct lysis of abnormal cells represents a valid future therapeutic option, as genetic engineering techniques are improving the safety and efficiency of the vectors. Furthermore, the most recent studies show that it is possible to combine oncolytic viruses with therapies already on the market, with promising therapeutic results. The main findings about the role of HMGB1 in the development of ICD in melanoma and its possible use as a novel therapeutic approach are summarized in Table 2.

Table 2. The main findings about the role of HMGB1 in the development of ICD in melanoma and its possible use as a novel therapeutic approach.
Topic | Author, Reference | Study Characteristics
ICD | Huang, SW et al. [52] | Exposure of HMGB1 and calreticulin in a mouse model of B16F10 melanoma, following direct injection in situ or vaccination of mice with IMQ, reduced tumor growth, demonstrating the role of HMGB1 in ICD.
ICD | Giglio, P et al. [53] | Mitoxantrone and doxorubicin could induce cell death through the release of high levels of HMGB1 in melanoma cell lines.
ICD | Konda, P et al. [57] | In a mouse model, PDT treatment with ML19B01 and ML19B02 induced dying melanoma cells that contained hallmarks of ICD, including HMGB1 release.
ICD | Chesney, J et al. [68] | T-VEC plus Ipi showed an almost doubled response rate and a higher degree of regression in visceral metastases with no unexpected increase in toxicity. The ICD was confirmed by the release of HMGB1.
ICD | IGNYTE study [69] | The expression of GALV-GP-R- improved the oncolytic ability in several tumor cell lines in vitro and in mouse xenograft models. The increased immunogenic cell death in vitro was confirmed by the release of HMGB1 and ATP and by high levels of CRT on the cell surface.
ICD | Shao, X et al. [72] | HMGB1, HSP70, and HSP90 release, CRT exposure, and the secretion of ATP were detected in melanoma cell lines after treatment with oncolytic NDV/FMW, demonstrating the role of these molecules in ICD.
Conclusions and Future Perspectives
HMGB1 is a molecule that originates from damaged cells. In physiological conditions, this protein is located inside the nucleus, but cell stress leads to its release outside the plasma membrane, either by passive release or active secretion. The main function of HMGB1 is that of an alarmin that binds several extracellular receptors, such as RAGE, TLR9, TLR4, TLR2, and integrins, which subsequently trigger an inflammatory response. Among the stimuli that cause overexpression of HMGB1, it is worth mentioning several types of cancer, including melanoma.

The body of work presented in this review suggests that HMGB1 plays a role in carcinogenesis, both via the inflammatory stimulus and via modification of the tumor microenvironment. Elevated HMGB1 serum levels have been detected in patients with melanoma. Extracellular HMGB1 behaves as a paracrine/autocrine factor for cancer growth, proliferation, migration, and angiogenesis, and high levels of HMGB1 positively correlate with the risk of developing metastases. Furthermore, HMGB1 is closely associated with disease severity and poor prognosis, validating the role of this molecule as both a diagnostic and a prognostic marker. Endogenous HMGB1 can serve as a marker to predict therapeutic response, as higher baseline levels have been shown to predict failure of, or poor response to, systemic therapies. Beyond its utility as a new biomarker for disease management, HMGB1 might in the future help to select patient-specific therapies and evaluate therapy responsiveness.

The therapeutic potential of HMGB1 has been confirmed in several in vivo and in vitro studies. Targeting extracellular HMGB1 induces positive effects in cancer management by reducing inflammatory tissue damage and causing a thorough remodeling of the immunologic actors involved. Among the molecules capable of targeting HMGB1, miRNAs might exert cancer-suppressive effects, suggesting their potential systemic and local usefulness. Several miRNAs, acting alone or in combination with chemotherapy, can modulate and attenuate HMGB1 expression, thus inhibiting inflammation and reducing tumor growth and metastatic processes. Further investigation of the HMGB1/miRNA relationship may open a new line of research toward a novel therapeutic approach. Another option is represented by anti-HMGB1 monoclonal antibodies, as more and more biologic therapies are used in every field, not least in the management of melanoma. To date, no available molecules specifically target HMGB1 for the treatment of human melanoma; however, animal studies are underway for the therapy of several inflammatory diseases. In the near future, dampening the inflammatory processes in skin tumors as well might represent an effective strategy against growth and the risk of metastasis.

Finally, HMGB1 plays a role in ICD, representing its main hallmark. Various therapies act via an ICD-induced mechanism, as in the case of PDT or chemotherapy. To this day, the most interesting field of research is represented by oncolytic viruses, either as a single therapy or combined with already known medications for unresectable melanomas. All these findings open new perspectives for the development of cancer therapies aimed at improving patients' prognosis and overall survival. It is becoming increasingly clear that there cannot be a single cure for all patients with melanoma.
We are entering an era in which the chemotherapeutic approach is being surpassed, as its side effects leave us longing for more tailored and precise approaches for each patient. Both treatment personalization and combinatorial targeting of different immune defects represent the way forward to cure a disease that just a few years ago was considered incurable, especially in the final stages of its natural course. Further efforts in the study of HMGB1 in pathologies with an increasingly high incidence, such as melanoma, are necessary to bring anti-HMGB1 drugs into clinical use.
Conflicts of Interest:
The authors declare no conflict of interest.
Features of Autosomal Recessive Alport Syndrome: A Systematic Review
Alport syndrome (AS) is one of the most frequent forms of hereditary nephritis leading to end-stage renal disease (ESRD). Although X-linked (XLAS) inheritance is the most common form, cases with autosomal recessive inheritance and mutations in COL4A3 or COL4A4 are being increasingly recognized. A systematic review was conducted on autosomal recessive Alport syndrome (ARAS). Electronic databases were searched using related terms (until 10 October 2018). Of 1601 articles retrieved, 26 studies with 148 patients were eligible. Female and male patients were equally affected. About 62% of patients had ESRD, 64% had sensorineural hearing loss (SNHL), and 17% had ocular manifestations. The median age at onset was 2.5 years for hematuria (HU), 21 years for ESRD, and 13 years for SNHL. Patients without missense mutations had more severe outcomes at earlier ages, while those with one or two missense mutations had delayed onset and a lower prevalence of extrarenal manifestations. Of 49 patients with kidney biopsies available for electron microscopy (EM), 42 (86%) had typical glomerular basement membrane (GBM) changes, while 5 (10%) showed GBM thinning only. SNHL developed earlier than previously reported. There was a genotype-phenotype correlation according to the number of missense mutations: patients with missense mutations had delayed onset of hematuria, ESRD, and SNHL and a lower prevalence of extrarenal manifestations.
Supplementary: PRISMA checklist
Section/topic | # | Checklist item | Reported on page #
Objectives | 4 | Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS). |
METHODS | | |
Protocol and registration | 5 | Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number. | 5-7
Eligibility criteria | 6 | Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale. | 6-7
Information sources | 7 | Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched. | 5-6
Search | 8 | Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated. | Fig. 1
Study selection | 9 | State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis). | 6-7
Data collection process | 10 | Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators. | 5-7
Data items | 11 | List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made. | 5-7
Risk of bias in individual studies | 12 | Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis. | N/A
Summary measures | 13 | State the principal summary measures (e.g., risk ratio, difference in means). | N/A
Synthesis of results | 14 | Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis. |
Risk of bias across studies | 15 | Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies). | N/A
Additional analyses | 16 | Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified. |
Study selection | 17 | Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram. | 8
Study characteristics | 18 | For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations. | 8, Table 1
Risk of bias within studies | 19 | Present data on risk of bias of each study and, if available, any outcome level assessment (see item 12). | N/A
Results of individual studies | 20 | For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group; (b) effect estimates and confidence intervals, ideally with a forest plot. |
DISCUSSION | | |
Summary of evidence | 24 | Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., healthcare providers, users, and policy makers). | 11-13
Limitations | 25 | Discuss limitations at study and outcome level (e.g., risk of bias), and at review-level (e.g., incomplete retrieval of identified research, reporting bias). | 13
Conclusions | 26 | Provide a general interpretation of the results in the context of other evidence, and implications for future research. |
FUNDING | | |
Funding | 27 | Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the systematic review. |
The quest for ultimate super resolution
With the wealth of super-resolution techniques available in the literature it is useful to provide a succinct review of the general concepts involved in the different schemes. In this paper we group super-resolution schemes into several broad categories to simplify comparison, and to elucidate the factors limiting their respective resolutions.
Introduction
As long as there have been microscopes, researchers have been pushing the frontier of optical resolution in a quest to image ever smaller objects. Success might mean unlocking the secrets of biological processes, materials properties, and other mysteries. Early on, the optical wavelength was known to be a limit [1-4], and this motivated the push toward ever smaller wavelength imaging systems, like x-ray or electron beams [5,6]. However, in the latter part of the 20th century, it was realized that other options existed [7-16]. Beginning with near-field scanning probes, images were produced that did not appear to have a wavelength limit [17-19]. Then super-resolution techniques arrived, which made use of nonlinearities to push far-field image resolution beyond the diffraction limit [20-26]. Now there are so many super-resolution schemes available, each claiming to out-perform the others, that a casual observer could easily be confused into thinking that there are no fundamental limits to achievable resolution [3]. In this paper, we briefly summarize the resolution limits of existing techniques by dividing them into broad categories so that the reader will clearly understand the current limits and the options for the future.
The paper is divided into several sections. First is an introduction to the important concepts involved in super-resolution. Second is a discussion of incoherent super-resolution schemes; several schemes are discussed, and the role of nonlinearity in these techniques is emphasized. Third, coherent super-resolution schemes are discussed, and the factors determining their resolution are explained. Examples are used to simplify comparison of coherent and incoherent [27-30] schemes. Fourth, super-resolution schemes are discussed that do not beat the diffraction limit encompassed by Maxwell's equations, but are nevertheless sub-wavelength. Finally, other related topics are briefly discussed.
Important concepts
Before beginning, it is necessary to define what is meant by resolution. Normally, two objects are considered to be resolved when two conditions are satisfied: (1) the two objects can be distinguished from one another, and (2) the location of each object is known to higher precision (ideally accuracy) than their separation. In the most common definition of diffraction-limited resolution, the Rayleigh criterion, both of these conditions are satisfied simultaneously. This is illustrated in figure 1 for two point objects. Here, each object independently produces a sinc function response at the detector.
When one object lies at the position corresponding to the first node of the other object's (sinc) response, then the sum of the two sinc functions (solid line) develops a dip in the middle. At this separation, curve fitting can determine both the number and locations of the objects, and so both of the above criteria are met. Note that curve fitting can sometimes be used to resolve objects slightly closer than this, but ambiguities quickly arise as the distance decreases, especially when noise is present.
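As a quick numerical check of this picture (an illustrative sketch of my own, not from the paper; units are arbitrary and each point image is taken as sinc squared, an assumption), one can sum two point-object intensities separated by the first node and look for the tell-tale dip:

```python
import numpy as np

def sinc(x):
    # np.sinc(u) = sin(pi*u)/(pi*u); rescale so the first node falls at x = pi
    return np.sinc(x / np.pi)

sep = np.pi  # Rayleigh condition: second object at the first node of the first
x = np.linspace(-2 * np.pi, 3 * np.pi, 2001)
total = sinc(x) ** 2 + sinc(x - sep) ** 2  # incoherent sum of the two images

midpoint = 2 * sinc(sep / 2) ** 2          # intensity halfway between the peaks
print(f"midpoint / peak = {midpoint / total.max():.2f}")  # ~0.81: a visible dip
```

The ~19% dip between the two humps is what allows curve fitting to report both the number and the locations of the objects at this separation.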
In super-resolution, the conditions of distinguishability and precision can often be addressed separately. For example, in what is perhaps the most widely-used super-resolution technique (PALM/STORM [24][25][26], which is discussed later), the location precision is determined by the centroid position, while the distinguishability is achieved by associating different color or time bins with different objects [24][25][26][31][32][33][34]. Note that in this case, distinguishability need not have anything to do with the diffraction limit. Conversely, other super-resolution techniques (e.g., STED/GSD [20,21,23], also discussed later), simultaneously achieve both distinguishability and sub-diffraction precision.
Noise is a crucial concept in super-resolution. Without noise, it is possible to devise super-resolution schemes which appear to have no resolution limit. For example, the separation of two point sources could be exactly determined using curve fitting if noise were absent. Therefore, although superresolution schemes are often described without reference to noise, in an attempt to simplify the conceptual analysis, their ultimate resolution limit must include a noise analysis. To see this, consider a popular definition of resolution as being the maximum spatial frequency in the image [35,36]. In figure 2 we illustrate how this definition is lacking in the presence of noise [37]. Here figure 2(a) shows two different spatial frequencies whose relative amplitudes are adjusted such that they have the same slope at the origin. The higher spatial frequency is better at distinguishing nearby objects, which satisfies the first requirement of super-resolution as stated above. However, the second requirement which relates to location precision does not necessarily have a clear relation to spatial frequency. This is illustrated in figure 2(b) where the same noise amplitude is added to both spatial frequencies. In general, the precision to which the centroid of a noisy image spot can be determined is given by the noise amplitude divided by the steepest slope of the curve. For simplicity, figure 2(b) compares regions of steepest slope in two sine waves. Since the slopes and noise amplitudes are the same for both spatial frequencies, curve fitting will give the same location precision (or resolution) for both. Now since distinguishability need not have anything to do with position resolution, for example using different colors, then the resolution is independent of spatial frequency in this example. Thus the spatial frequency definition of resolution is incomplete without a noise analysis.
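The claim that equal slopes plus equal noise give equal precision can be made concrete with a two-line estimate (a sketch with made-up amplitudes, mirroring the idea behind figure 2 rather than reproducing it):

```python
noise = 0.05  # same additive noise amplitude for both curves

# (frequency, amplitude) pairs chosen so amp * freq, the slope of
# amp * sin(freq * x) at its zero-crossing, is identical for both
for freq, amp in [(1.0, 1.0), (5.0, 0.2)]:
    slope = amp * freq
    # localization precision ~ noise / steepest slope, as argued in the text
    print(f"freq={freq}: precision ~ {noise / slope:.3f}")
```

Both frequencies yield the same precision, illustrating why spatial frequency alone is an incomplete definition of resolution.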
Another example illustrating the tradeoff between noise, spatial frequency, and resolution is shown in figure 3. By blocking low spatial-frequency components in a conventional lens-based imaging system, the spatial frequency is increased but the localization precision is significantly degraded because of the resulting lower light-levels. This leads to a worse location precision than when the lower spatial frequencies are also allowed to pass. Hence the resolution increases even as the average spatial-frequency decreases.
Finally, it should be noted that the concept of image contrast is often considered to be separate from resolution, placing its own limits on what information can be extracted from an image. However, the two are actually closely related, since a low-contrast image with a noise-free background could easily be converted to a high-contrast image by subtracting off the background. The disadvantage of a low-contrast image is that its signal-to-noise ratio is determined by the noise on the background, which is often larger than the noise on the interesting parts of the image.

[Figure 1 caption] Diffraction-limited resolution of two objects in an imaging system that gives a sinc (= sin(x)/x) function (dashed line) response for a point object. The solid line is the total intensity as a function of position in the image plane. A two-hump pattern is seen in the total intensity curve, which indicates both the presence of two objects and their separation. In this example, one object is at the node of the sinc function centered on the other object (dashed curve).

[Figure 2 caption, continued] One has a high frequency and a small amplitude, while the other has a low frequency and a large amplitude, such that they have the same gradient at the origin. (b) When a constant noise is added and curve fitting is used to compute the location of the zero-crossing, the precision is the same for both spatial frequencies since it is given by the noise divided by the gradient.
Incoherent super-resolution techniques
In the photo-activated localization microscopy and stochastic optical reconstruction microscopy (PALM and STORM, respectively) super-resolution schemes, location precision is determined by centroid-based or curve-fitting techniques. While centroid techniques have been around for a long time [38], the distinguishability needed to resolve objects separated by less than the diffraction limit has historically posed a major hurdle. Initially, this problem was addressed by attaching a different colored fluorophore to each object [39]. However, due to the relatively broad spectrum of optical emitters at room temperature, only a handful of resolvable colors are possible. To overcome this limitation, in PALM/STORM, time is used instead of frequency (color) [24-26,31-34] for providing distinguishability, as illustrated in figure 4(a). Briefly, fluorophores with limited stability are first activated, and then illuminated until they bleach. The diffraction-limited image spot produced by each fluorophore (large spots in figure 4(a)) is then used to calculate a centroid (small spots in figure 4(a)). By sensitizing only a few of these bleachable fluorophores at any one time, it is ensured that no more than one emitter contributes to each diffraction-limited image spot. When these fluorophores have bleached and their corresponding centroids have been computed, another set is sensitized, and the process is repeated until the full super-resolution image is constructed.
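A small Monte Carlo sketch (my own illustration, assuming a Gaussian point-spread function rather than the sinc used in the figures) shows the centroid precision improving as the square root of the photon count:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma = 1.0  # diffraction-limited spot width (arbitrary units)

for n_photons in (10, 100, 1000):
    # each detected photon is one position draw from the emitter's image spot
    photons = rng.normal(0.0, sigma, size=(5000, n_photons))
    centroid_std = photons.mean(axis=1).std()
    print(f"N={n_photons:4d}: centroid std = {centroid_std:.3f} "
          f"(sigma/sqrt(N) = {sigma / np.sqrt(n_photons):.3f})")
```

The measured spread of the centroid tracks sigma/sqrt(N), which is the origin of the SNR argument made in the resolution discussion below.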
Note that the centroid computation method can sometimes diverge for certain diffraction limited imaging functions. However, this is not a problem in practice as computational work-arounds exist, and in any case a region of interest is often defined to be within a few diffraction widths to avoid interference from nearby emitters.
Another major incoherent super-resolution technique is stimulated emission depletion (STED) [20]. To see how STED works, first consider figure 4(b). Here a fluorophore is excited with a laser beam having a donut shape (i.e., a node in the center). Treating the fluorophore as a two-level molecule, strong laser excitation produces a competition between fluorescence and stimulated emission which causes saturation of the fluorescence at the high-intensity regions of the laser spot, as illustrated in figure 4(b). This saturation causes steepening of the fluorescence intensity gradient near the center of the donut beam. Alternatively, saturation can be viewed as producing a higher effective spatial frequency [35]. For the saturated donut beam, both interpretations give the same answer: that the location precision increases with the square root of the peak laser intensity, where the square root is a consequence of the quadratic intensity distribution near the node of the donut beam (i.e., the effective node size is determined by the position at which the laser pumping rate equals the incoherent decay rate).
Although a saturated donut beam is capable of super-resolution, it is mainly restricted to relatively simple objects with only a few fluorophores within each diffraction-limited spot. This is because the fluorophores outside of the node region still give a fluorescence signal that contributes to noise. There is also ambiguity from the steep fluorescence gradient at the outer edge of the donut beam.

[Figure 3 caption] Role of spatial frequencies in the diffraction limit. When all but the edges of the lens are masked off (top row), the imaging system has a higher spatial frequency, and therefore higher resolution in the absence of noise (middle column). However, when photon noise is included (right column), adding lower spatial frequencies (middle and bottom rows) is found to greatly improve the localization precision, while having almost no effect on distinguishability.

These limitations are overcome by
STED as illustrated in figure 4(c). Briefly, the donut beam is only used to de-excite via stimulated emission, not to excite the molecule [20]. This is possible because the phonon sidebands of most dye molecules allow absorption and stimulated emission at very different wavelengths (or colors). For excitation (or absorption) of the fluorophore, a uniform (Gaussian) probe beam with a moderate (non-saturating) intensity is used instead of a donut beam. For de-excitation a donut beam is still used, and as its intensity increases it dominates over both the excitation rate of the probe and the fluorescence emission rate. The result is that fluorescence is quenched everywhere except in the region near the donut-beam node. The resulting fluorescence lineshape becomes narrow, with no background. This is illustrated in figure 4(c). Note that the position precision is the same as for the single donut-beam excitation of figure 4(b), but now the elimination of background gives the distinguishability needed to satisfy both requirements for super-resolution. Although the laser intensity requirements for STED can be severe, a variant of STED, ground state depletion (GSD), uses stimulated emission into a metastable (non-fluorescing) state to greatly reduce the donut beam intensity needed [23].
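A toy steady-state model (my own sketch: a quadratic donut intensity near the node and a simple rate-equation suppression factor 1/(1 + W/gamma)) reproduces the square-root narrowing of the un-depleted region with increasing donut intensity:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)   # position across the donut node (scale a = 1)

for peak in (1.0, 10.0, 100.0):      # peak de-excitation rate / decay rate
    depletion = peak * x**2          # quadratic donut profile near the node
    fluor = 1.0 / (1.0 + depletion)  # fraction of fluorescence that survives
    fwhm = np.ptp(x[fluor >= 0.5])   # width of the surviving bright region
    print(f"W/gamma = {peak:5.0f}: FWHM = {fwhm:.3f} (expect 2/sqrt(W/gamma))")
```

Raising the peak depletion rate a hundredfold narrows the fluorescent spot tenfold, which is the square-root scaling derived in the next subsection.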
Resolution of incoherent techniques PALM/STORM and STED/GSD
The resolution of the centroid techniques PALM/STORM is given by the location precision, since distinguishability is achieved by time separation. As stated above, the precision is the noise divided by the intensity gradient. The intensity gradient is bounded by the maximum intensity (of the point source image) divided by its (diffraction-limited) width. Thus, the precision (and resolution) is improved by increasing the signal-to-noise ratio (SNR). It can be shown that the maximum SNR is given by the square root of the total number of photons $N$ in the image spot. The number of photons in turn depends on the pumping rate $W$ (absorption and re-emission rate) divided by the bleaching frequency $\gamma_{\mathrm{bleach}}$ (inverse bleaching time). Therefore, the resolution improvement $R$ over the conventional diffraction limit is given by $R = \sqrt{N} = \sqrt{W/\gamma_{\mathrm{bleach}}}$ [3]. For STED/GSD, the resolution is determined by the effective size of the saturated node in the donut beam. This is in turn determined by the position at which the de-excitation pumping rate $W$ begins to exceed the natural decay rate $\Gamma$ for the excited state in STED, or the metastable decay rate $\gamma_{\mathrm{meta}}$ in GSD (i.e., the pumping rate is 'thresholded' by the relevant decay rate). For a donut beam, as mentioned above, the intensity increases quadratically away from the node, so a straightforward calculation gives a resolution improvement $R$ over the diffraction limit which goes as the square root of the maximum pumping rate divided by the decay rate, $R = \sqrt{W/\gamma_{\mathrm{meta}}}$ for GSD [3].
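Plugging in round, purely hypothetical rates makes the scaling tangible (these numbers are mine, not the paper's):

```python
import math

W = 1.0e6             # pumping (absorption/re-emission) rate, s^-1 (hypothetical)
gamma_bleach = 1.0e2  # inverse bleaching time, s^-1 (hypothetical)
gamma_meta = 1.0e3    # metastable decay rate, s^-1 (hypothetical)

R_palm = math.sqrt(W / gamma_bleach)  # R = sqrt(N) = sqrt(W / gamma_bleach)
R_gsd = math.sqrt(W / gamma_meta)     # R = sqrt(W / gamma_meta)
print(f"R (PALM/STORM) ~ {R_palm:.0f}, R (GSD) ~ {R_gsd:.0f}")
```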
Since PALM/STORM and STED/GSD have similar formulas for resolution improvement, it is of interest to see if there is an example that could relate the width of the saturated hole in the donut beam (see figure 4(b)) to the total number of photons collected. This is shown in figure 5(a). Here the light grey trace shows a donut-beam image with Poissonian intensity noise. By greatly magnifying this (black trace), the donut-beam node appears to become narrow, producing an effectively higher spatial frequency, as in figure 4(b). The question is how to set the threshold, since no saturation nonlinearity has been assumed in the emitter. Examining the dark trace of figure 5(a), it is seen that the nonlinearity of the photon detection process itself gives a discrete number of thresholds; namely, there must be an integer number of photodetection events per pixel. When the threshold is set at the single-photon level, the approximate width of the saturated node in the donut beam matches the position precision given by centroid-based curve fitting. Hence even in PALM/STORM, where resolution enhancement might seem like it requires a nonlinearity, closer examination shows that the photo-detection process itself provides the nonlinearity.

[Figure 4 caption, fragment] This produces higher spatial frequencies. This is shown for peak laser pumping rates of 1, 10, and 100. (c) In STED the excitation laser is replaced with a weak Gaussian-shaped probe beam. The de-excitation laser is also a strong donut beam. The result is background-free super-resolution. This is shown for peak donut beam intensities of 1, 10, and 100.
Transition from incoherent to coherent super resolution techniques
To make the connection between coherent and incoherent super-resolution, recall that STED/GSD made use of a saturation nonlinearity to improve resolution. A driven two-level atom (or molecule) can also be saturated by strong excitation. This is illustrated in figure 5(b) for a damped two-level atom, where it is shown that donut-beam excitation can produce narrow features suitable for super-resolution. Although the damped two-level system can give super-resolution, the real advantage of coherent systems appears when the damping is small. This is illustrated in figure 6. Since optical transitions are always heavily damped at room temperature, the example in figure 6 uses a radio-frequency (RF) spin transition as the two-level system. The donut laser beam is replaced by a donut-like (in 1D) RF excitation, produced by applying RF to a pair of anti-Helmholtz coils. A strong resonant RF field drives the spins from ground state to excited state and back again periodically in time by a process known as Rabi population oscillation [40]. If the interaction time is fixed and the resonant RF field amplitude varies in space, the excited-state population will depend on position. For example, near the coils where the field is strong many Rabi oscillations will have taken place, but at the field node none will take place (see figure 6) [16]. The result is that the excited-state population versus position can have a high effective spatial frequency that in principle can give super-resolution. In practice, however, a single sinusoidal image function is not useful for imaging, and so something more is needed; a minimal numerical sketch of this position-dependent oscillation is given below.
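The sketch below is my own illustration (arbitrary units, a linear field profile between the coils assumed) of how a fixed interaction time plus a spatially varying Rabi frequency produces a rapidly oscillating excited-state population:

```python
import numpy as np

mu = 1.0                       # magnetic moment / hbar, arbitrary units
t = 20.0                       # fixed interaction time
x = np.linspace(-1.0, 1.0, 9)  # positions between the coils

B = x                          # anti-Helmholtz-like RF amplitude: node at x = 0
omega = mu * B                 # local Rabi frequency, Omega = mu * B
p_excited = np.sin(omega * t / 2.0) ** 2  # two-level Rabi oscillation after t

for xi, pi in zip(x, p_excited):
    print(f"x = {xi:+.2f}: P_excited = {pi:.2f}")
```

At the node the population stays in the ground state, while near the coils it has cycled many times, giving the high effective spatial frequency described above.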
To make the coherent two-level system useful for imaging, an approach analogous to STED/GSD is used, wherein excitation and de-excitation are done with fields of different frequencies; namely a strong donut-like field is applied at DC, and a weak RF field is used to excite the two-level atoms. This is illustrated in figure 7. In contrast to STED, the donut field does not de-excite the atoms, but rather tunes the atoms out of resonance with the RF field to prevent excitation. At the node of the donut field, the atoms can be excited by a resonant RF field giving a STED-like image, as shown by the object at the node in figure 7. The big advantage of a coherent system is that this position selectivity is not limited to only the node, but can be located anywhere between the coils by simply shifting the RF frequency, as illustrated by the second object (on the right) in figure 7. Note that this RF selectivity gives distinguishability in addition to position precision, and that these improve with increasing DC field gradient. This imaging scheme is known as magnetic resonance imaging (MRI) which is a special case of gradient field imaging (GFI) [36,41,42].
Resolution of coherent super-resolution schemes
For the Rabi gradient scheme of figure 6, the resolution is given in principle by the spatial frequency, which is determined by the coil separation divided by the maximum number of Rabi population oscillations near the coils, $N_R$. This in turn is determined by the product of the interaction time $t$ and the Rabi frequency $\Omega = \mu B$, where $\mu$ is the magnetic moment and $B$ is the RF magnetic field strength. The interaction time is ultimately limited by the inverse spin decay rate, $1/\gamma_s$. This gives a super-resolution improvement factor of $R = N_R = \Omega/\gamma_s$. Note that at RF frequencies the coil separation $s$ is analogous to the diffraction limit, as will be discussed later, because the RF wavelength is very long.
For MRI (figure 7), the resolution is determined by the spin linewidth $\gamma_s$ divided by the field gradient $\Delta/s$, where $\Delta$ is the maximum RF detuning that can be produced by the DC fields near the coil. Again, taking $s$ as the effective diffraction limit gives a resolution improvement factor of $R = \Delta/\gamma_s$. To relate $\Omega$ and $\Delta$, consider two spin levels which are initially degenerate. Applying a DC magnetic field along the quantization axis gives an energy-level splitting of $\Delta = \mu B$, where here $B$ is the DC magnetic field strength. This is known as the Zeeman shift. If the same magnetic field is applied perpendicular to the quantization axis, population oscillations are produced between the two spin states at the Larmor frequency $\Omega_L = \mu B$. Since a DC field can be considered resonant with the degenerate spin levels, this Larmor oscillation is analogous to Rabi oscillation. Thus, the resolution improvements of the Rabi gradient and MRI schemes are essentially the same.
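With deliberately round, invented numbers (chosen for readability, not taken from any real scanner), the enhancement factor $R = \Delta/\gamma_s$ works out as:

```python
gamma_s = 10.0     # spin linewidth, Hz (invented for illustration)
delta_max = 1.0e4  # maximum Zeeman detuning near the coil, Hz (invented)
s = 0.1            # coil separation, m: the effective "diffraction limit"

R = delta_max / gamma_s  # resolution improvement, R = Delta / gamma_s
print(f"R = {R:.0f} -> resolvable feature ~ s / R = {s / R * 1e6:.0f} um")
```

The narrow spin linewidth relative to the gradient-induced detuning is what gives MRI its large enhancement factor.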
Near-field super-resolution techniques
The classic resolution limit was obtained for propagating plane waves in the far-field limit. Near-field techniques can of course achieve a higher resolution. In the absence of any nonlinearities, the resolution limit of these techniques is determined from Maxwell's equations [45]. To illustrate this, consider the example of a scanning near-field illumination source that excites two objects, as shown in figure 8. As seen, the two objects can be resolved when their separation is larger than the size of the illumination source, or its distance away, whichever is larger. In the far field, the source size is replaced by the propagation wavelength in Maxwell's equations. Hence, the Maxwell resolution limit encompasses not only the far-field diffraction limit, but also scanning-probe techniques like near-field scanning optical microscopy [46] and variants. It should be noted that near-field imaging is often called super-resolution because it is not limited by the wavelength. However, true super-resolution goes beyond the Maxwell limit by making use of nonlinearities [47]. Here it should be noted that Maxwell's equations relate the maximum field gradient to the maximum field amplitude divided by the source size (or distance), or wavelength in the far field. Thus the definition of position precision as the noise divided by the gradient appears the most convenient metric for comparing different super-resolution schemes (at least for the position precision criterion) [3].

[Figure 6 caption, fragment] Here the donut laser beam is replaced by a field that crosses zero, thereby giving a radio-frequency (RF) intensity node analogous to a 1D cross-section of a donut laser beam. For strong RF drive, the resulting Rabi population oscillations produce a high spatial frequency in the excited-state population. The damped two-level atom response is superimposed.

[Figure 7 caption] In magnetic resonance imaging, a DC magnetic field gradient allows position-selective excitation of the two-level spin system by a resonant RF field. In this setup, anti-Helmholtz coils create the field gradient, which is depicted by the dashed curve. The horizontal coil (top) is used for exciting spins in the objects as well as for measuring their magnetic response [43,44]. The response of the two objects shown is plotted as solid green and blue curves, which are the images.

[Figure 8 caption] An optical field generated by a single near-field source is scanned close to two objects. The resulting scattered light (solid curve) can resolve the objects if their separation is larger than the source size R0 or source distance R, whichever is larger. For reference, the image of a single object is shown by the dashed curve.
Dielectrics and refractive index
In dielectric materials, optical illumination induces dipoles in the atoms comprising the material. These induced dipoles can be envisioned as near-field sources, and so should enable resolution beyond the free-space diffraction limit. Indeed this is the case. This continuous distribution of sources effectively modifies the speed of light in the medium, and with it, the maximum slope of the electric field. This is quantified by defining an index of refraction n, which is the factor by which resolution is enhanced. Although this is in fact super-resolution, it is usually considered to be encompassed by the diffraction limit.
Negative index materials are often studied for superresolution applications [48][49][50][51][52]. Most negative-index superresolution demonstrations are done with metals because they have negative electric permittivity. This is because a metal has free electrons and acts like it has a resonance at DC, with a width given by the collision frequency. For objects embedded in a metal-based negative index material, the resolution limit would be expected to be improved by the quality factor of the DC resonance, which is approximately the optical frequency divided by the collision frequency. However, the large absorption in metals would tend to limit this super-resolution to small distances from the objects, because the field gradient decays as the field decays.
To achieve full super-resolution performance, it is preferred that negative-index materials have both $\mu$ and $\epsilon$ negative. This happens above some resonance frequency in the material [53,54], where both electric and magnetic resonances are needed. Such materials are generally rare in nature. Notable exceptions would be materials having optical transitions with both magnetic-dipole and electric-dipole components. In practice, custom-engineered metamaterials are normally used for this purpose [55]. However, a review of this field is beyond the scope of the current paper.
Nonlinearities in the propagation medium (laser filamentation)
In addition to object and detector nonlinearities, it is possible to perform super-resolution using nonlinearities of the intervening medium. The most common example of this is to use the self-focusing nonlinearity to create laser filaments near the object being illuminated [56,57]. In this case, the resolution limit is the size of the filament, which is determined when diffraction and self-focusing are balanced (see figure 9). Note this can also be viewed as near-field illumination where the source is produced by a nonlinearity instead of by a fixed-size object as in figure 8. The advantage of this technique over those requiring nonlinear response of the object is that here the illumination power reaching the target could in principle be lower, and therefore could be less damaging to the object. Of course, laser filamentation in air generally requires an intense laser, but in some media the threshold might be greatly reduced. For example, by placing nanoparticles inside the medium [58,59] optical forces could modify the refractive index in the direction needed for self-focusing, even at very low excitation powers.
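For orientation, the standard Gaussian-beam estimate of the power at which self-focusing balances diffraction (a textbook result quoted here for context, not derived in this paper) is

```latex
P_{\mathrm{cr}} \approx \frac{3.77\,\lambda^{2}}{8\pi\, n_{0}\, n_{2}},
```

where $\lambda$ is the vacuum wavelength, $n_0$ the linear refractive index, and $n_2$ the nonlinear (Kerr) index; a filament can form once the beam power exceeds $P_{\mathrm{cr}}$.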
Quantum and quantum-inspired super-resolution
Quantum imaging has also been proposed for super-resolution, though mostly in the context of lithography. It makes use of the fact that multi-photon Fock states interfere to form effective standing waves that have n times as many nodes as a conventional optical standing wave, where n is the number of photons in the Fock state [60]. For its implementation, materials are needed that absorb only the Fock state of interest (i.e., will not absorb single photons, but only n photons). It happens that such materials do exist [61-64], and surprisingly, even when these materials are illuminated by intense classical light, they achieve the same resolution as when illuminated by pure Fock states [65,66]. Here it should be mentioned that a number of quantum-inspired super-resolution techniques have been proposed recently, for example those that make use of quantum interference as in Raman dark states (CPT/EIT) [67-72]. Although these schemes do make use of the quantum mechanical properties of the object, their super-resolution enhancement factors do not beat classical limits.
Quantum-enhanced imaging can also take advantage of multi-photon correlations [73-75]. Centroid methods that involve multiple photons have been demonstrated [76]. Finally, quantum illumination [77] is a technique that is capable of achieving exponential improvement ($2^n$, where $n$ is the Fock state number) in the signal-to-noise ratio (SNR) of a weak image in the presence of strong background light [78]. Since noise and resolution are closely related, this may in principle be adapted in the future to give a dramatic resolution enhancement.

[Figure 9 caption] Self-focusing of a laser beam by a nonlinear medium. The laser is self-focused by the intervening medium, sharpening the breadth of the exciting field. In the figure, two objects are placed next to one another. The self-focusing of the laser enables interaction with only one of the objects. Here, the resolution limit is determined by the width of the self-focused beam.
Conclusion
In summary, a number of super-resolution techniques were compared. In the incoherent cases, the resolution enhancement was found to be given by the ratio of a pumping rate $W$ and the linewidth $\gamma$ of some decaying (preferably metastable) level, specifically $R = \sqrt{W/\gamma}$. For the coherent schemes, the enhancement was given by the ratio of the Rabi frequency $\Omega$ to the transition linewidth $\gamma$, $R = \Omega/\gamma$. To relate these two performance factors, note that for a strongly damped two-level atom, the pumping rate is given by $W = \Omega^2/\Gamma^*$, where $\Gamma^*$ is the decoherence or damping rate. Substituting this into the incoherent limit gives $R = \Omega/\sqrt{\Gamma^*\gamma}$, which bears a much closer resemblance to the coherent case. Perhaps not surprisingly, coherent gradient-field techniques like MRI have the best super-resolution performance, not only because spin transitions have narrow linewidths compared to the shifts induced by typical gradient fields, but also because their coherence allows super-resolution at all locations in the gradient field, not just near a node. At room temperature, optical coherence times are generally too short for coherent super-resolution, which is why spin transitions must be used for MRI. Finally, it is noted that MRI is an example of a super-resolution scheme that makes use of both near-field enhancement and nonlinearity. For this reason the conventional picture of MRI does not invoke the concept of wavelength, but rather gives the resolution in terms of linewidth divided by a field gradient. For this, and other previously mentioned reasons, we have avoided concepts like wavelength and spatial frequency as a unifying concept for super-resolution, and instead stress the use of field (or intensity) gradients.
We gratefully acknowledge support of the NIH SBIR #HHSN26820150010C, National Science Foundation Grant EEC-0540832 (MIRTHE ERC) and the Robert A Welch Foundation (Award A-1261). JSB is supported by the Herman F Heep and Minnie Belle Heep Texas A&M University Endowed Fund held/administered by the Texas A&M Foundation.
Does the Level of Training Interfere with the Sustainability of Static and Dynamic Strength in Paralympic Powerlifting Athletes?
Background: Paralympic powerlifting (PP) presents adaptations that training tends to provide, mainly concerning mechanical variables. Objective: Our aim was to analyze mechanical, dynamic, and static indicators, at different intensities, on the performance of Paralympic powerlifting athletes. Methods: 23 PP athletes, 11 national level (NL) and 12 regional level (RL), performed dynamic and static tests over a comprehensive range of loads; the study thus evaluated the influence of training level on strength performance. The study was carried out over four weeks. The first week was used to familiarize athletes with the one-repetition-maximum (1RM) test (day 1) and, after a 72-h rest, with the dynamic and static tests (day 2). In week 2, the 1RM tests were performed (day 1 and 72 h later), and the static tests were performed with a distance of 15 cm from the bar to the chest, with tests of maximum isometric force, time to maximum isometric force, rate of force development (RFD), impulse, variability, and fatigue index (FI) taking place on day 2. In weeks three and four, dynamic tests were performed, including mean propulsive velocity, maximum velocity, power, and predicted one-repetition maximum. Results: Differences were found, with better results for RL than NL in MPV (45%, 55%, and 75% 1RM) and in VMax (50%, 55%, 75%, and 95% 1RM). In power, the NL had better results (40%, 45%, 50%, 60%, and 95% 1RM). Conclusion: RL athletes tend to present better results with regard to velocity; however, in power, NL athletes tend to present better performances.
Introduction
One of the questions that has been asked in strength training is related to the quantification and monitoring of the load, aiming at better performance. The most used variables in this sense have been type and order of exercise, intensity or load, number of repetitions, and series and rest between sets [1]. The manipulation of these variables has usually been used as a training control [2,3]. Thus, the training load has been determined from the relative load (% of the maximum load of one repetition, 1RM), being the main factor of control and determination of the intensity and fatigue relative to the strength training [4,5]. Although these variables are used to control training, their use can induce excess fatigue and mechanical and metabolic tension [6][7][8][9][10].
Thus, the evaluation of an athlete's training status and initial condition is the crucial point for the correct elaboration of a training program to be applied in the different phases of sports preparation [11]. Under these initial conditions, some training variables are manipulated to prescribe and control resistance training programs, such as sets, rest intervals, position, and intensity [11-13]. On the other hand, there are possible disparities between different training methods in determining mechanical outputs in strength-power exercises. Therefore, the velocity-based approach to training becomes a practical and effective alternative for coaches [11,14,15]. This statement is supported by current studies emphasizing that training control based on the percentage of one repetition maximum offers poor control and could lead to under- or over-dimensioned planning. In this regard, training control through velocity would be more appropriate [8,11,14,15].
In this regard, the bench press (BP) is one of the most studied exercises for measures of power, strength, and speed [14]. BP has been shown to be closely correlated with sporting success, as a multi-joint exercise that mimics various sporting actions [11,15]. This is especially true in Paralympic powerlifting (PP), where the bench press, an adaptation of the conventional powerlifting bench press, is the only exercise used [16]. It is noteworthy that in PP the athletes have their lower limbs extended on the bench, in view of several eligible disabilities [16], although the differences between PP and conventional powerlifting are still not clear [14]. In addition, research on this modality has been growing, and mechanical variables have been increasingly used [13,14,17].
On the other hand, Paralympic powerlifting (PP) has particularities, notably in elite athletes, regarding the adaptations that training tends to provide, especially in relation to mechanical variables [14,18]. As a strength modality, PP training involves several variables, such as strength (the ability to oppose resistance), power (the product of force and velocity), volume, intensity, and bar displacement velocity (distance divided by time), among others [14,19]. Due to this specificity, it is suggested that powerlifting athletes undertake specialized training programs to obtain non-specific adaptations in relation to strength and velocity of movement [20].
In PP there is only one functional classification category; all eligible disabilities are physical and compete together, with division only into body-weight categories [16]. However, research with PP athletes has evaluated the origin of injuries and the functional classification principles of athletes [21-23]. Studies investigating relevant aspects of the force-velocity characteristics of these athletes, and the aspects that tend to influence performance, are not yet clear in the literature [21-23]. Therefore, the aim of this study was to analyze mechanical, dynamic, and static indicators, at different intensities, on the performance of regional- and national-level Paralympic powerlifting athletes. From the above, we raised the following study hypotheses: stronger athletes would generate more velocity with the same load, and strength-velocity-based assessment could be a way of controlling and evaluating performance in Paralympic powerlifting athletes.
Experimental Approach to the Problem
The study was carried out in four weeks. The first week was intended to familiarize with the tests of one maximum repetition (1RM) and with dynamic and static tests. In week two, 1RM tests and static tests were performed, including maximum isometric force (MIF), time to MIF (Time), rate of force development (RFD), impulse, variability and fatigue index (FI). At weeks three and four, dynamic tests were performed, including mean propulsive velocity (MPV), maximum velocity (VMax), power (Power) and prediction of one repetition maximum (PredRM). Figure 1 exemplifies the experimental design of the study.
Sample
The sample consisted of 23 male Paralympic powerlifting (PP) athletes, 11 at the national level (NL) and 12 at the regional level (RL). All of them were competitors and were part of the extension project at the Federal University of Sergipe, Sergipe, Brazil. All were eligible to compete in the sport [16]; NL athletes are ranked among the top ten in their respective categories and have a minimum of 24 months of experience in the sport. In the NL group there were four athletes with spinal cord injury below the eighth thoracic vertebra, two with polio, one with cerebral palsy, and four amputees. The RL group had a maximum of 12 months of training experience in the modality [19]: five subjects with spinal cord injury due to accidents, with injuries below the eighth thoracic vertebra, three with amputation, two with polio, and two with arthrogryposis. The sample characterization is shown in Table 1. The sampling power was calculated a priori using the open-source software G*Power (version 3.0; Berlin, Germany), choosing an "F family statistics (ANOVA)" test and considering a standard α < 0.05, a power (1 − β) of 0.80, and the effect size of 1.33 found for the rate of force development (RFD) in Paralympic powerlifting athletes in the study by Sampaio et al. [12]. Thus, it was possible to estimate a sample power of 0.80 (F (2.0): 4.73) for a minimum sample of eight subjects per group, suggesting that the sample size of the present study has the statistical strength to respond to the research approach.
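For readers who prefer code to G*Power, the same a priori calculation can be approximated with statsmodels (a sketch using the values quoted above; the resulting N may differ slightly from G*Power's rounding and internal conventions):

```python
# Rough re-check of the a priori ANOVA power analysis with statsmodels,
# using the values quoted in the text (effect size f = 1.33, alpha = 0.05,
# power = 0.80, two groups).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=1.33, alpha=0.05,
                                        power=0.80, k_groups=2)
print(f"required total N ~ {n_total:.1f} (i.e., ~{n_total / 2:.0f} per group)")
```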
The athletes participated in the study on a voluntary basis and signed a free and informed consent form, in accordance with Resolution 466/2012 of the National Research Ethics Commission (CONEP) of the National Health Council, following the ethical principles expressed in the Declaration of Helsinki (1964, last reformulated in 2013).
Instruments
Body mass was determined using a Michetti wheelchair weighing scale (Michetti, São Paulo, SP, Brazil), which allows athletes to be weighed while seated, with a maximum supported capacity of 300 kg and dimensions of 1.50 × 1.50 m. In the evaluation, an official adapted bench press (Eleiko Sport AB, Halmstad, Sweden) was used, according to the norms of the International Paralympic Committee (IPC, 2020). The bar, made by Eleiko, was 220 cm long (Eleiko Sport AB, Halmstad, Sweden) and weighed 20 kg [13,16].
Load Determination
Data were collected at a time away from important competitions, and the athletes' training was related to their experience (RL and NL), as mentioned above. Athletes completed a baseline measurement session to assess 1RM in the bench press using an official bench and IPC Olympic bar (Eleiko Sport AB, Halmstad, Sweden) approved by the International Paralympic Committee [16]. In the 1RM test, each subject started the trials with a weight they believed they could lift only once using maximum effort. Weight increments were then added until the maximum load that could be lifted once was reached. If the participant could not perform a single repetition, 2.4% to 2.5% was subtracted from the load used in the test. The subjects rested for 3 to 5 min between attempts [25,26].
The test was preceded by a warm-up set (10 to 12 repetitions) with approximately 50% of the load to be used in the first attempt of the 1RM test. Testing started two minutes after the warm-up. The load recorded as 1RM was the load at which the individual could complete only one repetition. The form and adapted technique used in each attempt were standardized and continuously monitored to ensure data quality. The 1RM determination test was performed in week one.
Warm-Up
The warm-up for the upper limbs used three exercises (shoulder abduction with dumbbells, elbow extension on the pulley, and shoulder rotation with dumbbells), with three sets of 10 to 20 repetitions [27,28]. Soon after, a specific warm-up was performed on the bench press with 30% of the 1RM load: 10 slow repetitions (3:1 s, eccentric:concentric) and 10 fast repetitions (1:1 s, eccentric:concentric). This was followed by five sets of five maximum repetitions on the bench press (5 sets at 85-90% 1RM), using a fixed load. During the test, athletes received verbal encouragement in order to achieve maximum performance [27,28]. To perform the bench press, an official straight bench (Eleiko Sport AB, Halmstad, Sweden), approved by the International Paralympic Committee [16], was used.
Dynamic Evaluation
The athletes were evaluated during the competitive phase of the season and were familiar with the testing procedures due to their constant training and testing routines. Athletes were instructed to perform the movement as fast as possible. An official Paralympic powerlifting bench (Eleiko Sport AB, Halmstad, Sweden) was used during the measurements, and the 1RM bench press test followed standard procedures reported in other studies [18]. To measure movement velocity, a valid and reliable linear position transducer, the Speed4Lift (Speed4Lift®, Madrid, Spain) [29], was attached to the bar [18,28]. The highest bar values of mean propulsive velocity (average of the propulsive phase only, i.e., positive acceleration above the acceleration of gravity), peak velocity (peak distance/time ratio), power (force × velocity) and predicted one-repetition maximum (MPV, VMax, Power and PredRM, respectively) were used for analysis purposes. The predicted 1RM was determined by the bench press equation provided in the Speed4Lift device (Speed4Lift®, Madrid, Spain) [29].
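As an illustration of how these variables relate to a sampled velocity trace, the sketch below computes MPV, VMax and peak power from bar velocity; the sampling rate and function names are our assumptions, since the Speed4Lift's internal processing is proprietary:

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def bar_metrics(velocity, load_kg, fs=100.0):
    # numerical acceleration of the bar from the sampled velocity
    velocity = np.asarray(velocity, dtype=float)
    accel = np.gradient(velocity, 1.0 / fs)
    propulsive = velocity[accel > -G]        # propulsive phase: acceleration above -g
    mpv = propulsive.mean()                  # mean propulsive velocity (MPV)
    vmax = velocity.max()                    # peak velocity (VMax)
    force = load_kg * (G + accel)            # force applied to the bar
    peak_power = (force * velocity).max()    # power = force x velocity
    return mpv, vmax, peak_power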
Isometric Force Measurements
The static force variables, namely the rate of force development (RFD), maximum isometric force (MIF) (N), fatigue index (FI) (%) and time to MIF (ms), were determined with a Chronojump force sensor (Chronojump, BoscoSystem, Barcelona, Spain) [17], with a capacity of 500 kg, output impedance of 350 ± 3 ohm, insulation resistance greater than 2000 MΩ, input impedance of 365 ± 5 ohm, and a 24-bit analog-to-digital converter sampling at 80 Hz. The equipment was attached to the bench press using Spider HMS Simond carabiners (Simond, Chamonix, France) with a load capacity of 21 kN (Union Internationale des Associations d'Alpinisme, UIAA). A steel chain with a load capacity of 2300 kg was used to fix the force sensor to the bench press. The distance from the force sensor to the center of the joint was used to determine torques and other values [18,28]. Maximum isometric force (MIF) was determined as the maximum force of the upper limbs, with an elbow angle close to 90° maintained and the bar at a distance of 15 cm from the chest. Athletes were instructed to perform a single maximal effort (as fast as possible). The fatigue index (FI) was determined in the same way as the MIF, with the athletes maintaining the maximum contraction for 5.0 s. The FI was calculated by the formula FI = ((final MIF − initial MIF)/final MIF) × 100. The RFD was calculated by the force/time ratio (RFD = Δforce/Δtime) [18,28]. The instruments used in the evaluations are shown in Figure 2.
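A minimal sketch of how these static indicators follow from a sampled force-time trace (array and function names are ours; 'initial' and 'final' MIF follow the formula as written above):

import numpy as np

def isometric_indicators(force, time_s):
    force, time_s = np.asarray(force, float), np.asarray(time_s, float)
    mif = force.max()                               # maximum isometric force (N)
    time_to_mif = time_s[force.argmax()]            # time to MIF
    fi = (force[-1] - force[0]) / force[-1] * 100   # FI = ((final - initial) / final) x 100
    rfd = np.diff(force) / np.diff(time_s)          # RFD = delta force / delta time
    return mif, time_to_mif, fi, rfd.max()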
Statistics
Descriptive statistics were performed using measures of central tendency, mean (X) ± standard deviation (SD) and 95% confidence interval (95% CI). To verify the normality of the variables, the Shapiro-Wilk test was used, considering the sample size. Data for all variables analyzed were homogeneous and normally distributed. To evaluate the strength indicators of the groups across percentages of 1RM, a two-way ANOVA was performed with Bonferroni's post hoc test. Pearson's "r" was used for correlation, with cut-off points interpreted according to [30]. Effect size was checked with partial eta squared (η2p), adopting values of low effect (≤0.05), medium effect (0.05 to 0.25), high effect (0.25 to 0.50) and very high effect (>0.50) for the ANOVA [31,32]. For the t test, effect size (Cohen's d) was considered, adopting values of low effect (≤0.20), medium effect (0.20 to 0.80), high effect (0.80 to 1.20) and very high effect (>1.20) [33,34]. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 22.0 (IBM, North Castle, New York, NY, USA). The significance level adopted was p < 0.05.
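Although SPSS was used, the same two-way ANOVA with partial eta squared can be sketched in Python; the file and column names below are hypothetical placeholders:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# long-format table: one row per athlete x load, with 'velocity',
# 'group' (NL/RL) and 'load' (%1RM) columns (names are ours)
df = pd.read_csv("bench_press_velocity.csv")

model = ols("velocity ~ C(group) * C(load)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

# partial eta squared per effect: SS_effect / (SS_effect + SS_residual)
ss_res = anova.loc["Residual", "sum_sq"]
anova["eta_sq_partial"] = anova["sum_sq"] / (anova["sum_sq"] + ss_res)
print(anova)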
Results
The results found for the average propulsive velocity (m/s) at the regional and national levels, in the percentages from 40 to 65% and 70 to 95% of 1RM, are shown in Figure 3.
The results found for the maximum velocity (m·s−1) at the regional (RL) and national (NL) levels, in the percentages from 40 to 65% and 70 to 95% of 1RM, are shown in Figure 4.
The results found for power (W) at the regional level (RL) and national level (NL), in the percentages from 40 to 65% and 70 to 95% of 1RM, are shown in Figure 5.
The results found for the maximum predicted repetition (kg) at the regional level (RL) and national level (NL), in the percentages from 40 to 65% and 70 to 95% of 1RM, are shown in Figure 6. In Figure 6, differences were found at 75% of 1RM between RL and NL (p = 0.020); the value of F = 25.224 with η2p = 0.759 was a very high effect (IntraClass), and F = 1.606 with η2p = 0.167 a medium effect (InterClass).
Figure 4. Maximum velocity (m·s−1) measured from (A) 40 to 65% of 1RM and (B) 70 to 95% of 1RM in national and regional levels. (A): a: Indicates difference in RL between 60% compared to 50% 1RM (p = 0.030); b: Indicates differences in NL between 40% versus 50% (p = 0.020), 55% (p < 0.001) and 65% 1RM (p = 0.008); c: Indicates differences in NL between 45% compared to 55 and 65% (p < 0.001), and 60% 1RM (p = 0.007); d: Indicates differences in percentage 50% between RL and NL (p = 0.041); e: Indicates differences in percentage 55% between RL and NL (p = 0.049). The value of F = 20.390 with η2p = 0.718 was a very high effect (IntraClass), and F = 1.087 with η2p = 0.120 a medium effect (InterClass). (B): a: Indicates difference in RL between 70% compared to 95% 1RM (p = 0.008); b: Indicates differences in RL between 75% compared to 85% (p = 0.029) and 95% 1RM (p = 0.016); c: Indicates differences in RL between 80% compared to 90% (p = 0.019) and 95% 1RM (p = 0.001); d: Indicates differences in NL between 95% versus 70% (p < 0.001), 75% (p = 0.006) and 80% 1RM (p = 0.041); e: Indicates differences in percentage 75% between RL and NL (p = 0.013); f: Indicates differences in the percentage 95% between RL and NL.
The results found for the dynamic mechanical variables (MPV, VMax, power and 1RM) and the isometric variables (MIF, time, RFD, impulse, variability, FI) of the regional and national level athletes are shown in Table 2. Table 2. Dynamic and isometric strength indicators (mean ± standard deviation, 95% CI) at regional and national levels. Table 3 shows the correlations between the predicted maximum repetition (PredRM) and the static measure (MIF), in relation to one repetition maximum (1RM). Table 3. Correlation between predicted values at different percentages in relation to the absolute load (1RM) in regional and national level athletes in the Paralympic bench press (mean ± standard deviation).
Discussion
This study was designed to analyze dynamic and static mechanical indicators of strength at different intensities in the performance of regional and national level athletes in Paralympic powerlifting. The results reveal that regional level athletes reached higher velocities at all loads. In particular, differences were found for the mean propulsive velocity at 45%, 55% and 75% of 1RM, and for the maximum velocity at 50%, 55%, 75% and 95% of 1RM, in comparison with national level athletes. Statically, when MIF and RFD were evaluated, there was a difference in MIF between the national and regional levels, whereas for the RFD there were no differences between levels. However, when power was evaluated, the national level developed higher power than the regional level at loads of 40%, 45%, 50%, 60% and 95% of 1RM. Although the regional level athletes reached higher maximum and mean propulsive velocities, they presented lower maximum isometric force values than the national level athletes [35]. In this direction, when training with very high loads, the longer the time required to overcome an absolute load, the greater the effort performed [35], a situation similar to that found in our study, where velocity tends to decrease at higher loads. Regarding MPV and VMax, the regional level athletes tended to exert greater effort for the same loads when compared with the national level athletes, in view of the higher velocities presented. In contrast, other studies have found no differences in velocity between national and regional level athletes [36-38].
A point to be analyzed and discussed is why athletes who have higher absolute strength values cannot produce higher velocities than athletes with lower strength. Differences between groups appeared at specific percentages of 1RM (MPV: 45%, 55%, and 75%; VMax: 50%, 55%, 75%, and 95%), pointing to an adaptation of the national level athletes to the loads they train with. In this direction, the mean propulsive velocity can be used as a performance marker and has been shown to be more reliable than static indicators [20]. On the other hand, static indicators of force, such as the RFD, would be an effective form of control [39], since the generation of force in a short period of time is of great importance for maximum force generation, and this variable does not tend to be evaluated correctly by an encoder.
With regard to the RFD, Zemková, Poór and Pecho [40] reported an interesting finding: individuals with higher RFD tend to achieve higher power output with lower loads, while individuals with higher MIF tend to produce higher power, but with higher loads. Although that study was not performed on the bench press, nor with Paralympic athletes, these findings are contrary to ours. In our study, national level athletes had higher MIF than regional level athletes but did not show a higher RFD. The differences in power favoring the NL over the RL were present at loads between 40% and 60% of 1RM; for the other loads, the differences were not significant. This demonstrates that NL athletes, despite being stronger, did not have a higher RFD, that is, they took longer to develop force than RL athletes.
When analyzing power production among athletes, Aidar et al. [13], comparing execution conditions of the adapted bench press, found a significant difference favoring the NL athletes over the RL for the load of 40% of 1RM. Corroborating our findings, Miller et al. [41] identified that the maximum power produced by trained and untrained men occurred at 40% versus 60% of 1RM, respectively. These findings indicate that training with higher loads tends not to be the most suitable for power development. This information is consistent with Figure 4, where our athletes lost performance at loads closer to 1RM. These findings emphasize that lower loads would be more suitable for power development [41].
These differences in power between trained and untrained athletes can be explained by training time and the specificity of strength training, since athletes train with very high loads and do not have power as a factor directly linked to performance. It has been reported that strength training lowers the muscle fiber recruitment threshold and increases the discharge rate of the same motor units during submaximal contractions. The authors suggest that muscle strength gains can be attributed to an increase in excitatory synaptic input or to adaptations in motor neuron properties. Thus, athletes with longer training time would present a higher discharge rate during contractions, which would provide a greater advantage in power production than athletes with shorter training time [42]; this, however, was not observed in our study.
On the other hand, a review indicated that regional level athletes would not need to emphasize specific power training but rather strength training, and that experienced athletes can emphasize power development while maintaining their strength levels [43]. This manifestation of force is influenced by several aspects. The main variables affecting power would be the force-velocity relationship and the length-tension relationship. These variables are directly shaped by morphological factors, which affect the individual's ability to generate force quickly [44]. Thus, these factors would be related to the fiber type composition of the muscle area, the architectural characteristics of the muscle and the properties of the tendon, as well as neural factors, including motor unit recruitment, firing frequency, synchronization and intermuscular coordination [45,46].
The aforementioned changes tend to be promoted by strength training. However, to increase power it is not enough to increase maximum strength [47,48], because some sports require maximum force to be generated in the shortest possible time (milliseconds, or even ≤300 ms), a window that does not allow maximum force to be reached [38]. Power development therefore reflects the relationship between movement velocity and force, two variables that tend to present an inverse, approximately linear relationship [49-51]: if velocity is higher during power generation, force tends to be smaller, and vice versa. This would explain the fact that our NL athletes have greater force and lower velocity while the RL athletes show the opposite, a possible adaptation to training with high loads.
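A toy numeric illustration of this trade-off (arbitrary units, our own construction): with force falling linearly as velocity rises, the product P = F × v peaks at an intermediate velocity rather than at maximal force:

import numpy as np

v = np.linspace(0.0, 2.0, 201)    # bar velocity
f = 1000.0 * (1.0 - v / 2.0)      # linear inverse force-velocity relationship
p = f * v                         # power = force x velocity
print(f"power peaks at v = {v[p.argmax()]:.2f}, not at v = 0 (maximal force)")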
Among all the specific aspects of power development, long-term development is linked to the integration of various strength training techniques [52-54]. This would probably explain why more experienced athletes develop higher levels of power than less experienced athletes, owing to the adaptations promoted by strength training itself, as well as by the particular form of training used.
Regarding fatigue, there were no differences between national and regional athletes (p = 0.180). These findings corroborate another study that evaluated the manifestation of strength across different types of disability and found no differences in fatigue between them [18]. In our study, in absolute terms, athletes at the national level showed less fatigue than athletes at the regional level. Our findings indicated low fatigue (between 9% and 12%) according to training level. Fatigue has been the subject of many studies on strength gain; corroborating this, one study demonstrated that there were no differences between high- and low-fatigue training in terms of isometric strength, so fatigue does not seem to be a critical stimulus for strength gain [7].
Furthermore, our study has some limitations, despite the relevance of the results found. The sample consisted of national and regional athletes with different disabilities eligible for the sport; in this sense, the findings apply to Paralympic powerlifting practitioners in general and do not address the specifics of each eligible physical disability. Even so, the findings are relevant to coaches and researchers for a greater understanding of strength training and of the relationship of strength to velocity and other strength indicators in Paralympic powerlifting athletes, and their effects on sport performance. Another limitation is that the evaluation was performed acutely, that is, in a single training session, so the results could differ when evaluating weeks or even longer periods of training.
Conclusions
Paralympic powerlifting (PP) is characterized by training with high loads, greater than 80% of 1RM. Our findings indicate that barbell velocity in PP was higher in RL athletes than in NL athletes. Thus, due to the characteristics of the sport, specific training adapted to its rules tends to provide more effective control of the bar, which ends up promoting a lower speed of execution in national level athletes. On the other hand, power and predicted 1RM were higher in NL athletes than in RL athletes. In this sense, the reduced velocity of national level athletes indicates that strength would have greater importance in their power generation, mainly at higher loads, demonstrating the specific adaptation provided by maximum strength training.
In relation to the results found, and given that stronger athletes tend to generate more force against the same resistance, we would advise coaches that training with lower loads, with an emphasis on movement velocity, could provide improvements in athletes' strength, even for national level athletes. Thus, strength-velocity-based assessment appears to be a sustainable method of monitoring and evaluating performance in athletes, including Paralympic powerlifting athletes.
On the other hand, other studies should evaluate other disabilities and their impact on the velocity and strength of Paralympic athletes, since the bases of balance and movement execution in the adapted bench press tend to differ with each type of physical disability.
Data Availability Statement:
The data that support this study can be obtained from the address: www.ufs.br/Department of Physical Education, accessed on 7 January 2022.
Enlinia Aldrich, 1933 of Mitaraka, French Guiana (Diptera: Dolichopodidae)
ABSTRACT The genus Enlinia Aldrich, 1933 is recorded from French Guiana for the first time and six new species are described: E. loboptera n. sp., E. bova n. sp., E. colossicornis n. sp., E. mitarakensis n. sp., E. touroulti n. sp., and E. dalensi n. sp. A seventh unnamed species belonging to the E. armata Robinson, 1969 species group, and represented by a single female specimen, is also reported. These species were collected as part of the 2015 “Our Planet Revisited” survey in the Mitaraka Mountain area in far southwestern French Guiana. A key to the seven species known from French Guiana is provided.
INTRODUCTION
Enlinia Aldrich, 1933 is a diverse genus of tiny dolichopodid flies around 1 mm in body length (a member of the so-called micro-Dolichopodidae). Species of Enlinia can be recognized by the combination of small body size, wing veins that are nearly straight and evenly diverging from the wing base (venation modified in some males), the presence of acrostichal setae, and a face without setae. The genus is restricted to the New World and presently contains about 80 species (Yang et al. 2006), with many species awaiting description and discovery. Most representatives have been described from Mexico (Robinson 1969), but species have been recorded from the United States (seven species) and Canada (two species) in the north, to Chile in the south (one species) (Van Duzee 1930). The genus is also widely distributed in the Caribbean (Cuba, Dominica, Grenada, Jamaica, Saint Vincent) (Robinson 1975). Males of Enlinia are often highly ornate, with most body parts subject to modification, most commonly the wings, legs, and abdominal sternites (Robinson 1969). The relationship of Enlinia to other micro-dolichopodid genera is discussed in Robinson (1969), Runyon & Robinson (2010), and Runyon (2015).
In 2015 the "Our Planet Reviewed" or "La Planète revisitée" Guyane 2014-2015 expedition, also known as the "Mitaraka 2015 survey", was conducted in French Guiana (Pollet et al. 2014; Pascal et al. 2015; Touroult et al. 2018). This was the 5th edition of a large-scale biodiversity survey undertaken jointly by the Muséum national d'Histoire naturelle in Paris and the NGO Pro-Natura international (both in France). The "Our Planet Reviewed" program aims to rehabilitate taxonomic work that focuses on largely neglected components of global biodiversity, i.e., invertebrates (both marine and terrestrial). Basic arthropod taxonomy and species discovery were at the heart of the survey, although forest ecology and biodiversity distribution modelling were also part of the project. The expedition was conducted in the Mitaraka Mountains, a largely unknown and uninhabited area in the southwestern corner of French Guiana, directly bordering Surinam and Brazil. It is part of the Tumuc Humac mountain chain, extending east into the Amapá region of Brazil and west into southern Surinam. The area consists primarily of tropical lowland rain forest with scattered inselbergs, isolated hills that stand above the forest plains. MP participated in this survey as Diptera coordinator, while focusing his own collecting efforts and methods on Dolichopodidae.
The purpose of this paper is to describe six new species of Enlinia collected during the abovementioned survey. This represents just the second record of Enlinia occurring in South America, with E. atrata (Van Duzee, 1930) from Chile being the only previous one. No doubt many species of South American Enlinia await discovery, as illustrated by the occurrence of seven species within the 1 km² area sampled at Mitaraka, four of which are represented by a single specimen.
MATERIAL AND METHODS
From 22 February till 11 March 2015, a first team of 32 researchers explored the area, including 12 invertebrate experts. During a second period (11 to 27 March 2015), a second equal-sized team took over, and a third, smaller team returned to the site from 12 to 20 August 2015. Invertebrate sampling was carried out near the base camp, on the drop zone (an area near the base camp that had been clear-cut entirely to allow helicopters to land) and, in particular, along four trails of about 3.5 km that started from the base camp in four different directions (see Krolow et al. 2017). Details of the collecting methods and sample codes used on labels are described by Pollet et al. (in press). Dipteran subsamples (mostly per family) were subsequently disseminated among experts worldwide, in the case of Enlinia spp. to JR. The identification of the species was conducted using taxonomic reviews and identification keys, original descriptions, and direct comparison to reliably identified species from the National Museum of Natural History, Smithsonian Institution (Washington, D.C.) and the Montana Entomology Collection, Montana State University (Bozeman, Montana). All collected material was stored in 70% alcohol during the expedition, with representatives being dry mounted on pins using hexamethyldisilazane (HMDS) or permanently slide mounted about two years later in the laboratory. This paper generally follows the format used in Robinson (1969), which will assist in comparisons and identifications across species in this large genus, since Harold Robinson (USNM) has described nearly all species of Enlinia to date. However, we follow Cumming & Wood (2009) for terminology of nongenitalic structures, including antennal segments and wing veins. Measurements of body and wing lengths were carried out on at least 10 specimens, if available. Eye height is defined as the vertical diameter (from upper to lower eye margin). In descriptions, the position of features on elongate structures, such as leg segments, is given as fractions of the total length, starting from the base. Permount (Fisher Scientific, Pittsburgh, Pennsylvania) mounting medium was used to create permanent slides. Holotypes are deposited in the MNHN. A label citing the Access and Benefit Sharing agreement number for the expedition, APA 973-1, is included with all specimens.

Pan traps proved barely productive, with only one specimen in white pan traps and 11 in blue ones; the latter specimens, however, were collected in a site where only blue pan traps were in operation. Enlinia loboptera n. sp. was by far the most abundant (553 specimens, 86.9% of all specimens) and widespread species, and large populations were discovered both along the Alama river, on one of the rocky outcrops ('savane roche 2') and even on one of the inselbergs (Borne 1). Enlinia colossicornis n. sp. was encountered in the same habitats, though in lower numbers (62 specimens, 9.7%). All remaining species of Enlinia were collected in low numbers (1-2 specimens each). The rocky outcrop 'savane roche 2' housed the richest Enlinia fauna, with five species, whereas three species were collected along the Alama river. These were also the only sites where the 6 m Malaise trap had been set up (along the Alama river in March 2015, on 'savane roche 2' in August 2015).

Etymology. - This species is named for the distinctly lobed posterior margin of the wing in males (Fig. 3).
SYSTEMATICS
Diagnosis. - The shape of the male wing (Fig. 3) and the row of large, modified setae on the hind tibia (Fig. 2) distinguish E. loboptera n. sp. from all other known Enlinia species, except E. bova n. sp. Males of these two species share many unique characteristics (e.g. wing shape and distinct chaetotaxy of fore, mid and hind legs) and form a group that stands apart from other known Enlinia species.
Enlinia loboptera n. sp. is distinguished from the latter species by the short cerci with relatively short setae that extend anteriorly only to sternite 4 (Fig. 1E), whereas the cerci of E. bova n. sp. possess very long setae that reach the base of the abdomen (Fig. 4E). Females are recognized by the form of the hind basitarsus, the distinct apical ventral seta on fore and mid tibia, and body size less than 1.3 mm.

Thorax. Scutum rather arched, dark brown to black with very sparse brown pollen and weak violet to dark green reflections; pleura dark brown to black, slightly lighter brown posteriorly. Setae on dorsum short, brown with pale reflections; 7-9 pairs of small acrostichal setae; 8-10 pairs of dorsocentral setae; scutellum with one pair of widely separated median setae and one pair of very small lateral hairs.
Legs. Brown with mostly dark setae but most setae with very strong pale reflections and appearing yellow in certain views and lights (as in Fig. 2). Fore coxa mostly bare except for a large, black seta on outer anterior surface at apex, most specimens also with a pair of setae on inner anterior surface near 1/2. Hind trochanter (as in E. bova n. sp., Fig. 4B) with a stout black ventrally-directed spine near 1/2 on anterior surface (length subequal to length of trochanter). Fore femur slightly swollen, with long, slender setae on ventral and anterior surface, those on anterior surface longest and slightly curved apically, with row of about 4 long, slender setae along posteroventral edge (as in E. bova n. sp. Fig. 4C). Mid femur with stout ventral seta arising near base, held close to ventral surface of femur and curved apically; with row of three slightly broadened anteroventral setae at apex decreasing in length apically (Fig. 1D). Hind femur with longer anteroventral setae on apical half (longest setae subequal to width of femur). Fore tibia (Fig. 1A) gradually widened toward apex and slightly dorsoventrally flattened, with large ventral setae on apical half (Fig. 1B). Mid tibia (Fig. 1C, D) slightly widened and flattened dorsoventrally, bare dorsally; with one anteroventral row of 10-12 large setae along full-length of tibia; with about 3-4 long, slender posteroventral setae on basal half; with a few long, slender ventral setae near apex. Hind tibia (Fig. 2) somewhat flattened on basal 2/3 and slightly curved with a row of distinct anterior setae that are abruptly narrowed and strongly elbowed near apical 2/3, these setae becoming smaller toward apex of tibia with apical-most seta slightly thickened and hook-like. Fore tarsus (Fig. 1A, B) extremely modified; tarsomere 1 about as wide as long with 3 large, modified setae on flattened ventral surface; tarsomere 2 smaller than tarsomere 1 with short, finger-like posterior lobe bearing a rounded seta, with long dorsal seta, and with small leaf-like, lanceolate posteroventral seta near base; tarsomere 3 somewhat U-shaped with flattened, tongue-shaped, black apical process, with stout, hooked dorsal seta and 3 short, stout setae on anterodorsal edge; tarsomere 4 unmodified with a couple of larger dorsal setae. Mid tarsus with tarsomere 1 bearing short ventral setae that are hooked at apex (Fig. 1D); tarsomeres 2 and 3 with a slightly larger dorsal seta at apex. Hind tarsus with tarsomere 1 short, slightly longer than wide and slightly flattened. Ratios of tibia:tarsomeres for fore leg (for highly modified tarsomeres, width given in parentheses): 12-4-3-5(8)-4-6; for mid leg: 15-6-2-3-3-3; for hind leg: 22-5-8-5-3-3. Wing. (Fig. 3). Approximately oval with cuneate base and with sinuous hind margin and broad, projecting lobe near apex of vein CuA 1 ; hind margin with slightly longer, straight hairs that become very short and dense on lobe; wing clear; vein R 2+3 slightly sinuous and close to costa, curved slightly forward at apex; vein R 4+5 nearly straight, ending before wing tip; vein M 1 nearly straight and evenly diverging from vein R 4+5 , very slightly arched backwards beyond crossvein; vein CuA 1 arching backwards beyond crossvein dm-cu, reaching wing margin and ending near apex of lobe, last part of vein CuA 1 nearly 3 times as long as crossvein dm-cu; vein A 1 short and represented by a streak of pigment along and close to anal margin, becoming a brown streak apically, wing otherwise hyaline. Halter brown.
Abdomen. (Fig. 1E). Dark brown, usually slightly lighter in color than thorax; setae short and brown with pale reflections. Sternite 4 with short median armature at hind margin. Hypopygium brown; cerci pale brown and becoming darker apically, small, longer than wide, spatulate on apical half with comb of black setae on outer margin.
Female
Body size. Length 1.1-1.3 mm, wing length 0.9-1.0 mm by 0.4-0.5 mm (width). As in male except lacking modified wing, legs, and abdominal sternites. Face wider and distinct to mouth (width of face subequal to width of first flagellomere). Antenna with first flagellomere slightly less pointed apically. General form of basitarsus present in females (as in E. bova n. sp., Fig. 4B) and also one large ventral seta at apex of fore and mid tibiae. Wing margin evenly rounded.
Remarks. - Enlinia loboptera n. sp. was by far the most abundant species taken at Mitaraka, and the sex ratio was distinctly female-biased (154 males: 396 females). To separate females of E. loboptera n. sp. from those of the related species, E. bova n. sp., we primarily used body size, since females of E. bova n. sp. should be noticeably larger, as the male is larger.

Etymology. - The specific epithet is from the Latin bova = "swelling of the legs", in reference to the swollen front legs of the male (Fig. 4C).
Enlinia bova
Diagnosis. - The shape of the male wing (as in Fig. 3) and the row of large, modified setae on the hind tibia (Fig. 4B) distinguish E. bova n. sp. from other known Enlinia species, except E. loboptera n. sp. These two species are closely related and males share many unique characteristics, most notably the shape of the wing and the anterior row of distinct setae on the hind tibia, but their differences are also marked. The large cerci bearing long apical setae that reach the base of the abdomen (Fig. 4E) and the very swollen front legs (Fig. 4C) easily distinguish E. bova n. sp.

Thorax. Pleura dark brown to black, slightly lighter than scutum. Setae on dorsum short, brown with strong pale reflections; 9-10 pairs of small acrostichal setae; 10-11 pairs of dorsocentral setae, the posterior-most distinctly larger; one pair of widely separated scutellar setae and one pair of very small lateral hairs.
Legs. Brown with dark setae. Fore leg (Fig. 4C) enlarged and somewhat raptorial. Fore coxa enlarged, mostly bare except for a large, black seta on outer anterior surface at apex and a very small lateral hair just basal to the large seta. Hind trochanter (Fig. 4B) with stout black ventrally-directed spine near 1/2 on anterior surface (length subequal to length of trochanter). Fore femur (Fig. 4C) greatly swollen, excavated anteroventrally at apex, with an erect ventral seta near base, one row of about 4-5 long, slender anterodorsal setae and a similar row of about 5 long, slender setae along posteroventral edge. Mid femur with stout ventral seta arising near base that is held close to ventral surface of femur and curved apically. Hind femur (Fig. 4B) with two rows of anteroventral setae on apical 2/3 (longest setae subequal to width of femur). Fore tibia (Fig. 4C) slightly thickened and distinctly shorter than fore femur, gradually widened toward apex and slightly dorsoventrally flattened, with one row of ventral setae nearly full length, and a large apical seta at posteroventral corner. Mid tibia flattened dorsoventrally, bare dorsally; with ventral row of 10-12 stout setae along full length of tibia (as in E. loboptera n. sp., Fig. 1D); fringed with long posteroventral setae along most of its length and a few long anteroventral setae on basal half and apical 2/3. Hind tibia (Fig. 4B) with anterior surface somewhat flattened on basal half, with a row of distinct anteroventral setae that are abruptly narrowed and slightly elbowed near or just beyond 1/2, these setae becoming smaller toward apex of tibia with apical-most seta slightly thickened and hook-like, with stout anteroventral seta at apex of tibia. Fore tarsus (Fig. 4C) extremely modified; tarsomere 1 (Fig. 4D) subquadrate, with 3 very large ventral setae, one of which is hooked; tarsomere 2 slightly smaller than tarsomere 1, with finger-like posterior lobe near base and with dorsal setae near apex; tarsomere 3 expanded and rather thin, with large, darkened posterior lobe and with stout black dorsal seta and a "V"-shaped brown ventral seta near apex; tarsomere 4 nearly normal in shape, with large anterodorsal seta near apex. Mid tarsus with tarsomere 1 bearing short ventral setae that are hooked at apex; tarsomeres 2-4 with a slightly larger dorsal seta at apex. Hind tarsus (Fig. 4B) with tarsomere 1 short, slightly longer than wide and slightly flattened, with 3-4 setae along ventral edge and a couple of setae dorsally at apex. Ratios of tibia:tarsomeres for fore leg (for highly modified tarsomeres, width given in parentheses): 12-4-4-6(8)-4-6; for mid leg: 15-6-2-3-3-3; for hind leg: 25-4-9-6-4-3.
Abdomen (Fig. 4E). Dark brown, slightly lighter in color than thorax; setae short and black. Sternite 4 with short median armature at hind margin that is rounded apically; tergite 6 about half width of sternite 5, and hidden beneath latter; sternite 6 wishbone-shaped. Hypopygium brown, relatively large; cerci brown, elongate and slender, ending in an oval disk with marginal setae along ventral edge and very long, stout black apical setae.
Female
Unknown, but likely similar to females of E. loboptera n. sp. and probably larger.
Remarks. - Enlinia bova n. sp. is one of the more ornate species of Enlinia, and males have most body parts modified, sometimes exceptionally so (i.e., front legs and genitalia). This species is also perhaps the largest known in the genus thus far, surpassing E. maxima, which has a body size of 1.4 mm (Robinson 1975).

Head. Face and frons dark brown to black. Face narrowed below but distinct to mouth; anterior eye facets only slightly enlarged. Palpus brown; proboscis yellow-brown. Antenna (Figs 5, 6A) dark brown; first flagellomere longer than scape and pedicel combined, narrowed to a narrowly rounded point apically, nearly straight along ventral edge and slightly concave dorsally, with relatively long pale pubescence; arista-like stylus apical, about twice as long as first flagellomere, with basal article very short, about 1/10 the length of the apical article. Thorax. Scutum dark brown with very sparse gray pollen; pleura lighter brown than scutum. Setae brown with pale reflections; 6-8 pairs of small acrostichal setae; 6-8 pairs of dorsocentral setae; one pair of relatively closely spaced scutellar setae (insertion closer to middle than sides) and one pair of very small lateral hairs.
Wing (Fig. 7). Hyaline but with slight brown tinge, narrowly elliptical with short-fringed hind margin; vein R 2+3 slightly arching and slightly curved forward at tip; veins R 4+5 , M 1 , and CuA 1 nearly straight and evenly diverging from wing base; vein R 4+5 ending at or just before wing apex; last part of vein CuA 1 about 2 times as long as crossvein dm-cu; vein A 1 present as a streak of brown pigment. Halter brown.
Female
Body length. 0.9-1.1 mm, wing length 1.0-1.1 mm by 0.4-0.5 mm (width). Similar to male, but face wider (about as wide as width of first flagellomere) and narrowest at mouth; antenna (Fig. 6B) smaller, but still somewhat enlarged, globular and reflecting the same general shape as in male; legs without outstanding setae or hairs. Wing essentially as in the male (Fig. 7).

Diagnosis. - The form of the male wing (Fig. 9), with a brown spot or streak midway between veins R 4+5 and M 1, will distinguish this species from all other known Enlinia. This is also the only known species in the genus with males having an armature on abdominal sternite 2 (Fig. 8D); males of other known species often have sternites 3 and/or 4 modified, but not sternite 2.
Head. Face and frons dark brown to black. Eyes essentially contiguous below antennae; anterior eye facets distinctly enlarged. Palpus dark brown; proboscis yellow-brown. Antenna (Fig. 8A) brown; first flagellomere short, blunt, wider than long; arista-like stylus apical, about as long as height of eye.
Thorax. Scutum dark brown with very sparse gray pollen; pleura dark brown. Setae brown with pale reflections; 6-7 pairs of small acrostichal setae; 9-10 pairs of dorsocentral setae, posterior-most distinctly larger; one pair of widely separated scutellar setae and one pair of minute lateral hairs.
Wing. (Fig. 9). Elliptical with cuneate base and slightly sinuous, long-fringed hind margin; with a small elongate brown spot near apical 2/3 midway between veins R 4+5 and M 1 and a second small brown area along hind margin just basal to apex of vein CuA 1 ; vein R 2+3 close to and parallel with costa basally and only slightly curved on apical half; vein R 4+5 nearly straight, ending near or just before wing apex; vein M 1 curving toward vein R 4+5 and then backwards beyond crossvein dm-cu; last part of vein CuA 1 not reaching wing margin, about 1.5 times as long as crossvein dm-cu; vein A 1 present as a short streak of brown pigment near wing base and along anal margin which is narrowly brown. Halter brown. Abdomen. (Fig. 8D). Brown with scattered short, stiff, brown setae; sternite 2 with linear, rod-like brown median armature projecting from hind margin. Hypopygium small, brown; cerci brown, small, about twice as long as wide, with approximately dorsal half covered in minute, stiff setae; epandrium with finger-like apical lobe bearing seta near apex.
Female
Unknown.
Remarks. - This species belongs to the E. magistri (Aldrich, 1932) species group established by Robinson (1975) for species with males featuring a sinuous, long-fringed hind wing margin and specialized setae or hairs on the fore coxa.
Enlinia touroulti n. sp.

Diagnosis. - This species belongs to the E. simplex species group (Robinson 1975), which presently contains 11 relatively inornate species with unarmed abdominal sternites, relatively simple hypopygial appendages, and wing vein R 2+3 bulging slightly inward from costa on apical half. Species in this group are also relatively small in comparison to other Enlinia species (1 mm or less) and have characteristically modified fore tarsi with compressed and broadened tarsomeres 1-2 and tarsomere 3 bearing a small but often stouter seta (as in Fig. 10B). The shape of the subquadrate cerci and hypopygial appendages in E. touroulti n. sp. is distinctive. Details in the form of the male fore tarsus, the arrangement of slender ventral setae on the femora, and the form of the mid tibia also differ from the other known species in the E. simplex group.
Head. Face and frons dark metallic green. Face narrowed below, eyes essentially contiguous on lower half; anterior eye facets distinctly enlarged. Palpus brown; proboscis brown. Antenna (Fig. 10A) dark brown; first flagellomere short and blunt, about twice as wide as long, nearly square in lateral view and round in anterior view; arista-like stylus apical, about as long as height of eye.
Thorax. Scutum dark brown with slight metallic green reflections and very sparse pollen; setae brown with weak pale reflections; 6 pairs of small acrostichal setae; 6-7 pairs of dorsocentral setae; one pair of relatively widely spaced scutellar setae and one pair of very small lateral hairs.
Wing. (Fig. 10E). Hyaline, elongate-oval, hind margin evenly rounded and short-fringed; vein R 2+3 slightly and evenly arched and curving slightly but distinctly forward at apex; vein R 4+5 and M 1 nearly straight, diverging from near base, with M 1 very slightly arching backwards beyond crossvein dm-cu; crossvein dm-cu perpendicular to vein M 1 , less than half the length of apical part of vein CuA 1 ; vein A 1 represented as a short brown streak close to anal margin. Halter brown.
Abdomen. (Fig. 10D). Dark brown with sparse, very short, stiff, black setae. Sternites plain, without armatures; sternite 5 with 2 distinct setae near apex. Hypopygium capping tip of preabdomen, brown; cerci subquadrate, light brown with margin thinly darkened, with a few slender pale hairs; inner appendages larger than cerci, somewhat triangular, thin and translucent, with a minute dorsal hair just beyond 1/2.

Etymology. - This species is named in honor of, and out of respect to, Pierre-Henri Dalens, président de la Société entomologique Antilles-Guyane (SEAG), who led the Mitaraka entomological team during periods 2 and 3 of the expedition. Thanks to him, the 6 m Malaise trap was installed on 'savane roche 2', which led to the discovery of an unprecedentedly rich Enlinia fauna on this rocky outcrop.
Diagnosis. - Enlinia dalensi n. sp. can be recognized by the form of the modified fore tarsus (Fig. 11A), the ventral setae on the mid femur (Fig. 11C), the shape of the wing and wing veins (Fig. 12), and the modifications of the abdominal sternites (Fig. 11D, E). This species is closely related and quite similar to E. bredini Robinson, 1975, from Dominica, to which it keys in Robinson (1975). Both species have similarly modified fore tarsi (Fig. 11A), similar modifications in shape and armature of abdominal sternites 3 and 4 (Fig. 11D, E), and similar blunt setae on the mid femur (Fig. 11C), among other characteristics. Enlinia dalensi n. sp. is most readily distinguished by a distinctly sinuous wing vein R 4+5 that is bent backwards at apex (straight in E. bredini), a wing with a less sinuous posterior margin and a larger anal area, and epandrial lobes that lack plumose hairs and feature a large prong or hook on the inner surface near the base.
Head. Face very narrow on ventral half but still distinct to mouth; anterior facets distinctly enlarged. Upper face and frons dark brown. Palpus small, yellow, nearly round with anterior surface truncate and fringed with minute black hairs; proboscis brown. Antenna brown; first flagellomere very short and blunt, about twice as wide as long; arista-like stylus apical, about 1.5 times as long as face.
Legs. Yellow with coxae brownish, with dark setae. Fore coxa with a large seta on inner anterior surface near apex. Mid trochanter (Fig. 11C) with long black ventral seta (in line with row of ventral setae on mid femur). Fore femur (Fig. 11B) with a slender erect ventral seta at base (length subequal to width of femur) and anteroventral row of about 10 minute peg-like setae on apical half; mid femur ( Fig. 11C) with ventral row of 4 large setae on basal half which become larger towards base of femur, these setae blunt apically except basal-most seta which is normal (sharply pointed); femur thickest at insertion of basal-most setae. Fore tibia with ventral surface slightly flattened; mid tibia with a brush of very short, erect ventral setae on apical 1/3; hind tibia gradually and slightly widened toward apex, with a dorsal seta near base, a smaller dorsal seta near 1/3, and a larger dorsal seta near apex. Fore tarsus (Fig. 11A) highly modified, tarsomere 1 broad with 2 dorsal setae near apex; tarsomere 2 apically projecting alongside and partly overlapping tarsomere 3 with two lobes near apex and ventrally with a small darker, more sclerotized area that includes a minute spicule; tarsomere 3 long and slender, slightly bent just before middle, with large arched dorsal seta near middle; tarsomere 4 short, rounded to heart-shaped, arising from near middle of tarsomere 3; tarsomere 5 expanding from a very narrow base. Ratios of tibia:tarsomeres for fore leg: 10-4-4-4-2-4; for mid leg: 12-6-3-2-2-3; for hind leg: 16-6-4-3-2-3.
Wing. (Fig. 12). Elliptical with hind margin slightly sinuous and nearly straight basad to apex of CuA 1 , long-fringed with hairs; hyaline. Vein R 2+3 close to and parallel with costa on about basal half, curving slightly towards vein R 4+5 in distal half before curving slightly forward to costa; vein R 4+5 mostly straight but distinctly curving backwards at apex; vein M 1 slightly sinuous beyond crossvein, curving slightly forward at apex; crossvein dm-cu about half as long as apical part of vein CuA 1 ; vein CuA 1 not reaching wing margin, slightly bowed beyond crossvein; vein A 1 represented by indistinct thickening along anal margin. Halter brown.
Abdomen. (Fig. 11D, E). Brown, with very sparse, short brown setae. Sternite 3 more strongly sclerotized laterally and less so medially and apically, with minute median armature at hind margin; sternite 4 highly modified, medially divided into pair of C-shaped lobes with a brown setiferous papilla at each hind corner that is projecting posteroventrally. Hypopygium small, brown, capping tip of preabdomen; cerci very small, slightly longer than wide with small marginal hairs; epandrium with thin, nearly transparent, tapering lobes that project forward beneath abdomen and between papillae of sternite 4.
Female
Unknown.

Remarks. - The setiferous papillae on abdominal sternite 4 in males of E. dalensi n. sp. (Fig. 11D, E) appear homologous to the "brown projection composed of a loop of twisted, finely striate chitin" of E. bredini (Robinson 1975: 47).
We have seen an undescribed species from Dominica that belongs to the species group containing E. bredini and E. dalensi n. sp. and possesses a similarly modified fore tarsus, ventral setae on middle femur, and shape of the wing and modifications on abdominal sternites. Enlinia anomalipennis Robinson, 1969 from Mexico, also appears to belong to this group based on the similarly modified fore tarsus, ventral setae on middle femur, and shape of the wing and hypopygium.
Other specimens. - A female specimen representing a seventh Enlinia species occurring at Mitaraka was taken during the expedition ('sp. GF-007'). This specimen belongs to the distinct E. armata group, whose members possess a row of very short ventral setae on the fore femur and strong dorsal setae on the hind tibia in both sexes (Robinson 1969), but it cannot be confidently assigned to any of the described species. Although this species is likely new, males will need to be collected before it can be formally described. This specimen is briefly characterized below.
DISCUSSION
Five of the seven species of Enlinia found at Mitaraka were collected on an isolated, rocky outcrop with seeps ('savane roche 2', similar to Fig. 13), and three along the river Alama, at both sites with a single 6 m Malaise trap. Adults of most species of Enlinia occur on rock, and almost always near rivers, streams or seeps (Robinson 1969); thus these habitats are ideal for Enlinia. Although multiple species of Enlinia are documented to co-occur at a single locality (Robinson 1969, 1975), the collection of five species on such a remote rocky habitat well away from streams, and in such a short time period (8 days, 13-20.VIII.2015), is surprising. Microhabitat and substrate specialization might in part explain this sympatric diversity: species of Enlinia are known to prefer sun versus shade, and different species can even be found on the wet versus dry surfaces of the same rock (Robinson 1969). There can even be an ecological progression among those species that prefer wet rock surfaces, in which some species are found only on slightly moist surfaces and other species hover over surfaces constantly washed by running water (Robinson 1975). In fact, Enlinia species were encountered on two of the three "savanes roches" and both inselbergs investigated during the Mitaraka survey. As multiple species and high numbers were only obtained with a Malaise trap that was operational in one of these sites, there is little doubt that Enlinia occurs on most rocky outcrops with seeps in this part of Amazonia.
Proposal for the applicability of modified Breslow (measured from the basal membrane) as a predictor of survival and sentinel lymph node outcome in patients with cutaneous melanoma
Background Cutaneous melanoma is a neoplasm with a high mortality rate and risk of metastases to distant organs. The Breslow micrometric measurement is considered the most important factor for evaluating prognosis and management, measured from the granular layer to the deepest portion of the neoplasm. Despite its widespread use, the Breslow thickness measurement has some inaccuracies, such as not considering variations in the thickness of the epidermis in different body locations or when there is ulceration. Objective To evaluate the applicability of a modified Breslow measurement, measured from the basal membrane instead of from the granular layer, in an attempt to predict sentinel lymph node examination outcome and survival of patients with melanoma. Methods A retrospective and cross-sectional analysis was carried out based on the evaluation of slides stained with hematoxylin & eosin from 275 cases of melanoma that underwent sentinel lymph node biopsy from 2008 to 2021 at a reference center in Brazil. Results Analysis of the Cox model to evaluate the impact of the Breslow measurement and the modified Breslow measurement on survival showed that both methods are statistically significant. Logistic regression revealed a significant association between both measurements and the presence of metastasis in sentinel lymph nodes. Conclusion Measuring melanoma depth from the basal membrane (modified Breslow measurement) is capable of predicting survival time and sentinel lymph node outcome, as well as the conventional Breslow measurement.
Introduction
Cutaneous melanoma is a neoplasm that arises from melanocytes and, at an advanced stage, often leads to metastases to distant organs.1 The incidence of melanoma has increased in recent decades in light-skinned populations, probably related to recreational behavior and sun exposure. It is believed to arise as a consequence of a complex interaction of environmental and constitutional factors.2 The depth of invasion as a prognostic factor was reported by Alexander Breslow in 1970, who demonstrated a correlation between melanoma thickness and the risk of recurrence and metastasis.3,4 The Breslow micrometric measurement has become the most important factor for prognosis and management, is widely used, and represents the main factor in staging systems, including that of the American Joint Committee on Cancer.5 The Breslow measurement is obtained using a calibrated ocular micrometer to measure from the most superficial portion of the granular layer to the deepest portion of the tumor.6,7 However, the Breslow measurement has some limitations. For instance, when ulceration is present, the Breslow measurement may be underestimated because the amount of tumor lost to ulceration is not taken into account; and, in the absence of the granular layer, as occurs in the nail region, the measurement can be challenging.8 Furthermore, the Breslow measurement does not take into account variations in the thickness of the normal epidermis at different anatomical sites and may show differences even when the invasive component has a similar thickness, as it includes the total thickness of the epidermis.
The depth of invasion is also a prognostic factor for squamous cell carcinoma of the cervix, for instance. In this tumor, however, the depth of tumor invasion is measured from the base of the epithelium, eliminating the influence of the thickness of the epithelium or the presence of ulceration on the final measurement.9,10 The aim of this study was to evaluate the applicability of the modified Breslow measurement, measured from the basal membrane instead of the granular layer, in an attempt to predict the sentinel lymph node outcome and the survival of patients with cutaneous melanoma, compared to the classic Breslow measurement (Fig. 1). Additionally, we aimed to evaluate the relationship between sentinel lymph node status (positive or negative for metastases), presence of ulceration, and survival time, and the relationships of anatomical site and histopathological subtype with survival and sentinel lymph node outcome.
Materials and methods
This was a retrospective and cross-sectional study that analyzed slides stained with hematoxylin & eosin obtained from formalin-fixed, paraffin-embedded skin samples collected from primary melanoma lesions at Hospital Amaral Carvalho from 2008 to 2021. Clinical and histopathological information (gender, age, location, sentinel lymph node outcome, survival time, and presence or absence of ulceration) was collected from pathology reports stored in the participating institution's digital systems. Regarding the topography of the lesion, the cases were divided into: areas not exposed to the sun (anterior chest, posterior chest, abdomen, genital region, and proximal portion of the limbs), areas exposed to the sun (head, neck, and distal portion of the limbs, except the acral region), and the acral region (palms of the hands, soles of the feet, and fingers/toes). The microscopic analysis (histopathological type; Breslow measurement; modified Breslow measurement) was performed by two pathologists.
Figure 2. In cases with extensive ulceration, the modified Breslow measurement is taken from the base of the ulcer, as with the conventional Breslow measurement.
The Breslow measurement was taken in the conventional way, from the granular layer to the deepest portion of the neoplasm, while the modified Breslow measurement was taken from the basal membrane to the deepest portion of the tumor, disregarding the "in situ" component. In ulcerated cases in which there was intact epidermis overlying the area of deeper invasion, ulceration had no influence on the Breslow measurement. In cases with extensive ulceration, both measurements were taken from the base of the ulcer (Fig. 2). The exclusion criteria were: in situ or thin melanomas (less than 1.0 mm thick), metastases, diagnostic disagreement between pathologists, and cases with missing paraffin blocks or with scarce tissue in the paraffin block.
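The two measurement rules can be summarized schematically; the helper below is purely illustrative (real measurements are taken on the slide with a calibrated ocular micrometer), with epidermal_thickness_mm standing for the distance from the granular layer to the basal membrane:

def breslow_measurements(granular_to_deepest_mm, epidermal_thickness_mm, ulcerated):
    conventional = granular_to_deepest_mm
    if ulcerated:
        # with extensive ulceration both measurements start at the ulcer base
        return conventional, conventional
    modified = conventional - epidermal_thickness_mm  # start at the basal membrane
    return conventional, modified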
After the data were obtained, a descriptive analysis was first carried out, with mean, standard deviation, minimum, maximum, and median values calculated for the quantitative variables, and frequencies and percentages for the categorized variables. Considering survival in months, Kaplan-Meier curves were obtained, followed by the log-rank test for the variables of interest. For variables with more than two categories, the curves were compared using the Sidak test. Risk factors for survival considering continuous variables were obtained by fitting a Cox model. With the lymph node as the response variable, a logistic regression model was fitted with the Breslow and modified Breslow values as explanatory variables. Associations between categorized variables were assessed using the Chi-Square test. In all tests, the significance level was set at 5%, and all analyses were performed using the SAS program for Windows, v.9.4.
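A sketch of this analysis pipeline in Python (using the lifelines and statsmodels packages rather than the SAS software the authors used; the data file and column names are hypothetical) could look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table with columns:
# months, died (1/0), sln_positive (1/0), breslow_mm, modified_breslow_mm
df = pd.read_csv("melanoma_cases.csv")

# Kaplan-Meier curves and log-rank test by sentinel lymph node status
kmf = KaplanMeierFitter()
for status, grp in df.groupby("sln_positive"):
    kmf.fit(grp["months"], event_observed=grp["died"], label=f"SLN+={status}")
pos, neg = df[df.sln_positive == 1], df[df.sln_positive == 0]
lr = logrank_test(pos["months"], neg["months"],
                  event_observed_A=pos["died"], event_observed_B=neg["died"])
print("log-rank p =", lr.p_value)

# Cox model: hazard ratio per mm of (modified) Breslow thickness
cph = CoxPHFitter()
cph.fit(df[["months", "died", "modified_breslow_mm"]],
        duration_col="months", event_col="died")
cph.print_summary()  # exp(coef) is the HR per additional mm

# Logistic regression: SLN metastasis vs. thickness (OR per mm)
logit = sm.Logit(df["sln_positive"],
                 sm.add_constant(df["modified_breslow_mm"])).fit()
print(np.exp(logit.params))  # odds ratios
```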
Results
A total of 275 cases of melanoma diagnosed from 2008 to 2021 that underwent sentinel lymph node biopsy were analyzed. Of these, 141 (51.27%) were men and 134 (48.73%) were women, and the median age was 63 years (14–87 years). The median Breslow measurement was 3.8 mm (1.0–27.5 mm), while the median modified Breslow measurement was 3.7 mm (0.5–26.5 mm). The median survival time was two years (0 to 13 years; Table 1).
Of the 275 cases, 138 (50.18%) showed metastasis to the sentinel lymph node. Regarding the anatomical site of the lesion, 100 cases (36.36%) were located in areas not exposed to the sun, 84 cases (30.54%) in areas exposed to the sun, 78 cases (28.36%) in the acral region, and 13 (4.72%) in unspecified areas. Regarding the histopathological subtype, the distribution of cases is shown in Table 1.
The results of the Cox model analysis evaluating the impact of the different methods used to measure melanoma thickness (Breslow measurement and modified Breslow measurement) on survival showed that both methods were statistically significant. The conventional Breslow measurement obtained a chi-square value of 59.40, a p-value < 0.0001, and an HR of 1.121. The modified Breslow measurement obtained a chi-square value of 57.66, a p-value < 0.0001, and an HR of 1.119 (Table 2). The logistic regression disclosed a significant association between both measurements (conventional Breslow and modified Breslow) and the presence of metastasis in the sentinel lymph nodes. The Breslow measurement showed an Odds Ratio (OR) of 1.189 (95% CI 1.111–1.271; p-value < 0.0001). Likewise, the modified Breslow measurement showed an OR of 1.19 (95% CI 1.112–1.273; p-value < 0.0001; Table 3).
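Because these are per-millimeter estimates, the effect compounds multiplicatively over thickness differences; a quick illustrative check (my own arithmetic, not reported in the paper):

```python
hr_per_mm = 1.121          # Cox HR for the conventional Breslow measurement
print(hr_per_mm ** 5)      # ~1.77: hazard ratio for a tumor 5 mm thicker

or_per_mm = 1.19           # OR for SLN metastasis, modified Breslow
print(or_per_mm ** 5)      # ~2.39: odds ratio for a positive SLN, 5 mm thicker
```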
The survival analysis in relation to the anatomical site did not show any significant results (Fig. 5). The log-rank test obtained a chi-square of 2.9495 with DF 2 and a p-value of 0.2288. When adjusting for multiple comparisons using the Sidak test, none of the pairwise comparisons between anatomical sites reached statistical significance. The comparison between sun-exposed (E) and non-exposed (NE) sites obtained a p-value of 0.6668 (chi-square 1.0447). The comparison between sun-exposed (E) and acral sites resulted in a p-value of 0.2427 (chi-square 2.9014). Lastly, the comparison between non-exposed (NE) and acral sites resulted in a p-value of 0.9014 (chi-square 0.3791). Regarding the anatomical site and its association with sentinel lymph node metastasis, the analysis did not demonstrate any significant differences either (p = 0.1217).
When considering survival in relation to the melanoma histopathological subtypes, the log-rank test detected significant differences in survival between subtypes (chi-square 13.11; DF 4; p-value 0.0107; Fig. 6). Regarding metastases in the sentinel lymph node, there was no statistically significant association between the histopathological subtype and sentinel lymph node status (p = 0.0735).
Discussion
The results of this study showed that the modified Breslow measurement (measured from the basal membrane instead of the granular layer) was able to predict survival time and sentinel lymph node outcome, as well as the conventional Breslow measurement.
The Breslow measurement has some limitations. When the lesion is ulcerated, the measurement starts from the base of the ulcer; in these cases, the thickness may be underestimated, as the amount of tumor lost to ulceration is not taken into account. The current parameters for melanoma staging also do not take the thickness of the epidermis into account when obtaining the Breslow measurement. It is known, for instance, that acral skin has a thicker epidermis, while some areas of the face, such as the skin behind the ear, have a thinner epidermis.11,12 In this context, cases of lentigo maligna melanoma, in which the epidermis is generally atrophic, may show a lower Breslow measurement than cases of acral melanoma (since the epidermis of the acral region is thick) even when both show involvement of similar strata in the dermis (Fig. 7). Moreover, the Breslow measurement can only be accurately evaluated in sections perpendicular to the epidermal surface; if there is periadnexal extension of the melanoma and this represents the only focus of invasion, the best methodology for this measurement becomes questionable.13,14 The modified Breslow measurement, proposed in the present study, is affected neither by variations in epidermal thickness at different anatomical sites nor by the presence or absence of ulceration (Fig. 7). This approach may be more reliable, undergoing fewer variations and demonstrating good reproducibility between pathologists.
The results also corroborate that sentinel lymph node status has a significant effect on the survival of patients with melanoma, with the presence of metastasis in the sentinel lymph node associated with reduced survival time. This is in line with other studies, such as that of Tejera-Vaquerizo et al., who analyzed 4,249 cases of thin melanoma and found that sentinel lymph node status is the most important prognostic factor for melanoma-specific survival.15 Jafari et al., in a study of 1,111 patients, showed that those without sentinel lymph node metastases had longer disease-free survival than those with them. Additionally, in patients with intermediate-thickness melanoma (1.0 to 4.0 mm), better overall survival was found in those with a metastasis-negative sentinel lymph node.16 Another study, however, showed that sentinel node status is an independent prognostic factor for disease-free survival, but not for overall survival, in a multivariate analysis of 309 cases of melanoma.17 Lemos et al., in a study of 43 patients with thick melanoma (> 4 mm), did not demonstrate statistical significance of sentinel lymph node status for overall survival.18

Figure 7 Both cases show an invasive component with similar thickness (1.5 mm, coinciding with the modified Breslow measurement). However, the conventional Breslow measurement is 1.6 mm on the left (thin epidermis) and 1.7 mm on the right (thick epidermis).

Moreover, ulceration had a significant impact on patient survival, indicating lower survival in patients with ulceration. These results agree with other studies, such as that by Sarpa et al., with 235 patients, which showed a significant correlation between the extent of ulceration and both overall survival and sentinel lymph node status,19 and the study by Hout et al., which showed that both the presence and extent of ulceration are independent predictors of survival.20

Regarding the anatomical site of the melanoma, the present study did not demonstrate a significant impact on patient survival, nor any association with the sentinel lymph node outcome. These results disagree with other studies, such as that by Callender et al., with 2,500 patients, which demonstrated that the anatomical site is an independent predictor of sentinel lymph node status as well as survival.21 The study by Howard et al. demonstrated that sites with intermittent or chronic sun exposure had better survival compared with sites rarely exposed to the sun.22

Regarding the histopathological subtype, there were statistically significant differences in survival between the histopathological subtypes (Kaplan-Meier curve), but no statistically significant association between histopathological subtype and sentinel lymph node status. The study by Buja et al. showed that the histopathological subtype is an independent risk factor for death, with the nodular subtype having the worst melanoma-specific survival.23 Sharouni et al. showed, in a study of 48,361 patients, that the nodular and acral lentiginous subtypes have worse survival than the superficial spreading and lentigo maligna melanoma subtypes.24 The work of Robsahm et al., on the other hand, did not demonstrate histopathological subtype as an independent predictor of melanoma-specific survival.25

One weakness of the present study is its retrospective design, together with the limited number of total cases, the number of ulcerated cases, and the high frequency of advanced-stage cases. Owing to the profile of the service, many cases of acral and nodular melanoma were observed compared with other studies, as well as just one case of lentigo maligna melanoma. Nonetheless, this is the first study to propose adapting the way the Breslow measurement is obtained. Thus, the present study demonstrated that the modified Breslow measurement, that is, one measured from the basal membrane instead of the granular layer, is capable of predicting prognosis as well as the conventional Breslow measurement does. However, more studies are needed to validate this method.
Figure 1
Figure 1 Breslow measurement (in black) compared to the modified Breslow (in red).
Figure 3
Figure 3 Analysis of survival time in relation to sentinel lymph node status.
Figure 4
Figure 4 Analysis of survival time in relation to the presence or absence of ulceration.
Figure 5
Figure 5 Analysis of survival time in relation to the anatomical site (E, Exposed Areas; NE, Non-exposed Areas).
Figure 6
Figure 6 Analysis of survival time in relation to histopathological subtype.
Table 1
Descriptive analysis of the clinicopathological characteristics of cases studied.
Table 2
Survival analysis using the COX model.
Table 3
Logistic regression adjustment for sentinel lymph node outcome for Breslow and modified Breslow.
Deep learning applications in myocardial perfusion imaging, a systematic review and meta-analysis
Background Coronary artery disease (CAD) is a leading cause of death worldwide, and the diagnostic process comprises invasive testing with coronary angiography and non-invasive imaging, in addition to history, clinical examination, and electrocardiography (ECG). A highly accurate assessment of CAD lies in perfusion imaging, which is performed by myocardial perfusion scintigraphy (MPS) and magnetic resonance imaging (stress CMR). Recently, deep learning has been increasingly applied to perfusion imaging for better understanding of the diagnosis, safety, and outcome of CAD. The aim of this review is to summarise the evidence behind deep learning applications in myocardial perfusion imaging. Methods A systematic search was performed on MEDLINE and EMBASE databases, from database inception until September 29, 2020. This included all clinical studies focusing on deep learning applications in myocardial perfusion imaging, and excluded competition conference papers, simulation and animal studies, and studies which used perfusion imaging as a variable with a different focus. This was followed by review of abstracts and full texts. A meta-analysis was performed on a subgroup of studies which looked at perfusion image classification. A summary receiver-operating characteristic (SROC) curve was used to compare the performance of different models, and the area under the curve (AUC) was reported. Effect size, risk of bias, and heterogeneity were tested. Results 46 studies in total were identified, the majority being MPS studies (76%). The most common neural network was the convolutional neural network (CNN) (41%). 13 studies (28%) looked at perfusion image classification using MPS; the pooled diagnostic accuracy showed an AUC of 0.859. The SROC comparison showed superior performance of CNN (AUC = 0.894) compared to MLP (AUC = 0.848). The funnel plot was asymmetrical, and the effect size was significantly different with a p-value < 0.001, indicating small-study effects and possible publication bias. There was no significant heterogeneity amongst studies according to the Q test (p = 0.2184). Conclusion Deep learning has shown promise to improve myocardial perfusion imaging diagnostic accuracy, prediction of patients' events, and safety. More research is required in clinical applications to achieve better care for patients with known or suspected CAD.
Background
Coronary artery disease (CAD) continues to be a major cause of death and hospitalisation worldwide, including in high-income countries [1]. The main underlying pathology lies in the progressive nature of the coronary atherosclerotic process. Therefore, timely diagnosis to aid the management of patients with CAD has a significant impact on both morbidity and mortality.
There have been significant advancements in CAD imaging in the last two decades, from anatomical imaging of the coronary tree by means of invasive x-ray coronary angiography and cardiac computed tomography (CCTA), to functional assessment of coronary stenoses and their impact on the myocardium both at rest and under stress (physical or pharmacological), using stress echocardiography, nuclear myocardial perfusion scanning (MPS), and stress perfusion cardiac magnetic resonance (CMR). Myocardial perfusion abnormalities are one of the early stages in the ischaemic cascade and ischaemic constellation, which also includes angina symptoms, electrocardiographic (ECG) changes, and ventricular wall motion abnormalities [2].
Another exciting advancement has been made in computer vision technology following the revolution of neural networks and artificial intelligence (AI) algorithms. Deep learning is the main subfield of AI which has been the focus of computing in medical imaging, with cardiovascular imaging being one of the common arenas for such novel applications. Cardiac perfusion imaging is one of the main applications which has been studied by many deep learning practitioners and computer vision experts.
One of the key aspects of deep learning is that it allows automation of clinical tasks and thus reduces dependence on users. This is a significant advantage in perfusion imaging interpretation, given that the diagnostic accuracy of visual assessment is highly dependent on the level of training; it has previously been demonstrated that automated quantitative analysis performed similarly to highly trained users (level 3) in interpreting perfusion CMR imaging [3].
Rationale and objectives
There is mounting evidence of the successful applications of deep learning in cardiac perfusion imaging, as demonstrated by the increasing number of publications. Moreover, data derived from medical imaging can be integrated into specific machine learning approaches to provide valuable information for the prediction of different outcomes by exploring new correlations between variables and clinical data to build predictive models.
As a result, it is becoming increasingly important that the current literature and evidence behind deep learning applications in myocardial perfusion imaging be evaluated further, together with recommendations on how to fine-tune the research towards more meaningful results for patients.
Therefore, the objective of this review is to determine the diagnostic accuracy of cardiac perfusion imaging using deep learning algorithms, the impact of deep learning on image quality, image safety, and the assessment of its prognostic value.
Design
The umbrella protocol for this systematic review is registered in the International Prospective Register of Systematic Reviews (PROSPERO, CRD42020204164) and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. This review follows the Cochrane Review structure for Diagnostic Test Accuracy (DTA) [4]. All searching activities were performed by two independent authors (EA and UD), with divergences resolved by consensus.
The main review question was determined using the PICO approach:
• Population: patients with suspected or known coronary artery disease (CAD)
• Intervention: deep learning applications in CAD perfusion imaging
• Comparison: comparison with conventional CAD imaging
• Outcome: improve test accuracy and patient care
Selection criteria
Decisions on selection criteria were made by one author (EA) and overread by a senior author (AC), with disagreements resolved by consensus. Both prospective and retrospective studies were included, with no restrictions based on minimal sample size or recruitment process. The analysis focused on participants with known or suspected CAD who had a perfusion imaging modality with the application of deep learning. Comparison was made with the standard imaging tests used in clinical practice to identify the functional significance of coronary artery lesions (index test). A clinical reference standard, considered the gold standard, is used for both techniques (reference test).
Medical imaging techniques presented at conferences as part of challenges, such as Medical Image Computing and Computer Assisted Intervention (MICCAI), simulation studies, and animal studies were not included due to the ambiguity of their direct relation to patient care. Given that the main scope of this review is the direct application of deep learning to myocardial perfusion imaging, studies which used perfusion data as an input variable for prediction without deep learning image applications were excluded. As there are numerous studies of left ventricular segmentation using deep learning, these studies were not included unless they formed the basis for perfusion quantification. Finally, studies of automated perfusion quantification which relied mainly on hand-crafted algorithms or non-deep learning algorithms, such as principal component analysis (PCA), were not included.

Search procedure

MEDLINE and EMBASE were searched from database inception until September 29, 2020, with no language constraints. The full Ovid search strategy and output is shown in Appendix [1]. No routine use of methodology search filters was made, due to reports of missing relevant studies and inconsistency [4].

To avoid publication bias and give currency to this systematic review with upcoming research, the grey literature was also searched. This included:
• Web of Science Conference Proceedings.
• Open Grey database.
• Manual searching of references.
Data extraction
The extracted summary estimates included imaging modality performance after the application of deep learning (sensitivity, specificity, and area under the curve (AUC)). The sample size of each study, the imaging modality used, and the deep learning techniques were all reported.
A summary of the input data reported from each study in the review was also extracted.
Statistical analysis
The diagnostic accuracy of the imaging modalities was measured mainly with specificity and sensitivity analyses and presented as forest plots. Data were reported as counts or percentages.
Given that most studies did not report the values for true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN), a confusion matrix was generated for each of the studies included in the meta-analysis by taking the sample size (S), calculating FN using sensitivity and FP using specificity, and then subtracting FN from S to calculate TN and FP from S to calculate TP.
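As a sketch of the underlying arithmetic (shown here in its standard per-class form, i.e. assuming the numbers of disease-positive and disease-negative subjects are known, which is a simplifying assumption relative to the sample-size-based approximation the review describes):

```python
def reconstruct_confusion(n_pos: int, n_neg: int,
                          sensitivity: float, specificity: float):
    """Rebuild TP/FN/FP/TN counts from reported accuracy metrics.

    sensitivity = TP / (TP + FN)  ->  FN = n_pos * (1 - sensitivity)
    specificity = TN / (TN + FP)  ->  FP = n_neg * (1 - specificity)
    """
    fn = round(n_pos * (1 - sensitivity))
    tp = n_pos - fn
    fp = round(n_neg * (1 - specificity))
    tn = n_neg - fp
    return tp, fn, fp, tn

# e.g. 100 diseased and 100 non-diseased subjects, sens 0.85, spec 0.80
print(reconstruct_confusion(100, 100, 0.85, 0.80))  # (85, 15, 20, 80)
```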
Although different studies reported different perfusion interpretation scales in MPS imaging, there were two common scales: a binary scale of normal vs abnormal, and an ordinal scale from 0 to 4. The two scaling methods are considered similar given that the ordinal scale would group 0 and 1 as normal, and 2, 3, and 4 as abnormal. The ground truth finding from the reference test was taken as the threshold for the summary receiver-operating characteristic (SROC) curve, fitted with a bivariate diagnostic random-effects meta-analysis using logit-transformed pairs of sensitivities and false positive rates. Two SROC plots were produced using a linear mixed model to compare convolutional neural network (CNN) performance against the multi-layer perceptron (MLP). Publication bias and the effect size derived from each study's accuracy compared to the mean accuracy were tested using a funnel plot and Egger's test. A p-value of less than 0.05 was considered significant. Heterogeneity was examined using the τ², I², and Q tests.
All statistical analysis was performed using RStudio version 1.4.1106 with the R programming language version 4.0.4; the "mada" and "meta" packages were used for meta-analysis.
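The review fitted a bivariate random-effects model (via the mada package in R). As a simpler illustration of the logit-transform idea, the classical Moses-Littenberg SROC can be sketched in a few lines of Python (my own simplified stand-in, not the model used in the review):

```python
import numpy as np

def moses_littenberg_sroc(tpr, fpr):
    """Classical SROC: regress D = logit(TPR) - logit(FPR) on
    S = logit(TPR) + logit(FPR) across studies, then solve the fitted
    line for TPR as a function of FPR."""
    logit = lambda p: np.log(p / (1 - p))
    lt, lf = logit(np.asarray(tpr)), logit(np.asarray(fpr))
    d, s = lt - lf, lt + lf
    b, a = np.polyfit(s, d, 1)          # fitted line: D = a + b*S

    def curve(fpr_grid):
        lf_g = logit(np.asarray(fpr_grid))
        lt_g = (a + (1 + b) * lf_g) / (1 - b)  # solve for logit(TPR)
        return 1 / (1 + np.exp(-lt_g))          # expected TPR along the SROC
    return curve

# Example with three hypothetical studies:
sroc = moses_littenberg_sroc([0.90, 0.85, 0.80], [0.20, 0.15, 0.10])
print(sroc(np.linspace(0.05, 0.5, 5)))
```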
Search results
715 study entries were identified from the published literature Ovid search, and 432 entries from the grey literature. After screening of titles and removal of duplicates, 320 studies whose titles indicated possible relevance were included in the initial analysis. Following full-text review, 46 studies were included in the systematic review, of which 13 were included in the meta-analysis. The selection procedure and results, with reasons for exclusion at the full-text assessment, are illustrated in Fig. 1.
Characteristics of studies
The final number of studies included in this systematic review was 46, details of first author, year of publication, model output, sample size, machine learning and deep learning techniques, index test (comparator), and reference test (gold standard) are all given in Table 1.
The majority of the studies were performed on MPS (76%). However, the number of studies in CMR has increased in the last 2 years, as shown in Fig. 2.
The most common neural network architecture in early years was the MLP (35%), which has been superseded by the CNN (41%) in recent years, as shown in Fig. 3.
Meta-analysis of perfusion classification
There were several studies which applied deep learning directly to segment and classify perfusion imaging maps with various classes; most of these studies were based on MPS imaging.
A meta-analysis was performed on 13 studies in which the output of the classifier was based on perfusion map segmentation and referenced to the presence or absence of significant CAD based on invasive coronary angiography or the consensus of expert MPS readers. A summary of the corresponding sensitivities and specificities is depicted in the coupled forest plot in Fig. 4. The plot shows good performance of the neural networks, with most studies reporting sensitivity and specificity of over 65%.
When comparing the performance of the MLP with the CNN across these studies using the summary receiver operating characteristic (SROC) curve, the CNN showed a higher SROC (higher sensitivity, lower false positive rate) with an area under the curve (AUC) of 0.894, compared to the MLP (AUC = 0.848), as shown in Fig. 5. The overall pooled AUC including all 13 studies averaged 0.859, showing good performance.
Assessment of heterogeneity
Quantifying heterogeneity gave τ² = 0.0037 with a confidence interval of [0.0000, 0.0295], which contains zero, indicating that no significant between-study heterogeneity exists in our data. I² was found to be 22.3%, meaning that less than a quarter of the variation in our data is estimated to stem from true effect size differences. Using the literature "rule of thumb", this amount of heterogeneity can be characterized as mild.
The predictive interval was found to range from 0.8569 to 1.1645, meaning that some future studies are likely to find a positive effect based on the present evidence.
Finally, the reported p-value for the Q test was above the significance level (p = 0.2184), indicating no significant heterogeneity.
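A compact sketch of these heterogeneity statistics under the DerSimonian-Laird random-effects model (standard formulas; the per-study effects and variances below are hypothetical inputs, not the review's data):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Return Cochran's Q, I^2 (%) and tau^2 for a set of study effects."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)         # DL between-study variance
    i2 = 0.0 if q == 0 else max(0.0, (q - (k - 1)) / q) * 100
    return q, i2, tau2

print(dersimonian_laird([0.8, 0.9, 0.85, 1.0], [0.01, 0.02, 0.015, 0.03]))
```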
Assessment of risk of bias
The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool, with a modified version adapted to assess five main fields. Taking all of the above into consideration, a table of the included studies with their associated risk of bias is shown in Appendix [2]. There were 13 studies (28%) which did not include an index test to compare with the machine learning or deep learning model before comparison with the reference test (ground truth). All studies defined a ground truth test against which they tested model performance, and the majority of studies blinded the model reporters to the ground truth results. This indicates the high reliability of the reported results.
The funnel plot in Fig. 6 shows an asymmetrical pattern indicating small-study effects. Egger's test was significant with a p-value < 0.001, indicating that the data in the funnel plot are indeed asymmetrical, possibly related to publication bias.
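Egger's test itself is a small regression; a minimal Python sketch of its standard form (regress each study's standardized effect on its precision and test the intercept; statsmodels is assumed available, and the inputs are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(effects, std_errors):
    """Egger's regression: SND = effect/SE regressed on precision = 1/SE.
    An intercept significantly different from zero suggests
    funnel-plot asymmetry."""
    y = np.asarray(effects) / np.asarray(std_errors)   # standardized effects
    x = sm.add_constant(1 / np.asarray(std_errors))    # intercept + precision
    fit = sm.OLS(y, x).fit()
    return fit.params[0], fit.pvalues[0]               # intercept, its p-value

intercept, p = eggers_test([0.8, 0.9, 0.85, 1.2], [0.05, 0.10, 0.08, 0.30])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```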
Deep learning techniques
The application of deep learning models to myocardial perfusion imaging started in the early 1990s, when all the studies focused on the use of the MLP architecture applied to MPS [5][6][7][8][9][10]. MLPs are composed of three types of layers: input layers taking the raw image data, hidden layers which are connected via weight vectors, and an output layer which takes the weighted sum, applies an output function, and returns a prediction [51]. The main output prediction of interest in early studies was perfusion map classification; this continued into the early 2000s, when the performance of the MLP was also compared to other traditional linear and non-linear machine learning algorithms, such as K-nearest neighbours (KNN) [13] and support vector machines (SVM) [17]. Most of the networks achieved high performance metrics.

There has been a substantial increase in the number of publications on deep learning in general, with more focus on the CNN over the last few years, as shown in Fig. 3. Due to the high dimensionality of imaging data, the fully connected layers on which MLPs are based place a significant limitation on the size of model available to learn image features. The CNN overcomes this challenge by using convolutional layers, which have significantly fewer parameters and make extensive use of weight sharing. The processing performed by several convolutional layers can be thought of in the following steps: detect low-level features and edges from raw pixel data in the early layers, use these edges to detect shapes in the later layers, and use these shapes to detect higher-level features for prediction. An additional useful property of CNNs is that they lend themselves well to transfer learning, where the majority of the network is kept, with its high-level feature extraction ability, and only the last output layer is exchanged for a new layer to fit the purpose of the study [51]. As a result, the majority of deep learning studies on perfusion imaging in the last few years have used CNNs as the main architecture, as shown in Fig. 3. Furthermore, the power and flexibility of CNNs has opened the window for deep learning applications in more challenging image analysis domains such as stress perfusion CMR [23,27,30,31], resting CT perfusion (rCTP), and myocardial contrast echocardiography (MCE) [22].
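To make the architectural contrast concrete, here is a minimal PyTorch sketch of the two families discussed (toy layer sizes chosen for illustration; neither network is taken from any of the reviewed studies):

```python
import torch
import torch.nn as nn

# MLP: fully connected layers over flattened pixels; the parameter
# count grows with image size, which limits scaling to large images.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(),  # hidden layer with weight vectors
    nn.Linear(128, 2),                   # normal vs abnormal perfusion
)

# CNN: convolutional layers share weights across the image, so far
# fewer parameters are needed; early layers pick up edges, later
# layers compose them into shapes and higher-level features.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    # Classifier head; transfer learning would typically swap only this layer.
    nn.Linear(32 * 16 * 16, 2),
)

x = torch.randn(8, 1, 64, 64)  # batch of 8 single-channel 64x64 perfusion maps
print(mlp(x).shape, cnn(x).shape)  # both: torch.Size([8, 2])
```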
Summary of main results
The performance of neural networks for the identification of perfusion defects has proven comparable to human expert reading and showed strong overall accuracy in MPS studies (AUC 0.859), regardless of the comparator or reference tests. The meta-analysis presented in this review also shows the superior performance of CNNs compared to MLPs in reading and classifying perfusion maps.
Applications of deep learning to stress perfusion CMR have been increasing in recent years. There are some promising data on the effectiveness of applying deep learning with CNNs to the pre-processing stage of perfusion quantification in CMR, by automated identification of anatomical landmarks such as the right ventricle (RV) insertion point into the septum and the left ventricle (LV) centre at peak contrast enhancement [23,31,43]. Furthermore, CNN algorithms have been successfully applied to the segmentation of CMR perfusion images [27,30] with high performance. These applications in CMR still require further research. Another exciting application of the CNN is k-space acceleration and reconstruction for faster perfusion image acquisition [33], which is another attractive research application for deep learning. A further novel application on the horizon is the use of deep learning to predict myocardial blood flow in perfusion CMR using physics-informed neural networks (PINNs) [52,53].
Other successful applications of deep learning include prediction, whether of death or myocardial infarction [39,47] or of revascularisation events [46]; prediction of high-quality full-radiation-dose MPS images from low-dose or short-duration scans, which has a significant impact on the radiation dose delivered to patients [36,40,41]; and the identification of acquisition sequence type and image plane in CMR [54].
Furthermore, there are newer deep learning techniques using the generative adversarial network (GAN) which show some promising applications for image reconstruction, but this is still an active area of research.
Applicability of findings to review question
Given the evidence of multiple successful applications of deep learning to perfusion imaging presented in this review, the value of this evidence, although significant, remains in research applications with limited clinical use. Wider use of such applications, based on the evidence presented, could have a significant impact on patients with known or suspected CAD. Reducing scan time, radiation dose, and human resources while increasing diagnostic accuracy can save patients time and result in better management of their coronary artery disease, with significant mortality and morbidity benefits.
Limitations
There are further applications and techniques which have been used without full publication of clinical studies and were not reported in this review, given that its main scope is clinical applications of deep learning in perfusion imaging.
The published articles included in this review did not report the same performance metrics, which was a challenge for the meta-analysis process. One of the main observations was that some studies reported performance metrics for both stress and rest images, but others did not. As a result, only the highest performance score of the models for the stress images was reported in this meta-analysis.
Implications for practice
In this review, the evidence of successful deep learning applications in myocardial perfusion imaging has been presented. Most of the early studies used the standard MLP architecture on MPS imaging, but more recently CNN architectures have gained in popularity given their superior performance in image analysis, and deep learning applications have expanded to other perfusion imaging modalities, mainly stress perfusion CMR. Based on our meta-analysis of the relevant studies, the accuracy of deep learning has proven to be high in perfusion image classification for diagnosing CAD, compared to human readers and the conventional diagnostic procedures performed in routine clinical practice.
Implications for research
The successful preliminary applications of deep learning in stress perfusion CMR have opened a wide spectrum of potential applications to improve accuracy, accelerate scan times, and predict outcomes. Despite the high performance of deep learning in MPS image classification, which has shown promise for more than two decades, it is still not widely used in clinical practice.
As a result, the findings of this review should encourage more clinical studies and trials to assess the performance and accuracy of deep learning in cardiac perfusion imaging using the latest techniques, in order to obtain clinical validation and begin using this technology in clinical applications of perfusion imaging. Furthermore, other perfusion imaging modalities which are still in their infancy, such as rCTP and MCE, can also benefit from deep learning applications.
Funding
This research received no grant from any funding agency in the public, commercial, or not-for-profit sectors. This research has received a grant from the Wellcome Trust [222678/Z/21/Z].
Authors contributions
EA and UD performed the systematic search, data extraction, and manuscript writing. EA and CS contributed to data analysis and statistics. EA and AC contributed to the discussion and conclusion. AC contributed to final proofreading as the senior author. All authors have reviewed and approved the submission.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Site M0080
L.C. McNeill, D.J. Shillington, G.D.O. Carter, J.D. Everest, E. Le Ber, R.E.Ll. Collier, A. Cvetkoska, G. De Gelder, P. Diz, M.-L. Doan, M. Ford, R.L. Gawthorpe, M. Geraga, J. Gillespie, R. Hemelsdaël, E. Herrero-Bervera, M. Ismaiel, L. Janikian, K. Kouli, S. Li, M.L. Machlus, M. Maffione, C. Mahoney, G. Michas, C. Miller, C.W. Nixon, S.A. Oflaz, A.P. Omale, K. Panagiotopoulos, S. Pechlivanidou, M.P. Phillips, S. Sauer, J. Seguin, S. Sergiou, and N.V. Zakharova
Operations
During International Ocean Discovery Program (IODP) Expedition 381, cores were recovered from one hole at Site M0080 (Figures F1, F2).
Drilling and coring in Hole M0080A was completed to 534.1 meters below seafloor (mbsf) in 13 days, achieving an average core recovery of 84% (see Table T1 for details). The Fugro Corer, in both push and percussive modes, was used to collect the upper 141 m of sediment. The Fugro Extended Marine Core Barrel (FXMCB) was used to complete the lower 393 m of the borehole. Wireline logging operations were then conducted over 2 days.
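For readers unfamiliar with the recovery metric, it is simple arithmetic over the cored interval; a sketch in Python (the recovered length below is illustrative only, not a value from Table T1):

```python
def core_recovery_pct(recovered_m: float, cored_m: float) -> float:
    """Core recovery = recovered length / cored interval, as a percent."""
    return 100.0 * recovered_m / cored_m

# Hole M0080A overall: ~84% recovery over the 534.1 m cored interval
print(round(core_recovery_pct(448.6, 534.1), 1))  # 84.0 (illustrative)
```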
Transit to Site M0080
Transit to Site M0080 began in the early hours of 2 December 2017. While in transit, four surface seawater samples were collected.
Coring operations
During the positioning process for Hole M0080A on 2 December 2017, a water depth of 348.8 m was established following a sound velocity profile, and the seabed frame (SBF) and pipe were lowered to within 50 m of the seafloor. After positioning was complete, the SBF and pipe were lowered to the seafloor and coring commenced using the Fugro Corer in push mode. A seabed/water interface sample was collected. The corer was dropped through the drill string in free fall and penetrated 1.5 m with a recovery of 1.4 m of sediment. Coring continued uninterrupted for the rest of 2 December and throughout 3 December, with exceptional progress made at a rate of >100 m/day. A switch to the percussive mode of the Fugro Corer occurred at 84 mbsf. The ability to alternate between push and percussive coring modes enabled nonrotary coring to greater depths than would have otherwise been possible. A temperature cone penetration test (CPT) measurement was made close to 100 mbsf to acquire in situ temperature and friction/strength information on the formation. The change to the FXMCB was made on the evening of 3 December at 141 mbsf, when the efficacy of the Fugro Corer had dropped and the sediments were deemed firm enough to withstand the effects of rotary coring. A second temperature CPT measurement was taken around 210 mbsf. Coring continued at a very high rate until the morning of 5 December. At that time, the ground conditions became considerably more challenging, encountering alternating beds of sands and gravels, which slowed the rate of advance to 29.6 m/day. These slow rates continued and worsened until 12 December, with the lowest advance of 20.0 m/day in Hole M0080A on 9 December. Coring continued uninterrupted during this period.
The rate of advance improved significantly throughout 12 and 13 December to 85.4 m of advance because of favorable ground conditions. The drilling rate slowed again on 14 December as conditions became more challenging.
The final core from Hole M0080A was recovered at 0330 h (Eastern European Time [EET]) on 15 December, achieving a final depth of 534.1 mbsf. Despite a small number of discrete intervals where recovery was low, in general good recovery (84%) was achieved in Hole M0080A. The borehole was cored 65 m deeper than anticipated because the scientific goal for this borehole (basement) was deeper than the initial estimated target depth. Basement sensu stricto was not reached; however, the deepest cores recovered very coarse grained conglomerates thought to immediately overlie basement.
In general, seawater was used as the drilling medium; however, bentonite was used for core Runs 64-103 and 127-146.
Logging operations
In preparation for logging, Hole M0080A was stabilized by displacement with weighted bentonite mud (8.8 lb/gal). Standalone logging tools were used (because of the loss of stacked tools during Site M0078 operations), and they were systematically run with a sinker bar fitted above each tool to help its descent. Logging the hole started through the pipe with the spectral gamma ray (ASGR512) tool and then continued in three depth stages where the following tools were planned to be deployed in the open hole: magnetic susceptibility and conductivity (EM51), sonic (2PSA-1000), dual induction (DIL45), and ASGR512. All tools were run with the primary winch (GV550). Hole M0080A logging started on 15 December 2017 at 0330 h with the drill bit pulled up to 533.1 m drilling depth below seafloor (DSF) (just above the base of the hole) to log through the pipe. The ASGR512 tool did not encounter any difficulty going down through the bentonite mud and in the pipe; it passed the drill bit to reach the bottom of the hole, and logging up commenced. After recovery of the ASGR512 tool, the drill bit was pulled up to 365 m DSF to log in the open hole for the first depth stage (365-533 m DSF). Bentonite mud (8.8 lb/gal) was circulated to stabilize the hole. The EM51 tool was deployed and passed the drill bit, but difficulties were encountered in the open hole from 410 to 425 m wireline log depth below seafloor (WSF), with several losses of tension. After borehole conditions prevented the tool from passing beyond ~425 m WSF, EM51 data were collected from this depth to the drill bit, and the tool was recovered on deck to perform a wiper trip downhole. The drill string was lowered to the bottom of the hole and then pulled up to 430 m WSF to avoid the interval where losses of tension were observed during the previous run. The EM51 and 2PSA-1000 tools were run successfully, reaching a maximum depth of ~530 m WSF and collecting data from this depth to the drill bit at 430 m WSF. However, when deploying the ASGR512 tool, losses of tension were observed ~10 m below the drill bit. The decision was made to move to the next depth stage, and the drill bit was lowered to 460 m DSF to clean the borehole walls and then pulled to 230 m DSF for the second depth stage. In this depth interval, the EM51 tool was run down the hole but could not pass beyond ~430 m WSF. Data were therefore collected with the EM51 tool from 430 to 230 m WSF. The 2PSA-1000 tool was deployed but could not pass beyond ~370 m WSF; therefore, data were collected up from this depth to the drill bit at 230 m DSF. After recovery of the 2PSA-1000 tool, the DIL45 tool and then the ASGR512 tool were deployed successfully in a similar depth range to the 2PSA-1000 tool, collecting data from 370 to 230 m WSF.

Figure F1. Corinth rift with primary rift-related faults (both active and currently inactive), multibeam bathymetry of the gulf, and Expedition 381 drill sites. Offshore fault traces are derived from Nixon et al. (2016), building on Bell et al. (2009) and Taylor et al. (2011). Onshore fault traces are derived from Ford et al. (2007, 2013) and Skourtsos and Kranis (2009). Bathymetry data provided by the Hellenic Centre for Marine Research and collected for R/V Aegaeo cruises (Sakellariou et al., 2007).

Figure F2. Site M0080 shown with Maurice Ewing Line 22 (Taylor et al., 2011) and interpretations from Nixon et al. (2016) (colored dotted lines and text). CDP = common depth point, TWT = two-way traveltime. Inset: seismic line and drill site locations.
The third and final depth stage was logged after the drill bit had been pulled up to 50 m DSF, and the bentonite present in the borehole was displaced with seawater. This change in fluid was intended to improve borehole stability for the expected lithologies in this shallow interval. Once deployed, the first tool (EM51) could not pass beyond ~220 m WSF downhole. The EM51, 2PSA-1000, DIL45, and ASGR512 tools were all deployed with data acquisition from ~220 to 50 m WSF. Logging operations were completed at 1235 h on 17 December.
Demobilization
Following the completion of logging operations in Hole M0080A, the European Consortium for Ocean Research Drilling (ECORD) Science Operator (ESO) team continued demobilizing the containerized laboratories and offices. All operations ceased at 1235 h on 17 December 2017, and the remaining pipe was tripped out of the hole. Transit to Corinth, Greece, took place overnight, with the D/V Fugro Synergy arriving dockside at 0600 h on 18 December.
Lithostratigraphy
Site M0080 is divided into four lithostratigraphic units based on a combination of observed facies associations (FA; see the Expedition 381 facies associations chapter [McNeill et al., 2019a]; Table T2), micropaleontology, seismic facies, and physical properties. In the following sections, we describe the units and subunits at Site M0080 (Table T3).
Unit and subunit description
Site M0080 was drilled in the Alkyonides Gulf to investigate the rift stratigraphy and evolutionary history in the eastern part of the Corinth rift (Figure F1). The succession encountered in Hole M0080A is divided into four main lithostratigraphic units (Figure F3). Unit 1 has similar characteristics to those of Unit 1 at Sites M0078 and M0079 and is divided into 11 subunits based on alternations between dominantly bioturbated homogeneous and bedded greenish gray and gray mud (FA1 and FA6) and bedded and laminated mud (FA2, FA3, FA4, and FA5). Unit 2 is divided into five subunits; the upper three subunits are dominated by light gray bioturbated mud (FA12), and the lower two have greater variability of facies and grain size, including ophiolitic-rich conglomerates, paleosols, and highly bioturbated mudstone with shallow-water foraminifer assemblages. Unit 2 is probably partly time-equivalent to the lower part of Unit 1 at Sites M0078 and M0079. Two main facies associations dominate Unit 3. The upper half has a high proportion of red-brown coarse clastic sediment (FA7) that passes downhole into distinctive red-brown mud and silt (FA8). The base of Unit 3 comprises a range of facies associations, including shelly, bioturbated mudstone (FA17). Unit 4 is dominated by shelly laminated to bedded carbonates (FA15 and FA16), with the lowermost two cores consisting of pebble-cobble conglomerates with abundant limestone clasts. Composition information was deduced from X-ray diffraction (XRD) analysis and smear slide observations. Units 1 and 2 are dominated by moderately sorted, silt- to clay-grade carbonate minerals, including low- and high-Mg calcite and dolomite, with occasional intervals dominated by aragonite (Figure F4). Minor terrigenous components include quartz, mica, and feldspar mineral grains. Ophiolite-related serpentinite minerals, biogenic components, and framboidal pyrite are present throughout the units. Units 3 and 4 include ophiolitic-derived material in all grain size fractions but most obviously in the coarser grain size range (sand to pebble grade). Serpentine minerals (represented mainly by chrysotile) dominate the fine grain size fraction in the upper half of Unit 3 and decrease below this interval. Heavy minerals, including zeolites and amphiboles, are sparsely present. Carbonate minerals are less abundant but maintain a relatively constant proportion with respect to quartz and mica. In Unit 3, clast compositions in conglomerates and pebbly sandstone are dominated by mafic/ultramafic lithologies with minor micritic limestone and red chert (from Section 381-M0080A-68R-2 to Section 98R-2CC; 256.2-350.7 mbsf). The very fine sand to mud fraction commonly contains abundant clay-sized Fe oxides that may be responsible for the sediment's reddish brown to pale yellow color. Unit 4 is dominated by shelly and silty limestone, and the base of the hole has conglomerates in which limestone clasts dominate (90%-95% of the clasts), with minor amounts of red chert and mafic/ultramafic clasts.
Tephra and cryptotephra intervals were found in Site M0080 cores. These intervals were identified by a combination of visual inspection and physical properties. An increase in Multi-Sensor Core Logger (MSCL) natural gamma radiation (NGR) intensity was usually observed in association with both visible tephra layers and cryptotephra intervals, and this relationship was used as the primary method for targeting further investigation. Distinct and visible tephra layers (e.g., Section 381-M0080A-56R-1, 131.0-133.5 cm; 210.310-210.335 mbsf) were usually a different color from the surrounding sediment, with a highly reflective character resulting from high concentrations of bubble wall shards in these layers. Distinct tephra layers were composed of very well sorted silt-size grains and commonly preserved an increase in grain size toward a coarser (very fine sand size) basal layer. The majority of the tephra identified in the cores were cryptic tephra intervals and were identifiable only through methodical sampling of NGR intensity peaks and subsequent visual examination of sampled material using optical microscopy. Cryptotephra were also identified incidentally during routine micropaleontological work by observation of glass shards.
Subunit 1-1

Subunit 1-1 consists entirely of FA1 homogeneous mud of olive gray color and a high degree of bioturbation (bioturbation intensity [BI] = 4-6). The subunit contains sparse organic layers and millimeter- to centimeter-scale organic fragments, as well as a few shell fragments from Section 381-M0080A-3P-1 downhole. Only one interval has a silt to very fine sand grain size (Section 2P-2, 105-107 cm; 4.05-4.07 mbsf). Close to the basal boundary, around the transition from FA1 to FA5, are distinct Teichichnus burrows.

Table T2. Facies associations.
FA2 Greenish gray mud with dark gray to black mud to sand beds and laminations
FA3 Light gray to white laminations alternating with mud and silt beds
FA4 Laminated greenish gray to gray mud with mud beds
FA5 Greenish gray mud with homogeneous centimeter-thick gray mud beds
FA6 Green bedded partly bioturbated mud, silt, and sand
FA7 Clast-supported sandy conglomerates and pebbly reddish brown sand with silt
FA8 Reddish brown to brownish gray mud and/or silt, including mottled textures and rootlets
FA9 Green-gray, often pebbly sandstone/siltstone
FA10 Interbedded mud/silt and decimeter-thick sand beds
FA11 Interbedded mud/silt and centimeter-thick sand beds
FA12 Light gray to buff homogeneous to weakly stratified bioturbated mud
FA13 Contorted bedding and mud-supported sand and conglomerates
FA14 Greenish gray pebbly silt and clast-supported fining-upward conglomerates
FA15 Greenish to buff bioclastic laminated siltstone to bedded fine sandstone, including bioturbation, ostracods, and rootlets
FA16 Greenish to buff bedded and bioturbated bioclastic sandstone to mudstone
FA17 Greenish laminated to faintly bedded/homogeneous fossiliferous mudstone

Subunit 1-2

The top of Subunit 1-2 is marked by a change from FA1 (above) to FA5 (below) and a corresponding downhole decrease in NGR and increase in the variability of magnetic susceptibility values (Figure F3). The lower boundary occurs at a sharp change from FA3 (above) to FA1 (below) (Figure F5). One sand-homogeneous mud couplet >10 cm thick occurs in Subunit 1-2.
Subunit 1-3
Interval: 381-M0080A-7P-2, 13 cm, to 12P-1, 50 cm
Depth: 22.83-35.80 mbsf (12.97 m thick with 0.2 m of missing core)

The top of Subunit 1-3 is marked by a sharp change from finely laminated FA3 (above) to green bioturbated FA1 mud (below) (Figure F5). The bottom of the subunit is marked by a change from FA1 to FA4, with a diffuse zone of increasing bedding definition.
Subunit 1-3 is divided into three parts, with boundaries in Sections 381-M0080A-8P-1, 4 cm (25.94 mbsf), and 8P-3, 109 cm (29.99 mbsf). The upper part is composed of greenish gray homogeneous mud (FA1) with rare shell fragments. The middle part is composed of bedded mud (FA5) above creamy white laminated mud (FA3) with some thin silt beds. The lower part is composed of homogeneous mud (FA1) with common shell fragments and rare discrete burrows, as well as some faintly bedded greenish mud (FA6). The homogeneous parts of the subunit are completely bioturbated, whereas the middle part is sparsely bioturbated.
Subunit 1-4
Interval: 381-M0080A-12P-1, 50 cm, to 17P-2, 100 cm
Depth: 35.80-55.70 mbsf (19.90 m thick including 0.63 m of missing core)

The top of Subunit 1-4 is characterized by a transition from homogeneous mud (FA1; above) to bedded and laminated mud (FA5). The bottom boundary is set at the base of a short interval (12 cm thick) of laminated sediment (FA3) that lies above FA1 mud in Subunit 1-5. Subunit 1-4 contains three parts. The upper part (Sections 381-M0080A-12P-1, 50 cm, to 13P-3, 24 cm; 35.80-43.16 mbsf) is composed of greenish gray to gray (GLEY 1 6/5GY-6/N) bedded and laminated mud intervals that correspond to FA5. The sediment includes occasional black organic-rich silty laminations and is sparsely bioturbated. The sporadic presence of shell debris is also noted. The middle part is a transition to FA2 sediment that continues to Section 16P-1, 37 cm (52.07 mbsf). This middle part consists of mud and centimeter-thick fining-upward silt with organic-rich layers and is also marked by bioclasts and mottled pyritized features. The lower part is composed of slightly bioturbated alternating greenish gray to light greenish gray (GLEY 1 6/5GY to 7/10Y) FA11 and FA5 sediment with generally higher sand proportions (as much as 20%). The basal sedimentary interval of this subunit (below Section 17P-2, 88 cm; 55.58 mbsf) consists of a thin interval of well-laminated FA3 sediment, including millimeter-scale pale gray or white laminations, which are interbedded with moderately bioturbated mud beds with common discrete burrows.

Subunit 1-5

The top of Subunit 1-5 is marked by a sharp transition from FA3 white thin laminations (above) to FA1 green bioturbated mud (below) and is characterized by the appearance of marine microfossils and shell fragments (see Micropaleontology). The base of this subunit is strongly bioturbated, and the boundary with Subunit 1-6 is marked by a transition from FA1 bioturbated mud (above) to FA4 laminated mud (below).
Subunit 1-6
Interval: 381-M0080A-19P-3, 56 cm, to 25P-2, 61 cm
Depth: 66.16-83.11 mbsf (16.95 m thick including 1.64 m of missing core)

The top of Subunit 1-6 represents a change from homogeneous mud (FA1; above) to laminated mud (FA4; below) and is marked by the corresponding appearance of diffuse mud laminations that are initially intensely bioturbated. The lower boundary of the subunit is marked by a change from bedded mud and very fine sand (FA11; above) to homogeneous mud (FA1; below).
The upper part of Subunit 1-6 is composed of poorly laminated and bedded (centimeter-scale) light gray to greenish gray mud. The mud is intensely bioturbated near the top boundary of the subunit, and pyrite is scattered throughout. The middle of the subunit consists of gray to greenish gray mud (FA5) with centimeter- to decimeter-scale bedding in Sections 20P-2, 78 cm, to 24P-2, 90 cm (69.44-78.70 mbsf). Pyrite particles are scattered through this middle section.
The lower part includes an interval of light gray to buff weakly laminated mud with abundant bioturbation (FA12) (5Y 7/1 to GLEY 1 7/10Y) in Sections 381-M0080A-24P-2, 90 cm, to 24P-3, 87 cm (77.20-80.18 mbsf). The mud includes individual gastropods and other shell fragments, together with discrete burrows superimposed on background burrow mottling. Below this interval, centimeter-scale silt to very fine sand beds occur, interbedded in the greenish gray mud of FA5, FA2, and FA11. Numerous black (organic-rich) laminations and beds were observed in the lower part of the subunit, as well as abundant shell fragments, pyrite, and discrete burrowing.
Subunit 1-7
Interval: 381-M0080A-25P-2, 61 cm, to 28V-1, 70 cm
Depth: 83.11-89.20 mbsf (6.09 m thick including 0.10 m of missing core)

The top boundary of Subunit 1-7 is marked by a transition from FA11 (above) to FA1 (below). The base of the subunit is marked by a relatively sharp color change in a bioturbated contact between darker FA1 greenish gray mud (above) and FA12 light gray mud (below).
Subunit 1-8
Interval: 381-M0080A-28V-1, 70 cm, to 31V-1, 140 cm
Depth: 89.20-103.10 mbsf (13.9 m thick including 0.42 m of missing core)

The top of Subunit 1-8 appears sharp and is marked by a change from FA1 (above) to FA12 (below) and a corresponding decrease in NGR. The lower boundary occurs at a sharp change from FA2 (above) to FA6 (below) and is marked by a downhole increase in bioturbation and a decrease in magnetic susceptibility.
Subunit 1-8 is composed of FA12 mud down to Section 381-M0080A-30V-2, 92 cm (99.42 mbsf). This interval is characterized by thick light gray to buff homogeneous to weakly stratified mud with moderate to intense bioturbation (BI = 3-5; low to moderate diversity, including Teichichnus and Planolites) (Figure F6A). The lower part of the subunit shows a downhole increase in bedding and then lamination definition, from greenish gray mud with homogeneous centimeter-thick mud beds (FA5) with moderate levels of bioturbation to more organic-rich greenish gray laminated mud (FA2). The latter exhibits relatively sparse levels of bioturbation.

Figure F5. Facies change from FA3 (above) to FA1 (below) (22.83 mbsf), Hole M0080A. This illustrates the change in facies between an isolated/semi-isolated interval (Subunit 1-2; above) and a marine interval (Subunit 1-3; below). Top of core image is at 22.38 mbsf.
Subunit 1-9
Interval: 381-M0080A-31V-1, 140 cm, to 34V-1, 0 cm
Depth: 103.10-110.40 mbsf (7.3 m thick with 0.3 m of missing core)

The top of the subunit is marked by a change from FA2 to FA6, with the uppermost 38 cm composed of centimeter-scale bedded mud with uncommon bioturbation. The base is marked by a change from FA1 to FA12 between cores.
Subunit 1-9 is composed of homogeneous bioturbated mud, which is completely bioturbated (FA1) beneath the upper 38 cm, with common to abundant scattered shell fragments and pyrite. Discrete bioturbation includes vertical, inclined, and horizontal burrows. Macrofossils include intact gastropods and oyster fragments.
Subunit 1-10
Interval: 381-M0080A-34V-1, 0 cm, to 38V-2, 112 cm
Depth: 110.40-130.92 mbsf (20.52 m thick including 0.39 m of missing core)

The upper boundary of this subunit marks a change from FA1 mud (above) to FA12 homogeneous to weakly stratified mud (below). The bottom boundary marks a change from FA12 mud containing decimeter-thick fining-upward sand to mud beds to greenish gray FA6 mud of the underlying subunit.
Subunit 1-10 is composed of FA12 homogeneous to weakly stratified mud with variations in color. Colors range from pale greenish gray (typically between GLEY 1 7/10Y and 6/5GY) to more buff. Bioturbation is pervasive with extensive mottling, but superimposed on these features are discrete ichnofabrics including 0.5-1.0 cm diameter vertical, inclined, and horizontal burrows of Teichichnus, Palaeophycus, and a Chondrites-like fabric (Figure F6B). Plant and woody fragments also occur. Pyrite is scattered throughout.
Figure F6. A. FA12 light gray to buff bedded and bioturbated mud from Subunit 1-8 showing millimeter- to centimeter-scale burrows, Hole M0080A. Inset shows Teichichnus burrows with internal concave-up, concentric laminae. Top of core image is at 93.60 mbsf. B. FA12 highly bioturbated mud from Subunit 1-10 showing Chondrites burrow system with few visible branching tunnels. Top of core image is at 126.60 mbsf.
Subunit 1-11
The top of Subunit 1-11 is marked by a shell-rich bed and a change from FA12 mud (above) to FA6 mud (below). The lower boundary with Unit 2 is marked by an irregular, probably erosive contact between FA1 bioclastic greenish gray mud (above) and FA12 light greenish gray mud (below) (Figure F7).
Subunit 1-11 consists mainly of FA1 greenish gray homogeneous mud. The majority of the succession is completely bioturbated (BI = 6) and is characterized by abundant scattered shells such as gastropods and oysters that are occasionally preserved intact but also as fragments (<1 cm). Centimeter-thick shell-rich beds are found in the homogeneous mud (e.g., Sections 381-M0080A-38V-3, 90 cm; 39V-1, 120 cm; 39V-1, 132 cm; and 39V-2, 108 cm [132.2, 134.20, 134.32, and 135.58 mbsf, respectively]). In general, pervasive bioturbation results in a mottled texture, with relatively few discrete burrow forms seen. Recognized discrete burrows include subvertical burrows and irregular feeding trace patches.
Unit 2
Subunit 2-1
The top of Subunit 2-1 is marked by a sharp erosive surface separating FA1 bioclastic greenish gray mud (above) from FA12 light greenish gray faintly bedded mud (below) (Figure F7). The lower boundary of this subunit is marked by the appearance of abundant shell fragments and shallow-marine foraminifers in underlying Subunit 2-2, although the general facies continues to be FA12.
Subunit 2-2
Interval: 381-M0080A-47R-1, 5 cm, to 48R-1, 13 cm
Depth: 164.05-169.13 mbsf (5.08 m thick including 1.35 m of missing core)
The top of Subunit 2-2 is associated with a slight color change (gray to greenish gray) in FA12 mud and the occurrence of shell beds and a shallow-marine foraminifer assemblage (see Micropaleontology). The lower boundary also occurs at a gradual color change (from greenish gray to gray) in FA12 mud and is marked by sharp-based fine sand containing shell fragments.
Subunit 2-2 is composed entirely of light gray to buff homogeneous to weakly stratified mud (FA12). It is highly bioturbated (BI = 5-6) and has a moderately diverse trace fossil assemblage including Teichichnus, Planolites, and Palaeophycus. Shell fragments and pyrite are scattered in small amounts throughout the unit.
Subunit 2-3
Interval: 381-M0080A-48R-1, 13 cm, to 58R-1, 72 cm
Depth: 169.13-218.22 mbsf (49.09 m thick including 7.01 m of missing core)
The top of Subunit 2-3 coincides with the base of a shelly, very fine sand in FA12. The lower boundary corresponds to a sharp color and facies change between FA3 light greenish gray mud (above) and FA5 dark greenish gray mud (below).
Figure F7. Erosive boundary between Units 1 and 2 at 27 cm (136.96 mbsf), Hole M0080A. Boundary separates FA1 greenish gray mud in Unit 1 (above) from FA12 white/light gray bioturbated mud in Unit 2 (below). Top of core image is at 136.80 mbsf.
Subunit 2-3 is dominated by FA12 and is divided into three parts. The top part (169.13-200.63 mbsf) consists almost completely of FA12, with the exception of a short FA5 interval (195.95-197.00 mbsf) that contains abundant to completely bioturbated light gray to greenish gray mud with discrete traces of Chondrites, Skolithos, Teichichnus, and Nereites(?). Sparse scattered shell fragments including gastropods also occur in this part of the subunit. Visible scattered pyrite occurs between 174.00 and 178.6 mbsf.
The middle part of the subunit (200.63-206.03 mbsf) contains whitish finely laminated mud (FA3) at its top and base, whereas the central section is characterized by alternation of FA4, FA5, and FA11, with very fine to medium sand and scattered shell fragments. The bottom part of the subunit (209.00-218.22 mbsf) is composed of highly to completely bioturbated buff-colored mud (FA12). A short interval of brownish silt (210.31-210.335 mbsf) contains abundant tephra glass shards. At the base of the subunit is 15 cm of FA3 whitish laminated mud above a change to FA5 bedded greenish mud in underlying Subunit 2-4.
Subunit 2-4
The top boundary of Subunit 2-4 is sharp and is marked by a change from FA3 (above) to FA5 (below). The lower boundary occurs at a sharp change from FA15 (above) to greenish gray shelly mud and fine sand of FA12 (below).
More than half of Subunit 2-4 consists of FA11 and FA5, and the remainder of the subunit comprises FA10 and FA12. This subunit is divided into two main parts with a boundary at the top of Section 381-M0080A-61R-2 (229.18 mbsf). The upper part is characterized by greenish gray mud with homogeneous centimeter-thick mud beds (FA5) alternating with bedded mud/silt and centimeter- to decimeter-thick sand beds (FA10 and FA11) and light gray to buff homogeneous to weakly stratified mud (FA12). The lower part is characterized by bedded mud, silt, and centimeter- to decimeter-thick sand and conglomerate beds (FA10/FA11) interbedded with light gray to buff homogeneous to weakly stratified mud (FA12). The dark bioturbated mud at the base of the subunit contains a shallow-marine foraminifer assemblage.
Subunit 2-5
The top boundary of Subunit 2-5 is sharp and marked by a change from FA15 (above) to FA12 (below). The lower boundary is also sharp and marked by a facies change from FA15 beige/greenish laminated silt (above) to FA7 conglomerates in Unit 3 (below) (Figure F8).
FA14 and FA8 contribute more than 50% of Subunit 2-5, together with lesser, approximately equal amounts of FA12, FA10, and FA7. The upper part of the subunit is characterized by light gray to buff homogeneous to weakly stratified mud (FA12) that overlies interbedded mud/silt and decimeter-thick sand beds (FA10). Below the top of Core 381-M0080A-65R (246.70 mbsf), the subunit is dominated by greenish gray pebbly silt and clast-supported fining-upward conglomerates rich in ophiolite-derived clasts (FA7 and FA14) that are intercalated with FA8 gray to brown mud and silt that include mottled textures and rootlets. The lowest part of the subunit comprises clast-supported sandy conglomerates and pebbly sand (FA7) rich in ophiolite clasts that overlie bioturbated to parallel-laminated shelly carbonate silt and sand that are possibly rooted in Section 68R-1.
Figure F8. Unit 2/3 boundary (256.82 mbsf) marked by an abrupt change from FA15 beige/greenish laminated silt in Unit 2 (above) to FA7 red/brown pebble conglomerate in Unit 3 (below), Hole M0080A. Top of core image is at 256.50 mbsf.
Unit 3
Subunit 3-1
The top of Subunit 3-1 is marked by an abrupt grain size change from FA15 beige/greenish laminated silt (above) to FA7 pebble conglomerates (below) (Figure F8). The lower boundary lies in a core gap of more than 4 m, with conglomerates (FA7) above and red-brown silt (FA8) below.
Subunit 3-1 consists predominantly of clast-supported, poorly to moderately sorted pebble conglomerates (FA7) alternating with minor FA8 poorly stratified sand and silt. FA7 conglomerate intervals reach thicknesses of as much as 6 m in places, and true thicknesses are likely greater because around half of the cores are incomplete owing to poor core recovery. The lower part of Subunit 3-1 (the last 18 m) is dominated more by FA8 sand and silt than by the conglomerates that dominate the upper part of the subunit; only a few conglomerate beds, each a few tens of centimeters thick, are present in this lower part. The lower boundary of Subunit 3-1 corresponds to the last appearance of a submeter-thick FA7 conglomerate bed (350.00-350.79 mbsf). This interval contains some cobbles of limestone.
Subunit 3-2
Subunit 3-2 consists predominantly of FA8 mud, silt, and fine sand with scattered pebbles and granules and small (centimeter diameter) calcrete nodules. This largely homogeneous succession of red-brown mud, silt, and fine sand is punctuated by rare 30 cm thick FA7 conglomerate beds with a mean clast size of fine to medium pebbles. The mud and silt are commonly highly bioturbated with large (millimeter- to centimeter-scale) horizontal, vertical, or subvertical burrows. Rootlets, calcretes, and mottled textures and color variations indicate various degrees of pedogenesis. Some subvertical infilled fissures were also observed in Section 381-M0080A-117R-3, 31 cm (422.00 mbsf). The reddish brown mudstone in the lower part of Subunit 3-2 is interrupted by intervals of homogeneous gray mud containing scattered shell fragments and carbonate nodules.
Subunit 3-3
Subunit 3-3 is divided into two parts with a boundary at Section 381-M0080A-125R-3, 66 cm (453.39 mbsf). The upper part comprises homogeneous dark greenish gray highly bioturbated mudstone with abundant shell debris throughout and rare millimeter-thick siltstone (FA1). The uppermost 50 cm of the lower part of Subunit 3-3 is characterized by centimeter-thick beds of dark gray mudstone interbedded with centimeter-thick beds of laminated very fine sandstone that fines upward into silty mudstone with millimeter-scale ripples (FA17). In places, these beds are highly bioturbated. The lower part of this subunit consists of dark gray bioturbated fine sandstone and siltstone with abundant bioclasts, limestone nodules (1 cm average diameter), and centimeter- to decimeter-thick graded sandstone beds (FA17).
Unit 4
Subunit 4-1
Interval: 381-M0080A-126R-3, 44 cm, to 137R-1, 73 cm
Depth: 458.40-502.01 mbsf (43.61 m thick with 11.96 m of missing core)
The top of Subunit 4-1 is marked by a decrease in bioturbation intensity and a marked color and facies change from FA17 dark gray (GLEY 1 5/10Y) sandstone and mudstone (above) to FA16 very pale brown (10YR 6/4) very fine sandstone and siltstone (below). The base of the subunit is marked by 1 m of fossiliferous homogeneous bioturbated mudstone and a change from FA17 to FA16.
Subunit 4-1 consists of fully lithified fossiliferous mudstone, siltstone, and sandstone with a distinctive very pale brown color associated with minor intervals of dark gray-green mudstone. This subunit is divided into two main parts with a boundary at Section 381-M0080A-130R-3, 76 cm (478.76 mbsf). The upper part is predominantly composed of centimeter-scale bedded to millimeter-scale laminated very pale brown calcareous siltstone (FA15) with a small amount of centimeter- to decimeter-bedded calcareous siltstone and sandstone (FA16) and some greenish fossiliferous bedded mudstone (FA17). Ostracods and scaphopods are commonly found in this part of the subunit, with some isolated ostracod grainstone beds. Centimeter-thick scaphopod beds also occur throughout.
The lower part of the subunit is composed of both thinly bedded/laminated calcareous siltstone and sandstone (FA15) and centimeter- to decimeter-bedded calcareous siltstone and sandstone (FA16). These two facies associations are present throughout in approximately equal amounts and are interspersed on a meter scale. An interval (~1.5 m) of dark gray brecciated mudstone and siltstone (FA13) occurs near the top of this lower part. This slumped/brecciated interval contains mud intraclasts, abundant shelly fragments, and some limestone granules. The basal ~1.5 m of the subunit is characterized by FA17 mudstone, siltstone, and sandstone. Shell fragments are common throughout the lower part of the subunit, most commonly subcentimeter-sized gastropods, bivalves, and some ostracods.
Subunit 4-2
Interval: 381-M0080A-137R-1, 73 cm, to 143R-1, 134 cm
Depth: 502.01-525.14 mbsf (23.13 m thick with 7.01 m of missing core)
The top of Subunit 4-2 is marked by a change from FA17 to FA16 with a gradual color change and an increase downhole in fossil content. The base is placed at the appearance of red-brown mottling as the rock changes downhole from FA17 green nodular mudstone (above) to FA8 red-brown mudstone (below).
Subunit 4-2 is composed of disturbed and variably bedded calcareous sandstone and siltstone (FA16) that varies from light brown to dark gray-green. Small intervals of laminated siltstone (FA15) and a 1.3 m interval of contorted calcareous siltstone and sandstone (FA13) also occur. The rapid and chaotic changes in bedding dip suggest that this entire interval may be slumped. Subunit 4-2 is also affected by natural faulting and has a significant drilling-induced deformation (DID) overprint (see Structural geology). In the upper part, the sediment is commonly to abundantly bioturbated (BI = 4-5). The lower part of Subunit 4-2 is characterized by relatively undisturbed FA17 greenish gray homogeneous mudstone with common carbonate concretions (average diameter of approximately 1 cm) and rare bioturbation (BI = 0-2 and occasionally 3). Minor zones of brown faintly bedded siltstone (FA16) also occur. Shell fragments, gastropods, and bivalves are common throughout the subunit.
Subunit 4-3
Interval: 381-M0080A-143R-1, 134 cm, to base of Hole M0080A
Depth: 525.14-534.20 mbsf (9.06 m thick including 1.00 m of missing core)
The top of Subunit 4-3 marks a transition from FA17 green nodular mudstone (above) to FA8 red-brown mudstone (below). The bottom boundary corresponds to the base of Hole M0080A and is set in deposits of FA7.
Subunit 4-3 contains three distinct intervals whose boundaries are not well defined because of poor core recovery. The upper part, to Section 381-M0080A-144R-1, 0 cm (527.3 mbsf), comprises green to reddish brown mudstone (FA8) that becomes progressively more reddish brown and increasingly mottled downhole. Bioturbation increases downhole, and limestone nodules (calcrete nodules?) also become more numerous and larger, even merging to form a complete carbonate layer in Section 143R-2, 103 cm (526.33 mbsf). Below this layer, to Section 145R-1, 144 cm (530.84 mbsf), siltstone to very coarse gray to gray-green pebbly sandstone (FA9) dominates, with intervals of pale brown siltstone (FA16). FA9 siltstone and sandstone are homogeneous to poorly bedded, with some centimeter- to decimeter-thick fining-upward beds. No fossils or bioturbation were seen. Finally, the deepest part of Subunit 4-3, below Section 145R-1, 144 cm (530.84 mbsf), comprises well-lithified pebble-cobble conglomerates (FA7) that are clast supported and poorly sorted. Clasts are dominated by subangular to subrounded limestone with highly altered (deep rusty brown) ultrabasic rocks and probable intraclasts of well-lithified coarse-grained pebbly sandstone (Figure F9).
Interpretation of Hole M0080A
In Hole M0080A, deep-water turbiditic and hemipelagic deposits dominate Unit 1. The character of alternating facies associations in Unit 1 is similar to that observed in Unit 1 at Sites M0078 and M0079 and is interpreted in a similar manner. The alternating subunits are thus provisionally identified as representing marine and isolated/semi-isolated basinal environments, with good correlation to micropaleontology results (see Micropaleontology). The upper part of Unit 2 has a similar character to that at Sites M0078 and M0079 in that it is dominated by highly bioturbated mud (FA12) but is likely to be diachronous between the sites, with at least some of the upper part of Unit 2 at Site M0080 being time-equivalent to the lower parts of Unit 1 at Sites M0078 and M0079. In contrast to the upper part of Hole M0080A (Unit 1 and the upper part of Unit 2), the lower part of Unit 2 (Subunits 2-4 and 2-5) and Units 3 and 4 have markedly different facies associations and composition from those observed at the other sites. Subunits 2-4 and 2-5 contain ophiolite-rich conglomerates, paleosols, and highly bioturbated mudstone with shallow-water foraminifer assemblages that are interpreted to have been deposited in a nearshore to coastal plain setting subject to repeated progradation and transgression.
Figure F9. Clast-supported pebble to cobble conglomerate (FA7) at base of Hole M0080A (Subunit 4-3). Clast lithologies include various limestones, red chert, mafic/ultramafic rocks, and coarse pebbly sandstone. Top of core image is at 533.50 mbsf.
Unit 3 is predominantly composed of red-brown siltstone, sandstone, and conglomerates that contain rooted horizons and calcretes, suggesting an overall alluvial-fluvial depositional environment that contrasts markedly with the predominantly subaqueous deposition higher in the hole. This unit displays a coarsening-upward stacking pattern with siltstone in the lower part giving way to sand and conglomerates in the upper section, suggesting progradation of the depositional system. Clast compositions fluctuate in the unit, suggesting a mixed limestone and ophiolite source area. The lowermost part of Unit 3 contains shelly and bioturbated mudstone, siltstone, and sandstone deposited in an overall subaqueous, nearshore setting.
The upper part of Unit 4 in Hole M0080A is carbonate dominated, with thinly bedded and laminated shelly carbonates interpreted to have been deposited in an overall low-energy, shallow-water environment with limited clastic input. Phases of subaerial exposure and incipient soil formation are indicated by weakly rooted horizons and immature pedogenic calcretes, suggesting seasonally wet and dry conditions. Conglomerate beds and coarser sand represent flood events in a nearby alluvial-fluvial depositional system, with thinner sandstone representing distal crevasse and sheet-flood deposits and organic-rich, lignitic beds forming in coastal marshy areas. The pebble to cobble conglomerates at the very base of the hole reflect a high-energy alluvial fan-fan delta environment with a mixed limestone and ophiolite source area.
Structural geology
In Hole M0080A, DID and tectonic deformation were systematically recorded during core logging. The east-west seismic reflection profile through Site M0080 (Figure F2) shows the hole positioned at the eastern base of a high onto which the upper subhorizontal succession onlaps. The succession below this onlap surface has a moderate apparent dip to the east. No faults are imaged seismically through the drilled section. Four major lithostratigraphic units are defined at this site (see Lithostratigraphy); each unit has distinct rheological properties (see Physical properties) and structural characteristics. Units 1 and 2 are predominantly composed of mud, progressively compacted with depth. The upper part of Unit 3 is dominated by unconsolidated granule to pebble conglomerates with predominantly mafic/ultramafic ophiolitic and limestone clasts, whereas the lower part of Unit 3 and Unit 4 comprise a fully consolidated succession of siltstones and sandstones with some mudstones and conglomerates.
Observed tectonic structures
Bedding attitudes in the core are generally horizontal to subhorizontal. Subhorizontal bedding in cores persists at depth despite a very gentle increase in dip of seismic reflection horizons around Site M0080 (Figure F2).
Small-scale natural faulting is concentrated in specific depth intervals in this hole (Figure F10). Drilling-induced normal faulting was also observed but is not as well developed as at Site M0079 (see below). Natural faults were distinguished relatively easily from drilling-induced faulting using criteria described for Site M0078 (see Structural geology in the Site M0078 chapter [McNeill et al., 2019c]). The shallowest observed natural fault is in Core 381-M0080A-14P (45.30 mbsf).
Faults in Unit 1 have apparent normal displacements that commonly range from 2 mm to 3.2 cm. Offsets on faults in Units 3 and 4 could not be measured because of a general absence of bedding traces in the consolidated siltstones, sandstones, and conglomerates (Figure F10). However, many of the faults in Units 3 and 4 are thought to have displacements greater than the length of the fault trace observed in the core. Some have millimeter- to centimeter-thick fault gouges and mineralized surfaces that often show slickenlines (e.g., Figure F11). Slickenline orientations indicate a range of motion senses from dip-slip to oblique-slip to strike-slip (Figures F11, F12). A significant number of faults show strike-slip or oblique displacement.
In total, 82 faults were sampled for orientation analysis. The faults in the soft sediment of Unit 1 were measured using the same technique as for Sites M0078 and M0079 (see the Expedition 381 methods chapter [McNeill et al., 2019b]). The normal faults in Unit 1 appear to mainly fall into a conjugate set with an average NNE-SSW strike in the core reference frame, as illustrated by the stereographic projection plots (Figure F12A). As observed at Sites M0078 and M0079, the consistent fault orientations throughout Unit 1 suggest that our sampling is biased, and we mainly detect faults that are trending approximately perpendicular to the split-core surface. We may thus be undersampling faults that strike obliquely or subparallel to the core split face.
Fault orientations were measured in a different way in the consolidated rocks of Units 3 and 4 at Site M0080. Here, the working core tended to break cleanly along fault planes so that blocks of core could be temporarily lifted out, thus exposing the fault surfaces in three dimensions and allowing a more robust orientation analysis of the surface and measurement of any slickenlines (Figure F11B). We were thus able to sample faults even if the core split face is orientated obliquely or subparallel to the fault strikes. The results from these intervals therefore include a more complete and complex fault orientation distribution in Units 3 and 4 that is not biased by orientation relative to the core split face. We note in particular that fault orientations can vary between cores. In Unit 3, sampled faults have similar strikes in the same core and sometimes form conjugate sets of either strike-slip (e.g., Core 381-M0080A-103R) or oblique-slip faults (e.g., Core 108R) (Figure F12). In Unit 4, fault orientations are often more variable within each core, sometimes forming conjugate as well as bimodal fault sets. Fault orientations in Cores 129R, 139R, and 141R are highly variable and complex, probably due to rotation and development of intense biscuiting. Therefore, further filtering of the fault data will be needed before restoring the fault orientations to geographic north using the core paleomagnetic data.
The sampled normal faults show true dips ranging from 28° to 85° but with a clear modal dip of 50°-60° and a mean dip of 61° (Figure F13). The steeply dipping to subvertical faults are often linked with oblique-slip or strike-slip oriented slickensides and are therefore considered natural (Figure F12). Overall, the abundance and geometry of small normal faults in Unit 1 are consistent with those observed at Sites M0078 and M0079 and in agreement with the overall extensional nature of the rift deformation. However, the range in fault displacements and geometries in the stratigraphically older Units 3 and 4 suggests a more complex history of faulting, possibly involving multiple fault generations and/or fault reactivation.
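As an illustration of the summary statistics quoted above (mean dip and modal dip class), the following minimal Python sketch computes both from a list of fault dips; the dip values shown are illustrative placeholders, not the measured Site M0080 data.

    import statistics
    from collections import Counter

    # Illustrative fault dips in degrees (placeholders, not measured values)
    dips = [34, 48, 52, 55, 58, 61, 63, 72, 85]

    mean_dip = statistics.mean(dips)
    # Bin dips into 10 degree classes and find the modal class
    bins = Counter(10 * (d // 10) for d in dips)
    modal_bin = max(bins, key=bins.get)

    print(f"mean dip = {mean_dip:.0f} deg")
    print(f"modal class = {modal_bin}-{modal_bin + 10} deg")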
Observed drilling-induced deformation
A wide range of drilling-induced features was observed at Site M0080. The most common features were arching bedding, biscuiting, sediment flow/smearing along the core liner, voids, and open and shear fractures (Figures F14, F15). Other features that were less commonly observed are listed in Table T1 in the Expedition 381 methods chapter (McNeill et al., 2019b).
Hydraulic piston coring was not used in this borehole; only push (P), percussive (V), and rotary (R) coring were used (Figure F14). Overall, DID intensity is highly variable, with significant lengths of the core having little or no DID. We provisionally relate this variation to a combination of drilling technique and core material strength, with correlations to lithostratigraphic units. In the push cores deeper than 15 mbsf, DID is pervasive (Figure F14). This low- to high-intensity DID in the mud-dominant lithostratigraphic Unit 1 is expressed principally by arching bedding, lensing, flow along the core liner with some local soupy texture, axial flow, brecciation, voids, and open fractures (Figure F15). Sections 381-M0080A-6P-1, 0 cm, through 6P-2, 73 cm (16.5-18.72 mbsf), are notable for complete mobilization and destruction of bedding. DID intensity remains low to absent in percussive cores in the mud-dominant lower lithostratigraphic Unit 1. Rotary coring started at Core 42R (141 mbsf) with a clear change in DID intensity and type. Biscuiting is dominant in Sections 45R-1 through 64R-2 (156-244 mbsf) and Sections 88R-1 through 144R-2 (314-530 mbsf) (Figures F14, F15D) and is associated mainly with some shear fractures and open fractures. DID intensity is low to high through most of the mud-dominant lithostratigraphic Unit 2 and drops to low or absent in the lowest part of Unit 2 and through most of the conglomerate-dominated Subunit 3-1. DID intensity again becomes moderate to high through the finer grained lower Unit 3 and through most of the fully lithified Unit 4, although there are lengths of core with little or no DID (Sections 135R-1 through 139R-1; 497-510 mbsf; Figure F14). In Unit 4, biscuiting and brecciation are dominant with regular occurrences of voids, open fractures, and shear fractures (Figure F15E-F15F). Overall, drilling-induced shear fracturing is less common at Site M0080 than at Site M0079.
Figure F11. Examples of tectonic faults observed in Hole M0080A cores. A. Normal faults in Unit 4 sandstones (FA16; 138R-3). Lower fault has a 2 cm thick fault gouge of black mud. B. Two sampled fault planes from Unit 3 with slickenlines on polished mirror surfaces indicating dip-slip (ss2) and oblique-slip (ss1) (Unit 3; FA9; 113R-2). C. Sampled fault surface (blackened) from Unit 4 with slickenlines indicating oblique-slip (ss3) (133R-2).
Figure F12. Lower hemisphere equal-area stereographic projections showing fault plane (great circles) and slickenline (red dots) orientations measured in the core reference frame from cores that have three or more sampled faults, Hole M0080A. Lithostratigraphic (A) Unit 1, (B) Subunit 3-2, and (C) Unit 4 cores. No faults were recorded in Unit 2 and Subunit 3-1.
Micropaleontology
Hole M0080A is located in the Alkyonides Gulf, east of and connected to the Gulf of Corinth (Figure F1). Hole M0080A is divided into four major units, each of which is divided into different subunits by integration of lithologic and physical properties characteristics (see Lithostratigraphy) and observed microfossil assemblages. Unit 1 is characterized by alternating marine and isolated/semi-isolated intervals, Unit 2 is characterized by dominantly isolated/semi-isolated conditions, Unit 3 is characterized by terrestrial deposits, and Unit 4 is primarily characterized by carbonate-rich sand, silt, and mudstone interpreted to represent a shallow-water to intermittently subaerial environment.
Calcareous nannofossils
Calcareous nannofossils were observed in all Unit 1 marine intervals/subunits to 136.7 mbsf. The calcareous nannofossil assemblages observed in Hole M0080A are similar to those observed in Holes M0078A, M0078B, and M0079A; the most commonly observed species are Emiliania huxleyi, Gephyrocapsa spp., Helicosphaera carteri, Reticulofenestra spp., and Syracosphaera spp. (Table T4).
In contrast to Sites M0078 and M0079, a significantly higher abundance of a large morphotype of E. huxleyi (>3 μm) was observed, especially in Subunit 1-1. These large morphotypes are thought to be the result of colder surface waters (Young and Westbroek, 1991; Flores et al., 2010).
E. huxleyi is the dominant taxon in the youngest marine interval, Subunit 1-1. The last downhole occurrence (LDO) is noted at the base of Subunit 1-3 in Sample 381-M0080A-12P-1, 15-16 cm (35.45 mbsf). Because of the irregularity of the depositional environment here, this occurrence likely does not represent the true first appearance datum (FAD) that marks 0.29 Ma (Backman et al., 2012), and biostratigraphic application should be conservative.
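As an illustration only (the caveat above explains why the LDO should not be equated with the true FAD), the mean sedimentation rate implied by such a naive equivalence follows from a one-line division, as in this minimal sketch:

    # Naive mean sedimentation rate implied by equating the LDO with the FAD.
    # Illustration only; the LDO here likely postdates the true FAD.
    fad_age_ma = 0.29        # FAD of E. huxleyi (Backman et al., 2012)
    ldo_depth_mbsf = 35.45   # LDO depth in Hole M0080A

    rate_m_per_my = ldo_depth_mbsf / fad_age_ma
    print(f"implied mean sedimentation rate = {rate_m_per_my:.0f} m/My")  # ~122 m/My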
A crossover in dominance between E. huxleyi and Gephyrocapsa "small" (<4 μm) was observed at Site M0080, as at the other sites. This crossover occurs here in Subunit 1-2 between 8.87 and 21.15 mbsf. Below this depth, starting in Subunit 1-3, Gephyrocapsa spp. is the dominant species. The timing of this crossover in dominance has been documented and discussed by Thierstein et al. (1977), Raffi et al. (2006), and Anthonissen and Ogg (2012), among others. The crossover appears to be time transgressive depending on latitude and is not well calibrated (Thierstein et al., 1977), so one should proceed with caution when applying this datum. Anthonissen and Ogg (2012) document this crossover occurring at 0.07 Ma in the Mediterranean Sea, which corresponds to early Marine Isotope Stage (MIS) 4 (Lisiecki and Raymo, 2005). If this datum is applied to Hole M0080A in the Alkyonides Gulf, then MIS 4 is represented by the Subunit 1-2 isolated/semi-isolated interval. Unfortunately, this crossover cannot be better characterized here because of the isolation of the Alkyonides Gulf during this period.
Calcareous nannofossils were not observed in Units 2 or 3, except for specimens that were obviously reworked (e.g., poorly preserved Paleogene or Cretaceous species).
Unit 4 is devoid of calcareous nannofossils, both in situ and reworked, with the exception of Subunit 4-1, where two species were observed in three samples: 381-M0080A-128R-1, 143-150 cm (466.83 mbsf); 128R-3CC, 12-13 cm (467.73 mbsf); and 136R-1, 98-99 cm (501.28 mbsf). The specimens were observed in low quantities and are of moderate preservation, and these samples were otherwise devoid of reworked microfossil material. Given this and what we currently know about the sedimentology in Hole M0080A, these fossils are interpreted to be in situ. The two species, Amaurolithus primus and Isolithus semenenko, have a concurrent range zone of 7.39-4.58 Ma (late Miocene-early Pliocene) based on the FAD and last appearance datum (LAD) of A. primus, both of which are geologic age markers (Backman et al., 2012).
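The concurrent range zone is simply the overlap of the two species' total ranges. A minimal Python sketch, with each range written as (base age, top age) in Ma and the I. semenenko bounds approximated from the stage spans quoted later in this section (an assumption for illustration):

    def concurrent_range_zone(r1, r2):
        """Overlap of two (base_ma, top_ma) ranges; base is older (larger)."""
        base = min(r1[0], r2[0])   # youngest of the two bases
        top = max(r1[1], r2[1])    # oldest of the two tops
        return (base, top) if base > top else None

    a_primus = (7.39, 4.58)      # FAD and LAD (Backman et al., 2012)
    i_semenenko = (11.63, 3.6)   # approximate bounds from stage assignments

    print(concurrent_range_zone(a_primus, i_semenenko))  # -> (7.39, 4.58)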
Marine diatoms
Marine diatoms are virtually absent from Site M0080 with the exception of two samples, 381-M0080A-8P-4CC, 25-26 cm (30.58 mbsf), in Subunit 1-3 and 38V-3, 7-8 cm (131.37 mbsf), in Subunit 1-11, where they co-occur with calcareous nannofossils (Table T4). In Subunit 1-3, the observed marine diatoms belong to a single genus, Rhizosolenia. In Subunit 1-11, the dominant species was Paralia sulcata, a brackish/marine species. The significantly lower abundance of marine diatoms at this site relative to the other sites indicates a critical lack of nutrients and/or niche space. At this time, the marine diatom record is only appropriate for making simple interpretations regarding nutrient levels in the surface waters and to contribute to preliminary interpretation of the depositional environment.
Nonmarine diatoms
Nonmarine diatoms were primarily examined in smear slides made from core catcher samples offshore. Further onshore examination of the nonmarine diatom species sought to improve species identification and increase the understanding of ecology (Table T5).
From offshore sample analysis, nonmarine diatoms were observed in Units 1 and 2, and an additional 39 samples from Units 1, 2, and 3 were analyzed onshore to provide preliminary information about the depositional environment at Site M0080 (Tables T4, T6). In both cases, the nonmarine diatom assemblages at this site show better preservation and are composed of a greater abundance and diversity of benthic species, indicating a shallower environment than at the other two sites. A total of 46 taxa (including morphological varieties) were identified, 35 of them with benthic life habitat (Figure F16).
The samples from Subunits 1-2 and 1-3 appeared barren or sometimes contained very few broken valves of benthic species that could not be identified to the species level. The only exception was Sample 381-M0080A-5P-1, 81 cm (14.31 mbsf), where a tephra layer was also visually identified. The diatom assemblage here is composed of small (~5 μm) valves of Pantocsekiella ocellata and some benthic nonmarine species such as Caloneis lancetula and Campylodiscus cf. hibernicus. The dominance of small-sized planktonic diatoms associated with tephra deposition has been observed at other sites (Cvetkoska et al., 2012, 2014; Jovanovska et al., 2016) and is related to nutrient enrichment, especially silica (SiO₂). In contrast, the poor preservation and/or absence of diatoms in some of the subunits in Unit 1 (without tephra) can be related to the bicarbonate composition of local bedrock. The assemblages from Subunit 1-10, an isolated/semi-isolated interval, contain diverse benthic diatom communities. Sample 34V-3, 25 cm (113.65 mbsf), is dominated by Campylodiscus echeneis, a benthic taxon primarily related to coastal and brackish environments but also reported from some nonmarine environments (e.g., the Great Lakes, USA [Stoermer et al., 1999]). Subunit 1-10 features a shift between samples dominated by nonmarine (freshwater) taxa that indicate lower nutrient levels (Samples 35V-3, 1 cm, and 37V-2, 6 cm; 118-125.7 mbsf) and an assemblage with an increased presence of brackish taxa that are considered eutrophic indicators (Sample 38V-2, 60 cm; 130.4 mbsf).
The samples analyzed from Subunit 2-1 show a change from nonmarine planktonic-dominated assemblages toward nonmarine assemblages dominated by benthic taxa at the boundary with Subunit 2-2. Pantocsekiella cf. rossii and P. ocellata dominate in Sample 40V-1, 86 cm (137.56 mbsf). The presence and diversity of benthic taxa increase between Samples 41V-1, 66 cm, and 43R-2, 75 cm. One sample in this interval is dominated by Ellerbeckia arenaria, which, along with the great abundance of ostracods found in the same sample, indicates that this probably represents an interval of shallow, nonmarine (freshwater), oligotrophic-mesotrophic environment. The upper part of Subunit 2-2 includes Sample 47R-1, 94 cm (164.94 mbsf), which is characterized by a mixed planktonic-benthic diatom assemblage. Cyclotella litoralis and C. echeneis dominate the assemblage, whereas Diploneis bombus and Diploneis cf. subovalis occur at lower counts. The overall assemblage indicates a brackish depositional paleoenvironment.
Table T5. Most common nonmarine diatom taxa observed at Site M0080 and their environmental preferences. NA = not applicable/not known. Diatom preferences are according to the information available online at http://www.algaebase.org, http://www.marinespecies.org, and https://westerndiatoms.colorado.edu and from Krammer and Lange-Bertalot (1991), Houk et al. (2010), Reed et al. (2010), and Cvetkoska et al. (2012, 2016).
The samples analyzed from Subunit 2-3 contain well-preserved diatom assemblages that show a trend of increasing marine influence toward the lower part of this subunit. Samples 48R-1, 20 cm, and 50R-2, 63 cm (169.2 and 181.13 mbsf), contain a higher proportion of benthic taxa than planktonic taxa. These taxa have been primarily reported from nonmarine environments but also from some brackish environments (e.g., Diploneis aff. smithii var. dilatata, D. cf. subovalis, and E. goeppertiana). Samples 52R-2, 78 cm, and 53R-3, 86 cm (191.28 and 197.86 mbsf), show a change toward an increased proportion of planktonic taxa and increased taxa that can tolerate higher salinity levels, like C. litoralis, P. sulcata, D. bombus, and D. cf. subovalis.
Seven samples in total were analyzed from Unit 3 between Samples 95R-1, 35 cm, and 123R-2, 49 cm (336.55-443.92 mbsf), but no diatoms were found. The only exception was Sample 117R-2, 32 cm (420.68 mbsf), where a single valve was observed, but identification was not possible because of its poor preservation.
Overall, the diatom taxa identified in Hole M0080A samples were divided into several categories according to their environmental preferences. In addition to the general environment, here we adopted the salinity classification available from Van Dam et al. (1994) (Table T5). Both approaches seem to reflect the changes described above, leading to a general impression that the major shifts occur between oligotrophic-mesotrophic assemblages with low-salinity preferences and mesotrophic-eutrophic benthic assemblages tolerant to higher salinity. This implies that further detailed diatom analyses from Hole M0080A can provide insights into the climate and sea level changes driving the environment at this site.
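In practice, this kind of classification amounts to tallying the taxa observed in a sample against a lookup of environmental preferences. A minimal Python sketch, with a hypothetical subset of the kinds of preferences compiled in Table T5 (the mapping and counts below are illustrative, not the recorded data):

    from collections import Counter

    # Hypothetical subset of Table T5-style preferences, for illustration only
    salinity_class = {
        "Pantocsekiella ocellata": "fresh",
        "Ellerbeckia arenaria": "fresh",
        "Campylodiscus echeneis": "brackish",
        "Cyclotella litoralis": "brackish/marine",
    }

    def dominant_class(valve_counts):
        """Most common salinity class, weighting each taxon by its valve count."""
        tally = Counter()
        for taxon, n in valve_counts.items():
            tally[salinity_class.get(taxon, "unknown")] += n
        return tally.most_common(1)[0][0]

    sample = {"Cyclotella litoralis": 40, "Campylodiscus echeneis": 25,
              "Pantocsekiella ocellata": 10}
    print(dominant_class(sample))  # -> "brackish/marine" in this toy example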
Nonmarine diatoms and calcareous nannofossils were not observed together in mixed microfossil assemblages as frequently as they were at the other two sites. At Site M0080, only four samples are currently described as having a mixed microfossil assemblage.
Foraminifers
A total of 183 samples were examined for foraminifer microfossils (145 samples from core catchers taken offshore and 38 additional core samples from split-core sections taken during the onshore phase of the expedition). Foraminifer specimens were observed only in specific intervals in Units 1 and 2 and always showed large variations in their abundances. Foraminifers were absent from Sample 381-M0080A-68R-3, 11-12 cm (257.81 mbsf), at the top of Unit 3 through Sample 144R-2, 46-47 cm (529.26 mbsf), near the base of Unit 4, except for Sample 139R-4, 9-11 cm (513.37 mbsf), which contained only very few specimens of Ammonia tepida (Table T7).
In the remaining parts of the hole, foraminifers are rare or absent.
When present, foraminifers are well preserved. Benthic foraminifers are almost always more abundant than planktonic foraminifers. This trend is most obvious in Subunit 1-11, where planktonic foraminifers are absent. The most commonly abundant species in Subunits 1-1, 1-3, 1-5, and 1-7 are Hyalinea balthica, Cassidulina carinata, Bulimina marginata, and Bulimina aculeata, with a relatively minor contribution of Melonis barleeanus (Table T8). Ecological studies relate a high abundance of these species to the presence of high inputs of organic carbon to the seafloor in the form of fresh phytodetritus (Goineau et al., 2015). Assemblages in Subunit 1-9 contain an abundance of the benthic foraminifer species C. carinata, and they also contain a relatively high contribution of species characteristic of nearshore marine environments such as A. tepida and Elphidium excavatum (Debenay, 2000). In intervals where planktonic foraminifers are abundant (in Subunits 1-1, 1-3, 1-5, and 1-9), the planktonic assemblages show relatively low diversities. These assemblages are usually dominated by neogloboquadrinids, suggesting the development of a deep chlorophyll maximum layer, or by Turborotalita quinqueloba, suggesting the prevalence of surficial water of relatively low salinity and low temperature and/or enhanced fertility (Rohling and Gieskes, 1989; Rohling et al., 1993). The only exception is Sample 8P-4, 25-26 cm (30.58 mbsf), where the dominant planktonic species is Globorotalia inflata, which may indicate the development of a cool and deep mixed layer (Pujol and Vergnaud Grazzini, 1995) (Table T9).
In these intervals, foraminifers are represented only by benthic species and are dominated by A. tepida, a shallow-marine species that represents as much as 90% of the assemblage. Faunal assemblages from 255.1 to 256.13 mbsf are likely influenced by postdepositional processes involving carbonate chemical reactions.
Palynology
A total of 15 core catcher samples from Hole M0080A were analyzed for palynomorphs (Table T10): 3 samples from Unit 1, 5 samples from Unit 2, 4 samples from Unit 3, and 3 samples from Unit 4. All samples examined from Unit 4 were either barren or contained badly preserved palynomorphs that did not allow for further analysis. In Unit 3, only Sample 124R-4, 11 cm (451.19 mbsf), yielded well-preserved palynomorphs and is included in the analyses. In Unit 2, all samples are included in the analyses with the exception of Sample 67R-4, 9 cm (255.1 mbsf), which is barren. In Unit 1, Sample 24P-2, 146 cm (79.26 mbsf), yielded very low concentrations of palynomorphs and is excluded from the analyses.
A total of seven samples yielded good palynomorph preservation and are presented here. The mean concentration of corroded pollen grains is 314 grains/g, with a maximum of 460 grains/g in Sample 124R-4, 11 cm (451.19 mbsf). The mean concentration of fungal remains is 23 per gram, and the mean concentration of charred microscopic particles is 4027 per gram, with a maximum of 7690 per gram in Sample 9P-1, 0 cm (30.6 mbsf). Terrestrial pollen concentrations have a mean value of 6,716 grains/g, higher than at the other two sites, with a maximum of 14,163 grains/g recorded in Sample 50R-5, 0 cm (184.04 mbsf).
Biostratigraphy summary
Age control is provided solely by the calcareous nannofossils in Hole M0080A, and it should be applied cautiously given the complexity of the depositional environment (Table T11). Two biohorizons were recognized at Site M0080. The first age datum considered here is the crossover in dominance between E. huxleyi and Gephyrocapsa "small." This crossover has been observed in multiple locations (Thierstein et al., 1977; Raffi et al., 2006).
In three samples from Subunit 4-1, two calcareous nannofossil species were observed. I. semenenko and A. primus were both observed in Sample 128R-1, 143-150 cm (466.83 mbsf), and only I. semenenko was observed in Samples 128R-3CC, 12-13 cm (467.73 mbsf), and 136R-1, 98-99 cm (501.28 mbsf). I. semenenko first occurs in the Tortonian (11.63-7.25 Ma), and its last occurrence is in the Zanclean (5.33-3.6 Ma); it is not a biostratigraphic marker species. A. primus is a marker species, and both its FAD and LAD are biostratigraphic age markers at 7.39 and 4.58 Ma, respectively. These two species, therefore, have a concurrent range zone of 7.39-4.58 Ma (late Miocene-early Pliocene) (Backman et al., 2012; Ogg et al., 2016), and this is the age range that is tentatively applied to this interval. It is important to consider that these species are not abundant and are moderately preserved, but they are currently interpreted to be in situ because no other reworked specimens were observed.
Micropaleontology summary
Micropaleontology at Site M0080 maintains a high level of complexity both in individual microfossil groups and collectively, as it did at Sites M0078 and M0079. Unit 1 alternates primarily between marine and undetermined/barren assemblages but also includes a few mixed and nonmarine assemblages toward the base of the unit. Pollen assemblages examined in this unit suggest the occurrence of a forested landscape in the surroundings of the Alkyonides Gulf. Unit 2 alternates between nonmarine, undetermined/barren, and marine/brackish assemblages. Unit 3 is devoid of microfossils, except palynomorphs, and is almost entirely terrestrial. Unit 4 is nearly devoid of microfossils with the exception of three samples that contain calcareous nannofossils in low concentrations. See Figure F17 for a summary of the microfossil assemblages by subunit, and refer to individual data sets for details. Table T11. Calcareous nannofossil biohorizons (low to middle latitudes) used to constrain age in Unit 1, Hole M0080A. NA = not available. FAD = first appearance datum, LDO = last downhole occurrence, X = dominance crossover. Download
Interstitial water
At Site M0080, 62 interstitial water samples were collected from 0.90 to 502.13 mbsf. Rhizon sampling acquired pore water to 20.50 mbsf, and whole-round squeeze cakes were used to sample pore water from 32.05 to 502.13 mbsf. Both methods successfully produced the required water volume to 365.33 mbsf. Subsequently, squeeze cakes only produced limited water, and not all pore water splits could be collected. Additionally, drilling mud fluid samples were taken from discrete depths to evaluate any potential drilling mud fluid contamination (see Geochemistry in the Expedition 381 methods chapter [McNeill et al., 2019b]). Pore water compositions at Site M0080 can be grouped into four distinct geochemical regions that correspond to the four lithostratigraphic units present at the site: Unit 1 (possible alternations between marine and isolated/semi-isolated intervals), Unit 2 (mostly isolated/semi-isolated), Unit 3 (principally terrestrial deposits), and Unit 4 (primarily characterized by carbonate-rich sand, silt, and mudstone; see Lithostratigraphy). The pore water geochemistry of these units is described in detail below.
Salinity variations: salinity, sodium, and chloride
Salinity decreases from approximate modern Gulf of Corinth seawater values (37.47) at the seafloor to 12.38 at 49.45 mbsf. Deeper than 49.45 mbsf, salinity gradually increases to a maximum of 37.82 in the deepest sample at 502.13 mbsf (Figure F18A). Chloride (Cl⁻) concentrations follow salinity downhole throughout Site M0080 (Figure F18B). Concentrations decrease from 593.11 mM at the sediment/water interface to 199.95 mM at 86.95 mbsf and then increase to 522.46 mM at the base of the hole. Cl⁻ concentrations in pore water samples at 264.63, 291.06, 322.90, and 439.31 mbsf are markedly lower than in surrounding samples, suggesting possible contamination with drilling mud (bentonite + freshwater).
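The contamination screening described here (comparing a sample with its neighbors in the depth profile) can be expressed as a simple outlier test. A minimal Python sketch, assuming depths and concentrations as parallel lists and a hypothetical 25% drop threshold (the profile values below are toy numbers, not the measured data):

    import statistics

    def flag_low_outliers(depths, values, window=2, drop=0.25):
        """Flag depths whose value falls more than `drop` (fractional) below
        the median of up to `window` neighbors on each side."""
        flagged = []
        for i, v in enumerate(values):
            lo, hi = max(0, i - window), min(len(values), i + window + 1)
            neighbors = values[lo:i] + values[i + 1:hi]
            if neighbors and v < (1 - drop) * statistics.median(neighbors):
                flagged.append(depths[i])
        return flagged

    # Toy profile: the third sample mimics a mud-diluted measurement
    depths = [250.0, 258.0, 264.63, 272.0, 280.0]
    cl_mm = [480.0, 485.0, 310.0, 490.0, 495.0]
    print(flag_low_outliers(depths, cl_mm))  # -> [264.63]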
Organic matter degradation: alkalinity, ammonium, boron, bromide, iron, manganese, pH, phosphate, sulfate, and dissolved inorganic carbon
Organic matter degradation in sediments leads to distinct changes in pore water geochemistry, generally revealing redox reactions (Berner, 1980). These shallow redox reactions were observed in the pore water profiles from the upper 14.70 mbsf in Hole M0080A (Figure F19). Comparable to Sites M0078 and M0079, sulfate (SO₄²⁻) concentrations decrease from 27.40 mM near the sediment/water interface to 0.30 mM at 10.20 mbsf (Figure F19C). Deeper than 10.20 mbsf, SO₄²⁻ concentrations are low until an abrupt increase at 238.64 mbsf (18.2 m above the Unit 2/3 boundary) continuing to the base of the hole (maximum of 27.53 mM at 346.44 mbsf). SO₄²⁻ concentrations in pore water samples at 264.63, 291.06, 322.90, and 439.31 mbsf (the same depths where contamination is suggested in the Cl⁻ profile; Figure F18B) are markedly lower than in surrounding samples (Figure F19C), also suggesting possible contamination with drilling mud (bentonite + freshwater).
Figure F17. Summary of micropaleontology assemblages by subunit, Hole M0080A. Blue = marine microfossil assemblages, green = mixed microfossil assemblages, gray = undetermined assemblages.
Ammonium (NH₄⁺), phosphate (PO₄³⁻), and bicarbonate (HCO₃⁻), an important component of dissolved inorganic carbon (DIC) at this pH range, are common products of microbial organic matter degradation. Each of these parameters shows the same downhole increasing trend in the upper 20.50 mbsf at Site M0080 (Figure F20). However, depths of major changes in concentrations are not directly comparable deeper than 20.50 mbsf. NH₄⁺ concentrations increase from seafloor values of 0.2 mM to a maximum of 5.3 mM at 138.15 mbsf. NH₄⁺ concentrations then decrease from 160.44 to 281.63 mbsf (immediately below the Unit 2/3 boundary), where concentrations remain low (less than 0.3 mM) to the base of the hole. In contrast to NH₄⁺, PO₄³⁻ concentrations reach a maximum (21.82 μM) at a shallower depth and then decrease in a stepwise manner to 0.54 μM at 201.94 mbsf. Further downhole, PO₄³⁻ concentrations remain low to the base of the hole.
Alkalinity, a parameter that measures the sum of all bases (including HCO₃⁻), increases in the top 12.90 m from 4.25 mM to a maximum of 13.87 mM (Figure F20C). Similar to PO₄³⁻ concentrations, alkalinity then decreases, reaching a zone of much lower concentrations (less than 1.5 mM) from 256.13 mbsf to the base of the hole. Alkalinity values are similar to DIC concentrations, suggesting HCO₃⁻ is the dominant fraction of alkalinity. Pore water pH at Site M0080 varies between 7.4 and 8.2 (Figure F21A). pH generally decreases downhole from a maximum at shallow depths (8.2 at 10.20 mbsf) to a minimum of 7.4 at 400.44 mbsf. Boron (B) and bromide (Br⁻) accumulate in organic matter. A common method to determine the relative contributions of seawater and organic carbon oxidation to pore water chemistry is to plot the ratios of B and Br⁻ to Cl⁻ (Figure F21B-F21C). Departure of B/Cl⁻ and Br⁻/Cl⁻ from seawater values can indicate production or removal of ions from solution. Both B/Cl⁻ and Br⁻/Cl⁻ values at Site M0080 broadly follow the PO₄³⁻ profile with depth, including a shift to lower values around the depth of the Unit 2/3 boundary. Below the Unit 2/3 boundary, alkalinity, DIC, NH₄⁺, Mn²⁺, and Fe²⁺ concentrations are much lower. These values are in the same range as concentrations in the drilling mud (Table T12); hence, potential drilling mud contamination, as mentioned above for SO₄²⁻ and Cl⁻, would have less effect on the alkalinity, DIC, NH₄⁺, Mn²⁺, and Fe²⁺ concentrations.
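A minimal Python sketch of the ratio comparison, using approximate open-ocean reference values (B ~416 uM, Br ~840 uM, Cl ~546 mM; these are assumptions for illustration, since the local Gulf of Corinth end-member may differ, as may the values used by the expedition):

    # Ion/chloride ratios in uM/mM; departures from the seawater value can
    # indicate addition (e.g., organic matter breakdown) or removal of the ion.
    SW_B_UM, SW_BR_UM, SW_CL_MM = 416.0, 840.0, 546.0  # assumed reference values

    def ion_cl_ratio(ion_um, cl_mm):
        return ion_um / cl_mm

    b_ratio_sw = ion_cl_ratio(SW_B_UM, SW_CL_MM)
    br_ratio_sw = ion_cl_ratio(SW_BR_UM, SW_CL_MM)

    # Hypothetical pore water measurement
    sample_b_um, sample_br_um, sample_cl_mm = 520.0, 700.0, 400.0
    if ion_cl_ratio(sample_b_um, sample_cl_mm) > b_ratio_sw:
        print("B enriched relative to seawater: possible organic matter source")
    if ion_cl_ratio(sample_br_um, sample_cl_mm) > br_ratio_sw:
        print("Br enriched relative to seawater")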
Mineral reactions
Barium, calcium, magnesium, potassium, sodium, and strontium
Calcium (Ca²⁺), magnesium (Mg²⁺), potassium (K⁺), and sodium (Na⁺) are all major ions in seawater, and barium (Ba²⁺) and strontium (Sr²⁺) are minor components. Their concentrations in pore water may be altered by processes such as ion exchange, mineral weathering, and formation of new minerals. The Na⁺ and K⁺ depth profiles (Figure F22) resemble those of salinity and Cl⁻, indicating that salinity changes govern the overall pattern. The Na⁺/Cl⁻ ratio stays very close to the seawater value in the uppermost 350 m.
Deeper in the hole, a decrease in the Na⁺/Cl⁻ ratio suggests a removal of Na⁺. The observed K⁺/Cl⁻ ratio profile shows that K⁺ concentrations are influenced by reactions with the sediment because the ratios differ downhole and do not reflect the modern Gulf of Corinth seawater composition (vertical dashed line, Figure F22E). K⁺/Cl⁻ values remain lower than in seawater throughout the hole, with the lowest values in Unit 2 and the lower part of Unit 3.
Ba²⁺ and salinity have opposite trends in the shallow sediment. Ba²⁺ concentration increases downhole from 0.28 μM near the seafloor to a maximum of 36.43 μM at the Unit 1/2 boundary at 136.96 mbsf (Figure F22C). This increase suggests a release of Ba²⁺ at this unit transition. Deeper in the hole, Ba²⁺ concentrations decrease again and remain below 2.82 μM to the base of the hole. The dissolved Ca²⁺ and Mg²⁺ depth profiles largely follow the Cl⁻ and salinity profiles (Figures F18, F23A-F23B). Ca²⁺ concentration decreases in the top 8.50 m from 8.39 to 1.58 mM, and Mg²⁺ decreases from 52.16 mM at the sediment/water interface to 16.51 mM at 79.26 mbsf. In Unit 1, both Ca²⁺ and Mg²⁺ decrease more significantly than Cl⁻, as indicated by the Ca²⁺/Cl⁻ and Mg²⁺/Cl⁻ ratios, which are lower than seawater values in this part of the hole (Figure F23D-F23E). In Unit 2, both the Ca²⁺/Cl⁻ and Mg²⁺/Cl⁻ ratios stay close to seawater values, whereas in Unit 3, especially in the lower part, they are considerably higher, suggesting the release of Ca²⁺ and Mg²⁺ to pore waters or diffusion of these ions from below. Although the Sr²⁺ trends at Sites M0078 and M0079 largely follow the Ca²⁺ profiles, they differ considerably at Site M0080 (Figure F23C). Sr²⁺ shows a decrease in the uppermost 10.20 m of Hole M0080A from 88.79 to 51.32 μM, similar to Sites M0078 and M0079, followed by a strong downhole increase to a peak of 334.99 μM at 211.74 mbsf in Unit 2. The interval between 334.63 and 391.44 mbsf (Unit 3) is marked by stable Sr²⁺ concentrations around 130 μM before increasing again to a maximum of 359.97 μM at the bottom of the hole, suggesting a release of Sr²⁺ or diffusion from below, as for Mg²⁺ and Ca²⁺. Although the Ca²⁺, Sr²⁺, and Mg²⁺ profiles differ throughout the three sites as described above, they all show an increase at the bottom of the hole, suggesting an underlying carbonate source for all sites.
The samples suggesting drilling mud contamination (bentonite + freshwater; see Salinity variations: salinity, sodium, and chloride) in the SO₄²⁻ and Cl⁻ profiles (at 264.63, 291.06, 322.90, and 439.31 mbsf) show lower Mg²⁺, Ca²⁺, and Sr²⁺ concentrations than surrounding samples. Na⁺, Mg²⁺, and Ca²⁺ (Table T12) are present in much lower concentrations in the drilling mud compared with the pore water samples, supporting the suggestion that drilling mud contamination is responsible for the "outliers" mentioned above.
Silica and lithium
Silica accumulates in sediments as silicate minerals and the remnants of siliceous organisms (predominantly diatoms and radiolarians). Therefore, dissolved silica (H₄SiO₄) is typically released to pore water through dissolution of these sediment components. Measured Si (referred to as H₄SiO₄) concentrations increase from 247.07 μM below the seafloor to 583.03 μM at 12.90 mbsf and then generally continue to increase to a maximum of 1134.11 μM at 201.94 mbsf (55 m above the Unit 2/3 boundary) (Figure F24A). Deeper than 201.94 mbsf, H₄SiO₄ shifts to lower concentrations, ranging from 209.43 to 346.44 μM, and remains in this range throughout Unit 3.
Lithium (Li⁺) concentrations in pore waters decrease from 24.76 μM close to the seafloor to 13.28 μM at 8.50 mbsf (Figure F24B). Deeper than 8.50 mbsf, concentrations increase to a maximum of 49.80 μM at 171.94 mbsf (just below the Unit 1/2 boundary) and then decrease to 10.3 μM at 264.63 mbsf (the Unit 2/3 boundary). Downhole and throughout Unit 3, Li⁺ concentrations stay low, between 8.08 and 25.03 μM. Higher Li⁺ concentrations may suggest chemical weathering of sediment contributing to increased pore water values. Li⁺ and H₄SiO₄ concentrations in the drilling mud samples are lower than the concentrations in pore water samples (Table T12). In contrast to the other elements mentioned above, no indication of contamination is seen in the Li⁺ and Si profiles.
Carbon content
Total carbon (TC) and total organic carbon (TOC) values were measured, and total inorganic carbon (TIC) was calculated (as the difference between TC and TOC) for 68 ground samples from Hole M0080A. The results are presented in Table T13 and plotted in Figure F25. TC ranges from 1.00 to 12.72 wt%, with the highest overall values in Units 2 and 4 (averaging 8.79 and 9.89 wt%, respectively). Low TC values are reported for Unit 3, with an average of 3.09 wt%. Peaks in organic carbon occur at 30, 52.74, 120.30, 137.63, 199.72, 242.90, 316.00, and 475.74 mbsf, implying higher organic matter burial at these depths.
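TIC is obtained by difference, and a carbonate content can then be estimated stoichiometrically. A minimal Python sketch (the CaCO₃ conversion assumes all inorganic carbon resides in calcite, and the sample values are hypothetical, not from Table T13):

    def tic(tc_wt_pct, toc_wt_pct):
        """Total inorganic carbon by difference, as described in the text."""
        return tc_wt_pct - toc_wt_pct

    def caco3_equivalent(tic_wt_pct):
        """CaCO3 wt% assuming all inorganic C is calcite (100.09/12.01 ~ 8.33)."""
        return tic_wt_pct * 100.09 / 12.01

    tc, toc = 8.79, 0.45  # hypothetical sample values, wt%
    print(f"TIC = {tic(tc, toc):.2f} wt%")
    print(f"CaCO3 equivalent = {caco3_equivalent(tic(tc, toc)):.1f} wt%")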
ED-XRF
Energy dispersive X-ray fluorescence (ED-XRF) was used to quantify 24 elements from the 68 ground solid sediment samples from Hole M0080A. All elements are included in Table T13; however, for some elements, measurements were affected by low yield or interference and could not be accurately quantified (marked with asterisks and daggers, Table T13). The 10 elements (Al, Ca, Fe, K, Mg, Mn, Si, Sr, Zr, and Rb) that could be quantified are plotted with depth in Figures F26, F27, F28, and F29. Final postexpedition processing of all ED-XRF data determined that Ni concentrations have low accuracy because of low concentrations; therefore, the results are not plotted for Site M0080. Most of the elements plotted highlight the compositional differences in the sediments in the four different units at Site M0080 (see Lithostratigraphy).
As for Sites M0078 and M0079, calcium is the dominant element in the majority of samples, with concentrations ranging from 22.6 to 389 g/kg (Figure F26). The highest Ca concentrations are present between 466.08 and 491.92 mbsf (corresponding to Unit 4) and between 141.70 and 209.69 mbsf (in Unit 2). Silica is the second most common element, with concentrations ranging from 2.5 to 209.69 g/kg (Figure F27A). Ca and Si correlate negatively (r² = 0.95), so samples with lower Ca have higher Si content. This correlation is evident in samples between 242.90 and 456.20 mbsf, corresponding to Unit 3, where the dominant element switches from Ca to Si. Samples from Unit 3 also contain higher Al, K, Rb, Zr, Mn, and Fe concentrations (Figures F27, F28, F29). All of these elements are typically associated with terrigenous materials. Strontium, often associated with carbonates, follows similar trends with depth to Ca but shows exceptionally low values in Unit 3 (Figure F26). Unlike Sites M0078 and M0079, elements other than Ca do not correlate well with Si, likely due to the more variable lithology at Site M0080.
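The reported Ca-Si relationship is a standard Pearson correlation. A minimal Python sketch with illustrative concentration pairs (not the Table T13 data):

    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Illustrative Ca and Si concentrations (g/kg)
    ca = [389.0, 300.0, 150.0, 60.0, 22.6]
    si = [5.0, 40.0, 120.0, 180.0, 209.0]

    r = pearson_r(ca, si)
    print(f"r = {r:.2f}, r^2 = {r * r:.2f}")  # strongly negative r, high r^2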
Physical properties
This section summarizes the physical properties results from Site M0080, where one hole was drilled (Hole M0080A) to 534.20 mbsf. Most data sets were collected at the sampling rates defined in Physical properties in the Expedition 381 methods chapter (McNeill et al., 2019b), except for P-wave velocity, thermal conductivity, and shear strength, where the nature of the sediment limited data acquisition. Overall, the data sets show good correlations with interpreted paleoenvironments and lithologic changes, especially magnetic susceptibility, NGR, density, porosity, and color reflectance. A synthesis of physical properties for Hole M0080A is presented in Figures F30 and F31.
Shear strength
Sediment strength for Hole M0080A was measured offshore using a handheld penetrometer and CPT and onshore using a fall cone and shear vane. Penetrometer and CPT measurements were taken approximately every 20 and 100 m, respectively, whereas fall cone and shear vane measurements were taken once per core section and once per core, respectively.
Strength values for Hole M0080A consistently show an increase with depth ( Figure F32). However, strength values obtained using the four different methods differ significantly. In particular, strength data derived from handheld penetrometer measurements offshore are consistently higher than the shear strength from fall cone and shear vane measurements in the upper ~60 mbsf, showing an increase from 150 kPa at ~3 mbsf to 790 kPa at ~64 mbsf. Deeper than ~82 mbsf, handheld penetrometer values approximately coincide with fall cone values, increasing from 630 kPa at ~98 mbsf to 5880 kPa at ~284 mbsf. Deeper than ~284 mbsf, offshore handheld penetrometer measurements were not taken because of the change in lithology (mud to conglomerates) and formation stiffness.
Fall cone measurements are consistently higher than the shear vane measurements deeper than ~100 mbsf. Fall cone strength values increase from 0 to ~10 mbsf and then remain relatively constant at ~70 kPa to ~55 mbsf. From ~55 to ~245 mbsf, fall cone values increase almost exponentially and reach values >250,000 kPa deeper than ~200 mbsf, where strength values saturate because of the limits of the fall cone apparatus. Although the maximum fall cone values are not considered realistic, they indicate an overall strength increase downhole in Hole M0080A.
Shear vane measurements follow a similar increasing trend with depth. Shear strength increases linearly from ~10 to ~235 mbsf, with strength values ranging between ~20 and ~313 kPa. Shear vane measurements were taken to ~235 mbsf (Section 381-M0080A-62R-2). Deeper than ~235 mbsf, the sediment is either too consolidated, exceeding the maximum applied force capacity of the shear vane apparatus, or the lithology is characterized by conglomerates (Unit 3), sand (Units 3 and 4), and carbonates (Unit 4), making shear vane measurements ineffective because (1) the equipment is designed for cohesive soils and could not penetrate the core material without damaging the tool and (2) the sample size tested is too small in these materials with pluricentimetric heterogeneities.
The two in situ CPT measurements at ~100 and ~213 mbsf indicate maximum shear strengths of 193 and 603 kPa, respectively, a similar increase with depth suggesting increased consolidation (Figure F32).
Overall, the four types of strength measurements indicate that shear strength increases with depth in Hole M0080A, suggesting increased consolidation. The increase in fall cone values at ~140 mbsf coincides with the switch from vibrocoring to rotary coring (Section 381-M0080A-42R-1) but also with the lithologic transition from Unit 1 (marine sediment; FA1/FA6) to Unit 2 (thick, dominantly isolated/semi-isolated interval; FA12) (Section 40V-1). Deeper than ~257 mbsf, the lithologic transition from Unit 2 (mainly mud) to Unit 3 (conglomerates and silt) limits shear strength measurements, which are designed for soils.
Natural gamma ray
Overall, the low NGR values at Site M0080 indicate that K, Th, and U concentrations are depleted throughout much of the hole but show increases in parts of Unit 3. Gamma ray data acquired by downhole logging (see Downhole measurements) correlate well with the core NGR trends measured in the same intervals of the hole, suggesting these data faithfully capture NGR trends in the hole. Note that negative values result from the removal of the background, determined at the beginning of the expedition, during data processing. Despite these negative values, the overall trend reflects changes in the nature of the sediment recovered.
In interpreted marine subunits in lithostratigraphic Unit 1 (0-136.96 mbsf), NGR values vary from −3.94 to 13.41 counts/s with a mean of 2.78 counts/s (Figures F30, F33). In interpreted isolated/semi-isolated subunits, NGR values vary from −4.78 to 24.05 counts/s with a mean of 0.91 counts/s (lower than in marine environments). In Unit 2 (characterized as mostly isolated/semi-isolated), NGR values vary from −5.68 to 14.35 counts/s with a mean of 0.19 counts/s (Figure F33). In Unit 3 (256-458 mbsf; characterized as terrestrial/aquatic), NGR values exhibit greater variability than in the marine and isolated subunits of Units 1 and 2, averaging 0.54 counts/s and fluctuating between −9.21 and 11.85 counts/s. A significant decrease in NGR occurs at the top of Unit 3 (Figure F30). In the Unit 4 carbonate-rich sand/silt/mud (458-534 mbsf), NGR values vary from −8.91 to 12.01 counts/s with a mean of −4.30 counts/s, significantly lower than any of the other lithostratigraphic units at this site or other sites.
Significant drops in NGR values occur at 136.96 mbsf (Unit 1/2 boundary), 256.85 mbsf (Unit 2/3 boundary), and 458.40 mbsf (Unit 3/4 boundary), and an increase was observed at approximately 350 mbsf (boundary between Subunits 3-1 and 3-2) (Figures F30, F31). NGR values are distinctly higher and exhibit more scatter in Subunit 3-2 than in other units. The distinct downhole decrease in NGR values at 458.40 mbsf is associated with a major facies change from FA17 (greenish to buff laminated fossiliferous mud) to FA15 (greenish to buff laminated siltstone to bedded fine sandstone; may include bioturbation, ostracods, and rootlets). NGR values decrease when the facies association changes from FA1 (homogeneous mud) to FA12 (light gray to buff homogeneous to weakly stratified mud) in the 0-150 mbsf interval ( Figure F31).
Magnetic susceptibility

Average magnetic susceptibility values in isolated/semi-isolated intervals are higher than in marine intervals and exhibit slightly more variability. The scattered magnetic susceptibility behavior in interpreted isolated/semi-isolated sections may be due to changes in sedimentary inputs, fluctuations in the quantity of fine detrital particles, diagenetic processes, and/or preservation of original layering. Paleomagnetic studies of the discrete samples from Site M0080 suggest that the variation in the magnetic susceptibility signal is mainly controlled by concentrations of different magnetic minerals/phases (see Paleomagnetism).

Figure F31. Physical properties with facies associations (see Lithostratigraphy), Hole M0080A. Red lines = unit boundaries. Elec. res. = electrical resistivity. Thermal conductivity data are not corrected to in situ conditions. Magnetic susceptibility values sometimes drop below 0 SI; these values reflect sensor drift on some sections and are likely closer to 0.

P-wave velocity

P-wave velocity measurements for Hole M0080A were collected offshore using a Geotek MSCL and onshore on split cores using the MSCL track and on discrete samples with a Geotek P-wave logger for discrete samples (PWL-D) (see Physical properties in the Expedition 381 methods chapter [McNeill et al., 2019b]) (Figure F35).
Offshore MSCL measurements provided good P-wave signals and realistic P-wave velocity values (>1500 m/s) deeper than ~320 mbsf. The MSCL measurement quality in the upper part of Hole M0080A was strongly affected by the unconsolidated nature of the sediment, coring disturbances, and/or variable space between the sediment and the core liner. Between ~250 and ~320 mbsf, the lithologic transition to Unit 3 (mainly conglomerates and silt) also affected the MSCL measurement quality. In this depth interval, and because of the nature of the sediments in Unit 3, discrete samples for P-wave velocity measurements were not collected. Similar issues regarding the quality of the P-wave velocity measurements emerged during onshore MSCL measurements on split cores. In the upper part of Hole M0080A, P-wave velocity values were particularly low (<1500 m/s), but more realistic velocity values were measured deeper than ~320 mbsf. P-wave velocity (VP) values, whether collected on the MSCL or on discrete samples, show an overall increase with depth (Figure F35). Between 0 and ~250 mbsf (Units 1 and 2), P-wave velocity values from discrete samples are relatively low and range between 1500 and 1763 m/s. Deeper than ~350 mbsf, P-wave velocity values range between 1500 and 3500 m/s, and VP from both MSCL and discrete samples shows an overall increase with depth. Lower MSCL velocity values at ~340 mbsf correspond to the bottom of Subunit 3-1 (FA8). P-wave velocity values increase between ~350 and ~430 mbsf (Subunit 3-2), with values between 1593 and 2509 m/s. The velocity decrease at the bottom of Unit 3 (~450 mbsf) corresponds to a lithologic transition to FA1. Deeper than ~460 mbsf (Unit 4), P-wave velocity values increase for both MSCL and discrete sample measurements, with maximum values of ~3400 m/s. P-wave velocity measured on the MSCL onshore is consistently lower than the velocity collected either on the MSCL offshore or on discrete samples.
The common trend of increasing P-wave velocity values between MSCL, discrete samples, and downhole log sonic velocity data (see Figures F47, F48) in the lower part of the hole indicates the transition to more consolidated sediment with depth (see also Core-log-seismic integration).

Density

Gamma ray attenuation (GRA) bulk density values from the MSCL offshore range from 1.33 to 2.77 g/cm³ with an average of 1.98 g/cm³, a similar average to Holes M0078A and M0079A. Approximately 97% of the bulk density values are greater than 1.7 g/cm³ (2% less than the average for Hole M0079A), and only about 13% are greater than 2.2 g/cm³ (11% higher than Hole M0079A). Bulk density increases with depth, with some relatively sharp increases in density compared with Holes M0078A and M0079A (Figure F30; also see Figure F29 in the Site M0078 chapter and Figure F25 in the Site M0079 chapter [McNeill et al., 2019c, 2019d]). Notable changes in bulk density correlate with the four major lithostratigraphic unit boundaries; however, density variations also occur within each lithostratigraphic unit (Figure F30).
The top of Unit 2 is marked by an increase in bulk density. In this unit, another increase in density occurs at approximately 203 mbsf, marking a change in average bulk density from 1.90 g/cm³ (136.96 to ~203 mbsf) to 2.0 g/cm³ (~203-256.85 mbsf).
An increase in density occurs in Unit 3 at approximately 350 mbsf, correlating with a lithology change from dominantly red-brown conglomerates and silt to dominantly silt with calcretes and marking an increase in average density from 2.03 g/cm³ (256.85-350.79 mbsf) to 2.16 g/cm³ (350-458 mbsf) (see Lithostratigraphy).
Increases in bulk density occur in Unit 4 at 502.01 and 525.14 mbsf. At 525.14 mbsf, the average bulk density increases from 2.19 g/cm³ (458-525 mbsf) to 2.33 g/cm³ (525-535 mbsf), marking the top of the basal conglomerate subunit.
Bulk density from moisture and density (MAD) analysis displays similar values and shows a similar trend to GRA bulk density (Figure F30). The exception to this occurs between 218.22 and 350.79 mbsf, where MAD bulk density values are lower than GRA bulk density, most likely due to the bias created by preferential sampling of matrix rather than clastic material for MAD measurements. MAD bulk density values range from 1.38 to 2.39 g/cm³ with an average of 1.93 g/cm³. Approximately 97% of the MAD bulk density values are greater than 1.7 g/cm³, with only 6% of the values greater than 2.2 g/cm³.

Figure F34. MSCL magnetic susceptibility box and whisker plots grouped by subunits, Expedition 381. Top and bottom of boxes correspond to 1st and 3rd quartiles, solid line in middle of box shows the median, dashed line shows the mean. Ends of whiskers indicate minimum and maximum values. I/SI = isolated/semi-isolated, slump = slumped subunit.

Sediment grain density ranges between 2.2 and 2.9 g/cm³ with an average of 2.66 g/cm³, with 87% of the values between 2.6 and 2.8 g/cm³. These average and percentage values are lower than those measured in Hole M0079A (0.34 g/cm³ and 10%, respectively). Calculated grain density values are mostly between pure sandstone and pure limestone (Kennedy, 2015). Grain density generally increases with increasing bulk density and decreasing porosity, as observed at previous sites, except between 218.22 and 350.79 mbsf. This exception may be due to MAD bulk density being underestimated because of the sampling bias described above, and thus may not reflect the many mafic gravel-sized grains in the conglomerates.
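The MAD quantities reported here are linked by a standard volumetric relation, so the average porosity can be cross-checked from the average bulk and grain densities. A minimal sketch, assuming a pore fluid density of 1.024 g/cm³ (a typical seawater value, not an expedition-specific parameter):

```python
def mad_porosity(bulk_density, grain_density, fluid_density=1.024):
    """Fractional porosity from bulk and grain density (g/cm^3).

    phi = (rho_grain - rho_bulk) / (rho_grain - rho_fluid)
    """
    return (grain_density - bulk_density) / (grain_density - fluid_density)

# Example: average Hole M0080A values quoted in the text.
phi = mad_porosity(bulk_density=1.93, grain_density=2.66)
print(f"porosity = {phi:.0%}")  # ~45%, consistent with the MAD average
```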
Porosity
Porosity ranges from 18% to 79% with an average of 45%. Porosity shows a general decrease with depth in Hole M0080A, punctuated by peaks of higher porosity (Figures F30, F36). The average porosity decreases from 54% to 48% to 40% to 30% from lithostratigraphic Units 1 to 4, respectively (Figure F30), and higher porosity generally correlates with lower density.
Resistivity
In Hole M0080A, electrical resistivity was measured offshore with the MSCL and in situ with two induction tools during downhole logging (EM51 and DIL45 tools) (see Downhole measurements; Figure F36).
As at Sites M0078 and M0079, electrical resistivity from the MSCL shows low values, ranging from 0.34 to 13.3 Ωm with a median value of 1.23 Ωm (Figure F30). The electrical resistivity predicted by the downhole logs differs in amplitude but is similar in trend (Figure F36). Borehole sonde responses are sensitive to downhole conditions, as visible in their different behaviors in Unit 2 (borehole filled with seawater) and Units 3 and 4 (borehole filled with drilling mud). The low MSCL values can be explained by the high porosity of the sediment: they are compatible with values derived from the Archie equation using MAD porosity and the salinity measured from pore water. The following discussion, therefore, focuses on electrical resistivity derived from the MSCL (Figure F30).
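A minimal sketch of the Archie-equation consistency check mentioned above; the cementation exponent and pore water resistivity below are illustrative assumptions, not expedition parameters:

```python
def archie_resistivity(porosity, rw, a=1.0, m=2.0):
    """Formation resistivity (ohm-m) from Archie's law: R = a * Rw * phi**(-m)."""
    return a * rw * porosity ** (-m)

# Example: MAD porosity of 45% and an assumed pore water resistivity of
# 0.25 ohm-m (roughly seawater salinity); both inputs are illustrative.
print(f"{archie_resistivity(0.45, rw=0.25):.2f} ohm-m")  # ~1.2 ohm-m
```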
In the upper ~25 mbsf of the hole, resistivity increases linearly with depth from 0.5 to 2 Ωm, which corresponds to a decrease in pore fluid salinity (Figure F36). Resistivity then slowly decreases to 1.2 Ωm until ~100 mbsf, which also corresponds to a gradual increase in pore water salinity, and then remains constant to the bottom of Unit 3, although resistivity still tends to increase within each core; these excursions are more limited in Subunit 3-1. Electrical resistivity increases in Unit 4 to a median value of 2 Ωm (Figure F37), and the scatter of electrical resistivity values is diminished; instead, distinctive low resistivity zones occur at the top and bottom of Subunit 4-2, and resistivity surges to high values in the deepest subunit of the hole (Subunit 4-3). In the absence of pore water data at high depth resolution, it is difficult to assess whether this sharp increase in resistivity is due to a change in fluid conductivity or to a change in lithology and/or in the porous network of the rock.
Thermal conductivity
Similar to Site M0079, data acquisition in the 0-200 mbsf interval was largely unsuccessful (with the exception of a few results), possibly due to water content, compaction effects, cracks, or small-scale changes in lithology that affected the measurement quality. The measured laboratory thermal conductivity values range from 0.9 to 2.03 W/(m·K) with an average of 1.42 W/(m·K) (Figure F30). Because thermal conductivity is affected by temperature and pressure, laboratory values were corrected to in situ conditions following Hyndman et al. (1974). The corrected thermal conductivity values range from 0.9 to 2.09 W/(m·K) with an average of 1.46 W/(m·K) and show a similar trend to the original laboratory measurements (Figures F30, F38).
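The correction itself is a small adjustment for in situ temperature and pressure. A sketch of the general form such a correction can take, with linear pressure and temperature terms; the coefficients below are placeholders for illustration, not the values of Hyndman et al. (1974):

```python
def k_in_situ(k_lab, depth_m, t_in_situ_c, t_lab_c=20.0,
              c_p=1e-4, c_t=1e-3):
    """Correct laboratory thermal conductivity (W/(m*K)) to in situ conditions.

    Applies linear pressure and temperature terms; c_p (per meter of
    overburden) and c_t (per degree C) are placeholder coefficients.
    """
    pressure_term = 1.0 + c_p * depth_m
    temperature_term = 1.0 - c_t * (t_in_situ_c - t_lab_c)
    return k_lab * pressure_term * temperature_term

print(f"{k_in_situ(1.42, depth_m=300.0, t_in_situ_c=25.0):.2f} W/(m*K)")
```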
Formation temperature and heat flow
Temperature measurements at three depths (two temperature CPT measurements and one seafloor measurement from a sound velocity profile) were collected offshore and plotted against depth to estimate the geothermal gradient (Figure F38). Temperature decreases with depth, resulting in a negative geothermal gradient. This result is unexpected for a continental setting, particularly in a tectonically active region, and suggests possible errors in the temperature measurements, which should be investigated further.
The temperature data are combined with thermal conductivity to estimate heat flow using the Bullard method, which plots thermal resistance against temperature (Figure F38) and calculates heat flow from the slope of the best fitting line (see Physical properties in the Expedition 381 methods chapter [McNeill et al., 2019b]). The resulting heat flow is also negative and likewise probably erroneous.
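A minimal sketch of the Bullard method as described: thermal resistance is accumulated from layer thicknesses divided by conductivities, and heat flow is the slope of the best fitting line of temperature against resistance (all values below are illustrative):

```python
import numpy as np

# Illustrative layer model: depths (m), conductivities (W/(m*K)) of each
# layer, and temperatures (degrees C) measured at the layer boundaries.
depths = np.array([0.0, 100.0, 213.0])
k = np.array([1.3, 1.5])
temps = np.array([14.0, 15.5, 16.6])

# Thermal resistance Omega(z) = cumulative sum of (layer thickness / k).
resistance = np.concatenate([[0.0], np.cumsum(np.diff(depths) / k)])

# Heat flow is the slope of T vs Omega (the Bullard plot).
slope, intercept = np.polyfit(resistance, temps, 1)
print(f"heat flow = {slope * 1000:.0f} mW/m^2")
```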
Color reflectance
The mean values ± standard deviations and minimum/maximum values over all depths in Hole M0080A are 49 ± 6 and 9/71 for L*, −2.1 ± 0.8 and −9.3/0.6 for a*, and 3.6 ± 2.7 and −15/15 for b*, respectively. Overall, color reflectance values vary with the different environments, units, and subunits (Figures F30, F39) and follow the changes in facies associations (Figure F31), with much more pronounced changes compared with Holes M0078A and M0079A.

Color reflectance values change with depth in several intervals (Figure F30). The most notable change is the transition from Unit 2 to Unit 3, where a change was observed in both the mean values and the amount of scatter (Figures F30, F39). The second most notable change was observed between Units 3 and 4, where the means of the L*, a*, and b* values shift (Figure F30). Both major changes follow the differences in color of the dominant facies associations between three intervals: the combined Units 1 and 2, Unit 3, and Unit 4.
Changes in average values and the intensity of scattering in color reflectance values are also associated with facies changes (see Lithostratigraphy; Figure F31). Scattering of the data is more pronounced in the heterogeneous material of Units 3 and 4 than in the mud-dominated facies associations described in Units 1 and 2. This observation is verified in FA7, FA8, and FA9, especially for the a* and b* values, which exhibit more scattering than in the facies encountered in Units 1 and 2.

Figure F37. MSCL electrical resistivity box and whisker plots grouped by subunits, Expedition 381. Top and bottom of boxes correspond to 1st and 3rd quartiles, solid line in middle of box shows the median, dashed line shows the mean. Ends of whiskers indicate minimum and maximum values. I/SI = isolated/semi-isolated, slump = slumped subunit.
Magnetic susceptibility
A total of 379 discrete samples were analyzed from Hole M0080A. Frequency distribution diagrams of magnetic susceptibility (k) from discrete samples, together with shipboard MSCL continuous data (2 cm interval) from whole core sections, are shown in Figure F40. Magnetic susceptibility in Hole M0080A shows a unimodal distribution peaking between 0 and 50 × 10⁻⁶ SI, with values ranging from negative to positive, approximately between −100 × 10⁻⁶ and 4000 × 10⁻⁶ SI (Figures F41B, F41D). A similar trend is observed in the shipboard MSCL susceptibility data, where the unimodal distribution peaks approximately between 100 and 150 × 10⁻⁶ SI and values range between −100 × 10⁻⁶ and 3500 × 10⁻⁶ SI (Figures F40B, F41E). MSCL and discrete sample magnetic susceptibility data show a good match overall. The downhole distribution of magnetic susceptibility in Hole M0080A (Figure F41) shows low variability to ~250 mbsf, which is the bottom of lithostratigraphic Unit 2 (see Lithostratigraphy), with values around ~100 × 10⁻⁶ SI. Much higher variability and overall higher susceptibility values occur between ~250 and ~460 mbsf (coinciding with lithostratigraphic Unit 3), where susceptibility ranges between ~100 × 10⁻⁶ and ~4000 × 10⁻⁶ SI. This trend is much clearer when discrete sample and MSCL magnetic susceptibility data are plotted together against depth (Figure F41E). The scatter in susceptibility values between ~250 and ~460 mbsf is likely due to the occurrence of mafic ophiolitic clasts in Unit 3, which are produced by the erosion of a nearby ophiolite exposed at the southeastern margin of the Gulf of Corinth (Sakellariou et al., 2007). The good match between the downhole variation in magnetic susceptibility and natural remanent magnetization (NRM) (Figure F41) suggests that the susceptibility variation is mainly (but perhaps not exclusively) controlled by the change in concentration of the magnetic phases in the sediment. The direct relationship between NRM and susceptibility in Hole M0080A (Figure F42) is linear, suggesting the predominance of one type of magnetic mineral contributing to both NRM and magnetic susceptibility. The downcore increase or decrease in both NRM and susceptibility may therefore simply be controlled by the dilution of the magnetic minerals in the sediment.

Figure F39. L*a*b* color reflectance data box and whisker plots grouped by units, Expedition 381. Top and bottom of boxes correspond to 1st and 3rd quartiles, solid line in middle of box shows the median, dashed line shows the mean. Ends of whiskers indicate minimum and maximum values. I/SI = isolated/semi-isolated, slump = slumped subunit.
Magnetic mineralogy
Thermal variation of the low-field magnetic susceptibility was determined for six representative samples from Hole M0080A prior to the Onshore Science Party (OSP) (see Paleomagnetism in the Expedition 381 methods chapter [McNeill et al., 2019b] for more details). Figure F43 shows the results of the thermomagnetic experiments, revealing a wide range of Curie temperatures, between 116° and ~646°C. However, most of the samples show a primary Curie temperature either between 116° and 310°C, corresponding to Ti-rich magnetite, or between 502° and 585°C, which is typical of Ti-poor titanomagnetite or stoichiometric magnetite. Other Curie temperatures, between 365° and 486°C, suggest the occurrence of Ti-rich titanomagnetite. Oxidized (titano)magnetite (i.e., titanomaghemite) may also be present, with Curie temperatures ranging between 580° and 646°C.
Natural remanent magnetization
NRM direction and intensity of discrete samples were measured using the superconducting rock magnetometer 755-4000 cryogenic magnetometer (2G-Enterprise) at the University of Bremen (Germany). A total of 379 discrete cubic samples were analyzed from Hole M0080A. All samples were stepwise demagnetized by alternating fields (AFs) to determine the stability of the NRM during sequential demagnetization. Magnetization was measured after each demagnetization step and, in most of the samples, decays steadily to nearly complete removal at the maximum AF step of 100 mT (Figure F44B). This decay indicates the occurrence of low-coercivity magnetic minerals, such as magnetite or titanomagnetite. The shapes of the magnetization decay curves are typical of pseudosingle-domain grains. About 10% of the samples acquired a gyroremanent magnetization at AFs higher than 60-70 mT (Figure F44A). This spurious magnetization is typically acquired because of the occurrence of greigite (Fe₃S₄). The magnetization of samples showing this behavior is therefore likely to be carried by a mixture of greigite, magnetite, and titanomagnetite, in agreement with what was inferred from the rock magnetic experiments shown in Figure F43.
Orthogonal demagnetization diagrams (Figure F44) show that demagnetizing fields of 15-20 mT are sufficient to remove the weak viscous remanent magnetization and other secondary components of magnetization. After the application of a 30 mT demagnetizing field, 144 of the 379 samples show an inclination that is higher than the expected inclination (57.5°) for the site latitude (i.e., dots inside the red dashed circle, Figure F45). The magnetization of these samples might have been partially or totally overprinted by a vertical drilling-induced magnetization. The overall scatter in the remanence directions after demagnetization at 30 mT is reduced compared with that after 40 mT, with the concentration parameter (K) being slightly higher for the 30 mT data set (K = 9.0) than for the 40 mT data set (K = 7.5). For this reason, the inclination data after the 30 mT demagnetizing step were used to build the Hole M0080A magnetostratigraphy. Figure F46 shows the inclination data at the 30 mT AF step (interpreted to be indicative of the characteristic remanent magnetization component) from Hole M0080A, together with a preliminary magnetostratigraphy. This magnetostratigraphy is then tentatively correlated with the geomagnetic instability timescale (GITS) of Singer (2014). Downhole, the polarity is normal to 324.97 mbsf (Sample 91R-3, 50-52 cm) and reversed below 328.29 mbsf (Sample 92R-2, 39-41 cm). This transition occurs in the red-brown siltstone, sandstone, and conglomerates of Subunit 3-1 (see Lithostratigraphy) and might correspond to the Brunhes-Matuyama reversal (0.773 Ma; Singer, 2014), although this hypothesis remains speculative because of the paucity of biostratigraphic constraints (see Micropaleontology). Also, considering the predominantly continental nature of Subunit 3-1, the occurrence of hiatuses and unconformities in this subunit cannot be excluded; hence, the nature and age of the polarity transition occurring at 324.97-328.29 mbsf should be interpreted cautiously.
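The concentration parameter K compared above is the Fisher precision estimate commonly used for paleomagnetic directions, K = (N − 1)/(N − R), where R is the resultant length of the N unit vectors; a minimal sketch (the directions below are illustrative):

```python
import numpy as np

def fisher_k(decs_deg, incs_deg):
    """Fisher precision parameter k = (N - 1) / (N - R) for paleomagnetic
    directions, where R is the length of the vector sum of N unit vectors."""
    d = np.radians(np.asarray(decs_deg))
    i = np.radians(np.asarray(incs_deg))
    # Unit vectors: north, east, and down components.
    x = np.cos(i) * np.cos(d)
    y = np.cos(i) * np.sin(d)
    z = np.sin(i)
    n = len(d)
    r = np.sqrt(x.sum() ** 2 + y.sum() ** 2 + z.sum() ** 2)
    return (n - 1) / (n - r)

# Illustrative directions scattered around the expected inclination of 57.5 deg.
print(f"k = {fisher_k([10, 350, 5, 15], [55, 60, 50, 62]):.1f}")
```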
Magnetostratigraphy
In this uppermost normal polarity interval above 324.97 mbsf, several samples with reversed polarity occur, only a few of which reach the expected inclination value of −57.5°; these are marked with a white line in the magnetostratigraphic log of Figure F46C. It is possible that these samples recorded some of the excursions of the Brunhes normal chron. Additional rock magnetic experiments and chronostratigraphic constraints, however, will be necessary to establish which, if any, of these reversed polarity samples represent true magnetic field excursions.
Below 328.29 mbsf, the polarity is predominantly reversed to 458.18 mbsf and then alternates frequently between normal and reversed to 492.93 mbsf. Data from the interval between 458.18 and 492.93 mbsf need to be interpreted carefully, and for now we prefer not to assign any polarity to this interval (gray shaded area in Figure F46C). Below this interval, samples show a consistent normal polarity to 527.53 mbsf. Under the assumption that the transition at 328.29 mbsf is the Brunhes/Matuyama boundary and the sedimentary sequence is continuous, this normal polarity interval (492.93-527.53 mbsf) might represent the Jaramillo Subchron (1.001-1.076 Ma), although this hypothesis remains highly speculative at the time this report was prepared. A concurrent range (I. semenenko and A. primus; 4.58-7.39 Ma) was identified in Hole M0080A between 466.83 and 501.28 mbsf (Figure F46; see also Micropaleontology). This biostratigraphic constraint, if reliable, excludes the hypothesis that the normal polarity interval at 492.93-527.53 mbsf coincides with the Jaramillo Subchron and would instead suggest that it corresponds to some older normal polarity chron.

Figure F43. Low-field susceptibility vs. temperature (k-T) experiment results for six samples obtained before the OSP, Hole M0080A. Red = heating path from room temperature to 700°C, blue = cooling path from 700°C back to room temperature.
The absence of firm biostratigraphic constraints from the deeper part of Hole M0080A (lithostratigraphic Units 3 and 4) and the possibility of reworking of material prevents us from drawing conclusive correlations between the deeper borehole magnetostratigraphy and the GITS. Further analysis and integration of magnetostratigraphic and biostratigraphic data and other age constraints will be a focus of postexpedition research.
Downhole measurements
A nearly complete set of logging data was collected throughout most of Hole M0080A, including spectral gamma ray through pipe and resistivity, sonic, magnetic susceptibility, conductivity, and spectral gamma ray in the open hole. The deployment of wireline logging tools in Hole M0080A was changed relative to the logging plan because of several factors. The variation in the formation downhole, in particular the presence of sand, gravel, and pebbles in the lower half of the borehole, meant that hole instability needed to be considered. In addition, the dominance of mud facies in the upper part of the borehole meant that swelling of the formation was a possibility. Although absolute values of physical properties differ between core and log measurements, the logging data trends compare well with the core data (e.g., compare Figures F47 and F30).
Spectral gamma ray
Spectral gamma ray log data were acquired both through pipe and in the open hole. Open hole logs were acquired in three stages and cover the following depth intervals: 440-430, 370-230, and 220-50 m WSF. Through pipe data acquisition took place from the base of the hole to the seafloor in one run. The two data sets compare well (Figure F47).
The upper ~135 m WSF (lithostratigraphic Unit 1) shows increasing gamma ray values with depth, starting at <50 counts/s close to the seafloor (through pipe) and reaching 50 counts/s (through pipe) or 70-100 counts/s (open hole) at 135 m WSF (Figures F47, F48). Superimposed on this background trend, the gamma ray log shows local variations, with relatively higher gamma ray values (50-100 counts/s) associated with marine subunits and relatively lower gamma ray values (40-70 counts/s) associated with isolated/semi-isolated subunits. In addition, the spectral gamma ray logs indicate relatively higher uranium and thorium contents in the marine subunits (e.g., 55-65, 83-89, and 100-103 m WSF).
Below 135 m WSF in Unit 2, gamma ray values decrease sharply to <50 counts/s and then increase progressively to ~220 m WSF, reaching values >100 counts/s (Figure F47). The overall increase is not steady, and variability in the total count is associated with changes in the potassium and uranium logs. Below 215-220 m WSF, gamma ray values show multiple 20-50 m thick decreasing and increasing trends, fluctuating between <50 and 100 counts/s. At 355 m WSF, gamma ray values sharply increase at the transition to Subunit 3-2. From 355 to 435 m WSF (Subunit 3-2), the signal becomes more stable and remains around 50 counts/s (through pipe data; minimal open hole data collected in this interval), followed by a sharp decrease to <50 counts/s at 435 m WSF, corresponding to the tool measuring through the thick collars of the drill string, attenuating the signal received from the formation. Another decrease (<20 counts/s) occurs at ~460 m WSF and below (Unit 4) and is associated with changes in other logs, reflecting changes in the formation.

Figure F46. A. Inclination of the remanence after demagnetization at 30 mT, Hole M0080A. Red dashed lines = expected magnetic field inclination at the site latitude (i.e., +57.5° and −57.5°). B. Lithostratigraphic unit/subunit boundaries (see Lithostratigraphy). Unit 1: blue = marine, green = isolated/semi-isolated. Unit 2-4 colors do not have any paleoenvironmental meaning. C. Preliminary magnetostratigraphy. Black = normal polarity, white = reversed polarity. See Micropaleontology for details of available biostratigraphy data (nannofossils I. semenenko and A. primus) in Unit 4. D. GITS after Singer (2014), with possible correlation to Hole M0080A magnetostratigraphy; the dashed line with question mark indicates the only correlation between site magnetostratigraphy and GITS, which is speculative at the time this report was prepared. M/B = Matuyama/Brunhes.
Magnetic susceptibility
Values recorded by the magnetic susceptibility tool are questionable because they are <0, but the trends mirror those observed in the core MSCL data ( Figure F47). Therefore, when describing this log, only trends are described and no values are given. Comparing the log and core MSCL data, the trends in magnetic susceptibility with depth are similar, but some distinct differences in variability and scatter occur in each lithostratigraphic unit.
The EM51 tool was sent downhole first at each stage and was therefore more likely to collect data with hole conditions at their best. Data were collected in three stages: 532-430, 428-230, and 223-50 m WSF. From 50 to 255 m WSF in Units 1 and 2, the magnetic susceptibility log shows generally low values with little variation, with locally higher values in the isolated/semi-isolated subunits of Unit 1.

Figure F47. Wireline log data and comparison with data from cores, Hole M0080A. Wireline logs are on WSF depth scale; MSCL data are on mbsf depth scale. Unit 1: blue = marine, green = isolated/semi-isolated (see Lithostratigraphy). Unit 2-4 colors do not have paleoenvironmental meaning. SGR = spectral gamma ray. Velocity: red line = smoothed VP log data.
Resistivity
Resistivity logs were derived from the induction log of the EM51 tool and from the induction logs of the DIL45 tool. Resistivity derived from the EM51 tool covers most of the borehole depth. Deep and shallow resistivity logs derived from the dual induction tool were recorded from 368 to 230 m WSF and from 220 to 50 m WSF. Note that in the upper part of the hole, deep and shallow resistivity values do not overlap (deep resistivity is ~4-5 Ωm higher than shallow resistivity), in contrast to the lower part of the hole. This difference is due to the borehole fluid: bentonite was used to stabilize the hole for the deeper logging Stages 1 and 2 (535-230 m WSF), whereas seawater was used for the shallower Stage 3 (230-50 m WSF). All resistivity logs compare very well (Figure F47). From 50 to 350 m WSF, resistivity values are typically <10 Ωm, with local variability from 50 to ~135 m WSF (typically ±2 Ωm excursions; Unit 1) and little variability from 135 to 250 m WSF (<1 Ωm variations; Unit 2). A clear increase in resistivity occurs below 250 m WSF (into Unit 3), also associated with more variability on a <5 m depth scale until reaching 340 m WSF. This interval (~250-340 m WSF) is also characterized by high and variable magnetic susceptibility measurements. Both data sets may be explained by the heterogeneity of the coarse-grained terrestrial material in this interval (see Lithostratigraphy). At 350 m WSF (boundary between Subunits 3-1 and 3-2), resistivity increases sharply and then increases from 5 to >10 Ωm from 350 m WSF to the bottom of the hole, with local sharp increases >10 Ωm from 350 to 420 m WSF.

Figure F48. Wireline log data showing detail of Unit 1, Hole M0080A. Wireline logs are on WSF depth scale; MSCL data are on mbsf depth scale. Subunits: blue = marine, green = isolated/semi-isolated (see Lithostratigraphy). SGR = spectral gamma ray. Velocity: black line = VP log, red line = smoothed VP log data, dots = discrete sample measurements (see Physical properties).

Sonic

Sonic data were collected in three stages: 530-430, 372-230, and 220-50 m WSF. Data processing was made challenging by uncertainty in picking the first arrivals, especially in the deeper parts of the borehole (below 230 m WSF; see results from picking compared with semblance analysis, Figure F49). Despite these difficulties, the sonic trend derived from the raw data compares well with MSCL and discrete P-wave data from cores, where collected (see Physical properties; Figure F47). From 50 to 135 m WSF (Unit 1), sonic velocity values are relatively low (1500-1600 m/s). The values increase to 1700-2000 m/s from 135 to 255 m WSF (Unit 2) and reach values >2000 m/s from 255 to 340 m WSF (Subunit 3-1). Just above 350 m WSF, a prominent decrease to 1700 m/s occurs for ~10 m. From 350 to 370 m WSF (below the boundary between Subunits 3-1 and 3-2), velocity values stay above 2000 m/s. From 430 to 460 m WSF, sonic velocity values decrease from 2500 to 1600 m/s and increase again to 2500 m/s. Below 460 m WSF (Unit 4), average velocity values are typically >2500 m/s, with pronounced scattering between 2000 and 4000 m/s. Note that the trends in the sonic and resistivity logs compare well, reflecting relatively good data quality (a notable example is from 350 to 371 m WSF). The local decreases in sonic velocity values at 155 and 175 m WSF are probably real because they match local, small-scale scattering in the resistivity data from the DIL45 tool.
Comparison with lithostratigraphic units

Borehole logs recorded in Hole M0080A compare well with the lithostratigraphic units and subunits defined from core descriptions (see Lithostratigraphy). Lithostratigraphic Unit 1 (0-136.96 mbsf) is characterized by relatively high gamma ray counts with locally higher counts in marine subunits, locally higher magnetic susceptibility values in isolated/semi-isolated subunits, and local resistivity variations. Sonic velocity values are 1500-1600 m/s in this unit. In Unit 2 (136.96-256.85 mbsf), the gamma ray count markedly decreases, magnetic susceptibility and resistivity log data show little variation, and sonic velocity increases to close to 2000 m/s. Subunit 2-5 can be identified in the magnetic susceptibility log by local positive excursions. Unit 3 (256.85-458.40 mbsf) is distinct in the magnetic susceptibility and resistivity logs, with higher values and more scattering in both data sets. The boundary between Subunits 3-1 and 3-2 can be identified from these same logs, with a clear change in pattern at ~350 m WSF where magnetic susceptibility decreases and resistivity increases sharply. Unit 4 (458.40-534.20 mbsf) is very distinct in the magnetic susceptibility log, with low values exhibiting very little variability compared with Unit 3 above, except in the deepest part of the borehole. Unit 4 is also characterized by a clear increase in sonic velocity values, typically >2000 m/s. Finally, the gamma ray through pipe log, although attenuated by the drill string collars, shows a sharp decrease at the Unit 3/4 boundary, with low values in Unit 4 thought to reflect the carbonate-rich lithologies.
Core-log-seismic integration
Core-log-seismic integration (CLSI) at Site M0080 utilized MSCL density data together with velocity information from downhole logging, core MSCL measurements, and discrete samples to link core data to coincident seismic data (Figure F2). Of the three sites drilled during Expedition 381, Site M0080 had the largest amount of information on P-wave velocity, potentially making it possible to establish the most accurate time-depth relationship (TDR) for seismic horizons. However, each of the velocity data sets had quality limitations and thus required significant effort with filtering and quality control.
Velocity data integration
A downhole sonic log was acquired over almost the full depth range of Hole M0080A (50-530 m WSF). However, it exhibits large velocity fluctuations, suggesting poor borehole conditions in some parts of the hole (Figures F50, F51). The sonic log has a 60 m long gap at 370-420 m WSF, and relatively high values (>3000 m/s) occur below 460 m WSF (Figure F51). The overall trend, however, agrees well with core MSCL VP data acquired offshore. VP measured on discrete samples, on average, exceeds downhole values to ~110 mbsf (Figure F50). At greater depths, velocity values acquired on discrete samples onshore mostly fall between the MSCL and downhole log values (Figure F51). After testing multiple approaches to combining and filtering VP data from the various methods and testing the sensitivity of synthetics to the input velocity information, the two scenarios described below were selected, which represent two end-member versions of the P-wave velocity profile in the deepest part of the hole (Figure F52).
Version 1 of the input velocity for synthetics represents the high VP end-member scenario. After removing VP values less than 1550 m/s, a 10-point (~15 m) running average of VP measured on discrete samples between 0 and 136 mbsf was combined with a 150-point (~7.5 m) running average of the downhole sonic log below 136 mbsf. For Version 2, which constitutes the low VP end-member, the same approach was used in the upper 320 mbsf, but instead of using the sonic log to the total depth of the borehole, core MSCL data were used below 320 mbsf. The MSCL data were also filtered to remove values under 1550 m/s and then smoothed with a 150-point (~3 m) running average. Figure F52 illustrates the two resulting VP profiles used for synthetics generation (Tables T14, T15). The density profile for synthetics generation was produced from filtered and smoothed MSCL data using the approach developed offshore and depth shifted to sea level (Table T16) (see Core-log-seismic integration in the Expedition 381 methods chapter [McNeill et al., 2019b]).
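The compositing described above amounts to a threshold filter followed by a running average; a minimal sketch (array values and window length are illustrative):

```python
import numpy as np

def clean_and_smooth(vp, vmin=1550.0, window=10):
    """Drop unrealistically low VP values, then apply a running average."""
    vp = np.asarray(vp, dtype=float)
    vp = vp[vp >= vmin]                  # remove values under vmin (m/s)
    kernel = np.ones(window) / window
    return np.convolve(vp, kernel, mode="valid")

# Illustrative discrete-sample VP series (m/s); the real composite profiles
# are given in Tables T14 and T15.
vp_raw = np.array([1480, 1560, 1575, 1600, 1490, 1620, 1640, 1665,
                   1700, 1710, 1725, 1740, 1760])
print(clean_and_smooth(vp_raw, window=4))
```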
Impact of input velocity on reflection coefficient series and time-depth conversion
The input VP profile influences two key parameters of synthetics generation: the reflection coefficient series (RC), which determines the appearance (shape and intensity) of reflections in the synthetic seismogram, and the TDR function, which determines how those reflectors are positioned in the time and depth domains. By utilizing different input velocity values, it was possible to assess the robustness of the resulting synthetics and TDR functions and their sensitivity to the input parameters. Three input velocity profile versions were tested for Site M0080 for both the impact on the RC and the time-depth conversion: a linear velocity model derived from seismic data before the expedition (see Core-log-seismic integration in the Expedition 381 methods chapter [McNeill et al., 2019b]) and the two versions of core- and log-based combinations described above (Versions 1 and 2, Figure F52).

Figure F50. VP data sets available for 0-300 mbsf, Site M0080. Comparison of pre-expedition linear VP model (see Core-log-seismic integration in the Expedition 381 methods chapter [McNeill et al., 2019b] for details), filtered discrete sample VP (gray = low values that were filtered out, blue = retained data, blue line = 10-point average smoothed), downhole sonic VP log, and 7.5 m average smoothed downhole log. The comparison suggests that discrete sample VP measurements yielded more realistic velocity values than the downhole log to about 140 mbsf.
Impact on reflection coefficient series
Three versions of the synthetic seismograms shown in Figure F53 were generated with three different inputs for the RC computation: (1) the original (pre-expedition) linear velocity profile derived from seismic data, (2) a combination of discrete and downhole velocity values (the high VP end-member version described above; Version 1), and (3) a combination of discrete, downhole, and MSCL velocity values (the low VP end-member version described above; Version 2). To facilitate comparison, all three versions were based on the linear velocity profile as a starting point for time-depth conversion, adjusted slightly to tie to the seismic data. Using core- and log-based VP for RC computation improves the appearance of the synthetic seismograms and the reproducibility of reflections in coincident seismic data compared with the linear velocity model, but the observed difference is surprisingly small. In fact, all three versions reproduce the main reflectors, suggesting that in this geologic setting, the main reflectors in the seismic record can be predicted by density variation (Table T17), increasing our confidence in the CLSI results at Sites M0078 and M0079, where very limited VP data were available.
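For reference, a reflection coefficient series is computed from the acoustic impedance contrast at each interface and convolved with a wavelet to form the synthetic trace. A minimal sketch, with illustrative inputs and a Ricker wavelet as one common source approximation (an assumption here, not necessarily the wavelet used for the expedition synthetics):

```python
import numpy as np

def reflection_coefficients(rho, vp):
    """RC at each interface from acoustic impedance Z = rho * vp:
    RC_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)."""
    z = np.asarray(rho) * np.asarray(vp)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(freq_hz, dt_s=0.001, length_s=0.128):
    """Zero-phase Ricker wavelet, a common source approximation."""
    t = np.arange(-length_s / 2, length_s / 2, dt_s)
    a = (np.pi * freq_hz * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

rho = np.array([1.9, 2.0, 2.16, 2.33])    # g/cm^3, illustrative
vp = np.array([1600, 1900, 2200, 3000])   # m/s, illustrative
rc = reflection_coefficients(rho, vp)     # one value per interface

# Place each RC at its two-way-time sample, then convolve with the wavelet.
trace = np.zeros(256)
trace[[40, 90, 150]] = rc                 # illustrative TWT sample indices
synthetic = np.convolve(trace, ricker(40.0), mode="same")
```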
The largest difference in the appearance of synthetic seismograms at Site M0080 occurred in the deepest part of Hole M0080A, where using core- and log-based input velocity values increases the synthetics resolution. Judging by the appearance of the synthetics, the input velocity based on MSCL data provides the best fit to the seismic data (Figure F53C). The output interval velocity values are slightly different in all three scenarios, but all suggest an increase in VP at around 400 mbsf (~750 meters below sea level [mbsl]).

Figure F51. VP data sets available for 300-530 mbsf, Site M0080. Comparison of pre-expedition linear VP model (see Core-log-seismic integration in the Expedition 381 methods chapter [McNeill et al., 2019b] for details), offshore MSCL VP, 3 m average smoothed MSCL VP, discrete sample VP, downhole sonic VP log, and 7.5 m average smoothed downhole log. The comparison suggests similar trends in downhole and MSCL records but potentially overestimated VP in the downhole log or underestimated VP in the MSCL data.

Figure F52. Two versions of composite VP data used for synthetics generation with original data sets in gray (see Figures F50 and F51 for details), Site M0080. Both versions are based on smoothed VP from discrete sample measurements in the upper 140 mbsf and the smoothed downhole log from 140 to 310 mbsf. The difference is in the deeper section: the smoothed downhole sonic log was used in Version 1, and smoothed MSCL data were used in Version 2.

Table T14. Composite velocity profile used for synthetics generation derived from a combination of VP measurements on discrete samples and downhole sonic log, Site M0080. Download table in CSV format.

Table T15. Composite velocity profile used for synthetics generation derived from a combination of VP measurements on discrete samples, downhole sonic log, and offshore MSCL data, Site M0080. Download table in CSV format.

Figure F53. Comparison of synthetic seismograms based on reflection coefficient series generated from three different input velocity profiles. A. Linear velocity model. B. Velocity derived from discrete and logging data (Version 1, Figure F52). C. Velocity derived from discrete, logging, and core MSCL data (Version 2, Figure F52). All three versions used the linear velocity model as a starting point for time-depth conversion, which was slightly adjusted to achieve ties between major horizons. Only the deeper part of the section is shown, where the differences are most pronounced. Tracks display true vertical depth (TVD), TWT, input density and velocity, computed reflection coefficient series, ten traces of Maurice Ewing Line 22 west-east profile crossing Site M0080 (see Figure F2), synthetic seismogram, ten more traces of the same seismic line, and final interval velocity profile resulting from tying the synthetic to the seismic data. Areas under interval velocity curves are filled to emphasize the differences, with color scaled by interval velocity values (cold colors = low values, warm colors = high values).

Impact on time-depth relationship

An important effect of input velocity on core-seismic data integration is its influence on the output interval velocity and time-depth conversion function. Therefore, the next step consists of comparing cases in which both reflection coefficients and time-depth conversions are based on velocity values from discrete samples and logging data and on discrete samples, downhole logging, and core MSCL data (Figures F54, F55).
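For reference, the time-depth conversion underlying each TDR accumulates two-way traveltime through the interval velocity profile; a minimal sketch with illustrative values:

```python
import numpy as np

def two_way_time(depths_m, interval_vp_ms):
    """Two-way traveltime (s) at each depth from an interval velocity profile.

    TWT(z) = 2 * sum(dz_i / v_i) over the intervals above z.
    """
    dz = np.diff(depths_m)
    dt = 2.0 * dz / np.asarray(interval_vp_ms)
    return np.concatenate([[0.0], np.cumsum(dt)])

# Illustrative profile: interval velocities between successive depths.
depths = np.array([0.0, 130.0, 250.0, 360.0, 530.0])
vints = np.array([1550.0, 1800.0, 2100.0, 2800.0])
print(two_way_time(depths, vints))  # TWT below seafloor, in seconds
```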
In all of the scenarios, the output interval velocity from tying synthetics to seismic data in Hole M0080A shows similar large-scale features at <450 mbsf. Very low velocity values (1500-1600 m/s) are needed for the tie in the upper 120-130 mbsf, which could be explained either by very high porosity (Figure F30) or by unaccounted hole deviation. No other evidence supports the latter, and therefore CLSI is performed assuming a vertical hole. Other consistent features in the output interval velocity function include a stepwise increase at 120-130 mbsf, another at ~250 mbsf, and a very sharp increase at ~360-370 mbsf. Slight differences can be seen in the finer details, but overall, the output interval velocity profiles are consistent among the scenarios.

Figure F54. Synthetic seismogram based on input velocity profile generated from discrete and downhole logging data (Version 1, Figure F52) showing TVD, TWT, input density and velocity, computed reflection coefficient series, ten traces of Maurice Ewing Line 22 west-east profile crossing Site M0080 (see Figure F2), synthetic seismogram, ten more traces of the same seismic line, and final interval velocity profile resulting from tying the synthetic to the seismic data. Area under interval velocity curve is filled to emphasize the differences, with color scaled by interval velocity values (cold colors = low velocity, warm colors = high velocity).

A big difference occurs at the base of the hole, where downhole logging data (used to build the high VP end-member model) suggest velocity values close to 3500 m/s, which strongly compresses the lower portion of the synthetic seismogram and shifts the synthetics upward with respect to the seismic data (Figure F54). For the lower VP end-member (with velocity values based on MSCL data; Version 2), a good match between synthetic and seismic data is achieved, but the resulting velocity profile suggests very low values for such depth, ranging from as low as 1700 m/s to about 2600 m/s (Figure F55). The output velocity profile and TDRs are shown in Figure F56 (full data files are available in M0080_TDR1.xlsx and M0080_TDR2.xlsx in CLSI in Supplementary material).
Core-seismic integration
Synthetic seismograms at Site M0080 reproduce the reflectors in the seismic data remarkably well, and the reflectivity pattern is similar for all versions of the input velocity profiles (e.g., Figures F53, F54, F55). The only point of ambiguity occurs in matching the lowermost part of the hole and the transition into basal conglomerates (Figures F54, F55), which is described in greater detail below.
The four lithostratigraphic units identified in the core have visibly different expressions on seismic R/V Maurice Ewing Line 22, which crosses the site in the east-west direction (Figures F57, F2). Lithostratigraphic Unit 1, characterized by a succession of relatively thin marine and isolated/semi-isolated intervals, exhibits a series of continuous reflectors with pronounced positive peaks. These reflectors are likely caused by the change to lower density in marine intervals seen in the MSCL and MAD data (e.g., Figure F30). The resolution of the seismic data does not make it possible to resolve all of the marine and isolated/semi-isolated subunits, but at least a couple of the subunit boundaries can be reliably traced from core to seismic data (Figure F57). Lithostratigraphic Unit 1 is characterized by very low velocity, likely caused by high porosity in this unit. The lithostratigraphic Unit 1/2 boundary corresponds to an increase in velocity and coincides with a clear change in the seismic reflectivity pattern.
Lithostratigraphic Unit 2 is also characterized by a series of continuous reflectors in the seismic record, but in contrast to Unit 1, the reflectors have a different character; the positive peaks and negative troughs of the reflectors have similar amplitudes (Figure F57). The reflectors identified as the bases of marine intervals by Nixon et al. (2016) (H5 and H6) correspond to subunit boundaries in this predominantly isolated/semi-isolated unit. The unit is characterized by slightly higher velocity values, with some variability within the unit. Another increase in velocity occurs at the lithostratigraphic Unit 2/3 boundary, which corresponds to a low-amplitude reflector one cycle shallower than previously interpreted by Nixon et al. (2016).

Figure F55. Synthetic seismogram based on input velocity profile generated from discrete, downhole logging, and core MSCL data (Version 2, Figure F52) showing TVD, TWT, input density and velocity, computed reflection coefficient series, ten traces of Maurice Ewing Line 22 west-east profile crossing Site M0080 (see Figure F2), synthetic seismogram, ten more traces of the same seismic line, and final interval velocity profile resulting from tying the synthetic to the seismic data. Area under interval velocity curve is filled to emphasize the differences, with color scaled by interval velocity values (cold colors = low values, warm colors = high values).
Unit 3 exhibits a more complex reflectivity signature, characterized by less continuous reflectors and some gaps in reflectivity (Figure F57), apparently corresponding to conglomerates in Subunit 3-1. A relatively bright reflection is predicted at the boundary between Subunits 3-1 and 3-2, which appears to have an equivalent in the seismic reflection data. Subunit 3-2 (dominated by silt) is characterized by higher velocity values that decrease again in Subunit 3-3, which corresponds to the transition into gray mud (see Lithostratigraphy).

Figure F56. Comparison of final (top) interval velocity values and (bottom) TDRs for two end-member synthetics generated from the MSCL-based velocity profile (Version 2) and the downhole log-based profile (Version 1), showing the main differences in velocity structure and TDR in the lower ~100 m of Hole M0080A.

The link between lithostratigraphic Unit 4 and the seismic data varies depending on the input velocity model (Figure F57). For the case of lower input velocity values derived from MSCL data (Version 2), the top and bottom of Unit 4 are marked by two distinct reflectors, the lower of which was previously attributed to the top basement unconformity by Nixon et al. (2016). This reflector would correspond to the boundary between Subunits 4-1 and 4-2, and the top of the basal conglomerates would correspond to a reflection below the originally interpreted "basement" reflection of Nixon et al. (2016).
For the case of higher velocity values derived from the downhole log (Version 1), the entirety of Unit 4 maps onto the top reflector, and the subunit boundaries cannot be resolved (Figure F57B). In this case, the reflector interpreted to be the basement boundary would be intersected at the very bottom of Hole M0080A. Unit 4 is dominated by consolidated carbonates and therefore can be expected to have velocity values above 3000 m/s. In addition, the MSCL and discrete sample velocity measurements are conducted ex situ and are likely to underestimate in situ velocity values. The higher velocity interpretation (Version 1) is thus favored. Given the significant difference in the seismic correlation for Unit 4, P-wave velocity values at the base of the hole need to be carefully reevaluated by additional well log processing and confined VP measurements, if possible.
The accounting reform in Spain. An analysis from the point of view of time and degree of knowledge of accountants
Purpose – The goal of this paper is to test whether Spanish professionals' perception of the accounting reform in Spain resulting from the adaptation to IFRS changes depending on when it is analysed, and whether this opinion depends on their degree of knowledge. Design/methodology/approach – We use a survey administered to Spanish accountants at four different times after the accounting reform. Findings – The degree of knowledge of accountants increases after the first application of the new requirements, and it affects their opinion of the accounting reform. It is important to highlight that perceptions are not the same across the different accounting areas. Originality/value – Accountants play an important role in accounting development, so it is valuable to know their opinion after an accounting reform that aims at comparability with IFRS. Their perceptions are less critical once they have gained experience with the new rules, and they prefer changing the national regulation over requiring IFRS directly in order to achieve this comparability.
Introduction
For more than a decade we have been witnessing the international implementation of International Financial Reporting Standards (IFRS). These standards are adopted directly, through the European Regulation, in the case of listed European business groups; elsewhere, the decision lies with each Member State.
In Spain, it was decided to adapt our regulations to IFRS, so that all financial information published by Spanish companies, whatever their type and size, is homogeneous, comparable, and in accordance with IFRS. Thus, in 2007, a new General Accounting Plan (GAP) was published (in fact, there were two, as there was also one adapted to SMEs), which reformed our accounting regulations and introduced the requirements contained in IFRS for all our companies. This large regulatory change in Europe has been carried out through an adoption mechanism in which all the potential actors, among them the professionals, take part and play an important role (Mourik and Walton, 2018). In Spain, we are also involved in our own adoption process, given the type of business to which the successive reforms of our accounting system are directed and which is carried out by our regulator, although the capacity of professionals to influence it seems more limited (Mora, 2017). This is why we address accountancy professionals in Spain following this regulatory change, which had to be applied for the first time to financial statements as of 1 January 2008 and which entailed important changes in accounting concepts and problems that had not previously been included in our legal system. Some research has analyzed different aspects of the adoption of IFRS, whether mandatory or voluntary, in Europe or in specific countries, studying the effects on the choice of auditor (Wieczynska, 2016); on the information analyzed by analysts (Kim, Kim, and Kwon, 2016); on the relevance of the information published by companies (Kouki, 2018); and the costs and benefits of its implementation (Fox, Hannah, Helliar, and Veneziani, 2013). Only in some cases has the opinion of accounting professionals been studied (Fox et al., 2013; Lang and Martin, 2016, 2017), and that opinion is the focus of this paper. To this we add the answers to a questionnaire administered to Spanish accounting professionals at four different moments, starting from the first year of application, so that this study develops and expands over time. There are precedents that distinguish between different time periods in this process of adoption and adaptation to IFRS, e.g., Kim, Kim, and Kwon (2016), Wieczynska (2016), or Kouki (2018), but none with four different time periods ranging from the first application of the new regulation to eight years later, as is our case.
Hence, we analyze the results obtained in order to check whether the moment influences the perceptions of accounting professionals and, at the same time, whether their degree of knowledge of this new regulation is positively related to the moment at which they evaluate it or depends on the specific accounting problem in question. The results of this research should therefore be useful for regulators, accounting professionals, and other stakeholders.
The paper is structured as follows. After this introduction, section two offers a review of the literature on this subject, followed by a complete empirical study and analysis of the main results, before ending with the most relevant conclusions.
Literature Review
There are antecedents on how accounting professionals have approached normative changes, and on their opinion of them, since in our subject the analysis of the interrelation between theory and practice is fundamental and determinant over time (García Benau, 1997, pp. 263-276). Professionals play a vital role in accounting harmonization processes, as Ding, Hope, Jeanjean, and Stolowy (2007) argue, since even the reduction in differences between domestic regulations and IFRS is associated with the level of economic development and the importance of the accounting profession. Those who apply IFRS in their daily work must be directly involved in these processes, as they are users of the standard and will benefit from the advantages of its adoption (Hoogendoorn, 2006). Accounting professionals are one of the most important stakeholders, both in the process of preparing IFRS and in the process of their adoption by the European Union, and their voice is listened to in the mechanisms established for this purpose (Mourik and Walton, 2018). Hence, it is their opinions that we gather through a questionnaire, specifically those on the huge change that the adaptation of our regulations to IFRS has meant in Spain as a consequence of this European harmonizing process.
There are studies analyzing the costs and benefits of this accounting reform, as there are studies analyzing the effects of the adoption of IFRS (Preiato, Brown, and Tarca, 2015) and the implications for the accounting profession (Carmona and Trombetta, 2008), concluding that there are more advantages than disadvantages (in the case of Spain we would highlight Callao, Jarne, and Laínez (2007), Castillo-Merino, Menéndez-Plans, and Orgaz-Guerrero (2014), Gonzalo (2014), and Doadrio, Alvarado, and Carrera (2015)), or that the cost-benefit function of applying IFRS remains generally positive even when considering the cost of training professionals, since IFRS have increased the complexity of preparing financial statements (European Commission, 2015). There are, however, also works like Fox et al. (2013), which, through interviews with accountancy professionals from several European countries, verify that the costs for these interest groups have exceeded the benefits and that regulators need to be aware of them.
Faced with a regulatory reform of this magnitude, the degree of knowledge of professionals is a variable that will initially come from training and will subsequently evolve with practice and the application of the new regulations over time. In this sense, there are precedents that focus on the importance of training accounting experts, e.g., Arquero (2000), who analyses the deficiencies that can be found in training for the practice of this subject. Milanés and Texeira (2006) relate the training of entrepreneurs to the value they give to financial information, concluding that their training is necessary to obtain returns from accounting. In an earlier study, these same authors (Milanés and Texeira, 2006) point to managers as being partly responsible for the non-compliance with the objectives of accounting information in SMEs, since they consider accounting an expense and, therefore, so is training in this field. Marín, Antón, and Palacios (2008) conclude that Spanish economists evaluated the knowledge acquired in accounting and finance as important or very important for the development of their profession and subsequent performance. Kouki (2018) refers specifically to the fact that professionals have had to improve their training and knowledge in the face of the normative change implied by IFRS. Implementing a new standard whose concepts are partly foreign to the existing accounting system may raise the question of whether the process has been overhasty (Markelevich, Shaw, and Weihs, 2011). Such an adoption may pose difficulties from the point of view of the cultural idiosyncrasy of each accounting system (Mukoro and Ojeka, 2011). So, finally, we must take into account the moment at which we analyze the degree of knowledge of professionals about this change in accounting regulation.
In the case of Spain, and with regard to the degree of knowledge of professionals about the requirements of IFRS, the White Paper for the accounting reform by the ICAC (Spanish Institute of Accounting and Account Auditing) (2002) shows that 10.51% of those surveyed have a high level of knowledge; 42.99% a good knowledge; 38.55% low knowledge; and 7.95% none at all. Condor et al. (2006) also conclude that 71.95% of the companies surveyed claim to know International Accounting Standards (IAS) (4.88% in detail). In Navarro, Sánchez, and Lorenzo (2007), 30% of the financial managers of the companies and 92% of the auditors acknowledge knowing the international standards. Millán (2007) points out, with regard to the report, that with the entry into force of the Spanish GAP of 2007 its contents would be substantially expanded in view of the obligations derived from IAS/IFRS; and Gonzalo Angulo (2014) indicates that the accounting reform carried out in Spain has changed the cardinal rules of the existing regulations and has shown that the accounting profession can successfully assume these changes and quality requirements in financial information. Therefore, although there are studies that analyze the effects of a change in accounting standards, they focus on short-term changes (ICAEW, 2015) and do not check in practice how the change works over long periods. One of the main characteristics of any transition is that professionals learn with time (ICAEW, 2015), even if they were initially trained; in addition, IFRS are not static, so early results on the implementation of a standard may not be sustained over time, since, in the face of change, behavior does not adjust so quickly (Brown, 2011). Estima and Mota (2015) point out that the consequences of the adoption of IFRS will probably begin to be detected only after many years of application. This progress is linked to the degree of knowledge that professionals have about all aspects of the new standards. It is not surprising that, although the degree of knowledge of the new regulations advances over time, there are certain problems whose theoretical acceptance begins to decrease and translates into a need for new regulations (Navarro et al., 2007). Two recent studies which have taken into account the passage of time in the perspective of professionals after the imposition of IFRS in Europe through the corresponding Regulations and Directives are those carried out within the European Federation of Accountants and Auditors for SMEs (EFAA), which aim, first, to ascertain how the 2013 Accounting Directive has been transposed in the Member States (Lang and Martin, 2016) and, second, to study whether there has been a trickle-down effect from large companies to European SMEs as regards the requirements of the European Regulations imposed by IFRS (Lang and Martin, 2017).
Specifically, the changes that our adoption of IFRS has entailed in Spanish regulations for those companies to which they are not obligatorily applied, and the fact that we can draw on responses at different times from the first application of this new regulation, has led to our first hypotheses:
H1:
The degree of knowledge that Spanish professionals have about the new accounting regulations is positively associated with the moment in time at which it is assessed, from its first application onwards and as the regulations develop.
H2:
The degree of knowledge that Spanish professionals have about the new accounting regulations is determined by the type of accounting problem they face. Kim, Kim, and Kwon (2016) also analyze the effect of the application of IFRS, but in the case of Korea, where they are mandatory. Their baseline hypothesis also establishes a positive relationship, but they focus on the effect on analysts and their predictions. Kouki (2018) also introduces temporal differentiation in the effect of the adoption of IFRS, but in his case adoption by companies is voluntary, comparing two moments, 5 years before and 6 years after, and focusing on the relevance of financial information. Wieczynska (2016) also analyses the consequences of the normative change due to the adoption of IFRS, but for the change of audit firm, finding that there is a clear shift from small firms to large auditors in the first year of adoption. Therefore, the initial moments after the regulatory change are decisive, as we have stated in our hypotheses, and their effects extend to subsequent moments. The second hypothesis, on the specific type of accounting problem, is closely linked to the sector or activity of the companies in which the accounting professional operates; hence the antecedents, although they focus on the work of financial analysts, prove clearly that the sector, and thus the accounting problem faced, determines the effects that the adoption of IFRS has on the information used to make forecasts (Bae, Tan, and Welker, 2008; Byard, Li, and Yu, 2011; Horton, Serafeim, and Serafeim, 2013; Beuselinck, Joos, Khurana, and Meulen, 2017).
Methodology and Sample
In order to obtain the results and conclusions of this work, a questionnaire was addressed to the members of the specialized body of the General Council of Economists of Spain, whose members are accounting economists, that is, those who are professionally dedicated to financial information in general and to accounting in particular. The various antecedents of the aforementioned works were taken into account in the design of the questionnaire, always approaching the study from the point of view of the professionals, and a pre-test and a control test were carried out in the initial elaboration of the first survey with the members of the Board of Directors of Economist Accountants. This specialized body has changed its name over time, going from the Economists Experts in Accounting and Financial Information (ECIF) to the current Accounting Economists - General Council of Economists (EC-CGE), which includes the Register of Accounting Experts (REC). The questionnaire was administered via the Internet at four different times: 2008, 2009, 2013, and 2015, treating the answers in an aggregate and anonymous manner at all times. There may be some differences with respect to the results of previous years because we have refined the number of statistically valid questionnaires for our analysis. The global population of the survey comprised all the economists who are members of the EC-CGE, close to 2,000 members with a presence throughout the national territory. The responses received in each of the four years have allowed us to make estimates with a confidence level of at least 90% and a maximum sampling error of ±4.7%. The response rate obtained in all years is above 15% of the population, which is high for studies using the Internet survey as a basic tool of empirical methodology (Couper, 2000). (In 2008, 395 responses were received out of a total of 1,700 members; in 2009, 2013, and 2015 there were 297 responses out of 1,750, 300 out of 1,900, and 331 out of 1,995 members, respectively.)
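As a rough cross-check of the reported precision, the worst-case margin of error for a proportion can be computed with the standard finite-population formula. The Python sketch below uses the per-wave figures given in the text (the formula and z-value are textbook conventions, not taken from the paper) and yields values at or below the ±4.7% stated above.

```python
import math

def margin_of_error(n, N, z=1.645, p=0.5):
    """Worst-case (p = 0.5) sampling error for a proportion estimated from a
    simple random sample of size n drawn from a finite population of size N,
    at the confidence level implied by z (1.645 corresponds to ~90%)."""
    fpc = math.sqrt((N - n) / (N - 1))            # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Survey waves as reported in the text: (responses, population)
waves = {2008: (395, 1700), 2009: (297, 1750), 2013: (300, 1900), 2015: (331, 1995)}
for year, (n, N) in waves.items():
    print(year, f"±{100 * margin_of_error(n, N):.1f}%")
# Each wave comes out around ±3.6% to ±4.4%, consistent with the
# "maximum sampling error of ±4.7%" claimed at a 90% confidence level.
```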
Statistical analysis was performed with SPSS 23.0 for Windows. Differences were considered statistically significant when p < .05. For qualitative variables, we obtained the number of cases in each category and the corresponding percentage; for quantitative variables, the minimum, maximum, mean, and standard deviation. Since we have answers to the same questions at four different times, from 2008 (as soon as the new accounting standard was applied) through 2009 and 2013 to 2015 (eight years after the new standard came into force), we take the passage of time into account when assessing the responses of professionals. The statistical treatment applied is the one appropriate for independent samples, since although the questions are the same, their being asked at different times means that neither the number of responses nor the interviewees coincide. When subjects are randomly assigned to each of the samples, we can statistically guarantee that they are independent samples (Molinero, 2001). For all these reasons, comparisons between groups for the qualitative variables were carried out using the chi-squared test and the Z test for equality of column proportions. For quantitative comparisons, the Mann-Whitney U test was used for two groups and the Kruskal-Wallis test for more than two groups.
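To make the test-selection logic concrete, here is a minimal Python sketch using SciPy equivalents of the tests named above (the paper itself used SPSS); the response vectors and contingency counts are invented purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 "degree of knowledge" ratings from two survey waves
# (independent samples: different respondents answered each year).
rng = np.random.default_rng(0)
knowledge_2008 = rng.integers(1, 6, size=395)
knowledge_2015 = rng.integers(2, 6, size=331)

# Two independent groups of ordinal data -> Mann-Whitney U test
u, p_u = stats.mannwhitneyu(knowledge_2008, knowledge_2015)

# More than two groups (all four waves) -> Kruskal-Wallis test
knowledge_2009 = rng.integers(1, 6, size=297)
knowledge_2013 = rng.integers(2, 6, size=300)
h, p_h = stats.kruskal(knowledge_2008, knowledge_2009,
                       knowledge_2013, knowledge_2015)

# Qualitative (yes/no) answers by wave -> chi-squared on a contingency table
table = np.array([[150, 245],    # yes/no counts in 2008 (illustrative)
                  [220, 111]])   # yes/no counts in 2015 (illustrative)
chi2, p_c, dof, _ = stats.chi2_contingency(table)
print(p_u, p_h, p_c)
```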
In addition, we used a multiple regression model to determine which variables have a significant effect on the degree of knowledge. The methodology followed in the statistical analysis of the estimated model was: (1) point estimation of the model parameters; (2) tests of the individual significance of the variables and the model constant; (3) the regression contrast (ANOVA) to study the overall validity of the model and verify that the explanatory variables jointly provide information in explaining the response variable, evaluating the goodness of fit of the model through the coefficient of determination (R²); and (4) verification of the hypotheses of the model through analysis of the residuals (Hair, Anderson, Tatham, and Black, 1999).
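A compact Python analogue of steps (1)-(4), plus the VIF pre-check mentioned later in the paper, might look as follows. The variable names and simulated data are invented stand-ins for the survey items, not the authors' actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical survey data: explain self-reported knowledge (1-5) from
# perceived complexity of accounting areas (column names are invented).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "equity_complexity":   rng.integers(1, 6, 300),
    "amortized_cost":      rng.integers(1, 6, 300),
    "prefers_direct_ifrs": rng.integers(0, 2, 300),
})
df["knowledge"] = 4 - 0.3 * df["prefers_direct_ifrs"] + rng.normal(0, 0.5, 300)

X = sm.add_constant(df[["equity_complexity", "amortized_cost",
                        "prefers_direct_ifrs"]])

# Steps (1)-(3): point estimates, individual t-tests, overall F-test (ANOVA),
# and R^2 all appear in the fitted-model summary.
model = sm.OLS(df["knowledge"], X).fit()
print(model.summary())

# Pre-check used in the paper: VIF to rule out multicollinearity.
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))

# Step (4), residual diagnostics, would then inspect model.resid
# (e.g. normality and homoscedasticity checks).
```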
Analysis of the overall results and the effect of the passage of time
In this first section we focus on the overall results obtained in the responses of the professionals, following the same order as in the questionnaires carried out.
Regarding the degree of knowledge that professionals believe they have about the new regulations, we can highlight that in the first year of their application they thought they knew them well. This perception changed after the first experience, although it has since increased, such that after the first application of the reform professionals estimate that they have gained more knowledge and have a high degree of knowledge of the new GAP, reaching a level similar to the optimistic data obtained from the first survey (ANOVA: F(3, 1185) = 50.92, p < 0.001) (data included in Image 1). With regard to the competitive and informative costs and improvements implied by this new regulation, the perceptions of professionals are varied (Table 1). However, the results obtained do show that, in the most frequently chosen responses (A, B, and C) in the first year of implementation, the reform was perceived as mainly entailing few costs and few competitive and informative advantages for companies (40% of the responses obtained), while after experience the perception changes to one of higher cost in subsequent years, with the competitive and informative improvement still valued as scarce (the percentages and the significance of the differences can be seen in Table 1). Note. a-b: different letters indicate statistically significant differences at p < .05 in the equality test for proportions in the columns (contingency and chi-squared tables).
Image 1. Degree of knowledge of the new GAP
Of the areas in which the new regulations have introduced greater complexity for these professionals, equity is where they believe the complexity is greatest (the highest affirmative percentages are found in this area) (we include the results obtained in Table 2, without taking into account the first year of the questionnaire, as the results are not statistically significant).
However, the passage of time conditions their responses in this sense, since statistically significant differences are obtained between the results of the survey up to the second year after the entry into force of the new regulations and the subsequent results: once the first two years have elapsed since the entry into force of the new GAP, the perception of this complexity increases. We now refer to the concepts that have presented the greatest operational complications for adaptation to the new standards (results included in Table 3). The results are, a priori, very diverse. There are concepts for which the opinion on their complexity is maintained throughout the years, regardless of whether it is the first year of application of the new regulations or whether more time has elapsed, as is the case with hybrid financial investments (Table 3 shows that there are no significant differences between the different years, with the median of responses maintained at values of 4 or 5 within the same interquartile range, and therefore quite complex). At the other extreme we find concepts such as sectoral adaptations, which do not follow any pattern, presenting significant differences in the responses between all the years analyzed. In the first case, hybrids do imply a very high complexity and their use is not generalized, which does not happen with sectoral adaptations, which, although very specific, are mainly used in the sectors/fields to which they refer.
The remaining concepts, according to the opinion of the professionals, can be said to have a complexity determined by the passage of time. In some cases, the determining factor in the assessment of their complexity is the first year of application of the new regulations, as happens with Groups 8 and 9 and with provisions (the first year is statistically different from the other three years of the survey, according to the results in Table 3). In both cases the perception of complexity also increases after the first application (Groups 8 and 9 go from a median of 4 to the same median with higher interquartile ranges; and provisions from a median of 3 to the same median with higher interquartile ranges). In this line of results, there are also concepts for which it is not only the first year of application of the new regulation that marks the differences with respect to subsequent years, but the first two years of use, which appear with statistically significant results in subsequent years, both where complexity increases and where it is reduced. This happens with the first application of this regulation, which seems to reach its maximum complexity in its second year of implementation and then descends in subsequent years, given that the issues raised in a first application are then resolved with practice and the passage of time (the maximum interquartile range is obtained in the second questionnaire). There is also the example of the amortized cost, for which it is the years after the second implementation of the new regulation that imply a perception of lesser complexity. In other cases the opposite occurs, as in the clear example of the effective interest rate (EIR): it is not the first years of application of the new regulation that determine the appreciation of its complexity; rather, as time goes by and these concepts are studied in greater detail or have to be applied to more cases, the perceived complexity is greater (with the same median but greater interquartile ranges).
Image 2. The ICAC should make the effects of the new regulation on Sectoral Adaptations and Resolutions public
Note. χ2(3) = 129.69, p < 0.001.
The majority opinion of professionals is that the Institute of Accounting and Auditing (ICAC) should report on the changes that the new GAP includes in the sectoral adaptations and resolutions, although with the passage of time the professionals believe it to be less and less necessary, a consequence of the fact that the ICAC has been carrying out this work throughout the years that have passed since the first application of the new GAP (in Image 2 these results are included and the significant differences between the opinions of the first two years and the following are clearly appreciated, as is the fall in the percentages of positive responses) (χ2(3) = 129.69, p < 0.001).
Since this normative change has resulted from the application of IAS/IFRS in Europe, professionals were asked whether they would have preferred to apply these international standards directly, with the prevailing opinion at all times being that they prefer this accounting reform (Image 3). In this case, no statistically significant patterns of response behavior have been found depending on when the survey of professionals was carried out (χ2(3) = 5.79, p = 0.122).
Image 4. Enough time was available
Note. χ2(3) = 22.54, p < 0.001.
The next question refers to whether professionals consider that they had enough time to comply with the deadlines set by the ICAC in the face of the changes in the regulations (Image 4). The responses of the professionals are conditioned by the passage of time: once the new regulations have some history, most believe that they have had enough time, unlike the perception in previous years. This change of opinion is statistically verified by the differences according to the time at which the survey was carried out (Image 4), since there are significant differences between the last year of the questionnaire, when the most time had passed since the new regulation, and the first six years of its implementation (χ2(3) = 22.54, p < 0.001).
We will then check whether there is any relationship between the responses in the first year after applying the new regulations (survey carried out in 2008) and eight years later (survey carried out in 2015), for the questionnaire questions whose response was a dichotomous variable. In this way, we can combine these results with those of the tests carried out taking into account the four different times at which the survey was conducted. We find that when professionals already have a greater knowledge of what the accounting reform has implied, they consider that the time periods established by the ICAC are sufficient in the face of new regulations or clarifications issued (Table 4). This is the only hypothesis of independence that we can reject between the answers at these two moments in time (χ2(1) = 4.02, p < 0.045). That is to say, in the answer to this question the moment at which the opinion is sought plays a determining role, whether in the first year of the new legislation being in force or after a sufficiently long period of adaptation. This effect of the passage of time since the first application of this accounting regulation leads professionals to have a greater knowledge of it, and so they may relativize it. From the results in the contingency table we can highlight that 67.1% of the professionals who thought that the deadlines foreseen by the ICAC were not sufficient now think that they are, so with the passage of time the deadlines foreseen by the ICAC have come to be considered sufficient.
Statistical analysis of the relationships between professionals' responses to the new regulations and their degree of knowledge of them
In this second section we are going to study the relationships between the responses of professionals to the different questionnaires, and the degree of knowledge they claim to have about the new regulations, although at all times the effect of the passage of time prevails.
The first statistically significant result that we obtain (included in Table 5) tells us that professionals who seem to have a little more knowledge do not consider it necessary to update the value of assets (the median degree of knowledge is 4, within the highest interquartile range (4-5), for those who do not consider fair value necessary, and within a somewhat lower interquartile range (3-4) for those who do). There is also a relationship between the degree of knowledge that professionals claim to have and whether or not they would have preferred to apply IAS/IFRS directly. Those professionals who claim to have a little more knowledge of the new regulations prefer the route that has been used, the adaptation of our own regulations, rather than the direct application of IAS/IFRS (the median degree of knowledge is 4 for those against direct application and somewhat lower, 3, for those who advocate it). Table 6 includes the results of the multiple regression models carried out for the years 2013 and 2015 in order to determine which variables influence the degree of knowledge, and Table 7 includes the correlation matrix of all the variables included in the regression model for the same years, which has also served to test the validity of the scale used to exploit the results of this questionnaire. We previously calculated the Variance Inflation Factor (VIF) to rule out multicollinearity between the independent variables for each of the proposed models.
For 2013 (Table 6), as we have seen, the new regulations according to professionals introduce the greatest quantitative changes in equity, but this perception is linked to respondents with a lower level of knowledge. The same relationship is obtained for the quantitative changes implied by the new regulation on assets, which is associated with professionals who have a lower level of knowledge. In the regression for 2015 (Table 6) this significant relationship disappears, which once again verifies that the passage of time, and therefore the repeated application of this regulation, leads to a lesser sensation of complexity as well as to a greater knowledge of it. In Table 7, the correlations obtained show that with the passage of time the perception of complexity by economists is reduced (there are fewer concepts related to the time available in 2015, and those that remain are problems that persist today, such as the need to simplify for SMEs).
From the concepts that generated the greatest complications in 2013, as a result of the introduction of the new accounting regulations, it can be seen that those most related to the new accounting treatment of financial instruments, such as the calculation of amortized cost, financial assets and the effective interest rate, generate the greatest operational complications, but are associated with a lower level of knowledge of those surveyed (Table 6). This problem lies outside the scope of operations of many SMEs and, therefore, of professionals. Hence, a priori, this first difficulty is associated with the lower level of knowledge. In addition, these are the only concepts that maintain the significant relationship in the 2015 regression (Table 6), associating it again with a lower level of knowledge (the sign of the relationship is again negative). Eight years after the first application of this accounting regulation, its complexity or knowledge does not depend on the day-to-day operations in this case, but rather on the type of operations carried out by the company, which is not confronted with financial instruments, and hence professionals do not know its accounting treatment. In Table 7 again, the greatest correlations are obtained between all the concepts derived from the new accounting treatment of financial instruments, and also the greatest number of significant relationships between variables (such as the relationship, both in 2013 and 2015, between the complications introduced by the derivatives and the hybrid financial instruments, which at both times is the greatest).
The same reasoning can be used for the significance obtained in 2013 (Table 6) as regards the operational complications introduced by related parties, which are again associated with a lower level of knowledge. The same type of relationship is obtained in 2013 between the opinion of those surveyed as to whether it is necessary for the ICAC to analyze the effects of the new regulations by publishing the corresponding adaptations and resolutions, linked to a lower degree of knowledge of the same. After two more years, in the 2015 regression, this variable no longer appears as significant, verifying how the passage of time has gradually led to a better knowledge of the new regulation and therefore this need is no longer manifest as such. The significant relationship obtained between the time available to assume the new regulation and the need for ICAC adaptations and resolutions based on the results of the correlations between both variables in 2015 (Table 7) again supports the results obtained previously.
Finally, we ascertain the opinion of those surveyed as to whether it would have been better to apply IFRS directly without reforming our legislation. In this case, in both 2013 and 2015 an affirmative response to the direct adoption of IFRS is associated with a lower degree of professional knowledge. These results are also verified in the correlations, since in 2015 a significant relationship is obtained expressly between these two variables, the degree of knowledge and our having carried out an adaptation of our GAP. These results are consistent with what we have already highlighted above: professionals prefer the solution chosen in Spain, i.e., the reform of our legal system.
Table 6. Multiple regression models for 2013 and 2015.
Conclusions
From the reading and analysis of this work we can conclude, in a general way, that the two hypotheses we wished to test are confirmed. Specifically, we confirm, coinciding with Brown (2011), Estima and Mota (2015), ICAEW (2015), and Kim et al. (2016), among others, that the degree of knowledge that Spanish professionals have about the new accounting regulations increases in general with the passage of time and the development of those regulations. On the other hand, coinciding with Milanés and Texeira (2006), Marín et al. (2008), and Beuselinck et al. (2017), among others, we confirm that the degree of knowledge of professionals about the new accounting regulations in Spain, although it increases with the passage of time, is determined by the type of problem they face.
The specific conclusions also allow us to point out that:
• Having a series of responses at different times after the implementation of the new regulations allows us to conclude that perceptions effectively change depending on when the professional assesses the changes he or she has to apply in practice. Thus, professionals may move from the opinion that the new regulations imply few costs and few competitive and informative advantages for companies to concluding that they have meant a higher cost with the same scarcity of competitive and informational advantages.
• By areas, equity has been considered the most complex under the new requirements. In terms of concepts, the perception of complexity is also determined by the passage of time in the application of the new treatment. The complexity perceived by professionals increases in, for example, the case of Groups 8 and 9, the amortized cost, or the calculation of the EIR, either once the first application has elapsed or after the first two years of implementation. However, these perceptions of greater complexity, both for equity and for the concepts related to the new accounting treatment of financial instruments, are linked to professionals with less knowledge of the changes introduced by the new regulations.
• Professionals clearly prefer the reform of Spanish regulations to the direct application of IFRS in Spain. While this opinion is independent of the moment at which professionals are asked, it is not independent of their degree of knowledge, since it is those with the greatest degree of knowledge who prefer the route that has been used: the reform of our legal system to adapt it to IFRS.
• Another clear and statistically significant conclusion is the effect of the passage of time on professionals' perception of whether they have had sufficient time to adopt the new standards. While 67.1% of professionals initially thought that the deadlines set by the ICAC were not sufficient, after eight years of application they thought that they were; professionals are thus increasing their degree of knowledge and may be relativizing their earlier judgments.
To conclude, as limitations we could highlight those of any study based on a questionnaire. However, in our case we have overcome the main limitation of the size of the sample, as we have a very large target population committed to the professional exercise of accounting. There is also the comparative advantage of having a sufficiently long historical series to be able to draw significant conclusions. This, in turn, may raise possibilities in terms of future work, in which we can, with an even longer time horizon, re-launch the questionnaire, and check how this time perspective further removed from the first moments of application of a GAP adapted to the requirements of IFRS affects the responses of professionals, and even introduce new variables that provide information on the application of the new regulations by applicable business sectors.
|
v3-fos-license
|
2022-07-07T05:10:27.734Z
|
2022-05-19T00:00:00.000
|
250310951
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "ded4c4e6998d67b5007d70d9c34c7a07e42dba84",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45533",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "dd4c790b9802c605a2f4b1c9009f964f900f8f5e",
"year": 2022
}
|
pes2o/s2orc
|
Investigating the role of G-quadruplexes at Saccharomyces cerevisiae telomeres
The G-quadruplex consensus motif G≥3NxG≥3NxG≥3NxG≥3 is found at telomeres of many species, ranging from yeast to plants to humans, but the biological significance of this fact remains largely unknown. In this study, we examine the in vivo relevance of telomeric G-quadruplexes in the budding yeast Saccharomyces cerevisiae by expressing a mutant telomerase RNA subunit (tlc1-tm) that introduces mutant [(TG)0–4TGG]xATTTGG telomeric repeats instead of wild-type (TG)0-6TGGGTGTG(G)0-1 repeats to the distal ends of telomeres. The tlc1-tm telomere sequences lack the GGG motif present in every wild-type repeat and, therefore, are expected to be impaired in the formation of G-quadruplexes. Circular dichroism analysis of oligonucleotides consisting of tlc1-tm telomeric sequence is consistent with this hypothesis. We have previously shown that tlc1-tm cells grow similarly to wild-type cells, suggesting that the ability to form telomeric G-quadruplexes is not essential for telomere capping in S. cerevisiae cells.
INTRODUCTION
The physical ends of eukaryotic chromosomes are protected by nucleoprotein complexes known as telomeres. Telomeres protect chromosome ends from degradation, from telomere-telomere fusion events, and from being recognized as double-stranded DNA breaks [1]. In most eukaryotic species, telomeres consist of double-stranded G/C-rich DNA followed by a G-rich 3′ single-stranded overhang. Proper telomere function is ensured by the specialized proteins bound to the double-stranded and single-stranded telomeric repeats. Telomere length is kept in a state of dynamic equilibrium. Incomplete DNA replication and nucleolytic degradation cause telomeres to shorten, while the reverse transcriptase telomerase is responsible for telomere lengthening [1]. Telomerase extends the 3′ overhang of telomeres by iterative reverse transcription using its RNA subunit as a template.
Due to the G-rich nature of the telomeric repeats, telomeric DNA has the potential to form G-quadruplexes, which are highly stable secondary structures composed of Hoogsteen hydrogen-bonded guanines arranged in planar G-tetrads stacked together [2]. Intramolecular G-quadruplexes are predicted to form within sequences containing four runs of at least three guanines (G≥3NxG≥3NxG≥3NxG≥3), and the telomeric DNA of most eukaryotic organisms conforms to this consensus sequence. While most studies on G-quadruplexes have been carried out in vitro, there is also in vivo work supporting the existence of G-quadruplexes at telomeres. The most direct evidence comes from studies in ciliates. The telomere-binding protein TEBPβ, from the related ciliates Oxytricha nova and Stylonychia lemnae, can promote the formation of G-quadruplexes in vitro [3,4]. Knockdown of TEBPβ in S. lemnae eliminates detection of telomeric G-quadruplexes in vivo using the Sty3 G-quadruplex antibody in nuclear staining experiments [4]. Telomeric G-quadruplexes are not detected during S phase, presumably to allow replication of telomeres [4]. Unfolding of telomeric G-quadruplexes during S phase requires phosphorylation of TEBPβ, as well as telomerase and a RecQ-like helicase [4][5][6].
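As an illustration of how the consensus above can be applied in practice, the following Python sketch scans a sequence for the G≥3NxG≥3NxG≥3NxG≥3 pattern. The loop-length bounds (1-7 nt) are a common convention rather than something specified in this paper, and the example sequences are simplified stand-ins for the telomeric repeats discussed below.

```python
import re

# Four runs of >=3 guanines separated by short loops of arbitrary bases
# (loop bounds of 1-7 nt are a common convention, assumed here).
G4_CONSENSUS = re.compile(r"G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}")

wild_type_like = "TGGGTGTGGGTGTGGGTGTGGGTGTG"     # wild-type-style repeats
tlc1_tm_like   = "TGTGGTGGATTTGGTGTGTGGATTTGG"    # mutant-style, no GGG runs

print(bool(G4_CONSENSUS.search(wild_type_like)))  # True: candidate G4
print(bool(G4_CONSENSUS.search(tlc1_tm_like)))    # False: consensus absent
```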
In the budding yeast Saccharomyces cerevisiae, the main telomere-binding protein Rap1, like TEBPβ, can bind and promote the formation of G-quadruplexes in vitro [7,8]. In contrast to the findings in ciliates, chromatin immunoprecipitation experiments using the BG4 G-quadruplex antibody suggest that telomeric G-quadruplexes may form in late S phase, when S. cerevisiae 3′ overhangs reach their longest length [9]. The telomerase subunit Est1 can also promote G-quadruplex formation in vitro, and cells expressing Est1 mutants deficient in this activity exhibit gradual telomere shortening and replicative senescence, suggesting a potential positive role for G-quadruplexes in telomerase-mediated extension of telomeres [10]. In addition, there is evidence to suggest that stabilization of G-quadruplexes suppresses the temperature sensitivity of the telomere capping-defective cdc13-1 mutant [11]. Cdc13 is a single-stranded telomeric DNA binding protein; the cdc13-1 mutant loses the ability to block excessive nucleolytic resection of telomeric DNA at elevated temperatures, resulting in an accumulation of single-stranded telomeric DNA [12,13]. The folding of this DNA into G-quadruplexes has been proposed to facilitate telomere capping by inhibiting further nucleolytic resection [11]. Despite these findings, it remains unclear whether G-quadruplexes have an evolutionarily conserved function in telomere biology [14].
In this study, we examined the function of G-quadruplexes at S. cerevisiae telomeres by expressing a mutant telomerase RNA subunit (tlc1-tm) that introduces [(TG)0-4TGG]xATTTGG mutant telomeric repeats instead of wild-type (TG)0-6TGGGTGTG(G)0-1 repeats [15,16]. The mutant repeats are impaired in the formation of G-quadruplexes, and we have previously shown that tlc1-tm repeats are poorly bound by Rap1 [17]. Despite being deficient in telomeric G-quadruplex formation, tlc1-tm cells are viable and grow as well as wild-type cells, suggesting that the ability to form telomeric G-quadruplexes is not essential for telomere capping and cell viability in S. cerevisiae.
tlc1-tm mutant telomere sequences have reduced potential to form G-quadruplexes
To assess the role of G-quadruplexes at yeast telomeres, we require a yeast strain with telomeric DNA sequences that lack the potential to form G-quadruplexes. Such a strain can be obtained by mutating the template sequence of the RNA subunit of telomerase, TLC1. The vast majority of mutations to the TLC1 template sequence cause disruption of telomerase enzymatic activity and, consequently, replicative senescence [18]. Those that do not are often associated with slow growth, dramatic alterations in telomere profile (i.e. elongated, very short, or extensively degraded), and aberrant chromosome separation and segregation [18,19]. The tlc1-tm mutant introduces [(TG)0-4TGG]xATTTGG mutant telomeric repeats instead of wild-type (TG)0-6TGGGTGTG(G)0-1 repeats, and grows similarly to a wild-type strain, even when one telomere consists entirely of mutant sequence [15][16][17]. Telomeres in the tlc1-tm mutant are on average longer and more heterogeneous in length than in wild-type strains [17], but the telomere profile of tlc1-tm is much less dramatically altered compared to most other TLC1 mutants with altered template sequences [18,19].
The lack of the GGG motif in the mutant repeat sequence should weaken the potential for G-quadruplex formation. To test this idea, we used the G-quadruplex prediction tool G4Hunter, in which a score greater than 1.2 indicates high G-quadruplex-forming potential [20]. While analysis of wild-type sequences gave G4Hunter scores of 1.366, 1.375, and 1.286 (see sequences used in Figure 1), none of the three analyzed mutant tlc1-tm sequences has a score greater than 1, indicating that the mutant telomeric sequences have reduced G-quadruplex-forming potential. To validate this hypothesis, we subjected oligonucleotides with either wild-type or tlc1-tm telomere sequences to circular dichroism (CD) analysis after incubation with potassium. In agreement with previous studies reporting that yeast telomeric DNA can fold into G-quadruplex structures in vitro [7,21], we find that all three oligonucleotides composed of wild-type telomeric sequence generate a negative peak at 240 nm and a positive peak at 263 nm (Figure 1), a pattern consistent with parallel G-quadruplex formation. In contrast, none of the oligonucleotides with tlc1-tm telomere sequence forms such a pattern (Figure 1). It is formally possible that tlc1-tm telomeres form less stable two-quartet G-quadruplexes (which have a consensus sequence of G≥2NxG≥2NxG≥2NxG≥2). Indeed, the spectra of tlc1-tm oligonucleotides #2 and #3, despite having low amplitude, could indicate an antiparallel G-quadruplex structure, which is characterized by a negative peak near 260 nm and positive ones at 240 and 295 nm. Nevertheless, our findings indicate that the formation of any G-quadruplex structures by wild-type telomeric sequence should be, at a minimum, greatly perturbed in tlc1-tm telomeric sequence.
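For readers unfamiliar with the tool, the scoring idea behind G4Hunter can be re-implemented in a few lines. The Python sketch below is a simplified reconstruction of the published algorithm (Bedrat et al., 2016), not the tool used for the scores above, and the example sequences are illustrative approximations of wild-type and tlc1-tm repeats.

```python
def g4hunter_score(seq):
    """Simplified G4Hunter-style score: each base in a run of k guanines
    scores +min(k, 4), each base in a run of k cytosines scores -min(k, 4),
    everything else 0; the final score is the mean over the sequence.
    Values above ~1.2 flag strong G-quadruplex-forming potential."""
    scores, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1                      # extend the homopolymer run
        run = j - i
        if seq[i] == "G":
            scores.extend([min(run, 4)] * run)
        elif seq[i] == "C":
            scores.extend([-min(run, 4)] * run)
        else:
            scores.extend([0] * run)
        i = j
    return sum(scores) / len(scores)

print(g4hunter_score("TGGGTGTGGGTGTGGGTGTGGGTGTG"))   # ~1.5, above 1.2
print(g4hunter_score("TGTGGTGGATTTGGTGTGTGGATTTGG"))  # ~0.85, below 1
```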
PIF1 suppresses cdc13-1, but not cdc13-1 tlc1-tm, temperature sensitivity
To test whether tlc1-tm telomere sequences are defective in forming G-quadruplexes in vivo, we stabilized Gquadruplexes in the telomere capping-defective cdc13-1 mutant by deleting PIF1. Pif1 is a helicase and a potent unwinder of G-quadruplexes [22]. Suppression of cdc13-1 temperature sensitivity by pif1∆ has already been reported [23]. We find that pif1∆ cannot suppress the temperature sensitivity of cdc13-1 in a tlc1-tm background (Figure 2A). We observe the same effect when using the pif1-m2 allele, which is specifically deficient for the nuclear isoform of Pif1 [24]. Thus, tlc1-tm telomeres remain uncapped even in the absence of Pif1, possibly due to a lack of G-quadruplexes to stabilize.
We noticed that cdc13-1 tlc1-tm cells grow more slowly than cdc13-1 cells even at 25ºC (Figure 2A; top panel). This effect is even more striking upon dissection of a cdc13-1/CDC13 tlc1-tm/TLC1 diploid. We find no difference in the colony size formed by the haploid progeny at 22ºC, regardless of their CDC13 and TLC1 status (Figure 2B). However, cdc13-1 tlc1-tm spores were unable to germinate at 25ºC (Figure 2B), although the cdc13-1 tlc1-tm spores that germinated at 22ºC were able to grow at 25ºC (Figure 2A). These findings suggest that G-quadruplex-mediated capping may be important even at a temperature (25ºC) where the Cdc13-1 mutant protein is only modestly impaired [25].
While our findings are consistent with a previously proposed model in which G-quadruplexes protect cdc13-1 telomeres [11], the effect of tlc1-tm on cdc13-1 cells may instead be due to reduced levels of Rap1 at tlc1-tm telomeres [17] rather than a disruption in G-quadruplex formation. However, we do not favor this possibility because telomeres in tlc1-tm cells still retain wild-type telomeric sequence in their centromere-proximal regions, so that telomere-bound Rap1 is only reduced by 40% [17].
DISCUSSION
In this study, we investigated the function of G-quadruplexes at S. cerevisiae telomeres using the tlc1-tm mutant, which causes the addition of mutant telomeric repeats that are defective in forming G-quadruplexes. Our findings suggest that G-quadruplex formation at telomeres is essential neither for telomere capping nor for cell viability in S. cerevisiae. In addition, our findings are not consistent with a previously proposed model whereby Est1-mediated G-quadruplex formation is required for telomerase activity [10], since tlc1-tm telomeres are efficiently extended by telomerase [17]. While we cannot exclude the possibility that less stable G-quadruplex structures (e.g. two-quartet G-quadruplexes) are able to form at tlc1-tm telomeres, there are other viable tlc1 template mutants that result in telomeric repeats lacking even a double GG motif [18,19]. Nonetheless, our findings are in agreement with a previously proposed model suggesting that telomeric G-quadruplexes serve as capping structures to protect cdc13-1 telomeres [11], and it is also possible that telomeric G-quadruplexes are important for telomere function when S. cerevisiae cells are grown in stress-inducing conditions. Furthermore, we have previously reported several telomeric defects (e.g. disruption of telomere length homeostasis) in tlc1-tm cells [17]. While we believe that most of these defects can be largely attributed to depletion of telomere-bound Rap1, it is formally possible that impairment in the formation of telomeric G-quadruplexes contributes to some of them.
FIGURE 1: tlc1-tm mutant telomere sequences are impaired in forming G-quadruplexes. CD spectra of oligonucleotides with either wild-type or tlc1-tm telomeric sequence. Average of three measurements is plotted.
The telomere repeats of S. cerevisiae and other Saccharomycotina species are highly divergent and differ from the TTAGGG or TTAGGG-like repeats found in many other eukaryotic species [26,27]. Budding yeast repeats can be quite long, occasionally degenerate, and often non-G/C-rich [28,29]. Many budding yeast telomere sequences do not conform to the G≥3NxG≥3NxG≥3NxG≥3 G-quadruplex consensus. Changes in the sequence of the telomeric repeats were accompanied by co-evolution of telomere-binding proteins. In organisms with TTAGGG telomeric repeats, the double-stranded telomeric sequence is typically recognized by proteins homologous to mammalian TRF1 and TRF2, while the single-stranded telomeric sequence is bound by proteins homologous to mammalian POT1. Telomere association of these proteins is highly sequence specific [30,31], so mutating the template region of telomerase RNA leads to a loss of cell viability [32][33][34]. In contrast, the telomeres of Saccharomycotina budding yeast species (with the exception of the Yarrowia clade, one of the basal lineages of Saccharomycotina [35]) are bound by Rap1 and Cdc13. Rap1 and Cdc13 can accommodate different target sequences, thereby facilitating the rapid evolution of budding yeast telomeric sequences [29]. A consequence of this rapid evolution may be the loss of a need for telomeric G-quadruplexes. Further studies are needed to determine whether G-quadruplexes are required for proper telomere maintenance in species with TTAGGG telomeric repeats. One recent study has reported that folding of telomeric DNA newly synthesized by human telomerase into G-quadruplexes is important to support telomerase function, which the authors suggest could explain the evolutionary conservation of the G-quadruplex-forming potential of telomeric sequence [36]. Addressing this question is especially relevant given that G-quadruplexes have increasingly been proposed as therapeutic targets in oncology [37].
If G-quadruplexes are not essential for telomere capping in S. cerevisiae, why does Rap1 have the ability to bind and promote the formation of G-quadruplexes [7,8]? We propose two possible explanations. First, this ability may have been required for telomere capping, but this requirement was lost during the evolution of the Saccharomycotina subdivision. Rudimentary G-quadruplex-based capping in cdc13-1 mutants [11] may be an evolutionary remnant of this requirement, so it would be interesting to test whether suppression of cdc13-1 capping defects by G-quadruplex-stabilizing treatments is dependent on Rap1. Second, the ability of Rap1 to bind and promote the formation of G-quadruplexes may be important for Rap1's function as a transcriptional regulator [38], rather than for telomere capping. Consistent with this hypothesis, G-quadruplex-forming sequences are strongly enriched at promoters and are thought to influence transcription [39]. These two hypotheses are not mutually exclusive, and it will be interesting to explore their validity in future studies.
MATERIALS AND METHODS
Yeast strains
Standard yeast media and growth conditions were used [40,41]. Yeast strains used in this study are listed in Table 1. Deletion of PIF1 was accomplished by PCR-based gene deletion [42]. Knock-in of the tlc1-tm allele was accomplished by PCR amplification of tlc1-tm from either MCY415 or MCY416, using primers oSMS1 (5ʹ-ACCTGCCTTTGCAGATCCTT-3ʹ) and TLC1-RV (5ʹ-TTATCTTTGGTTCCTTGCCG-3ʹ), followed by transformation of the PCR product into yeast cells using the LiAc-based method [43]. The diploid strain dissected in Figure 2B was generated by knock-in of the tlc1-tm allele into SSY238. The spore colonies were genotyped by replica plating onto YPD + clonNAT plates (to select for the tlc1-tm allele) and YPD plates that were subsequently incubated at 30ºC (to identify cdc13-1 spore colonies, which do not grow at 30ºC).
Spot assays
Cultures for spot assays were grown overnight and diluted to an optical density (OD600) of 0.5, from which four serial 1:10 dilutions were spotted onto YPD plates. Plates were incubated at the indicated temperatures for 2 or 3 days.
FIGURE 2 (caption fragment): [...] 30ºC. (B) A cdc13-1/CDC13 tlc1-tm/TLC1 diploid strain was sporulated and the resulting tetrads were dissected on YPD plates, which were incubated at 22ºC or 25ºC. Each column of colonies arose from a single tetrad.
CD spectroscopy
Oligonucleotides were dissolved in a 10 mM Tris-HCl pH 7.5 and 100 mM KCl solution to a final concentration of 5 μM. The mix was boiled for 5 min at 95ºC and then cooled down overnight. The CD spectra were then measured using a Jasco J-815 spectropolarimeter. Three reads per sample were taken at a wavelength range of 215-350 nm in a quartz cuvette with a 1 cm path length. Data were analyzed using Spekwin32 software.
Table 1 (fragment): MCY416: BY4742 alpha tlc1-tm::natMX his3∆1 leu2∆0 ura3∆0 [15]
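As a companion to this protocol, the peak-sign logic used in the Results to distinguish parallel from antiparallel G-quadruplex signatures can be expressed as a short script. This Python sketch is a crude illustration, not a validated classifier; the wavelength cutoffs come from the patterns described above, and the input spectrum is simulated noise standing in for real measurements.

```python
import numpy as np

def classify_g4_topology(wavelengths_nm, ellipticity):
    """Crude CD-signature classifier (a sketch under the rules stated in the
    Results): parallel G4s show a positive peak near 263 nm and a negative
    one near 240 nm; antiparallel G4s show positive peaks near 240 and
    295 nm with a negative one near 260 nm."""
    def signal_at(target_nm):
        # ellipticity value at the sampled wavelength closest to the target
        return ellipticity[np.argmin(np.abs(wavelengths_nm - target_nm))]
    if signal_at(263) > 0 and signal_at(240) < 0:
        return "parallel-like"
    if signal_at(295) > 0 and signal_at(240) > 0 and signal_at(260) < 0:
        return "antiparallel-like"
    return "no clear G4 signature"

# Average of three hypothetical reads over 215-350 nm, as in the protocol
wavelengths = np.arange(215, 351)
reads = [np.random.default_rng(i).normal(0, 0.05, wavelengths.size)
         for i in range(3)]
mean_spectrum = np.mean(reads, axis=0)
print(classify_g4_topology(wavelengths, mean_spectrum))  # noise: no signature
```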
|
v3-fos-license
|
2017-06-17T20:42:43.009Z
|
2013-11-22T00:00:00.000
|
6912433
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/bies.201300110",
"pdf_hash": "72f3659a937bc4a773d4e7f80568d33cda09283d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45535",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "72f3659a937bc4a773d4e7f80568d33cda09283d",
"year": 2013
}
|
pes2o/s2orc
|
Carbohydrate metabolism during vertebrate appendage regeneration: What is its role? How is it regulated?: A postulation that regenerating vertebrate appendages facilitate glycolytic and pentose phosphate pathways to fuel macromolecule biosynthesis
We recently examined gene expression during Xenopus tadpole tail appendage regeneration and found that carbohydrate regulatory genes were dramatically altered during the regeneration process. In this essay, we speculate that these changes in gene expression play an essential role during regeneration by stimulating the anabolic pathways required for the reconstruction of a new appendage. We hypothesize that during regeneration, cells use leptin, slc2a3, proinsulin, g6pd, hif1α expression, receptor tyrosine kinase (RTK) signaling, and the production of reactive oxygen species (ROS) to promote glucose entry into glycolysis and the pentose phosphate pathway (PPP), thus stimulating macromolecular biosynthesis. We suggest that this metabolic shift is integral to the appendage regeneration program and that the Xenopus model is a powerful experimental system to further explore this phenomenon.
Introduction
Vertebrate appendage regeneration entails the reconstruction of outward growing tissue structures, including limbs, fins, digits, and tails. Many vertebrate species, including fish, amphibians, and reptiles, and to a lesser extent mammals, have the ability to regenerate their appendages following amputation [1,2] (for an example of vertebrate tail appendage regeneration, see Supplementary Movie 1). The regeneration process coordinates a variety of biological processes, all of which rely on molecules and energetic equivalents produced during cellular metabolism. Yet despite its intuitive importance, very little is known about how cellular metabolism is regulated during vertebrate tissue regeneration.
Tissue regrowth during appendage regeneration is an inherently anabolic process. Cells of regenerating tissues must alter their metabolic program in order to accommodate the increased production of new cell membranes, proteins, and nucleic acids. Most biosynthetic pathways require carbon-containing precursor molecules generated directly or indirectly (though not exclusively) from carbohydrates such as glucose. For this reason, glucose utilization can be viewed as a convenient starting point to better understand the greater metabolic network utilized during appendage regeneration.
We recently found that the expression of a substantial number of genes governing glucose metabolism was greatly altered during Xenopus tadpole tail regeneration [3]. These data and others have led us to hypothesize that glucose metabolism and its regulation play an essential role during vertebrate appendage regeneration. Here we take the opportunity to highlight the largely ignored role for carbohydrate metabolism during appendage regeneration and to encourage research aimed at better linking these two processes.
The phases of Xenopus tail appendage regeneration
The Xenopus tadpole tail contains a diverse collection of axial tissues, including the spinal cord, dorsal aorta, notochord, skeletal muscle, and epidermis (Fig. 1A and B) ([3], reviewed in [4]). All of these tissues regenerate within one week following tail amputation. Elegant grafting experiments have shown that most of the regenerated tail tissues are derived from lineage-specific precursors [5]. In the case of skeletal muscle, tail amputation activates stem cell-like muscle satellite cells, which then differentiate and repopulate the skeletal muscle of the new tail [5]. Several growth factor pathways govern tail regeneration, including BMP, Notch, Wnt, Fgf, and TGFβ [6-8].
Xenopus tadpole tail regeneration can be divided into three phases: an early, intermediate, and late phase [3]. During the early phase (from 0 to 24 hours post-amputation (hpa)), epidermal wound healing occurs, and inflammatory cells migrate to the site of injury (Fig. 1C). During the intermediate phase (from ∼24 to 48 hpa), a regenerative tissue bud appears distal to the injury site and an increased rate of cell proliferation becomes apparent (Fig. 1D). During the late phase (from ∼48 hpa onwards), the tail and its tissues (including blood vessels, neurons, and muscle) regenerate to reconstitute a fully functional appendage (Fig. 1E). [Figure 1 legend fragment: Blue arrow shows blood and other cellular debris that spilled from the wound site by 1 minute post-amputation (mpa). Fluorescence signal detects inflammatory cells using a Xenopus laevis transgenic line [42]. Scale bar represents 500 μm and applies to the panels in D and E. D: Transillumination and immunofluorescence (anti-phosphohistone H3) images showing proliferating cells at two different time periods during Xenopus laevis tail regeneration; open red arrow shows regenerative bud tissue. E: Transillumination and immunofluorescence images showing the regeneration of neuronal tissue (anti-N-acetylated tubulin), vascular tissue (Flk-1:eGFP X. laevis transgenic line [45]), and skeletal muscle (anti-12/101 [46]).]
The expression of glucose import modulators increases during Xenopus tadpole tail appendage regeneration
To better understand Xenopus tropicalis tadpole tail regeneration, we decided to identify which genes changed their expression levels during the regenerative response. To do this, we collected RNA samples from the early, intermediate, and late phases of regeneration (as well as a pre-amputation reference) and analyzed them using genome-wide Affymetrix microarrays (MIAME Experiment E-MEXP-2420) [3]. We found that the most highly upregulated gene following tail amputation was leptin, a gene that encodes a cytokine that regulates appetite and blood vessel growth [3,9,10]. The gene expression data also showed that proinsulin, the gene that encodes insulin, was upregulated approximately threefold following tail amputation.
Both leptin and insulin stimulate glucose import into cells by increasing the activity of glucose transporters [11]. These transporters are composed of many different subunits [12], and genes encoding some of these subunits were also markedly upregulated following tail amputation. An example is the expression level of slc2a3 (facilitated glucose transporter, member 3), which was elevated 25-fold within six hours following amputation.
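As a toy recap of the magnitudes quoted above (the fold changes are taken from the text; normalizing the pre-amputation reference to 1.0 is our simplification of the microarray analysis):

import math

reference_level = 1.0
early_phase = {"proinsulin": 3.0, "slc2a3": 25.0}  # fold vs. pre-amputation reference
for gene, level in sorted(early_phase.items(), key=lambda kv: -kv[1]):
    fc = level / reference_level
    print(f"{gene}: {fc:.0f}-fold up (log2FC = {math.log2(fc):.1f})")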
In addition, some of the signaling pathways implicated during tail regeneration can alter cellular glucose metabolism and intake. For example, PI3K/Akt signaling has been shown to increase glucose transport into cells and activate the glucose metabolic enzymes hexokinase (HK) and phosphofructokinase (PFK) [13,14]. Notably, both leptin and insulin activate PI3K/Akt signaling, as do several receptor tyrosine kinases (RTKs) that have been implicated in tail regeneration [3,9,15].
Together, these data led us to speculate that regenerating tissues actively increase cellular glucose import. Because regeneration is an inherently anabolic process, we reasoned that increased glucose import is important for the production of new macromolecular components. In the next sections, we speculate in more detail on the mechanisms by which glucose metabolism might be utilized and regulated during regeneration.
Cutting carbon emissions via glycolysis
During its complete combustion, glucose generates approximately 36 energy-bearing ATPs and six CO2 molecules (Fig. 2). However, from the viewpoint of a rapidly growing tissue system, CO2 "emissions" can be considered detrimental, as molecular carbon substrates are needed for the anabolic reactions that underlie tissue growth [14]. Toward this aim, inhibiting the complete combustion of glucose (and thus the generation of CO2) allows glucose to be diverted into anabolic pathways that generate nucleic acids, proteins, and lipids (Fig. 2A).
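For reference, the arithmetic behind these numbers is standard textbook stoichiometry rather than data from the essay:

\[ \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \rightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \]

\[ \underbrace{2}_{\text{glycolysis (net)}} + \underbrace{2}_{\text{Krebs cycle, as GTP}} + \underbrace{\approx 32}_{\text{oxidative phosphorylation}} \approx 36\ \mathrm{ATP\ per\ glucose} \]

The "approximately 36" reflects classical P/O-ratio assumptions; more recent accounting puts the total closer to 30-32 ATP.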
During its complete combustion, glucose is first processed in glycolysis, generating two molecules of pyruvate that are later fully oxidized in the Krebs cycle (Fig. 2A). However, instead of entering the Krebs cycle, glucose derivatives produced in glycolysis can be used in anabolic biosynthetic reactions (Fig. 2A) [16]. For instance, dihydroxyacetone phosphate (DHAP) can be used in the production of certain lipids, and 3-phosphoglycerate and pyruvate can be used in the synthesis of several amino acids, such as serine, cysteine, glycine, alanine, valine, and leucine, thus contributing to an increase in protein mass.
Other macromolecular precursors and co-factors are generated in the pentose phosphate pathway (PPP), a metabolic pathway that stems from glucose after its phosphorylation by HK (Fig. 2A). The rate-limiting step of glucose-6-phosphate entry into the PPP is governed by the enzyme glucose-6-phosphate dehydrogenase (G6PD) [17]. We found that g6pd, the gene that encodes G6PD, was significantly upregulated within six hours following amputation and remained at high levels throughout the intermediate and late phases of regeneration [3] (Fig. 2B), suggesting that the PPP is promoted during tissue regeneration.
Oxidation reactions in the PPP generate two molecules of NADPH, a co-factor that is critical for lipid synthesis (Fig. 2A). NADPH is also essential for the production of the deoxyribonucleotides needed for DNA synthesis (Fig. 2). Moreover, the PPP generates ribose-5-phosphate (R5P), which is essential for the production of nucleic acids and the amino acid histidine. Finally, the PPP gives rise to erythrose-4-phosphate (E4P), which, when combined with phosphoenolpyruvate (PEP), is involved in the generation of the aromatic amino acids tyrosine, phenylalanine, and tryptophan. These observations suggest that an increase in glucose entry into glycolysis, combined with shunting of glucose into the PPP through the upregulation of g6pd expression, may play crucial roles in facilitating the regeneration program.
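For reference, the oxidative branch of the PPP can be summarized by its net reaction (standard biochemistry, added here rather than taken from the essay):

\[ \mathrm{G6P} + 2\,\mathrm{NADP^{+}} + \mathrm{H_2O} \rightarrow \mathrm{R5P} + 2\,\mathrm{NADPH} + 2\,\mathrm{H^{+}} + \mathrm{CO_2} \]

i.e. each glucose-6-phosphate shunted into the pathway yields the two NADPH molecules and the ribose-5-phosphate discussed above, at the cost of one carbon released as CO2.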
ROS-sensitive pyruvate kinase isoform 2 (PKM2) controls carbohydrate flux from glycolysis into the Krebs cycle
In order for glucose to be used in glycolysis and the PPP, its entry into the Krebs cycle should be diminished. A well-studied enzyme that controls the flow of glucose into the Krebs cycle is pyruvate kinase M (PKM). This enzyme mediates the conversion of PEP to pyruvate in the final step of glycolysis (Fig. 2A). PKM therefore regulates the balance between glycolysis and oxidative phosphorylation. Two differentially spliced isoforms of the pkm gene have been described, dubbed pkm1 and pkm2. Of particular interest is PKM2, which is highly expressed in embryonic and cancer tissues [18].
PKM2 activity can be inhibited by growth factor-stimulated tyrosine phosphorylation [19]. This is relevant because pathways that activate RTKs, such as FGF signaling, are known to be necessary during Xenopus tail appendage regeneration [6]. Also, reactive oxygen species (ROS) have been shown to inhibit the activity of PKM2 via the oxidation of one of its cysteine residues [20]. Notably, we have found that ROS production is markedly increased and required for Xenopus tadpole tail regeneration (Fig. 2C) [21].
Our gene expression data also showed that injured Xenopus tail tissues increase the level of expression of hif1α [3], which has been shown to suppress the metabolic activities of mitochondria [22]. Thus, we hypothesize that tyrosine phosphorylation, ROS production, and hif1α expression coordinately play essential roles in decreasing the combustion of glucose during appendage regeneration and thus increase carbohydrate entry into the anabolic pathways necessary for tissue growth.
Glucose utilization in proliferating systems: The Warburg effect
Previous studies have shown that rapidly dividing tissues, such as tumors, exhibit altered metabolism and glucose utilization [23,24]. In the 1920s, Nobel laureate Otto Warburg reported that, even in the presence of sufficient oxygen, cancerous tissue exhibits decreased oxygen consumption per catabolized glucose molecule, a phenomenon known as the Warburg effect or aerobic glycolysis [24,25]. In other words, cancer cells increase glucose consumption to maximize biosynthetic capacity rather than enhance their ATP supply via pyruvate oxidation in the Krebs cycle.
Warburg's initial observation was later confirmed in experiments examining proliferating lymphocytes, suggesting that increased glycolysis could be somewhat inherent to rapidly dividing cells [26]. Recently, a Warburg effect has also been described in proliferating embryonic tissues [27-29]. Stem cells may also depend on Warburg-like metabolism. Recent evidence suggests that induction of pluripotency in differentiated cells correlates with a shift to a more glycolytic state [30,31]. Whether the muscle satellite stem cells implicated during Xenopus tadpole tail regeneration depend on Warburg-like metabolism is an intriguing possibility to be examined in future studies.
Studies have also shown that PPP-dependent processes, such as NADPH-dependent detoxifying mechanisms and the production of reactive oxygen species (ROS), are implicated during cancer progression [32,33]. Accordingly, genetic studies have reported that cancerous tissues exhibit increased expression of glycolytic enzymes [34].
The abnormally high rate of glucose uptake and glycolysis in cancerous tissues has prompted glycolytic pathway inhibitors to be explored as anticancer agents [35]. In addition, the radioactively labeled glucose substrate analog fluorodeoxyglucose (FDG) is currently used to help locate cancers within the body using positron emission tomography (PET) [36].
These studies demonstrate that altered glucose metabolism - the Warburg effect - can be viewed as a general property of proliferating systems. Although it has never been formally reported, we argue here that a Warburg-like metabolism may be an essential property of regenerating tissues.
Using the Xenopus model to examine carbohydrate metabolism during vertebrate appendage regeneration
Thus far we have discussed how gene expression (leptin, proinsulin, slc2a3, g6pd, hif1α), signaling pathways (PI3K/Akt signaling downstream of leptin/insulin/PDGF, PKM2 inhibition downstream of RTK activity), and the production of ROS (ROS-sensitive PKM2 inhibition) are implicated during tail regeneration. We have hypothesized that these collectively function to increase carbohydrate flux into anabolic reactions. Given that tissue regrowth is biosynthetic in nature, the idea that glucose metabolism is altered during regeneration to accommodate anabolic pathways makes sense. However, these ideas have not been formally tested.
We would argue that the Xenopus tadpole tail regeneration model represents an ideal system to investigate the role and regulation of carbohydrate metabolism during appendage regeneration. The Xenopus model has a well-developed series of genomic resources, such as a sequenced genome [37] and over one million ESTs [38]. Frogs are relatively easy to house, and tadpoles can be raised in the thousands at minimal cost [39]. The tadpole tail is semi-transparent, allowing live imaging of regenerating tissues. Furthermore, the Xenopus model is amenable to a wide range of genetic modification protocols, including targeted mutations [40] and the generation of transgenic lines [41]. [Figure 2 legend: Production of biosynthetic precursors during glycolytic metabolism and their putative regulation during Xenopus tail appendage regeneration. A: Pathways demonstrating how glucose or its derivatives can contribute to biosynthetic processes, and how glucose metabolism may be regulated during appendage regeneration as outlined in the essay; diagram adapted from [13,16,24]. Colors indicate conceptually different pathways or interactions: glycolysis toward glucose combustion (black); pentose phosphate pathway (PPP, red); molecular contributions of biosynthetic pathways (blue); NAD/H, NADP/H, ATP/ADP reactions (green); reintroduction of PPP products into glycolysis (gray); putative inhibitory mechanisms during Xenopus tadpole tail regeneration (yellow); putative activation mechanisms during Xenopus tadpole tail regeneration (purple); putative activity of PI3K/Akt given its previously characterized interactions with leptin/insulin/RTK activity [9,15]. Asterisk (*) indicates that PK inhibition by ROS and tyrosine kinase activity has been reported for the PKM2 version of the PK enzyme [19,20]. Acronyms: HK, hexokinase; G6PD, glucose-6-phosphate dehydrogenase; 6PGL, 6-phosphogluconolactonase; 6PGDH, 6-phosphogluconate dehydrogenase; PPEI, phosphopentose isomerase; PPE, phosphopentose epimerase; PFK, phosphofructokinase; TK, transketolase; TA, transaldolase; PGI, phosphoglucose isomerase; ALDO, aldolase; TPI, triosephosphate isomerase; GAPDH, glyceraldehyde phosphate dehydrogenase; PGK, phosphoglycerate kinase; PGAM, phosphoglycerate mutase; ENO, enolase; PK, pyruvate kinase. B: In situ hybridization showing expression of g6pd following amputation and during regeneration of Xenopus tropicalis tadpole tails; solid red arrow shows a portion of the notochord that has exited the wound site; open red arrow shows regenerative bud tissue. C: Transillumination (trans) and HyPerYFP ([H2O2]) images showing detection of the reactive oxygen species hydrogen peroxide (H2O2) following Xenopus laevis tail amputation using the H2O2-sensitive HyPerYFP probe [21,42]; relative H2O2 levels are shown in the scale to the right of the images; solid red arrow shows a portion of the notochord that has exited the wound site.]
In addition, transgenic Xenopus lines can be produced to allow the analysis of metabolic changes during regeneration in vivo, over long periods of time, and in a tissue-specific manner. For example, in a previous study we generated transgenic Xenopus lines that ubiquitously expressed a ROS-sensitive molecular sensor, called HyPerYFP [42]. This transgenic line allowed us to assess the changes in ROS levels during tail regeneration (Fig. 2C) [21]. A similar approach can be exploited in order to generate additional transgenic lines that express genetically encoded fluorescent metabolic indicators. One such tantalizing genetically encoded indicator is the Peredox protein, a GFP-RFP fusion protein that reports changes in NAD+/NADH ratios, a major readout of cellular metabolism [43].
Aside from genetic modification, experiments using Xenopus tadpoles could also address questions regarding glucose intake during appendage regeneration. One particularly intriguing experiment, if feasible, would be to subject a regenerating organism to food or culture medium supplemented with FDG and subsequently to perform FDG-PET on the regenerating organism, much like the PET scans of cancer patients, in order to assess whether an increase in glucose uptake occurs during tissue regeneration.
Similarly, experiments using Xenopus tadpoles could address whether regenerating appendage tissues exhibit the Warburg effect. A straightforward way to assess this possibility would be to replicate, on regenerating tissues, experiments similar to those performed by Otto Warburg and others. In addition, assessing the activity of glycolytic enzymes such as PK during regeneration would also provide evidence of increased carbon flux into glycolysis. This approach has previously been applied to regenerating rat liver [44]. Performing metabolomic analyses on regenerating appendages, such as tails and limbs, could further corroborate such studies.
Additional experiments could determine whether the glycolysis-promoting isoforms of PKM, such as PKM2, are preferentially expressed in regenerating tissues. In addition, examining the phosphorylation or oxidation state of PKM2 via Western blot or targeted proteomic analyses might also help elucidate whether PK activity is modulated during different phases of regeneration. These experiments might help confirm whether anabolic pathways are promoted at the expense of oxidative phosphorylation.
Conclusions and prospects
Vertebrate appendage regeneration is a fascinating process that is not yet fully understood. In particular, we know little about how cells alter their cellular metabolism during regeneration. Here we have used recent evidence to speculate that regenerating appendages utilize several mechanisms to shift glucose metabolism toward anabolic pathways. Confirming these speculations may be an important step toward the development of more effective regenerative therapies, as proper cellular metabolism may facilitate a more efficient regenerative response.
In this regard, the Xenopus tadpole model is a powerful system to investigate the metabolic components of vertebrate appendage regeneration. However, discoveries made in Xenopus should be confirmed in other models of appendage regeneration, including zebrafish fin regeneration, mouse digit regeneration, and limb regeneration in the Mexican salamander/axolotl (Ambystoma mexicanum), before a more complete understanding of the role and regulation of carbohydrate metabolism during vertebrate appendage regeneration can emerge. Indeed, investigations using these models may potentially yield insights into the fundamental and evolutionarily conserved metabolic underpinnings of successful vertebrate appendage regeneration, and may even shed light on why some organisms have better regenerative capacities than others.
Potential importance of protease activated receptor (PAR)-1 expression in the tumor stroma of non-small-cell lung cancer
Protease activated receptor (PAR)-1 expression is increased in a variety of tumor cells. In preclinical models, tumor cell PAR-1 appeared to be involved in the regulation of lung tumor growth and metastasis; however, the role of PAR-1 in the lung tumor microenvironment, which is emerging as a key compartment in driving cancer progression, remained to be explored. In the present study, PAR-1 gene expression was determined in lung tissue from patients with non-small-cell lung cancer (NSCLC) using a combination of publicly available RNA microarray datasets and in-house tissue microarrays including tumor biopsies of 94 patients with NSCLC (40 cases of adenocarcinoma, 42 cases of squamous cell carcinoma and 12 cases of other types of NSCLC at different stages). PAR-1 gene expression strongly correlated with tumor stromal markers (i.e. macrophage, endothelial cell and (myo)fibroblast markers) but not with epithelial cell markers. Immunohistochemical analysis confirmed the presence of PAR-1 in the tumor stroma and showed that PAR-1 expression was significantly upregulated in malignant tissue compared with normal lung tissue. The overexpression of PAR-1 in the tumor stroma of NSCLC appeared to be independent of tumor type, tumor stage, histopathological differentiation status, disease progression and patient survival. Overall, our data provide evidence that PAR-1 in NSCLC is mainly expressed on cells that constitute the pulmonary tumor microenvironment, including vascular endothelial cells, macrophages and stromal fibroblasts.
Background
Lung cancer is the leading cause of cancer-related death, with around 1.6 million deaths worldwide, and mortality rates for lung cancer are still increasing annually [1,2]. Non-small-cell lung cancer (NSCLC), the most common type of lung cancer, has a devastating survival outcome. Traditional chemotherapy, including predominantly platinum-based regimens, as first-line standard treatment for NSCLC only shows a modest prolongation of median and overall survival. Despite aggressive multimodality therapy, the 5-year survival rate for patients with stage IV NSCLC at diagnosis is only approximately 2% [3]. More recently, targeted therapies showed efficacy in patients with advanced NSCLC who have specific genetic alterations, like mutations of the anaplastic lymphoma kinase gene or of the epidermal growth factor receptor [1]. However, these available molecular therapies can only be applied to selected patients and the observed benefits are small, suggesting that more in-depth studies of molecules that relate to the pathogenesis of NSCLC are required.
Protease-activated receptor (PAR)-1 is a cell surface seven-transmembrane G protein-coupled receptor that is activated by proteolytic cleavage. Removal of the N-terminal extracellular domain of PAR-1 reveals a new tethered ligand that binds to the body of PAR-1 and activates transmembrane signaling to intracellular G proteins, thereby leading to multiple pathophysiological responses [4,5]. Overexpression of PAR-1 has been detected in various types of cancers, including ovarian, breast, lung, prostate cancer and melanoma [6-10]. Importantly, elevated PAR-1 expression is closely associated with disease progression and overall survival in breast, prostate, gastric cancer and melanoma [6,8,9,11]. Moreover, tumor cell PAR-1 was recently identified as a promising target to decrease lung cancer progression. Indeed, PAR-1 pepducin inhibitors not only block the migration of both primary and established lung cancer cell lines, but also significantly limit lung tumor growth in nude mice [10]. Moreover, melanoma growth and metastasis were significantly decreased in mice treated with PAR-1 small interfering RNA (siRNA) [12].
During the last decade, the paradigm that tumor growth solely relies on the malignant cells has shifted to a more comprehensive view that tumor growth is dependent on interactions between cancer cells and their adjacent microenvironment, also known as the stroma [13]. The tumor stroma, predominantly composed of basement membrane, fibroblasts, vasculature with endothelial cells, inflammatory cells and extracellular matrix proteins such as collagen and fibronectin [14], is indeed emerging as a key player in promoting carcinogenesis by modulating tumor growth, angiogenesis, invasion and metastasis [15,16]. Targeting the tumor stroma is consequently under intense investigation as a novel treatment strategy in cancer.
Interestingly, PAR-1 expression is not tumor cell specific and PAR-1 is also expressed on key cell types that constitute the tumor stroma such as endothelial cells, fibroblasts and macrophages. Activation of PAR-1 on these stromal cells leads to increased vascular permeability, fibroblast activation, extracellular matrix production and cytokine secretion, thereby potentially driving tumor growth and metastasis [13]. In line with these observations, colonic adenocarcinoma growth was limited in PAR-1-deficient mice, suggesting the importance of PAR-1 in the tumor microenvironment [17]. In addition, pancreatic tumors in PAR-1 deficient animals were significantly smaller compared with tumors in wild type mice. Moreover, the same study also showed that stromal cells drive tumor growth and induce chemoresistance of pancreatic cancer in a PAR-1 dependent manner [18]. Overall these data point to an important role of stromal cell-associated PAR-1 in tumor progression. However, the role of stromal PAR-1 in lung cancer has not been explored yet. In the present study, we examined PAR-1 expression in NSCLC stroma and assessed its correlation with disease progression.
Patients
Tissue microarrays (TMAs, triplicate cores per case) were prepared with tumor sections obtained from NSCLC patients during surgery according to the guidelines of the Medical Ethical Committee of the Academic Medical Center of Amsterdam. The TMAs consist of samples from 94 patients with NSCLC, including 40 cases of adenocarcinoma (ADC), 42 cases of squamous cell carcinoma (SCC) and 12 cases of other type of NSCLC at different stages (Table 1). On each TMA, 3 cases of healthy lung tissue (i.e. adjacent normal tissue) were also included.
Immunohistological analysis
Four-μm sections were first deparaffinized and rehydrated. Endogenous peroxidase activity was quenched with 0.3% H2O2 in methanol. PAR-1 staining was performed with a primary antibody specific for PAR-1 (ATAP-2; 1:200; SC-13503; 24 h at 4°C; Santa Cruz, San Diego, CA) [19,20]. A horseradish peroxidase-conjugated polymer detection system (ImmunoLogic, Duiven, the Netherlands) was applied for visualization, using an appropriate secondary antibody and diaminobenzidine staining. Specimens with PAR-1 immunostaining were reviewed jointly at a multihead microscope by 2 investigators blinded to the patients' clinical status. To evaluate immunohistochemical expression of PAR-1, the intensity of PAR-1 staining was graded by consensus on a scale from 0 to 3 (0 = negative staining; 1 = weakly positive; 2 = moderately positive; 3 = strongly positive). Slides were photographed with a microscope equipped with a digital camera (Leica CTR500).
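Because each TMA case contributes triplicate cores, per-patient summary scores have to be aggregated in some way. The study graded by consensus; the simple averaging below is only our illustration of one plausible scheme, and all names and values are hypothetical.

import pandas as pd

# Hypothetical triplicate-core PAR-1 scores (0-3 scale) for two patients
cores = pd.DataFrame({
    "patient": ["P01", "P01", "P01", "P02", "P02", "P02"],
    "par1_score": [2, 2, 3, 1, 2, 1],
})
# Collapse triplicate cores to one summary score per patient
per_patient = cores.groupby("patient")["par1_score"].mean()
print(per_patient)  # P01 -> 2.33, P02 -> 1.33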
Statistics
Statistical analyses were conducted using GraphPad Prism (GraphPad Software, San Diego). Comparisons between conditions were analyzed using two-tailed unpaired t-tests when the data were normally distributed; otherwise Mann-Whitney analysis was performed. Results are expressed as mean ± SEM; P values < 0.05 are considered significant.
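A minimal sketch of this decision rule in Python (the paper used GraphPad Prism; the normality check via Shapiro-Wilk and the toy group values below are our assumptions):

from scipy import stats

def compare_groups(a, b, alpha=0.05):
    # Unpaired two-tailed t-test if both groups pass a normality test,
    # otherwise fall back to the Mann-Whitney U test.
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return stats.ttest_ind(a, b)  # two-tailed by default
    return stats.mannwhitneyu(a, b, alternative="two-sided")

tumor_scores = [2, 3, 2, 2, 3, 2, 1, 2]
control_scores = [1, 1, 2, 0, 1, 1]
print(compare_groups(tumor_scores, control_scores))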
PAR-1 gene expression is correlated with lung tumor stroma activation
To explore the association of PAR-1 expression with the NSCLC stroma, we correlated PAR-1 gene expression levels with specific markers of different stromal cell types, including macrophages, endothelial cells, epithelial cells and (myo)fibroblasts, in resected tumor specimens using publicly available microarray datasets. To this end, 3 markers were selected for each stromal cell type, except for (myo)fibroblasts, for which we included markers of differentiated fibroblasts and markers for extracellular matrix (ECM) produced by myofibroblasts. Interestingly, tumors with higher PAR-1 levels also displayed elevated expression levels of markers for macrophages, endothelial cells and (myo)fibroblasts on the microarrays. Using the GSE3141 dataset (Fig. 1), PAR-1 gene expression was correlated with all three markers for human monocytes and macrophages, i.e. CD68 (p < 0.01), CD163 (p < 0.001) and CD14 (p < 0.0001) [21]. Correlations with specific vascular endothelial cell markers (e.g. platelet endothelial cell adhesion molecule (PECAM)-1) and fibroblast markers (e.g. vimentin (VIM) and fibroblast activation protein alpha (FAP)) were also significant (p < 0.0001), with r-values ranging from 0.2 to 0.7. The commonly used differentiation marker for fibroblasts, ACTA2 (the gene encoding alpha-smooth muscle actin, α-SMA [22]), and markers for prominent constituents of ECM deposition, collagen type I alpha (COL1A1) and fibronectin (FN1), were also all correlated with PAR-1 gene expression in the NSCLC specimens (all p < 0.01). Intriguingly, PAR-1 expression did not correlate with the epithelial (tumor) cell markers epithelial cell adhesion molecule (EpCAM), cadherin 1 (CDH1) and mucin 1 (MUC1). These observed correlations (and the lack of correlation for epithelial cells) were confirmed in four additional independent microarray datasets from NSCLC (Table 2). However, no correlation between PAR-1 and stromal markers was observed in the healthy control group included in the Hou et al. set (GSE19188), suggesting that the correlation between PAR-1 gene expression and stromal activity exists specifically in the tumor microenvironment. To confirm the identity of the stromal cell types expressing PAR-1, we performed immunohistochemistry with different cell type markers on consecutive lung cancer slides. As shown in Additional file 1: Figure S1, PAR-1-positive areas are also positive for CD31 (endothelial marker), CD68 (macrophage marker) and α-SMA (myofibroblast marker).
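The correlation analysis described here can be sketched in a few lines of Python (PAR-1 is encoded by the F2R gene; the expression-matrix file name and layout are hypothetical, and the paper's exact preprocessing may differ):

import pandas as pd
from scipy import stats

# Expression matrix: rows = gene symbols, columns = tumor samples
expr = pd.read_csv("GSE3141_expression.csv", index_col=0)
markers = ["CD68", "CD163", "CD14",      # macrophages
           "PECAM1", "VIM", "FAP",       # endothelium / fibroblasts
           "ACTA2", "COL1A1", "FN1",     # myofibroblasts / ECM
           "EPCAM", "CDH1", "MUC1"]      # epithelial (tumor) cells
par1 = expr.loc["F2R"]
for gene in markers:
    r, p = stats.pearsonr(par1, expr.loc[gene])
    print(f"PAR-1 vs {gene}: r = {r:.2f}, p = {p:.2g}")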
PAR-1 is overexpressed in stroma of primary pulmonary tumors on TMAs
To confirm the presence of PAR-1 in NSCLC stroma, we next analyzed PAR-1 protein expression in tumor sections using immunohistochemistry. Ninety-four patients with a pathologically confirmed diagnosis of NSCLC were included in this study. The median age at diagnosis was 66 years (range 30 to 86 years), and the majority of patients had NSCLC stage I disease (n = 53, 57.6%). Six cases were well differentiated (2 ADC, 1 SCC, 3 other types), 30 cases were moderately differentiated (12 ADC, 18 SCC) and 22 cases were poorly differentiated (10 ADC, 11 SCC, 1 other type) (Table 1). Overall, strong PAR-1 expression was seen in the stroma of all different types of NSCLC (ADC, SCC and large-cell carcinoma), as opposed to weak PAR-1 staining on control sections (Fig. 2). In line with our observations in the tumor microarray datasets, the stromal cells (fibroblast-like cells, inflammatory cells and endothelial cells) were all intensively stained for PAR-1, while cancer cells were negative for PAR-1 or showed only weak PAR-1 staining. Subsequent quantification showed that 93 out of the 94 cases had PAR-1 expression in the stroma, with an average score of 2, while 1 SCC patient was PAR-1 negative. Importantly, the average PAR-1 score in control lungs was significantly lower than in NSCLC stroma (average score of 1; Fig. 3a). As shown in Fig. 3b, PAR-1 levels were similar in different subtypes of NSCLC (average scores of 2.11, 2.01 and 2.08 for ADC, SCC and other types of NSCLC, respectively). Stromal PAR-1 expression levels did not correlate with clinical variables like stage of NSCLC (Fig. 3c), differentiation status (Fig. 3d), disease progression (Fig. 3e) and overall survival (Fig. 3f).
Discussion
One of the anticipated future treatment options for NSCLC is to target the interactions between tumor and stromal cells, since stromal cells provide additional signals that support tumor growth and invasion [1,16]. In the present study, we determined PAR-1 expression in NSCLC patients and found high PAR-1 expression predominantly in the tumor stroma compartment during early stage cancer. This was reflected by the correlation of PAR-1 gene expression with stroma markers like CD163, CD31 and vimentin, and ECM proteins like collagen and fibronectin, as well as by a significant increase in the intensity of PAR-1 staining in stromal cells of tumor tissue compared with normal lung tissue. Although it has been documented that upregulation of PAR-1 expression appears in a variety of invasive cancers of epithelial origin, our data do show that increased PAR-1 expression in NSCLC patients arises mainly in the tumor stroma rather than in the epithelial cancer cells.
The observed PAR-1 expression pattern in NSCLC resembles that seen in other malignancies. In breast cancer, PAR-1 expression, as shown by immunohistochemistry and in situ hybridization, is observed in mast cells, macrophages, endothelial cells, and vascular smooth muscle cells of the metastatic tumor microenvironment. Interestingly however, PAR-1 expression is particularly increased in stromal fibroblasts surrounding breast carcinoma cells as opposed to low/negative expression in fibroblasts of healthy or benign conditions [23]. Moreover, in prostate cancer PAR-1 is predominantly expressed in peritumoral stroma. In particular, PAR-1 is mainly expressed in myofibroblasts and to a lower level in endothelial cells in isolated capillaries around the malignant glands [24,25].
The enrichment of PAR-1 expression in the stroma surrounding the tumor may actually be clinically relevant. Indeed, in the setting of pancreatic cancer, PAR-1 also coincides with the expression pattern of the stromal markers, such as vimentin, collagen I and α-SMA [18]. More importantly, PAR-1 promoted monocyte recruitment due to fibroblast dependent chemokine production, thereby driving pancreatic tumor growth and chemoresistance [18]. In the context of lung cancers, the expression of PAR-1 mRNA in alveolar walls with surface spreading of neoplastic cells was shown to increase by 10-fold compared with alveolar walls without surface spreading of neoplastic cells, and stimulation of PAR-1 led to the proliferation of alveolar capillary endothelial cells, pointing to PAR-1 as a potential regulator in alveolar angiogenesis [26]. Interestingly, accumulating evidence indicates that PAR-1 also exerts pro-inflammatory and pro-fibrotic functions through macrophages and fibroblasts during pulmonary fibroproliferative disease progression [27][28][29], which may also benefit tumor progression and metastasis.
Previous studies about PAR-1 in NSCLC focused on its function in cancer cells. Indeed, multiple reports showed that PAR-1 modulates lung cancer cell proliferation and migration, thereby supporting tumor growth and invasion [10,30]. Hence, targeting PAR-1 to inhibit progression of lung cancer cells seems to be an option for cancer therapy. Recently, emphasis has shifted toward the tumor stroma for novel therapeutic strategies and several approaches targeting the stromal tissue in different types of cancers have been proved to be effective [31][32][33]. Our data showing high stromal PAR-1 expression in NSCLC may thus indicate stromal PAR-1 may be the main target of the treatment for NSCLC. However, before drawing conclusions on potential clinical implications of stromal PAR-1 in NSCLC, it is important to elucidate the functional consequence of PAR-1 activation on stromal cells with respect to lung cancer development.
In the present study, we observed that PAR-1 expression is highly upregulated in the tumor stroma but not in normal lung tissue, suggesting that PAR-1 may have diagnostic value in NSCLC. However, the increased PAR-1 expression does not seem to correlate with disease progression, which indicates that stromal PAR-1 in lung cancer is crucial for carcinogenesis but may not be a determinant factor for cancer progression. These results are in line with a recent study by Erturk and colleagues, who determined serum PAR-1 levels in 80 patients with lung cancer [34]. Serum PAR-1 concentrations of lung cancer patients were significantly increased as compared to controls (i.e. median values of 26.45 ng/mL and 0.07 ng/mL, respectively), but serum PAR-1 levels did not correlate with clinical variables and failed to predict prognosis of the lung cancer patients. In apparent disagreement, other studies using immunohistochemistry analysis showed that PAR-1 may be a marker of poor prognosis [35,36]. Importantly, however, these studies analyzed tumor cell PAR-1 expression and did not address PAR-1 expression in the stromal compartment.
Conclusion
In summary, our data show PAR-1 is overexpressed in the tumor stroma of NSCLC, but stromal PAR-1 expression levels do not correlate with disease progression and/or overall survival.
Additional file
Additional file 1: Figure S1. Correlation of PAR-1 expression and specific markers for endothelial cells, macrophages and myofibroblasts. Consecutive lung cancer slides stained for PAR-1 (left panels), CD31 (endothelial marker), CD68 (macrophage marker) and aSMA (myofibroblast marker). Please note that due to the use of consecutive slides, the structure of the tissue in the PAR-1 stained slide is somewhat different from the CD31, CD68 and aSMA stained slides. Pictures were taken with 100x magnification. (TIF 7026 kb)
Funding
This study was supported by grant from the Netherlands Organization for Scientific Research (016.136.167). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Authors' contributions
CL conceived and designed the experiments, performed the experiments, analyzed the data and wrote the manuscript; CJM performed part of the experiments and analyzed the data; JJTHR performed part of the experiments and analyzed the data; MDK analyzed the data; HMH performed part of the experiments; KB analyzed the data and wrote the manuscript; CAS conceived and designed the experiments, and was a major contributor in writing the manuscript. All authors have read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
This research project used anonymized human tissue (both NSCLC tumorous and adjacent healthy tissue) that was removed from a patient during the normal course of treatment and which was later made available for scientific research (so-called 'further use' of human tissue). According to the Code of Conduct for dealing responsibly with human tissue in the context of health research (Human Tissue and Medical Research: Code of conduct for responsible use, drawn up by the Federation of Dutch Medical Scientific Societies in collaboration with the Dutch Patient Consumer federation, the Federation of Parent and Patient Organisations and the Biobanking and Biomolecular Resources Research Infrastructure; https://www.federa.org/sites/default/files/digital_version_first_part_code_of_conduct_in_uk_2011_12092012.pdf), these biological materials are as such not subject to any requirement for ethical review or consent from patients.
[Figure 3 legend: Association of stromal PAR-1 expression with clinical parameters in NSCLC patients. a: Stromal PAR-1 expression in healthy lung tissue and in NSCLC. b: Stromal PAR-1 expression in healthy lung tissue and in different types of NSCLC. c: Stromal PAR-1 expression in healthy lung tissue and in different stages of NSCLC. d: Stromal PAR-1 expression according to the differentiation status of NSCLC, including well differentiated, moderately differentiated and poorly differentiated. e: Stromal PAR-1 expression in NSCLC patients with disease progression and in patients with stable disease (no progression). f: Stromal PAR-1 expression of survivors and non-survivors of NSCLC. All data are expressed as mean ± SEM; *P < 0.05, **P < 0.01, ***P < 0.001.]
Interdigitation-Induced Order and Disorder in Asymmetric Membranes
We studied the transleaflet coupling of compositionally asymmetric liposomes in the fluid phase. The vesicles were produced by cyclodextrin-mediated lipid exchange and contained dipalmitoyl phosphatidylcholine (DPPC) in the inner leaflet and different mixed-chain phosphatidylcholines (PCs) as well as milk sphingomyelin (MSM) in the outer leaflet. In order to jointly analyze the obtained small-angle neutron and X-ray scattering data, we adapted existing models of trans-bilayer structures to measure the overlap of the hydrocarbon chain termini by exploiting the contrast of the terminal methyl ends in X-ray scattering. In all studied systems, the bilayer asymmetry has large effects on the lipid packing density. Fully saturated mixed-chain PCs interdigitate into the DPPC-containing leaflet and evoke disorder in one or both leaflets. The long saturated acyl chains of MSM penetrate even deeper into the opposing leaflet, which in turn has an ordering effect on the whole bilayer. These results are qualitatively understood in terms of a balance of entropic repulsion of fluctuating hydrocarbon chain termini and van der Waals forces, which is modulated by the interdigitation depth. Monounsaturated PCs in the outer leaflet also induce disorder in DPPC despite vestigial or even absent interdigitation. Instead, the transleaflet coupling appears to emerge here from a matching of the inner leaflet lipids to the larger lateral lipid area of the outer leaflet lipids.
Supplementary Information: The online version of this article (10.1007/s00232-022-00234-0) contains supplementary material, which is available to authorized users.
Introduction
Plasma membranes play pivotal roles in cell physiological processes by regulating and controlling diverse signaling, sensing and transport mechanisms. One of the outstanding features of plasma membranes on the molecular level is a pronounced asymmetric distribution of their lipids across the bilayer, which is generated and controlled by proteins known as flippases, floppases and scramblases (van Meer 2011). Historically, lipid asymmetry was proposed for erythrocytes already half a century ago (Bretscher 1972), i.e. the same year the famous fluid-mosaic model for membrane structure was coined by Singer and Nicolson (1972).
Only 1 year later, Verkleij et al. reported the first experimental data for lipid asymmetry in erythrocytes using a clever combination of lipid-degrading enzymes (Verkleij et al. 1973). Most recently these data were confirmed also for other eukaryotes and extended to details of hydrocarbon chain asymmetry (Lorent et al. 2020). The emerging picture is that eukaryotic plasma membranes have an outer leaflet enriched in choline lipids, such as phosphatidylcholines (PC) and sphingomyelins (SM), and an inner leaflet containing the amino lipids phosphatidylethanolamine and phosphatidylserine, as well as phosphatidylinositol.
There is yet another type of asymmetry. Most naturally occurring membrane lipids have mixed hydrocarbons. Phospholipids, for example, typically have a saturated hydrocarbon chain at the sn-1 position of the glycerol backbone and an unsaturated hydrocarbon at sn-2. In eukaryotic plasma membranes the majority of these (poly)unsaturated hydrocarbons is located in the inner leaflet (Lorent et al. 2020). Mammalian sphingomyelin in turn has in general only a few double bonds in its hydrocarbons, but significantly different chain lengths. Interestingly, lipidomics data on plasma membrane leaflet composition also showed small amounts (∼1 mol%) of saturated chain-asymmetric phosphatidylcholines, such as 14:0-16:0 PC, 14:0-18:0 PC and 16:0-18:0 PC (Lorent et al. 2020). While mixed saturated/unsaturated hydrocarbons have been related to a compromise between bilayer bending flexibility and permeability (Antonny et al. 2015), little is known about the role of mixed saturated phospholipids. Chiantia and London reported an effect of brain sphingomyelin and milk sphingomyelin (MSM) on the lateral diffusion of inner leaflet lipids using asymmetric lipid vesicles fabricated by cyclodextrin (CD)-mediated lipid exchange (Chiantia and London 2012). This type of coupling was found to depend on the extent of chain-length asymmetry, as well as the hydrocarbon chain composition of the inner leaflet lipids. That is, the lateral diffusion of dioleoyl PC was slowed down by MSM only, which was attributed to the longer interdigitating N-acyl chain of MSM. Surprisingly, fluorescence lifetime measurements of the systems found no effect of hydrocarbon chain interdigitation on the overall order of the inner leaflet lipids, demonstrating that interleaflet coupling can be different for different membrane properties. Hydrocarbon chain interdigitation-mediated ordering of lipids in the opposing leaflet was observed in molecular dynamics (MD) simulations, however (Róg et al. 2016).
In general, hydrocarbon chain interdigitation is thought to be an important factor in a functional coupling of both membrane leaflets even in the absence of proteins, although other mechanisms have also been discussed (see e.g. Eicher et al. 2018 and references therein). Interdigitation is believed to increase the shear viscosity between membrane leaflets, which appears to be consistent with the observed reduction of lateral diffusion discussed above (Chiantia and London 2012). This is contrasted, however, by experiments with short and long chain fluorescent lipid analogues penetrating to different extents into the opposing lipid leaflet, which did not reveal different interleaflet viscosities (Horner et al. 2013). We have recently reported the structure of 14:0-18:0 PC (MSPC), 18:0-14:0 PC (SMPC), 16:0-14:0 PC (PMPC) and MSM symmetric bilayers combining small-angle X-ray and neutron scattering (SAXS, SANS) and MD simulations. Indeed, we observed an increase of the extent of interdigitation with increasing length difference between the two chains. Interestingly, however, we also found that a significant fraction of the longer chain is bending back and hence not penetrating into the opposing leaflet. This indicates that effects of hydrocarbon ordering in the opposing leaflet might, at least to some extent, not originate from interdigitation and the associated interleaflet viscosity.
In order to gain further insight, we performed SAXS/SANS experiments on asymmetric large unilamellar vesicles (aLUVs) with an inner leaflet composed mainly of di16:0 PC (DPPC) and outer leaflets enriched in either MSPC, SMPC, PMPC or MSM. In the following, these systems are referred to as DPPC_in/MSPC_out, DPPC_in/SMPC_out, DPPC_in/PMPC_out and DPPC_in/MSM_out. This does not imply, however, complete lipid exchange. The advantage of SAXS/SANS experiments is the lack of bulky labels that might either perturb the delicate balance of intermolecular forces in bilayers or not sample all intramembraneous environments equally. This advantage is, however, frequently challenged by the need for extensive data modeling. We have previously reported models for analyzing scattering data of aLUVs (Heberle et al. 2016; Eicher et al. 2017). For the hitherto studied systems, containing 16:0-18:1 PC (POPC), 16:0-18:1 phosphatidylethanolamine, and DPPC, we observed a structural leaflet coupling only when at least one of the leaflets was in the gel phase, but not for all-fluid membranes, i.e. in the Lα phase (Heberle et al. 2016; Eicher et al. 2018). Here, we focus on fluid membranes using a modified asymmetry model, which features, in addition to the recently introduced headgroup hydration layer, also the possibility that the center of mass of the terminal methyl groups does not coincide with the center of the lipid bilayer. Such scenarios might occur due to hydrocarbon chain interdigitation or back-bending and are expected for the currently studied systems.
We found that all aLUVs with saturated chain-asymmetric PCs have an increased area per lipid in both leaflets, i.e. decreased molecular packing, as compared to the same lipids in symmetric bilayers. Apparently, this results from only a minor interdigitation of the longer hydrocarbon chain into the opposing DPPC leaflet that presumably leads to an increase of configurational entropy of all lipid chains. In contrast, we observed for DPPC_in/MSM_out an about three times deeper interdigitation of the long MSM acyl chains and a concomitant lateral condensation of both lipid leaflets, suggesting a loss of hydrocarbon configurational entropy due to increased van der Waals forces. We additionally applied our analysis to aLUVs with an outer leaflet enriched in the monounsaturated lipids POPC and 18:0-18:1 PC (SOPC), i.e., DPPC_in/POPC_out and DPPC_in/SOPC_out. In both cases we find an increased area per lipid; however, the direction of the shift of the methyl groups is opposite: while back-bending of the 18:1 chains prevails for POPC, the longer 18:0 chains of SOPC slightly interdigitate into the DPPC leaflet.
Results and Discussion
Modeling aLUVs with Chain-Asymmetric Lipids
Figure 1a and b show SAXS/SANS data of DPPC_in/SMPC_out aLUVs in comparison to scattering data from symmetric LUVs composed of DPPC/SMPC mixtures representing either the inner or the outer leaflets of the aLUVs. Data have been obtained at 50 °C, i.e. well above the chain melting temperatures of both lipids (Marsh 2013). SAXS data most clearly deviate at low scattering vectors, q. This indicates a modification of the lipid's headgroup scattering contrast, e.g. due to differing hydration. As discussed previously, SANS is not sensitive to this effect because of the lower contrast in the headgroup regime. In the following, we focus specifically on the mid-q range, however, where SAXS and SANS data of aLUVs both show a pronounced 'lift-off' of the scattered intensity as compared to the compositionally symmetric LUVs (dashed boxes in Fig. 1a, b). The degree to which the asymmetric curves lift off from the incoherent baseline in SANS is a known measure for the difference in deuteration between inner and outer leaflets that creates a contrast in the neutron SLD profile in the hydrocarbon regime (see e.g. Eicher et al. 2017). This effect can also be used to monitor the stability of the system with respect to lipid flip-flop (Nguyen et al. 2019; Marx et al. 2021) (for stability checks for the presently studied systems, see Appendix B). To accentuate this lift-off in SANS, we used chain-deuterated DPPC (DPPCd62) as acceptor lipid. For our here studied aLUVs this matches the scattering contrast between the inner leaflet hydrocarbons and the solvent in the case of 100% D2O. Also SAXS data show an intensity lift-off in this q-range, which can, however, not be uniquely attributed to the asymmetric composition of the aLUVs. For example, a similar effect has been observed before for small unilamellar vesicles and interpreted as headgroup asymmetry due to an increased membrane curvature (Brzustowicz and Brunger 2005; Kučerka et al. 2007). However, even compositionally symmetric LUVs may show such features, which can be accounted for by considering membrane thickness fluctuations. To jointly analyze SAXS/SANS data of asymmetric vesicles with both leaflets in the Lα phase, we adopted the following strategy. Firstly, we modified the scattering density profile (SDP) model for flat asymmetric bilayers (Eicher et al. 2017) with a vesicle form factor and a headgroup hydration layer, as detailed recently for symmetric LUVs. Compared to previous SAXS/SANS reports on aLUV structure this allowed us to include also low-q data in the analysis (Eicher et al. 2017). The modified SDP model for aLUVs also included the above-mentioned membrane thickness fluctuations. However, the obtained fits yielded unsatisfying results (Fig. 1c). Inspired by Brzustowicz and Brunger (2005) and Kučerka et al. (2007), we therefore considered, in a second attempt, the possibility of headgroup asymmetry. That is, the volume distribution functions describing inner and outer leaflet phosphate groups (PCN) and choline-CH3 groups were allowed to differ in their relative positions to the backbones, and also in the widths of the phosphate groups, σ_PCN^in/out. This improved the agreement between model and data only slightly, however (Fig. 1c). In the third step we finally took into account that the longer hydrocarbon of the outer leaflet SMPC can interdigitate into the inner leaflet.
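The contrast-matching logic behind these choices is simple mixing arithmetic; the sketch below reproduces the ~37% D2O condition mentioned in the next paragraph. The SLD values are textbook numbers, and the assumption of ~30% residual DPPCd62 in the outer leaflet is our rough reading of the exchange efficiencies reported below, not a fitted quantity.

# Illustrative contrast-match arithmetic (SLDs in 1e-6 A^-2)
SLD_D2O, SLD_H2O = 6.36, -0.56
SLD_CH2, SLD_CD2 = -0.31, 7.4              # protiated vs. deuterated chains

def solvent_sld(x_d2o):
    # Linear mixing of D2O and H2O scattering length densities
    return x_d2o * SLD_D2O + (1 - x_d2o) * SLD_H2O

# Outer leaflet chains: ~70% protiated donor + ~30% deuterated acceptor
sld_outer = 0.7 * SLD_CH2 + 0.3 * SLD_CD2  # about 2.0
x_match = (sld_outer - SLD_H2O) / (SLD_D2O - SLD_H2O)
assert abs(solvent_sld(x_match) - sld_outer) < 1e-12
print(f"outer leaflet SLD ~ {sld_outer:.2f}; matched at ~{100 * x_match:.0f}% D2O")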
In terms of our SDP model this means that some of the terminal methyls will be off-centered from the interface between the lipid leaflets; for details see the "SAS-Data Analysis" section and Appendix A. The obtained fits gave the overall best agreement with SAXS data (Fig. 1c). Note that differences between the three models are small in SANS (Fig. 1d). This supports the idea that the observed lift-off in SAXS is dominated by hydrocarbon interdigitation. We also tried to model data with interdigitated hydrocarbons, but symmetric heads. This decreased the agreement between model and data, however (data not shown). The final successful model therefore allows, in addition to our previous model for aLUVs, also for interdigitated hydrocarbons and asymmetric lipid heads. The origin of headgroup asymmetry for the here studied systems is unclear at present, but might be due to lipid crowding (mass imbalance) because of CD-mediated lipid exchange. Figure 2a demonstrates the excellent agreement of our model for DPPC_in/SMPC_out aLUVs over the complete studied q-range. These data also include SANS measurements performed at 37% D2O. At this contrast the solvent roughly matches the outer leaflet of our vesicles, which mostly contains protiated lipids. A combination with other contrasts therefore gives additional constraints for the adjustable parameters of our model. [Figure 2 legend: Model fits to full q-range SAXS and SANS data (a), using the asymmetric SDP and separated form factor model. b: SDP-model volume probability functions of lipid moieties and surrounding water, and the resulting neutron-SLD and electron density (ED) profiles; the neutron SLDs of inner and outer leaflet chain regions differ greatly due to the inner leaflet being enriched in chain-deuterated DPPCd62. c: The parsing scheme for PCs, using colors corresponding to the SDP model functions.] The applied parsing of membrane structure with volume distribution functions and the resulting electron and neutron scattering density profiles are presented in Fig. 2b and c. Figure 2b also shows several parameters used to describe the transmembrane structure. These include the Luzzati bilayer thickness D_B (the sum of the inner and outer monolayer thicknesses, D_M^in and D_M^out), the leaflet thicknesses of the hydrocarbons, D_C^in and D_C^out, as well as the position of the center of the terminal CH3 distribution function, z_CH3. Because hydrocarbons are also able to bend back, z_CH3 measures interdigitation and back-bending at the same time. For details and all definitions see the "SAS-Data Analysis" section and Appendix A. The results for the structure of DPPC_in/SMPC_out aLUVs are listed in Table 1 and will be discussed in the next section; all adjustable parameters for the fits are reported in Supplementary Table S2.
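To make the modeling strategy concrete, the following schematic (not the paper's actual SDP implementation) builds a one-dimensional electron-density contrast profile from Gaussian moieties, lets the terminal-CH3 Gaussian sit off-center (z_CH3 ≠ 0, mimicking interdigitation), and computes the flat-bilayer intensity from its Fourier transform. All amplitudes and widths are illustrative, not fitted values.

import numpy as np

z = np.linspace(-40.0, 40.0, 2001)           # distance from bilayer center (A)

def gauss(z0, sigma, amp):
    return amp * np.exp(-(z - z0) ** 2 / (2.0 * sigma ** 2))

z_ch3 = -2.0                                  # CH3 center shifted toward inner leaflet
rho = (gauss(-19.0, 3.0, 0.22) + gauss(19.0, 3.0, 0.22)   # headgroup peaks
       - gauss(z_ch3, 3.0, 0.18))                         # terminal-methyl trough

q = np.linspace(0.02, 0.6, 300)               # scattering vector (1/A)
dz = z[1] - z[0]
F = np.array([np.sum(rho * np.exp(1j * qi * z)) * dz for qi in q])
intensity = np.abs(F) ** 2 / q ** 2           # thin-sheet (flat bilayer) scaling
# Shifting z_ch3 away from zero redistributes intensity in the mid-q range,
# which is the 'lift-off' signature discussed above.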
Structure of aLUVs with Saturated Chain Asymmetric Lipids
Having established a structural model for the analysis of asymmetric vesicles containing chain-asymmetric lipids, we first applied it to aLUVs with outer leaflets enriched in MSPC, SMPC and PMPC, while maintaining DPPC-based inner leaflets. Table 1 reports our best-fit results for the main compositional and structural parameters (full details are given in Tables S2 and S3; for fits see Figs. 2, S2 and S3). χ_acc and χ_don designate the acceptor/donor concentrations in the vesicles, which we constrained during the modeling by the results of the compositional analysis using gas chromatography (see "Compositional Analysis Using Gas Chromatography" section and Supplement). The distribution over both leaflets is given by χ_acc/don^in/out and results from the analysis of SANS data, as discussed in the "Modeling aLUVs with Chain-Asymmetric Lipids" section. The lipid exchange efficiencies yielded about 70% exchange of the outer leaflet and varied only slightly between the different samples; only for MSPC was this reduced to 55%. Regarding structural parameters, the Luzzati thickness D_B is a well-established measure for the width of a bilayer, taking into account its smeared-out water/bilayer interface (Tristram-Nagle 2015). Our measure for the interdigitated state of the hydrocarbon chains is the position of the center of the volume probability distribution of the terminal methyl groups with respect to the bilayer center, z_CH3. Negative values imply a shift towards the inner leaflet and vice versa. We introduced the parameters V_bw and poly in Frewein et al. (2021) to describe the volume per bound water molecule and the membrane thickness polydispersity. We do not discuss these results in further detail, as the exact nature of this polydispersity is still unclear and V_bw is strongly coupled to the total number of bound water molecules, n_W. The numbers we find here for V_bw and poly agree quite well with the ones observed for symmetric vesicles. The hydrocarbon chain thicknesses D_C^in/out = V_HC^in/out / A^in/out are connected to the area per lipid of the respective leaflet, A^in/out, through the chain volumes V_HC^in/out, for which we used tabulated values depending on the number of hydrocarbons in the lipid (Nagle et al. 2019). To get a feeling for how the structure compares to that of its components, we also calculated the area for each lipid, A_acc/don, by assuming linear additivity of the areas: A^in/out = χ_acc^in/out A_acc + χ_don^in/out A_don. The structure of the lipid headgroup is determined by several adjustable parameters, which are further discussed in section A. Here we give the number of water molecules per headgroup within the Luzzati thickness, n_W^in/out, which reflects the extension of the headgroup into the aqueous phase. In symmetric LUVs of chain-asymmetric PCs, values for n_W between 9 and 15 were found.
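As a back-of-the-envelope check of this relation (the numbers are illustrative, on the fluid-DPPC scale of the tabulations cited above, not values from Table 1):

\[ A \;=\; \frac{V_{\mathrm{HC}}}{D_C} \;\approx\; \frac{913\ \text{Å}^3}{14.5\ \text{Å}} \;\approx\; 63\ \text{Å}^2 \]

which is consistent with the 62-63 Å² range quoted below for symmetric chain-asymmetric PC bilayers.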
The transbilayer structures of symmetric DPPC, MSPC, SMPC and PMPC bilayers in the fluid Lα phase have recently been shown to be relatively similar, with the most striking difference being the localization of the CH 3 groups, which exhibits a linear dependence on the chain length mismatch. In particular, the lateral area per lipid of these lipids was between 62 and 63 Å 2 , suggesting that there are no significant perturbations in the bilayer caused by chain mismatch. Symmetric mixtures of either of the chain-asymmetric lipids with DPPC, which were prepared as symmetric references mimicking either the inner or the outer leaflet of the aLUVs, also showed no significant differences in structure; structural parameters for these reference samples are given in Table 1 (see also Tables S1 and S3; Figs. S4 and S5).
In the case of asymmetric vesicles, however, the presence of a chain-asymmetric lipid in the outer leaflet leads to a shift of the terminal methyl groups towards the DPPC-containing (inner) leaflet. We note the high experimental uncertainty of the absolute values of the CH 3 location (Table 1), which prevents us from discussing differences between MSPC, SMPC, and PMPC. A common observation for all three lipids is that the area per lipid in the inner leaflet is increased, and for MSPC and PMPC also in the outer leaflet. This suggests that hydrocarbon chains which penetrate into an opposing leaflet of lipids with less chain length mismatch evoke a state of higher disorder in the hydrocarbon chain region. Notably, there are also changes in the headgroup regions, which seem to emerge along with the CH 3 asymmetry: outer leaflet headgroups extend further into the water and therefore accommodate a higher number of water molecules than in the symmetric references. Overall, compared to membranes of the symmetric inner and outer leaflet references, these asymmetric bilayers are thinner, as a result of the increased area per lipid in one or both leaflets.
Next we focused on the natural lipid mixture MSM, which exhibits a much higher chain asymmetry than the PCs studied here, owing to its prevalent long acyl chains (22:0, 23:0, 24:0, 24:1). In symmetric vesicles of MSM we found a wider spread of the CH 3 region than for all studied PCs and a slightly higher area per lipid of 64.8 Å 2 . Inner and outer leaflet mixtures of MSM and DPPC have A in ∼ 64 Å 2 and A out ∼ 63 Å 2 in symmetric LUVs (Tables 1 and S4; Fig. S6). In asymmetric vesicles the area per lipid of the inner leaflet is ∼ 5 % smaller than in the reference, with about twice as many interdigitating hydrocarbons as in aLUVs with the donors MSPC and SMPC. The long chains of MSM thus seem to have an ordering effect on DPPC, which is present neither in symmetric vesicles nor in asymmetric vesicles with less interdigitation. In contrast, outer leaflet MSM has an effect on the headgroup regions of the asymmetric vesicles similar to that of outer leaflet MSPC, SMPC, and PMPC, i.e., the outer leaflet of the asymmetric vesicle is more hydrated than the inner leaflet.
Given that for fluid-phase asymmetric vesicles there has been no evidence so far that opposing monolayers influence each other's structures (Eicher et al. 2017), and that interdigitation has been shown to have little to no influence on lipid diffusion in symmetric bilayers (Schram and Thompson 1995; Horner et al. 2013), this is a surprising result for asymmetric bilayers. Our data suggest a delicate interplay between repulsive entropic/steric forces and attractive van der Waals interactions. In asymmetric bilayers with low interdigitation, the configurational entropy contribution of the hydrocarbon termini apparently dominates and the penetrating chain segments perturb the packing of the opposing DPPC. In contrast, the long chains of MSM share a larger surface of contact and their cohesion leads to an ordering of chains. Indeed, MD simulations of asymmetric membranes containing 24:0 sphingomyelin reported an increase of order for the interdigitating moieties of its hydrocarbon chain (Róg et al. 2016).
Structure of aLUVs with Monounsaturated Lipids
Finally, we extended our study to aLUVs with outer leaflets enriched in POPC or SOPC, i.e., lipids with mixed saturated and unsaturated hydrocarbons. Such mixed-chain lipids are much more common in mammalian plasma membranes than those studied in the previous section (Lorent et al. 2020). Interestingly, these lipids share with PMPC, SMPC, and MSPC a nearly equal overlap region of interdigitating and back-bending terminal methyl groups in symmetric bilayers. In the case of asymmetric membranes the role of unsaturated hydrocarbons at the bilayer center is even less clear. We therefore also applied the analysis model developed here to DPPC in /POPC out and DPPC in /SOPC out aLUVs. Table 2 gives the corresponding results for the main structural parameters (see also Table S4; Fig. S3). The overall lipid exchange was about equal for both lipids, although POPC flipped slightly more into the inner leaflet during aLUV preparation. The overall membrane thickness D B of both aLUVs was about 2 Å thinner than the combined D B ∕2 values of the symmetric inner and outer leaflet mimics. Within experimental uncertainty these thickness changes occurred almost equally in both leaflets, i.e., ΔD in M ∼ ΔD out M , although the inner leaflet's thickness appears to be somewhat more affected. This is particularly reflected in the changes of D in C and D out C , which were more pronounced for the DPPC-enriched inner leaflet. This leads to a large increase of A in , by about 8% compared to symmetric DPPC bilayers, or a change of DPPC lateral area ΔA acc ∕A acc ∼ 7%. The area per lipid values in the outer leaflet remained unchanged within experimental uncertainty. Moreover, the areas per lipid in the inner leaflet closely matched those of the outer leaflet.
Analysis of the position of the terminal CH 3 center of mass revealed interesting differences between POPC- and SOPC-containing aLUVs. In particular, we found that for a POPC donor the CH 3 function is located in the outer leaflet, whereas for SOPC it resides slightly within the inner leaflet. This could mean that the 18:1 chains in the POPC-containing leaflet bend back to match the length of 16:0. Another possibility is that the kink induced by the double bond of the 18:1 chains is responsible for the terminal methyls being located further in the outer leaflet. SOPC in turn, with its longer 18:0 chain, seems to interdigitate slightly, thus shifting the CH 3 center of mass into the DPPC-rich inner leaflet. The headgroups in both systems also exhibit a high degree of asymmetry. In the SOPC-rich leaflet they stretch to the outside, as in the saturated systems, leading to a high number of water molecules per headgroup in the outer leaflet. For DPPC in /POPC out this situation is reversed, showing a broader headgroup with more water in the inner leaflet. The symmetric reference vesicles qualitatively mirror this behavior, with the outer leaflet mimics having a higher number of hydration waters than the inner ones, however not to the extent of the asymmetric vesicles.
While the subtle effects on headgroup hydration encourage additional experiments, the loosening of lipid packing in the inner leaflet of the presently studied aLUVs containing monounsaturated lipids emerges as a salient feature. Previous SANS experiments on POPC in /DPPC out vesicles also reported a softening of the DPPC-enriched leaflet below the melting temperature of DPPC, but no coupling effects when both leaflets were in the fluid Lα phase (Heberle et al. 2016). Also in a later report, using joint SAXS/SANS experiments, we found no coupling for fluid POPC in /DPPC out aLUVs (Eicher et al. 2017). We tested the parameters used in that study for our vesicles and indeed found another local minimum with the same areas per lipid. However, this drove the system to z CH 3 = − 2.5 Å, which would imply a similar interdigitation into the inner leaflet as for MSM. We therefore consider this solution less likely. We speculate that the absence of interleaflet coupling in our previous studies reflects a combination of lower data quality then and the improved modeling and optimization routines used for fitting the present data. However, we cannot fully exclude sample-specific properties or slight differences in sample preparation as a potential cause.
Conclusion
To the best of our knowledge the present study provides the first experimental evidence of structural coupling in all-fluid asymmetric bilayers. Previously, transbilayer coupling was only observed when at least one leaflet was in the gel phase (Heberle et al. 2016; Eicher et al. 2018), including lateral diffusion studies of chain-asymmetric sphingomyelin (Chiantia and London 2012). The observed coupling suggests a subtle balance of ordering and disordering contributions, summarized schematically in Fig. 3. For MSPC, SMPC and PMPC we found a minor interdigitation, which led to a loosening of the packing of inner leaflet DPPC. MSM, whose long acyl chains penetrate significantly into the opposing monolayer, instead caused an overall lateral condensation of the bilayer. We propose that the configurational entropy of the hydrocarbons, which increases with chain length, is able to disorder the inner lipid leaflet by fluctuation-mediated steric repulsion only upon minor chain overlap. On the contrary, energetic optimization of hydrocarbon cohesive forces outweighs this effect in the case of large interdigitation. Such a scenario was indeed suggested by MD simulations (Róg et al. 2016). The decrease of inner leaflet DPPC packing in the case of outer leaflets enriched in POPC and SOPC suggests an additional scenario: here, the larger lateral area required by the unsaturated hydrocarbon seems to generate a packing mismatch, which is alleviated by increasing the area per lipid of DPPC residing in the inner leaflet. Both scenarios are likely to affect differential stress between the two leaflets (Hossein and Deserno 2020). As most chain-asymmetric saturated lipids are long-chain sphingomyelins such as the ones used in this study, or phospholipids with one mono- or polyunsaturated hydrocarbon chain, such transleaflet coupling schemes might indeed also be present in natural membranes. We note, however, that cholesterol, which is the most abundant lipid in mammalian plasma membranes, has been shown to modulate interdigitation-based ordering of inner leaflet lipids (Róg et al. 2016). In order to keep our analysis tractable, we had to exclude cholesterol from the present study. With ongoing efforts in fabricating and analyzing more realistic models of mammalian plasma membranes, such goals seem within reach.
Lipids, Chemicals and Sample Preparation
Lipids were purchased from Avanti Polar Lipids (Alabaster, AL, USA) and used without further purification. Chloroform, methanol (pro analysis grade), sucrose and methyl-β-cyclodextrin (mβCD) were obtained from Merck KGaA, Darmstadt, Germany. We prepared asymmetric unilamellar vesicles following the heavy-donor cyclodextrin exchange protocol (Doktorova et al. 2018). Acceptor and donor lipids were weighed (ratio 1:2 mol/mol), dispersed separately in a chloroform/methanol mixture (2:1, vol/vol) and dried under a gentle argon stream in a glass vial. Acceptor vesicles were prepared from a mixture (19:1 mol/mol) of chain-deuterated DPPC (DPPCd62) and dipalmitoyl phosphatidylglycerol (DPPGd62); donor vesicles consisted of the single species indicated. The resulting films were kept overnight in vacuum to ensure the evaporation of all solvent and hydrated with ultrapure H 2 O containing 25 mM NaCl (acceptors, 10 mg/ml lipid) or 20% (wt/wt) sucrose (donors, 20 mg/ml), followed by 1 h incubation at 50 °C (room temperature for POPC and SOPC) and 5 freeze/thaw cycles. Acceptor vesicles were extruded at 50 °C using a hand-held mini extruder (Avanti Polar Lipids, AL, USA) with a 100 nm pore diameter polycarbonate filter, 31 times or until reaching a polydispersity index < 10% (measured by dynamic light scattering (DLS) using a Zetasizer NANO ZS90, Malvern Panalytical, Malvern, UK).
Donor vesicles were diluted 20-fold with water and centrifuged at 20,000 g for 30 min. The supernatant was discarded, the resulting pellet suspended in a 35 mM mβCD solution (lipid:mβCD 1:8 mol/mol) and incubated for 2 h at 50 °C while being shaken at a frequency of 600 min −1 . Acceptor vesicles were added and incubated for another 15 min. The exchange was stopped by diluting the mixture 8-fold with water and centrifuging again at 20,000 g for 30 min. The supernatant containing the asymmetric vesicles was then concentrated to < 500 µl using 15 ml Amicon centrifuge filters (Merck, 100 kDa cut-off) at 5000 × g. To remove residual CD and sucrose, the filters were filled with the desired solvent (H 2 O for SAXS; 37% and 100% D 2 O for SANS) and re-concentrated in 3 cycles. The final vesicle sizes were again measured by DLS to ensure the absence of donor MLVs.

[Figure 3. Schematic of possible lipid arrangements of interdigitated systems with saturated lipids of low (a) (DPPC in /MSPC out , DPPC in /SMPC out , DPPC in /PMPC out ) and high (b) chain length mismatch (DPPC in /MSM out ), as well as DPPC in /POPC out (c) and DPPC in /SOPC out (d).]
Symmetric reference vesicles were prepared using only protiated lipids and extruded like the acceptor vesicles, but in pure H 2 O or D 2 O. Inner leaflet mimics contained 90 mol% acceptor lipid (DPPC/DPPG 19:1 mol/mol) and 10 mol% donor lipid; the outer leaflet samples were mixtures of 30 mol% acceptor and 70 mol% donor lipid.
Small-Angle Scattering (SAS) Experiments
SANS measurements were performed at D22, Institut Laue-Langevin, Grenoble, France, equipped with either one (DOI: 10.5291/ILL-DATA.9-13-822, DOI: ILL-DATA.TEST-3063) or two (DOI: 10.5291/ILL-DATA.9-13-938) 3He multidetectors. Sample-to-detector distances were 1.6, 5.6 and 17.8 m with corresponding collimations of 2.8, 5.6 and 17.8 m for the single-detector setup, or 5.6 and 17.8 m with the second detector off-center at 1.3 m, with 5.6 and 17.8 m collimations. The neutron wavelength was 6 Å (Δλ∕λ = 10%). Samples were filled into Hellma 120-QS cuvettes of 1 mm path length and heated to 50 °C using a bath circulator. Lipid concentrations were about 5 mg/ml in 100% D 2 O and 15 mg/ml in 37% D 2 O. Data were reduced using GRASP (www.ill.eu/users/support-labs-infrastructure/software-scientific-tools/grasp/, accessed on 25 June 2019), performing flat field, solid angle, dead time and transmission corrections, normalizing by incident flux and subtracting contributions from the empty cell and solvent. SAXS data were recorded at BM29, ESRF, Grenoble, France (DOI: https://doi.org/10.15151/ESRF-ES-514136943), equipped with a Pilatus3 2M detector, using a photon energy of 15 keV at a sample-to-detector distance of 2.867 m (Pernot et al. 2018). Samples were measured at a concentration of 10 mg/ml at 50 °C and exposed for 20 × 2 s in a flow-through quartz capillary of 1 mm light path length. Data reduction and normalization were done by the automated ExiSAXS system; for subtraction of solvent and capillary contributions SAXSutilities 2 (www.saxsutilities.eu, accessed on 29 October 2020) was used.
SAS-Data Analysis
To analyze the SAS data we model our lipid bilayer using volume probability distribution functions describing the localization and extent of the different parts of the lipids within the membrane. This approach has been previously introduced for SAS data evaluation as the SDP model (Kučerka et al. 2008) and later extended to asymmetric bilayers (Eicher et al. 2017). For symmetric vesicles we use a previously introduced modified version of the SDP model (Frewein et al. 2021), which includes the vesicle form factor via the separated form factor method (Pencer et al. 2006), membrane polydispersity and a headgroup hydration shell. We extend the asymmetric SDP model by the same aspects, as well as by modifying the distribution function of the terminal methyls to better allow for examining hydrocarbon chain interdigitation. The full model is presented in Appendix A. To take into account the presence of lipid mixtures, we average all volumes and scattering lengths for each part of the lipid. For example, for a 1:1 mixture of DPPC and PMPC we assume an average lipid with 31 hydrocarbons. This includes the assumption that the lipids mix homogeneously within their leaflet. We note that the disagreement between model and low-q SAXS data for symmetric vesicles (Figs. S4-S7) is due to technical issues that occurred during the experiments, not to inadequacies of the model. Previously reported SAXS data of symmetric LUVs were fully accounted for in this q-range by including a hydration shell. Moreover, we also showed that the hydration shell does not contribute at higher scattering vectors. Consequently, the reported structural data for symmetric reference LUVs are not affected by difficulties in fitting low-q SAXS data.
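As an illustration of the moiety averaging just described, the short sketch below mixes volumes and scattering lengths by mole fraction. The numerical values are invented placeholders, not the tabulated values actually used in the analysis.

```python
# Mole-fraction-weighted averaging of one quasi-molecular moiety of a lipid
# mixture, as described in the text. Numbers are illustrative only.

def average_moiety(fractions, volumes, scattering_lengths):
    """Average volume and scattering length of one moiety over the mixture."""
    v = sum(f * vol for f, vol in zip(fractions, volumes))
    b = sum(f * sl for f, sl in zip(fractions, scattering_lengths))
    return v, b

# Hydrocarbon region of a 1:1 DPPC/PMPC mixture (volumes in A^3, b in fm; made up):
v_mix, b_mix = average_moiety([0.5, 0.5], [913.0, 858.0], [-17.6, -16.5])
n_carbons = 0.5 * 32 + 0.5 * 30   # 31 hydrocarbons on average, as stated above
print(v_mix, b_mix, n_carbons)
```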
Fitting was done as described in Frewein et al. (2021), including the same SAXS/SANS weighting, negative-water penalty and the Trust Region Reflective optimization algorithm (Virtanen et al. 2020). Errors were estimated from the covariance matrix, considering also possible systematic errors (e.g., from aLUV compositional uncertainties). For derived quantities we used Gaussian error propagation. For asymmetric vesicles we constrained the areas per lipid A acc and A don by Gaussian priors with the means and standard deviations of the results in Frewein et al. (2021). The total acceptor and donor lipid concentrations were likewise constrained by Gaussian priors, using the results from the gas chromatography (GC) compositional analysis. As the number of parameters describing the transmembrane structure is doubled compared to symmetric vesicles, we fixed the distance between the hydrophobic interfaces and the backbones, d BB , to 0.9 Å and the backbone width, σ BB , to 2.1 Å. As for symmetric vesicles, we fixed the volumes of the individual moieties of the lipids according to Nagle et al. (2019), and the smearing parameters were set to σ CH2 = 2.5 Å and σ Chol = 3 Å.
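The following self-contained sketch shows, with a toy model standing in for the far more involved SDP form factor, how Gaussian priors can be appended to the residual vector of SciPy's Trust Region Reflective least-squares routine. Everything here is a simplified assumption for illustration, not the actual fitting code.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_model(p, q):
    """Stand-in for the actual SDP form factor model."""
    amp, width = p
    return amp * np.exp(-(q * width) ** 2)

def residuals(p, q1, I1, q2, I2, w, priors):
    r1 = toy_model(p, q1) - I1               # e.g. SAXS residuals
    r2 = toy_model(p, q2) - I2               # e.g. SANS residuals
    # Gaussian priors enter as extra residuals (value - mean) / sigma
    rp = np.array([(p[i] - mu) / sig for i, mu, sig in priors])
    return np.concatenate([w * r1, r2, rp])

rng = np.random.default_rng(1)
q = np.linspace(0.01, 0.5, 50)
data = toy_model([1.0, 5.0], q) + rng.normal(0, 0.01, q.size)
priors = [(1, 5.0, 0.5)]                     # prior on width: mean 5.0, sigma 0.5
fit = least_squares(residuals, x0=[0.5, 3.0], method="trf",
                    args=(q, data, q, data, 1.0, priors))
print(fit.x)
```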
Appendix A Asymmetric SAS-Model
Models for asymmetric flat bilayers distinguish themselves from symmetric models mainly by the necessity to include the imaginary part of the form factor. We adopted the SDP model for asymmetric bilayers introduced in Eicher et al. (2017), with the following modifications: we included the vesicle form factor by the separated form factor method (Pencer et al. 2006) to describe low-q SANS data; we introduced some flexibility in the headgroup contrast using a higher-density hydration layer as described in Frewein et al. (2021); and we described the terminal methyl groups of all lipids by a single error function which is not necessarily centered around the bilayer center.
The necessary Fourier transforms for the distributions used are given in the following. The error-function slab is centered around μ, has a width d, a smearing parameter σ, and its area is normalized to 1:

F slab (q) = [2∕(qd)] sin(qd∕2) exp(−q²σ²∕2) exp(iqμ) (1)

For the Gaussian distribution centered at μ with standard deviation σ we use:

F gauss (q) = exp(−q²σ²∕2) exp(iqμ) (2)

We calculate the form factor by summing over the parts given in Table 3 for the inner and outer leaflet, using the functions reported in Eqs. (1) and (2), multiplied by the respective weighting and contrast. Weighting factors for the headgroup contributions (BB, PCN, Chol) are V k ∕A, with A denoting the area per lipid and V k the volume of the respective moiety. All terminal methyl groups are condensed into one error function, whose center can deviate from the membrane center. This allows the description of interdigitated or back-bent states without increasing the number of parameters. Finally, CH2 and BW are weighted by the respective chain widths D in/out C , to fill the whole unit cell area, followed by subtraction of the quasi-molecular groups they contain. In the case of an asymmetric CH3 distribution, however, the contribution of the methyl groups is not necessarily contained in the respective CH2 distributions. In this case, one of the leaflets loses some material with the volume V s , which has to be respected in the model.
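A direct transcription of Eqs. (1) and (2), as reconstructed above, reads as follows in Python; μ, d and σ are passed per moiety, and the demo values are arbitrary:

```python
import numpy as np

def F_slab(q, mu, d, sigma):
    """Fourier transform of a unit-area, Gaussian-smeared slab (Eq. 1)."""
    # np.sinc(x) = sin(pi x)/(pi x), so this equals sin(qd/2)/(qd/2)
    return np.sinc(q * d / (2 * np.pi)) * np.exp(-q**2 * sigma**2 / 2) * np.exp(1j * q * mu)

def F_gauss(q, mu, sigma):
    """Fourier transform of a unit-area Gaussian (Eq. 2)."""
    return np.exp(-q**2 * sigma**2 / 2) * np.exp(1j * q * mu)

q = np.linspace(0.01, 0.6, 4)
print(F_slab(q, mu=-10.0, d=14.0, sigma=2.5))
print(F_gauss(q, mu=-20.0, sigma=3.0))
```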
The connection between chain widths, volumes and area per lipid is then given by the following relations:

D in C = (V in HC + V s )∕A in , D out C = (V out HC − V s )∕A out (3)

We define V s as half the integral over the terminal methyl group distribution function P CH3 (z) from its center z CH3 to the center of the bilayer (the area indicated in Fig. 5), multiplied by the area per lipid of the leaflet towards which the distribution shifts:

V s = (A shift ∕2) ∫ from z CH3 to 0 of P CH3 (z) dz (4)

Note that the sign of V s changes according to the direction of the shift, being positive in the case of a shift towards the inner leaflet and vice versa.
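A numerical sketch of Eq. (4) is given below. The Gaussian used for P CH3 is only a stand-in for the actual error-function distribution, and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad

def V_s(z_ch3, sigma, area):
    """Half the integral of a stand-in CH3 distribution from z_CH3 to the
    bilayer center (z = 0), times the area per lipid of the receiving leaflet."""
    p = lambda z: np.exp(-(z - z_ch3)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    integral, _ = quad(p, z_ch3, 0.0)
    return 0.5 * area * integral

# A shift towards the inner leaflet (negative z_CH3) yields a positive V_s,
# consistent with the sign convention stated above.
print(V_s(-1.5, 2.5, 63.0))
```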
The membrane thickness polydispersity is implemented as described in Frewein et al. (2021), applying the same relative Gaussian distribution to the inner and outer chain widths. Contrasts of the individual moieties k are defined by Δρ k = b k ∕V k − ρ solvent , with b and ρ denoting the scattering length and scattering length density for either radiation (X-rays or neutrons). A graphical representation of all distances between moieties and thicknesses is given in Fig. 4.
The full model includes the vesicle form factor F sphere , the weighted average of bilayer form factors F bil , which we split into real and imaginary parts, and the incoherent background I inc :

I(q) = F sphere (q) [⟨Re F bil (q)⟩² + ⟨Im F bil (q)⟩²] + I inc

The prefactor f CH3,k of the condensed methyl group error function normalizes its contribution to the total terminal methyl volume of both leaflets. To describe the contribution from the overall vesicle shape we use the Schultz-distributed form factor of a sphere, as described in Kučerka et al. (2007):

F sphere = [8π²(z + 1)(z + 2)∕(s²q²)] {1 − (1 + 4q²∕s²)^(−(z+3)∕2) cos[(z + 3) arctan(2q∕s)]} (9)

Mean vesicle radius R m and polydispersity σ R enter via the auxiliary quantities s = (z + 1)∕R m and z = (R m ∕σ R )² − 1.

[Figure 5. Arrangement of the hydrocarbon chain volume probability distributions in the membrane center. The distribution of CH3 groups (yellow) is assumed to shift to the left (inner leaflet), causing a transfer of volume V s from the outer to the inner leaflet. As the yellow curve comprises the methyl groups of both inner and outer leaflets, the shaded area corresponds to twice this volume.]
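The reconstructed Eq. (9) can be transcribed directly. Note that the mapping of R m and σ R to the auxiliary quantities z and s follows the standard Schultz parametrization and is an assumption here; the example radius and polydispersity are arbitrary:

```python
import numpy as np

def F_sphere(q, R_m, sigma_R):
    """Schultz-polydisperse sphere form factor after Kucerka et al. (2007)."""
    z = (R_m / sigma_R) ** 2 - 1.0          # assumed Schultz parametrization
    s = (z + 1.0) / R_m
    pre = 8.0 * np.pi**2 * (z + 1.0) * (z + 2.0) / (s**2 * q**2)
    damp = (1.0 + 4.0 * q**2 / s**2) ** (-(z + 3.0) / 2.0)
    return pre * (1.0 - damp * np.cos((z + 3.0) * np.arctan(2.0 * q / s)))

q = np.linspace(1e-3, 0.1, 5)
print(F_sphere(q, R_m=500.0, sigma_R=125.0))   # ~100 nm vesicles, 25% polydispersity
```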
Stability of Membrane Asymmetry
To check the stability of our asymmetric vesicles, we monitored their SANS signal overnight while keeping them at 50 °C. The systems DPPC in /POPC out , DPPC in /SOPC out , DPPC in /MSPC out and DPPC in /SMPC out were completely stable, both in their overall vesicle structure (low q) and in their transmembrane structure (high q). For DPPC in /PMPC out the low-q region also remained unchanged; in the high-q region, however, the sample slowly equilibrated towards a less asymmetric state (Fig. 6). We hypothesize that we could only observe this flip-flop for the PMPC donor because of its lower hydrophobic volume and thus a lower energy barrier for the headgroups when traversing the membrane. Compared to the timescale of our other SAS experiments, the flip-flop was slow and is therefore assumed not to interfere with the results.
Declarations
Conflict of interest The authors have no financial or non-financial interests to declare.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/
Man, You Might Look Like a Woman—If a Child Is Next to You
Gender categorization seems prone to a pervasive bias: Persons about whom null or ambiguous gender information is available are more often considered male than female. Our study assessed whether such a male-bias is present in non-binary choice tasks and whether it can be altered by social contextual information. Participants were asked to report their perception of an adult figure's gender in three context conditions: (1) alone, (2) passively beside a child, or (3) actively helping a child (n = 10 pictures each). The response options male, female and I don't know were provided. As a result, participants attributed male gender to most figures and rarely used the I don't know option in all conditions, but were more likely to attribute female gender to the same adult figure if it was shown with a child. If such social contextual information was provided in the first rather than the second block of the experiment, subsequent female gender attributions increased for adult figures shown alone. Additionally, female gender was attributed more often to actively helping than to passive adults. Thus, we provide strong evidence that gender categorization can be altered by social context even if the subject of gender categorization remains identical.
Introduction
No other social category is used as early, automatically, and pervasively as gender (Stangor, Lynch, Duan, & Glas, 1992; Weisman, Johnson, & Shutts, 2014). Gender attributions can have immense consequences, as gender stereotypes are still present (Seem & Clark, 2006) and contribute to the prevailing inequality between men and women (Brandt, 2011; Gaucher, Friesen, & Kay, 2011). Adding momentum to this notion, people readily attribute a gender to a person even in the absence of explicit gender cues. First described more than 30 years ago (Silveira, 1980) as a people = male-bias (in short: male-bias), early investigations of this effect were conducted using written descriptions of persons (e.g., Hamilton, 1991; Merritt & Kok, 1995). The effect has since appeared in many studies of human visual processing, suggesting that the underlying mechanisms are not necessarily linked to the generic use of male pronouns common in many languages.
To our knowledge, one study contradicts the general existence of a male-bias in gender categorization: Johnston, Miles, and Macrae (2008) studied how women's gender categorization could vary as a function of ovulation and found that participants were more likely to misjudge male faces as female than make the opposite error, independent of ovulation stage. This finding is unlikely to stem from the exclusively female participants, as other studies found no difference in the male-bias of men and women (e.g., Schouten et al., 2010; Wild et al., 2000). Interestingly, Johnston and colleagues used the same argument to explain this female-bias as had been used to explain male-bias before by Wild and colleagues: That is, misidentifying either sex as the other might be associated with higher social costs, such as missing the opportunity for finding a mating partner. Thus, because this line of argument can be used to account for male-or female-bias, it seems likely that it is not a definitive explanation for either.
A different justification for the existence of male-bias is that male gender attribution serves as a default response because female gender is identified by means of the absence of male gender cues (Hess et al., 2009). Yet again, the reverse argument, namely that male-bias occurs because male is defined as the absence of female cues, has also been made (Intons-Peterson, 1988). Moreover, both arguments seem to ignore the possibility that male-bias could result from differences in the variances of physical gender cues and not their mere presence or absence: If the spread of the distribution were broader for male than for female gender cues, this could lead to a higher probability of perceiving an ambiguous cue as male (cf. Clifford, Mareschal, Otsuka, & Watson, in press). So far, however, this alternative explanation has not been empirically tested. In summary then, it seems that male-bias is prevalent, yet has so far been insufficiently described, understood, and explained. The current study addresses this divide, which might also help to overcome the challenges male-bias poses to the creation of standardized stimulus material (Todorov, Dotsch, Porter, Oosterhof, & Falvello, 2013).
One prominent gap in the explanation of the male-bias is that previous research has not answered the question whether it results from mere default responding (as argued by Wild et al., 2000) or from a biased default percept (Hess et al., 2009; Intons-Peterson, 1988).
One reason for this shortcoming is that participants are generally only provided bipolar choice options: male versus female. In one study (Gaetano et al., 2014), participants were asked to indicate yes or no whether a certain stimulus was male, and separately whether a stimulus was female, instead of the classical male-female categorization task.
As a consequence, participants were more likely to assign male gender when targeting male hands, as well as less likely to assign female gender when searching for female hands. Whereas such a tendency to identify male hands can be expected of participants who prefer the label male over female, such a preference is uninformative with respect to the yes or no responses to female targets. Male-bias, then, at least in response to silhouette hand shapes, appears to involve more than a preference for assigning male versus female labels. Nevertheless, simple binary key press responses force participants to arbitrarily opt for a male-favoring response when uncertain and cannot measure participants' confidence.
One method that allows more elaborate analyses of uncertainty would be to measure the trajectory of reaching movements towards the two response options (Quek & Finkbeiner, 2014), albeit this method does not give participants the possibility to decide for an intermediate judgment. Another way of taking uncertainty into consideration, one that is easier to implement than measurement of movement trajectories, would be to provide a third, neutral response option reflecting uncertainty. Even though an option alongside male and female (or yes and no) may not guarantee its selection when the participant is uncertain, and this limitation is unpacked further in the Discussion, the assertion that male-bias is an artifact of the choice between only a male and a female response becomes explicitly testable with the addition of a third option. The present study aimed to broaden our understanding of gender categorization and male-bias by allowing participants to use three response options: male, female and I don't know.
The fact that male-bias appears in a variety of perceptual tasks implies that gender categorization is a multi-modal process. Considering the privileged and fundamental role of gender in human interaction (Stangor et al., 1992), it seems likely that gender categorization is governed by perceptual as well as cognitive processes. Earlier research has shown that written person descriptions set in a business context promote a higher male-bias relative to educational or interpersonal contexts (Merritt & Kok, 1995), and that mothers' male-bias in choosing pronouns for child-like animals in picture books decreased if characters were shown in a social context with an adult (De Loache, Cassidy, & Carpenter, 1987). More recent research has mainly concentrated on gender categorization of a narrower set of stimuli, that is, of faces. Systematic influences on face gender categorization so far include emotional expressions (Bayet et al., 2015; Becker, Kenrick, Neuberg, Blackwell, & Smith, 2007; Hess et al., 2009), face race (Johnson, Freeman, et al., 2012), additional information in the form of a male or female name (Huart, Corneille, & Becquart, 2005), and even proprioceptive toughness experienced by participants (Slepian, Weisbuch, Rule, & Ambady, 2011). In addition to these stimulus-related aspects, social desirability (or social approval) may also affect gender categorization and contribute to context effects. Social desirability or social approval effects stem from a tendency of participants "to portray themselves in keeping with perceived cultural norms" or "the need to obtain a positive response in a testing situation" (Adams et al., 2005, p. 389). Within-subject comparisons in a gender categorization task might therefore reflect participants' inclination to respond to different conditions in a way they believe to be appropriate rather than their actual perceptions of gender.
Taking into account these potential effects of context information and social desirability, gender categorization can be conceptualized as a dynamic integrative process to which not only multiple levels of perception, but also higher levels of cognition and stereotypes contribute (Freeman & Ambady, 2011; Johnson, Lick, & Carpinella, 2015; Johnston et al., 2008). It is all the more surprising, then, that current research almost exclusively focuses on gender categorization of individual faces and has not re-examined the effects of gender-stereotypical contexts reported two or more decades ago (De Loache et al., 1987; Merritt & Kok, 1995). To that end, our study investigated the extent to which perceptions of gender from pictures of adult figures are altered by context information, specifically the presence or absence of a child, as well as the active involvement of the adult in helping a child.
Social desirability was explicitly considered as one factor potentially contributing to the presence or strength of a male-bias.
Hence, the goal of the present study was to determine whether: 1) the male-bias still arises for drawings of human figures devoid of specific gender cues, given that a third response option I don't know is provided, and 2) social context information (i.e., the presence of a child accompanying a target figure) can alter gender perception of visual stimuli. Our study used stimuli controlled with regard to all other content and lower-level stimulus properties, while varying the social context systematically. Participants were asked for their subjective gender attributions regarding adult figures shown in three different context conditions: alone, passively present next to a child, or actively helping a child. We expected that participants would show a male-bias at least for pictures of adults alone. Moreover, given that the presence of a child is a feminine-stereotyped context, we hypothesized that male-bias would be reduced when adults were depicted with a child. Last, we also expected that seeing the adult actively helping the child in a nurturing rather than dramatic context would further decrease the male-bias, similar to the educational context in Merritt and Kok's (1995) study and particularly because gender imbalances in care-giving remain large even today (e.g., Barone, 2011). A smaller control experiment served to take potential effects of presentation order and social desirability into consideration.
Ethics Approval

This study was given formal ethics approval by the responsible ethics committee.

Stimuli

Adult figures were drawn without explicit male or female gender cues: each had a short haircut, an average non-curvy figure, and wore wide pants and a t-shirt. A total of ten different situations were shown (e.g., an adult kneeling next to a table and chair). Three variations of each situation were derived, adult alone, social passive, and social helping, resulting in a total of 30 stimuli. The adult alone condition provided no social context information and served as a baseline measure. In the social passive condition, the adult figure was shown next to a child who acted without assistance, for example, grabbing a ball on a table. In the social helping condition the adult was depicted actively helping the child to reach a goal, for example, pushing a faraway ball towards the child. Slight body posture changes were necessary to convey the differences between the social passive and social helping conditions; otherwise the adult figures were identical across all conditions. Figure 1 shows pictures of all three conditions for one example situation. The complete stimulus set is available at https://osf.io/ijk8w/.
Figure 1. Stimuli for the three different conditions for one example situation. Pictures were generated in order to ensure maximum similarity between conditions. Arrows' labels describe changes made for generating pictures with differing social context. Dashed frames group context conditions according to blocks within which pictures were randomized.
Procedure and Design
The experiments reported here were the first of two in which participants took part during a regular university lecture. Each participant received one booklet and a separate consent form. The study was conducted in the language of the participants' degree course (either English or German) at a German higher education institution. Stimuli were presented on projection screens in lecture halls.
Each picture was shown for 6 s, preceded by a preparation slide shown for 5 s and followed by a slide prompting participants to indicate their decision in the provided booklets within 5 s (see Figure 2). Participants were free to change their answers, even though no specific instructions regarding changes were given, and only unambiguously indicated final answers were included in the data. The order of picture presentation was quasi-randomized within two blocks and was identical for all participants: one block contained the adult alone pictures (n = 10), the other contained the social passive and social helping pictures (n = 20), intermingled such that pictures of the same situation were separated by at least one other picture. In the main study (Experiment 1), pictures of adults alone were shown to participants first (n = 10), followed by pictures of child-accompanied adults (n = 20; 10 passive, 10 helping). We deliberately let participants rate the adult alone pictures first to collect baseline measures of gender attribution to a single figure without explicit gender cues. The exact order of stimuli is listed in the text file available at https://osf.io/ijk8w/. In a small control experiment (Experiment 2) the order of the two blocks was reversed to test for effects of presentation order. Moreover, this control experiment also served as a partial control for effects of social desirability and social approval that cannot be ruled out by means of within-subject comparisons in Experiment 1. If male-bias in the social context conditions were at the same level or even lower for participants in Experiment 2 than for participants in Experiment 1, social context must affect male-bias over and above any possible, albeit not explicitly measured, effects of social desirability.
Participants
A total of 276 undergraduate students took part in Experiment 1; the complete raw data are available at https://osf.io/m5ciw/. Only participants with normal or corrected-to-normal vision were included in our analyses, leading to the exclusion of 36 participants (missing information about vision impairments: N = 26; uncorrected vision impairments: N = 10). Another 12 participants were excluded because answers were missing for more than three pictures in one or more conditions. Thus, data of 228 participants (M age = 21.5 years, SD = 3.6, 25.4% men) were analyzed. The left column of Table 1 lists the complete population characteristics.

Preliminary analyses were conducted to investigate picture sequence effects within each of the two blocks. To that end, Pearson correlations were computed between trial number and the difference between male and female gender attributions as well as the proportion of I don't know responses.
As main analyses, we first calculated 95% between-subjects confidence intervals (CIs) around the mean difference in male-bias and the proportion of I don't know responses between picture categories. To illustrate our findings and substantiate the former results, we calculated 95% within-subject CIs around the gender attribution difference rates and around the proportions of I don't know responses per context condition. Within-subject CIs were calculated via the approach proposed by Cousineau (2005), with a correction by Morey (2008) and R code provided by Baguley (2012). To compare data between experiments, simple between-subjects 95% CIs were calculated. The degree of overlap between CIs formed the basis of analysis, as this conservative approach is considered to yield more information-rich interpretations of data than the dichotomous assessment of p-values (e.g., Cumming, 2012, 2014). For independent groups, 95% CIs whose extremes just touch indicate a meaningful difference even given a very conservative criterion (approx. p = .006), while the most common criterion for significance (p < .05) is approximated if 95% CIs for independent groups overlap by up to half of the (averaged) margin of error (Cumming & Finch, 2005). As a measure of effect size for differences between proportions, we calculated Cohen's d with Hedge's correction using the function cohen.d in the R package effsize.

[Figure 2. Time sequence for one example trial. All pictures were shown for 6 s, preceded by a 5 s preparation interval and followed by 5 s for responding. The order of pictures was pre-randomized within each block.]
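For readers who prefer a worked example, the within-subject CI construction (Cousineau normalization with the Morey correction) can be sketched as follows. The original analysis used the R code by Baguley (2012); this Python version is merely an illustrative re-implementation run on simulated data.

```python
import numpy as np
from scipy import stats

def within_subject_ci(data, alpha=0.05):
    """data: participants x conditions array; returns condition means and CI half-widths."""
    n, k = data.shape
    # Cousineau (2005): remove between-subject variability
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    # Morey (2008): correct the variance for the number of conditions
    var = normalized.var(axis=0, ddof=1) * k / (k - 1)
    sem = np.sqrt(var / n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return data.mean(axis=0), t_crit * sem

rng = np.random.default_rng(0)
scores = rng.normal([0.6, 0.4, 0.3], 0.2, size=(30, 3))  # 30 participants, 3 conditions
means, half_widths = within_subject_ci(scores)
print(means, half_widths)
```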
Second, the proportion of response alterations between context conditions for each situation was analyzed. Alterations were counted between adult alone and social passive as well as between social passive and social helping pictures within the same situation. These alterations were then sorted into four different categories: no change, change to female, change to male, and change to I don't know. Response alterations that included missing values were excluded from analyses. Finally, 95% CIs of the proportion of alterations falling into each category were calculated.
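A minimal sketch of this alteration coding, using made-up response pairs, could look like this; pairs containing a missing value are excluded, as described above.

```python
from collections import Counter

def categorize(prev, new):
    """Sort one response alteration into the four categories; None = excluded."""
    if prev is None or new is None:
        return None
    if prev == new:
        return "no change"
    return {"f": "change to female", "m": "change to male",
            "?": "change to I don't know"}[new]

# Hypothetical (previous, new) responses for the same situation across conditions
pairs = [("m", "f"), ("m", "m"), ("f", "?"), ("m", None), ("m", "f")]
counts = Counter(c for p, n in pairs if (c := categorize(p, n)) is not None)
total = sum(counts.values())
print({k: v / total for k, v in counts.items()})
```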
Social Context Influences Gender Attributions but Does Not Eliminate the Male-Bias
First, trial sequence within each block did not influence the difference between male and female gender attributions or the proportion of I don't know responses. Second, the 95% CIs of the difference between male and female responses lay meaningfully above zero in all three context conditions; a clear male-bias was thus evident throughout. As indicated by the CIs of differences not overlapping zero in the left half of Table 2, the presence of a child modulated the magnitude of the difference between the number of male and female responses: the likelihood with which male attributions were more frequent than female attributions was greater for pictures showing an adult alone than for both social passive and social helping pictures, by d = 1.07 and d = 1.25, respectively. Additionally, the male-bias was more strongly reduced in the social helping compared to the social passive context condition, albeit to a lesser degree, d = 0.21.
I don't know responses were rare (6.2% to 8.4%) and occurred with similar frequency in the social passive and social helping conditions (see Figure 3b and Table 2).

[Figure 3. Mean difference between the proportion of male and female responses (a, c) and mean proportion of I don't know responses (b, d) on the y axis for each context condition on the x axis in Experiment 1 (top) and the control Experiment 2 (bottom). Gray shading marks conditions that were shown in the first block of each experiment. Cat's eyes represent 95% within-subject CIs. Non-overlapping CIs indicate meaningful differences between conditions. Dots mark the magnitude of these differences; ••d > 1.00, •d > 0.50.]
Change in Responses Across Different Context Conditions
For each situation the adult figure was identical across all three context conditions, apart from slight, necessary changes in posture to convey the situational difference between social passive and social helping (see Figure 1). Thus, alterations of responses occurring above chance can be attributed solely to influences of contextual changes. The most frequent alterations were changes to female responses (see Figure 4). Changes to male (5.0% and 8.8%) or to I don't know (7.3% and 5.0%) were much rarer. As both of these alterations occurred with similarly very low frequency (see Figure 4), it is likely that they represent random rather than systematic response changes. These results illustrate that participants' decreased male-bias for pictures showing the adult along with a child (see Figure 3a,b) truly emanates from participants switching their initial gender attributions for a given adult figure to female due to changes in social context.
Experiment 2
In Experiment 1 we deliberately let participants rate the adult alone pictures first to enable collection of baseline measures of gender attribution to a single gender-ambiguous figure. In order to investigate the influence of a reversed order of presentation, the sequence of the stimulus blocks was altered for a small subset of participants (N = 21, see Table 1 for details). We hereby aimed to provide further evidence that social context was a major factor leading to systematic changes of gender attributions to female, and that these changes were not primarily driven by presentation sequence or social desirability alone.
Ethics Approval, Stimuli, Procedure and Design
The procedures of Experiments 1 and 2 were identical in all respects but block sequence. Participants in Experiment 2 were first shown the 20 social context pictures and afterwards the ten adult alone pictures; the picture sequence was identical and pre-randomized within blocks for both experiments. As in Experiment 1, trial sequence within a block did not influence male-bias (both |r| ≤ .29, both p ≥ .41) or the proportion of I don't know responses (both |r| ≤ .16, both p ≥ .50).
Participants
A total of 21 undergraduate students took part in Experiment 2. No participants had to be excluded according to the exclusion criteria that also applied to Experiment 1. The right column of Table 1 lists the complete population characteristics.
Social Context Influences Gender Attributions Independent of Block Sequence
A male-bias persisted throughout all three conditions in Experiment 2 (see Figure 3c), just as in Experiment 1. Meaningful differences in male-bias and proportions of I don't know responses for this experiment are displayed in the right half of Table 2. In line with the results of Experiment 1, participants in Experiment 2 exhibited a decreased male-bias for adult figures depicted as helping a child rather than being passively present next to him/her (see Figure 3c,d and Table 2).

[Figure 4. Proportion of response alterations within pictures of one situation in Experiment 1. Changes were counted and categorized between adult alone and social passive (a) as well as between social passive and social helping conditions (b). As in Figure 1, example pictures are framed according to condition.]
Comparison of Experiments 1 and 2
In order to directly compare results of Experiments 1 and 2, we selected a comparison sample from Experiment 1 matched to participants of Experiment 2 with regard to gender, handedness, field of study, nationality, and study language (see rightmost part of Table 1).
This comparison enables us to rule out that social context effects depend on block order. The corresponding 95% CIs used for comparison are shown in Table 3.
Male-biases for social context pictures were not meaningfully different between Experiments 1 and 2, as would be expected if sequence effects alone caused an effect (see Table 3). Male-bias was, however, moderately smaller in Experiment 2 than in Experiment 1 for social helping pictures, d = 0.68, albeit this difference can be considered meaningful only when applying a less strict criterion than non-overlapping 95% CIs. Thus, changes in the male-bias for adult figures depicted alone in Experiment 2 followed a different pattern than in Experiment 1, but confirmed that social context information influences gender attributions.
Change in Responses Mirrors Effects for Proportions of Responses
As for Experiment 1, we verified that differences in proportional responses resulted from changes of responses to identical figures by analyzing participants' response alterations (see Figure 5). In contrast to Experiment 1, there was no absolute tendency of participants in Experiment 2 to most often remain constant in their gender attributions. When comparing gender attributions for social passive and social helping pictures, we found stronger evidence that seeing an adult figure actively helping a child in a nurturing situation increases the likelihood that this figure is perceived as female (see Figure 5a). Given the higher proportion of female gender attributions to helping compared to passive adults, response alterations between social context and adult alone pictures fit the pattern expected if previous gender attributions influence later ones: participants switched to female rather than to male responses (see Figure 5b). Experiment 2 also confirmed that male-bias is even more reduced for pictures showing a helping adult figure compared to a passive one beside a child. Analyses of response alterations vindicate the assumption that the presence of a child leads to more female gender attributions compared to depictions of the same adult alone, even though overall male gender attributions prevail.

[Table 3 note: Male-bias refers to the difference between the proportions of male and female responses, such that higher values indicate a stronger male-bias. Meaningful differences between experiments, as indicated by non-overlapping CIs, are highlighted in bold. Note that under application of the less strict difference criterion of overlap by up to half of the (averaged) margin of error, male-bias for social helping pictures is smaller in Experiment 2 compared to Experiment 1.]
General Discussion
The study presented here is the first to systematically investigate the male-bias using drawings of human figures that do not provide explicit gender cues. It makes three major contributions to the field of social visual perception. First, we show that a male-bias is evident for human adult figures depicted without explicit gender cues. Second, we provide evidence that this bias persists even when the alternative I don't know is provided. Third, we demonstrate that the male-bias can be reduced, albeit not extinguished, by providing specific social context information. Whereas the first two findings provide novel and compelling evidence for a male-bias in visual perception, the third finding emphasizes the importance of information unrelated to the perceptual appearance of the to-be-categorized figure for gender categorization.
Pictures of adult figures devoid of gender cues are predominantly perceived as male
The tendency to attribute male gender to gender-ambiguous stimuli has been reported in the methods sections of many studies investigating gender categorization of visual stimuli (e.g., Davidenko, 2007; Hess et al., 2009; Johnson, Freeman, et al., 2012; Johnson, Iida, et al., 2012), while only sometimes being mentioned as a finding itself (Armann & Bülthoff, 2012; Gaetano et al., 2014; Schouten et al., 2010; Wild et al., 2000). Irrespective of its centrality within such studies, male-bias has been targeted exclusively via binary tasks that do not allow distinguishing response biases from perceptual biases, because a binary response format forces participants to opt for either male or female responses when uncertain. This study is the first to explicitly report male-bias in social visual perception while providing a response alternative suitable to capture uncertainty: participants attributed male more often than female gender to all of the adult figures, across three different conditions and two sequences of presentation (see Figure 3a,c), extending earlier reports of a male-bias for silhouette hand shapes (Gaetano et al., 2014). In line with previous studies of gender categorization (e.g., Schouten et al., 2010; Wild et al., 2000), the gender of the participants themselves did not affect the presence or strength of the male-bias.
Social context changes gender attributions to identical figures
The adult figures pictured in our stimuli were identical across social contexts, with only minor posture changes between the social passive and social helping conditions. The stimuli depicted everyday situations, and calculations of specific changes across conditions relied on comparisons within the variations of a situation across context conditions. We found that participants' male-bias was strongly influenced by the context in which a figure was shown. These results are in line with earlier reports of context effects on gender attribution (Merritt & Kok, 1995; see also De Loache et al., 1987), and are consistent with the notion that gender stereotypes still prevail today (Seem & Clark, 2006).
Findings from Experiment 2 ruled out a trivial explanation of these findings by means of presentation order, social desirability, or social approval alone (for a definition see, e.g., Adams et al., 2005, p. 389). Such effects may have caused participants in Experiment 1 to, for example, give more female ratings to the social context pictures because they believed such responses to be appropriate. It may seem counter-intuitive at first glance that participants in Experiment 2 showed the strongest male-bias for social passive pictures and not for the adults depicted alone, as in Experiment 1. If one supposes that participants formed a reference frame for gender attributions after early presentations and without knowing that the same adult figures would be presented alone, it may have been the case that these participants perceived or felt the need to make a larger difference in their gender attributions for social passive and social helping pictures.

[Figure 5. Proportion of response alterations within pictures of one situation in Experiment 2. Changes were counted and categorized between social passive and social helping (a), social passive and adult alone (b), as well as between social helping and adult alone conditions (c). As in Figure 1, example pictures are framed according to condition (black = adult alone, light gray = social passive, dark gray = social helping). Cat's eyes represent 95% CIs.]
The social passive condition was hence least affected, as participants likely created a reference frame from the first-shown social context block in Experiment 2. In other words, of the two social contexts, the social passive pictures would have appeared less female-stereotypical and so presumably are the ones to yield the higher male-bias. In summary, the specific social context always impacted gender categorization, but within the reference frame of the information provided so far.
We were able to rule out the possibility that the decrease in male-bias was driven by presentation order or social desirability alone. In sum, our study shows that gender attributions to identical adult figures bearing no explicit gender cues can be altered in a stereotype-consistent way by providing social contextual information, here the presence of a child or the act of nurturing or helping a child. It thereby extends and updates earlier findings that contextual information can alter perceived gender in a way consistent with stereotypes (De Loache et al., 1987; Merritt & Kok, 1995). This finding not only points out the influence of cognitive processes on gender categorization, it is also a demonstration of the pervasiveness of benevolent gender stereotypes in a well-educated, young population.
Limitations and future directions
The highly controlled design of our stimuli can be regarded as the study's greatest strength but also as a weakness. The black-and-white comic pictures are simpler and more abstract than naturalistic stimuli, hence, our findings should be generalized with caution. One advantage of these abstract, harmless stimuli is that they are well suited to study gender categorization in children, which could help elucidate the development of gender categorization. First results along this line indeed suggest that social context also modulates children's gender attributions (Brielmann & Stolarova, 2014b) in line with the very recent finding of an angry-male-bias for faces in a population of children aged 5-6 years (Bayet et al., 2015). Also, our stimuli represent a class of real-life encounters with believed-to-be gender-ambiguous visual information rather well: child media. Given that adults' use of gendered language may influence children's development of sexist thoughts (Leaper & Bigler, 2004, but see also Friedman, Leaper, & Bigler, 2007, for contradictory findings), our results still have implications for everyday life.
They directly point to a critical flaw in efforts to create gender-fair child media by providing protagonists that are devoid of gender cues.
Another limitation of our findings is that the contextual information provided comprised only children and the act of helping a child in nurturing, non-dramatic ways. It will be important to test whether other social contexts unrelated to children reduce the male-bias, or whether male rather than female gender attributions would be promoted by showing an act of helping that is physically taxing and might, hence, be more often seen in men (see Eagly, 1987, for a review on gender differences in helping behavior). Related to this point, the stimuli we employed might be considered predominantly male or female, depending on whether that gender is defined by the absence of clear cues for the other (as argued by, e.g., Hess et al., 2009; Intons-Peterson, 1988).
As our analyses focus on the changes in gender attribution between conditions, however, the above interpretations hold true, regardless of the default gender of the stimuli.
Finally, the possibility remains open that participants in our experiment hesitated to choose I don't know as an answer, perhaps due to an expectation that an adult figure should be male or female. Considering the academic context of the study, participants may have considered the I don't know option inappropriate or undesirable, despite instructions emphasizing that there were no right or wrong answers. Thus, this response option might not have strictly captured participants' uncertainty as intended. Follow-up studies from our lab include the label no gender as a response option to increase the likelihood that it is perceived as a viable response. Another option would be to directly measure participants' response efficiency (a method used, e.g., by Quek & Finkbeiner, 2014) to estimate the certainty of gender attributions, or to use a rating scale from female to male, allowing participants to rate perceived masculinity and femininity on a continuous scale. Alternatively, the expectation- versus perception-based nature of the male-bias may further be probed by manipulating the ratio between male and female figures shown. If male-biased outcomes are the result of response tendencies and not of skewed perceptions, then participants' bias scores should linearly track the ratio of female to male targets. For instance, participants inclined to respond male both (a) when in doubt and (b) irrespective of the relative frequency of male targets will appear most male-biased when male targets are rare. In contrast, consider the outcome if the male-bias were governed by perception: measures would be smaller when male targets are rare, because the increased frequency of female lures presumably primes perception during ambiguous trials. As previous studies have adopted the unbiased ratio (e.g., Becker et al., 2007: Study 2; Bruce et al., 1993; Gaetano et al., 2014; Wild et al., 2000) that closely matches that found among real human populations (Central Intelligence Agency, 2014), manipulating this ratio in future studies will give a more precise answer to the question of whether the male-bias can be accounted for by a response bias.
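To make the response-bias prediction concrete, the toy simulation below (our illustrative sketch, not part of the reported study; all parameters are invented) assumes that a fixed fraction of trials is perceptually ambiguous and that a response-biased observer answers male on every ambiguous trial. The measured bias, the proportion of male responses minus the proportion of male targets, then falls off linearly as male targets become more common and peaks when they are rare, which is the signature described above; under the perception account, the trend should instead reverse.

```python
# Toy model of the response-bias account: answer "male" on ambiguous trials.
import random

def simulated_male_bias(p_male_targets, p_ambiguous=0.3,
                        n_trials=100_000, seed=0):
    """Measured male-bias: P(respond male) minus P(target is male)."""
    rng = random.Random(seed)
    male_responses = 0
    for _ in range(n_trials):
        target_is_male = rng.random() < p_male_targets
        if rng.random() < p_ambiguous:
            male_responses += 1    # response bias: when in doubt, say "male"
        elif target_is_male:
            male_responses += 1    # unambiguous trials answered correctly
    return male_responses / n_trials - p_male_targets

for p in (0.2, 0.5, 0.8):          # proportion of male targets shown
    print(f"P(male target) = {p:.1f} -> bias = {simulated_male_bias(p):+.3f}")
```

With an ambiguous-trial rate of 0.3, the sketch yields biases of roughly +0.24, +0.15, and +0.06 for male-target proportions of 0.2, 0.5, and 0.8, i.e., a strictly linear decline.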
Conclusion
This study is the first to report an in-depth investigation of a male-bias in gender categorization of complete human figures. A robust male-bias was observed even though a neutral I don't know alternative was provided, rendering an explanation of the male-bias by means of a pure response bias unlikely. Despite the fact that drawings of adults were identical in all context conditions, the size of the male-bias decreased in two context variations including social interaction with a child. Participants were more likely to attribute female gender to adult figures shown along with a child, especially when the adult was actively helping a child. If such social context information was provided before the adult figure had been seen without a child, the higher likelihood of female gender attribution carried over to pictures providing no additional context information. Hence, we were able to show that gender categorization of visual stimuli that bear no explicit gender cues is influenced by contextual information in a gender-stereotype-confirming way, albeit not completely canceling a general male-bias.
Author note
This research was supported by the Zukunftskolleg of the University of Konstanz. We would like to thank all participants as well as the researchers assisting with data collection: Joana Chomakova, Marcus Kicia, and Jennifer Müller.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2007-06-29T00:00:00.000
|
7378350
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-6-83",
"pdf_hash": "a376b955c8140c7765a33efc8faa511d35ed402d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45545",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"sha1": "1dd49b5def89f1eccd1ddf22671b456bbd8c41b0",
"year": 2007
}
|
pes2o/s2orc
|
Understanding and improving access to prompt and effective malaria treatment and care in rural Tanzania: the ACCESS Programme
Background: Prompt access to effective treatment is central in the fight against malaria. However, a variety of interlinked factors at household and health system level influence access to timely and appropriate treatment and care. Furthermore, access may be influenced by global and national health policies. As a consequence, many malaria episodes in highly endemic countries are not treated appropriately. Project: The ACCESS Programme aims at understanding and improving access to prompt and effective malaria treatment and care in a rural Tanzanian setting. The programme's strategy is based on a set of integrated interventions, including social marketing for improved care seeking at community level as well as strengthening of quality of care at health facilities. This is complemented by a project that aims to improve the performance of drug stores. The interventions are accompanied by a comprehensive set of monitoring and evaluation activities measuring the programme's performance and (health) impact. Baseline data demonstrated heterogeneity in the availability of malaria treatment, unavailability of medicines and treatment providers in certain areas as well as quality problems with regard to drugs and services. Conclusion: The ACCESS Programme is a combination of multiple complementary interventions with a strong evaluation component. With this approach, ACCESS aims to contribute to the development of a more comprehensive access framework and to inform and support public health professionals and policy-makers in the delivery of improved health services.
Background
The impact of malaria on health and local economies in sub-Saharan Africa is staggering. Between one and three million people die each year, mostly young children under five years of age. Deaths and illness contribute to a vicious circle of ill-health and poverty [1,2]. In recent years, the fight against malaria has gained an increased level of attention from governments of affected African states as well as from international donor agencies. African heads of state agreed in the Abuja Declaration on a concerted effort to reduce the burden of malaria on the continent and endorsed the ambitious goal of the Roll Back Malaria Partnership of halving the number of malaria deaths by the year 2010 [3]. Among the malaria control strategies promoted internationally and adopted by most endemic African countries, prompt access to effective treatment especially for young children and pregnant women features prominently [2].
The need for prompt and effective treatment to prevent progression to severe disease and death essentially raises two important issues: first, the choice of a safe and efficacious drug and second, questions of how to optimize equitable access to rationally prescribed treatment.
In order to address the first point, artemisinin-based combination therapies (ACT) have been advocated as treatment of choice in Africa [4] in an effort to improve on drug efficacy following the increasing failure rate of a number of other drugs. Tanzania adopted this policy in 2004 and implemented it at the end of 2006 [5]. However, the choice of an efficacious drug does not necessarily directly result in improved effectiveness, and issues related to safety, use in pregnancy, and cost are also still being discussed. Yet, it would go beyond the scope of this paper to thoroughly debate all issues related to a specific drug.
With regard to the second point, it is widely acknowledged that access to quality treatment is insufficient in many settings. The poorest people often have least access to effective treatment [6] and the underlying causes of this situation are increasingly debated. On a macro-level, the discussion on access to treatment often focuses on the development of new drugs [7] and global affordability issues, including pricing and patenting of drugs. International initiatives, such as Medicines for Malaria Venture (MMV) [8], are increasingly financing and speeding up the development and introduction of new efficacious antimalarials. At a local community level, however, the situation is a lot more complex, and availability and affordability of drugs are only a few among a number of factors influencing prompt and effective treatment [9,10]. In many developing countries, weak health systems as well as lack of equipment and qualified staff lead to incorrect diagnosis and treatment [11,12]. Physical access may be impeded by long distances to the nearest point of care, inadequate logistics or inability to pay for secondary costs such as transport [13]. Further, malaria is a common and socially well accepted illness in endemic countries and its potential severity is often underestimated. Insufficient knowledge of the appropriate treatment or an understanding of the illness that differs from the bio-medical explanation can lead to the use of alternative treatment sources and non-adherence to recommended regimens [14,15].
Several initiatives have attempted to address access questions on a local level, either by strengthening home-based management [16], by improving the involvement of commercial drug providers [17] or through a general improvement of health system performance. Information and education of caretakers and care providers has been useful in improving malaria case management and compliance at home and in drug selling shops [17][18][19]. Several models for improving case-management in health facilities have been tested and combined approaches were most likely to have a (sustainable) impact [20]. In any case, considering the complexity of the issues involved it seems obvious that there is no such thing as a single "magic bullet" approach to solve the problem. What is needed is a comprehensive concept addressing several of the access dimensions, ranging from availability and affordability to accessibility, acceptability and quality of care. This paper presents a programme that was developed to understand and improve comprehensively access to appropriate malaria treatment in a highly malaria-endemic rural area of south-eastern Tanzania.
The aim of the ACCESS Programme is to investigate factors influencing access to malaria treatment in rural Tanzania in order to develop a set of interventions addressing the main obstacles to access. These interventions are then thoroughly evaluated. The focus is on children below five years of age and pregnant women, who are the most vulnerable groups in this holo-endemic setting in terms of the detrimental consequences of malaria [21,22]. This paper presents a general overview of the ACCESS Programme, while future reports will provide detailed study results of the major evaluation and monitoring components.
The Kilombero Valley is delimited by the Udzungwa Mountains in the north and the Mahenge Mountains in the south. Parts of Ulanga's southern and south-eastern areas, as well as Kilombero's extreme east, are part of the Selous Game Reserve. The Kilombero district is connected to the Tanzania-Zambia highway through a mostly unpaved but well maintained road. For vehicles, the only connection to the Ulanga district is made by a motorized ferry over the Kilombero River.
In 2002, there were 517,000 people living in the 109 villages of the two districts. Ifakara, the administrative capital of Kilombero, is the major settlement in the valley with a population of approximately 46,000. Ulanga's capital Mahenge is smaller with 7,300 inhabitants [23]. In the early 1970s, the national social engineering project to build communal villages ("vijiji vya ujamaa") brought the valley's scattered inhabitants to more organised village centres along the edges of the valley [24]. Most people there rely on subsistence farming for their livelihoods. Labour intensive rice farming on distant fields in the floodplain forces many families to move to their farming sites (shamba in Kiswahili) during the cultivation period. In the fields people stay in improvised "shamba huts" for up to six or more months. Rice, maize and cassava are the main cash crops. The main agricultural exports from both districts are rice, timber, charcoal and some fish. Since the 1980s, an increasing number of nomadic Maasai and Sukuma pastoralists had moved to the valley with large cattle herds until a government directive ordered them in April 2006 to move to other parts of the country or to reduce their herds in order to preserve the Kilombero wetland ecosystem [25,26].
The climatic and ecological conditions in the floodplain are favourable for mosquito breeding. Malaria transmission in the valley is high and perennial. Recent work has confirmed entomological inoculation rates (EIR) of 350 infective bites per person per year, despite high mosquito net usage rates of 75% (G. Killeen, personal communication). At the level of health services, malaria is the most frequent diagnosis in outpatients in rural health facilities.
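For orientation on this measure (standard background, not spelled out in the paper): the annual EIR is conventionally the product of the human-biting rate and the sporozoite rate, accumulated over a year,

```latex
\mathrm{EIR}_{\mathrm{annual}} = 365 \times \mathit{ma} \times s ,
```

where ma is the number of mosquito bites per person per night and s is the fraction of caught mosquitoes carrying sporozoites. An EIR of 350 thus corresponds to roughly one infective bite per person per day.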
A number of malaria control interventions have been tested and/or implemented in the area by the Ifakara Health Research and Development Centre [27] in collaboration with the Swiss Tropical Institute [28]. The most extensive operation was the large-scale introduction of insecticide-treated nets (ITN) in the frame of the KINET project [29]. Today, promotion of ITN use through social marketing is ongoing in the frame of the national ITN programme.
Monitoring and evaluation (M&E) of the ACCESS Programme is carried out in the area of the local Demographic Surveillance System (DSS) [30]. The DSS serves as a comprehensive epidemiological framework for studies on the project's impact. In the absence of a vital registration system, DSS field workers routinely record births, deaths, migrations and socio-economic indicators for every household in a defined geographical area of 2,400 km² (Figure 1). Each household is visited every four months. The area comprises 12 villages in Ulanga and 13 in Kilombero. In mid-2004, the total DSS population was 74,200 (Ulanga: 31,800; Kilombero: 42,400) in 17,050 households. The DSS does not include Ifakara town, where ACCESS M&E activities are also implemented.
Main interventions
ACCESS interventions apply two main approaches: 1. Creating demand for appropriate malaria diagnosis and treatment in the community through a social marketing approach.
2. Strengthening the supply of quality malaria case-management at health facilities and drug shops through training, quality management, improved supportive supervision and new diagnostics.
The main areas of intervention are described below and summarized in Figure 2 (status at the end of 2006). Activities may change in the future as experience is gathered and analyzed by the programme's M&E activities.
Intervention area 1: Behaviour change campaigns for prompt and appropriate health care seeking

Sensitization of community leaders

As a first preparatory step in implementing community activities, local community leaders (political and religious leaders, leaders of social groups, non-governmental organizations and other key opinion leaders) were informed about ACCESS objectives and activities to gain their support and collaboration. The meetings also provided space for participants to share their views and concerns on programme-related issues.
Social marketing
A social marketing approach was chosen to increase knowledge and awareness of malaria and to promote prompt and appropriate treatment seeking from reliable sources. The design of the behaviour change communication campaign was based on experiences from projects such as KINET (Kilombero Net Project) [29,31], TEHIP (Tanzania Essential Health Interventions Project) [32] and IMPACT-Tz (Interdisciplinary Monitoring Project for Antimalarial Combination Therapy in Tanzania), as well as the national social marketing for ITNs and the results from exploratory focus-group discussions. In the first year, implementation was done in the DSS area and Ifakara only, followed by a step-wise scaling-up to the other villages of Kilombero and Ulanga.
The main target audience of the campaign are mothers and caretakers of children under five years of age and pregnant women. However, other household members and the general population are secondary audiences in order to achieve homogeneity of understanding in the population.
Messages stress the importance of prompt recognition of malaria symptoms and immediate correct treatment with the recommended first-line drug (sulphadoxine-pyrimethamine [SP] until end 2006). Health facilities and licensed drug stores (pharmacies, part II drug stores [duka la dawa baridi] and Accredited Drug Dispensing Outlets [ADDO; duka la dawa muhimu]) are promoted as sources of proper treatment and advice. Prevention methods, such as the use of ITNs and Intermittent Preventive Treatment in pregnancy (IPTp), are also advocated. Finally, one set of messages highlights high fever with convulsions (locally known as "degedege") as a sign of severe malaria that can and should be treated at health facilities (rather than by traditional healers) [14,31]. ACCESS messages are in line with malaria-related messages on key family practices of the community-based Integrated Management of Childhood Illness (IMCI) [33].
Communication channels and materials to disseminate behaviour change messages were developed to reach a poor rural population in an efficient and cost-effective way. Road shows are the main vehicle for the campaign. The platform of a truck is used as a mobile stage for a health promotion team (Figure 3). The shows are divided into four parts:
1. Dancing competition to attract a large audience
2. Comedies and role plays portraying appropriate treatment seeking and consequences of delaying treatment
3. Public lecture on malaria transmission, signs and symptoms, dangers of malaria for young children and pregnant women, and prevention and correct treatment
4. Cinema show featuring stories on prompt and effective malaria treatment
Question-and-answer sessions at the end of each part allow interaction with the audience and distribution of promotion materials (e.g. stickers, leaflets, T-shirts). Permanent billboards were erected in major villages along the main road and posters affixed in public places (Figure 4). All materials carry campaign-related messages and the ACCESS logo (Figure 5). Materials were locally designed and pre-tested in the community.
[Figure 3: ACCESS road show with social marketing truck]
Remote villages which are inaccessible with the truck are reached by a small 4WD vehicle branded with behaviour change slogans. It transports a mobile video unit and rooftop speakers to air behaviour change radio spots.
Special campaigns in Mother and Child Health (MCH) clinics
Special campaigns were implemented in MCH clinics. They were targeted especially at pregnant women and mothers of young children who may not attend road shows if they overlap with the women's duties in the household. During special sessions, ACCESS health promoters and MCH clinic staff informed mothers on malaria, its prevention and its proper treatment. The benefits of malaria prevention using ITNs and IPTp were particularly emphasized.
Improved access for households spending the cultivation period away from home: The shamba component
The main farming season, when many families stay in their field-huts, overlaps with the high malaria transmission season. Furthermore it represents a period of high vulnerability as it coincides with peak food insecurity, labour stress and difficult access to health services, family support and child care time. A study was undertaken within the programme to investigate the specific risks posed by staying in the fields. Results from this study may lead to the design of a "shamba intervention" if specific measures are deemed necessary.
Intervention area 2: Improved quality of care in health facilities
Trustworthy health care services of good quality are a core element for the delivery of effective diagnosis and treatment for malaria. As a result of the social marketing, the demand for quality services is expected to increase. In order to meet this demand, the health facility staff must be in the position and willing to deliver a good quality of care. The programme aims to improve quality of care with a focus on the following areas:
• Correct diagnosis through the proper use of the IMCI algorithm or with improved laboratory diagnosis
• Rational prescription of antimalarials, antipyretics and other drugs
• Appropriate advice on prescribed treatment and malaria prevention
Key activities of this component include initial refresher training for health facility staff on malaria treatment, followed by the strengthening of routine supportive supervision and the implementation of a quality management scheme in all health facilities. Training was based on IMCI algorithms for diagnosis and treatment which have proven (cost-)effective in improving quality and efficiency of child health care in rural Tanzania [34]. A protocol for the refresher training was developed in close collaboration with the Council Health Management Teams (CHMT) of Kilombero and Ulanga. The training was targeted at clinical staff, lab technicians, and medical aids of public and private health facilities. It was carried out by the CHMT, appointed trainers and ACCESS staff with financial resources from the district and ACCESS.
[Figure 5: ACCESS Programme logo]
The follow-up with routine supportive supervision will not focus on malaria only. It is planned to implement a comprehensive package of activities aiming to improve performance management and the quality of service delivery. The Quality Improvement and Recognition Initiative (QIRI) was originally developed by USAID and adapted and implemented in Tanzania. QIRI offers an integrated approach for the evaluation of quality of care combined with a strategy to establish the root causes of performance gaps and to develop implementable strategies to address them. A central element of this component is capacity building for joint supportive supervision and quality management, conducted by the regional and district health management teams together with community representatives. It is the aim of the programme to integrate quality management into the health supervision activities in the decentralized health system. Acknowledging the importance of patient-provider relationships and trust in health care [37,38], QIRI is designed to pay particular attention to the patients' perception of the health services.
In most malaria endemic areas, diagnosis of malaria relies mainly on clinical signs and symptoms, especially in low level health facilities. In Tanzania, only hospitals and health centres are expected to have the possibility of performing microscopy for malaria diagnosis [39], while dispensaries rely on a syndromic IMCI approach [40]. In the programme area, the malaria-attributable fraction estimated using the method of Smith et al. [41] showed that only 40% of all fever episodes were likely to be due to malaria (S. P. Kachur and S. Abdulla, personal communication). Absence of lab diagnosis may result in misdiagnoses and irrational drug-prescription [11,42].
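As background on how such an attributable fraction can be read: the method of Smith et al. [41] models fever risk as a function of parasite density and is more involved than can be reproduced here; the classical Levin formula below is a deliberately simplified stand-in that conveys the idea of apportioning fever episodes to malaria. The sketch and all numbers in it are ours and purely illustrative.

```python
# Simplified illustration only: Levin's population attributable fraction.
# Smith et al. [41] actually model fever risk as a continuous function of
# parasite density; this sketch just conveys the apportioning idea.

def levin_paf(p_exposed: float, relative_risk: float) -> float:
    """Fraction of fever episodes attributable to the exposure (parasitaemia)."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: 60% parasite prevalence, relative risk of fever of 2.1.
print(f"PAF = {levin_paf(0.60, 2.1):.2f}")   # -> 0.40, i.e. ~40% of fevers
```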
A promising alternative to microscopy is rapid diagnostic tests (RDTs) based on the detection of Plasmodium antigens. However, there is little experience with RDTs in sub-Saharan Africa, although they are widely used in Asia and Latin America [43]. ACCESS plans to introduce RDTs in three dispensaries so far lacking diagnostic tools for malaria. To compare the feasibility and value of RDTs versus conventional diagnostics, high-quality microscopy will be assured in two health centres. The efficacy, effectiveness and cost-effectiveness of this intervention will be evaluated.
Intervention area 3: Improved malaria case management in drug selling shops
Self-treatment at home is often the first and quickest response to a malaria episode [16,44,45]. In many settings, the private drug retail sector plays an important role in providing drugs for home-based management of fever or malaria. On the other hand, drug shops often leave patients with sub-standard malaria drugs and poor prescribing practices [46], leading to ineffective treatment and increasing drug resistance. Experience in Kenya showed that training private drug retailers can considerably improve the services they offer [17]. However, Tanzanian drug regulations do not allow general shops to sell the first-line antimalarial drugs (SP; or ACT since end 2006), even though the national malaria control strategy mentions explicitly the availability of antimalarials on household-level [5,47,48].
As a result of this ambiguous policy, the initial plan to train general shop keepers had to be withdrawn and other avenues explored. As an alternative, the programme supports the introduction of Accredited Drug Dispensing Outlets (ADDO; duka la dawa muhimu in Kiswahili) in the two districts. The ADDO project is being implemented by the Tanzania Food and Drugs Authority (TFDA) and Management Sciences for Health (MSH). It aims to improve access to affordable, quality medicines and pharmaceutical services in drug retail outlets in rural and peri-urban areas where there are few or no registered pharmacies [49,50]. The main components of the ADDO project are activities to change the behaviour of shop owners and dispensing staff through the provision of education, incentives and regulatory coercion. It also entails efforts to positively affect client demand and expectation of quality products and services.
ADDOs are allowed to dispense a limited range of prescription-only medicines that are found on the national essential drugs list. Ideally, at least one ACT should be available through this channel, most logically the one recommended as first-line treatment in the country (currently artemether/lumefantrine [ALu], brand name Coartem®). For the districts of Kilombero and Ulanga, ACCESS successfully negotiated the introduction of highly subsidized ALu in ADDOs. The ACCESS social marketing campaign promotes ADDOs as a source of quality malaria treatment.
Monitoring and evaluation
The M&E activities of ACCESS are based on three key components: (1) A semi-quantitative analysis aiming at a better understanding of factors influencing access to malaria treatment in order to develop an improved access framework, (2) process monitoring in order to understand how interventions operate, and (3) a thorough evaluation of the programme's impact on treatment seeking, quality of case-management and most importantly on the health of the population. An overview of the different evaluation activities in relation to key work areas is given in Table 1.
Overall and health impact evaluation is based on a plausibility assessment of the programme's impact within a before-after design, i.e. a historical control group [51,52].
In an attempt to control for possible confounders, all other malaria control activities in both districts as well as other relevant parameters such as temperature and rainfall are closely monitored. Longitudinal data will also be compared to trends observed in Demographic and Health Surveys (DHS). A basic assumption is that the malaria transmission and other relevant epidemiological parameters remain largely unchanged during the period of observation with the exception of the factors that are monitored in the frame of the programme.
Health impact assessment through the DSS

The health impact assessment will be based on data collected through the DSS. The main outcome indicators are: overall and malaria-specific mortality, reported fever incidence rates in children and adults, as well as reported degedege (convulsion) rates in children. Furthermore, the DSS will be used as a sampling frame for representative community-based epidemiological studies.
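The indicators above are simple event counts over person-time. As a minimal sketch of the arithmetic (the counts below are invented; the DSS derives person-years from each resident's recorded entry and exit dates):

```python
# Rate per 1,000 person-years observed (PYO), as used for DSS indicators
# such as the crude death rate. Event and person-year figures are invented.

def rate_per_1000_pyo(events: int, person_years: float) -> float:
    return 1000.0 * events / person_years

# e.g., a hypothetical 860 deaths over 74,200 person-years
print(f"crude death rate = {rate_per_1000_pyo(860, 74_200):.1f}/1,000 PYO")
```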
Cause-specific mortality is calculated on the basis of "verbal autopsies". Since 2002, specially-trained DSS supervisors elicit information on causes of deaths by interviewing bereaved relatives about the circumstances of the death, the signs and symptoms observed during the illness leading to the death, and the action taken. This information is coded to give likely causes of death in broad categories.
The socio-economic status (SES) of households is assessed once a year on the basis of a list of household assets. This allows the results of DSS data and other community-based studies to be stratified by wealth quintiles, which is essential in order to consider equity dimensions in the analysis. The aim of the ACCESS Programme is to contribute to an equitable reduction of (malaria-related) mortality.
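The paper does not specify how the asset list is turned into an index; a common approach in DSS settings, shown here only as an assumed illustration with invented data, is a first-principal-component asset index cut into quintiles:

```python
# Assumed illustration: PCA-based wealth index from 0/1 asset ownership.
import numpy as np

def wealth_quintiles(assets: np.ndarray) -> np.ndarray:
    """assets: (households x items) 0/1 ownership matrix -> quintiles 1..5."""
    z = (assets - assets.mean(axis=0)) / (assets.std(axis=0) + 1e-9)
    # First principal component via SVD; its scores serve as the wealth index.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    index = z @ vt[0]
    # The component's sign is arbitrary: orient so more assets -> higher score.
    if np.corrcoef(index, assets.sum(axis=1))[0, 1] < 0:
        index = -index
    cuts = np.percentile(index, [20, 40, 60, 80])
    return np.digitize(index, cuts) + 1      # 1 = poorest quintile

rng = np.random.default_rng(0)
toy_assets = rng.integers(0, 2, size=(1000, 8))        # invented asset data
print(np.bincount(wealth_quintiles(toy_assets))[1:])   # ~200 households each
```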
Exploratory focus-group discussions
Initial exploratory focus-group discussions (FGD) with parents and caretakers of young children informed the programme on knowledge, attitudes and practices related to malaria treatment. A total of 88 people participated in the ten FGDs, four of which were done with men, six with women. Main issues that came up during FGDs are listed in the results section.
Community-based surveys on treatment-seeking for fever

Repeated cross-sectional community surveys are the programme's main tool to assess changes in care-seeking behaviour for fever episodes. Explanatory Model Interview Catalogues (EMIC) are used to simultaneously collect cultural epidemiological qualitative and quantitative data on patterns of distress, perceived causes and help seeking [53]. A baseline survey was carried out in 2004 in the DSS villages and Ifakara town. Interviews were done with 80 caretakers of children and with 68 adults who experienced a fever episode in the preceding two weeks. Only people who had recovered from their illness by the day of the interview were included, while others were advised to consult a health professional. The same methodology will be applied in follow-up surveys every two years. It is expected that over the course of the programme, the number of appropriately treated fever episodes will increase, with more people shifting to qualified care providers.
The EMIC was also used in a longitudinal study exploring treatment seeking during the cultivation period, when many people live in their shamba huts. About 100 households owning a temporary home in the fields were randomly sampled from DSS villages and followed up during one farming season. Each household was visited once a month by a team of field workers who recorded each family member's stay in the field, the occurrence of fever episodes and other indicators. In case of a fever episode in the preceding two weeks, an EMIC interview was conducted.
A household survey in a larger sample of 3,654 persons carried out in 2006 by a partner project (IMPACT-Tz) in the study area was used to assess uptake of social marketing messages by the population.
Case-control study on degedege
A case-control study on degedege (convulsions) was nested in the DSS data collection. The study compares treatment seeking patterns and self-observed signs and symptoms for fatal ("cases") and non-fatal ("controls") degedege episodes in children. Degedege has commonly been treated by traditional healers rather than with modern medicine [14]. However, this may have changed over time. EMIC questionnaires and extended verbal autopsies (VA) were used as data collection tools for non-fatal and fatal cases respectively. Non-fatal degedege cases were reported routinely by DSS field-workers. A random sample was then followed up every two weeks for an EMIC interview. This study is expected to provide information on observed "danger signs" and factors related to treatment seeking and leading to death or recovery. It will add an important aspect to the existing knowledge on management of fatal malaria including cases of convulsions as described by de Savigny et al. [54].
Quality of care at health facilities
Initial assessment of quality of care is based on yearly surveys in a sample of public and private/mission health facilities. Tools were adapted from the multi-country evaluation of IMCI [55]. Activities include patient-provider observations, as well as staff and patient exit interviews. Furthermore, laboratory equipment is checked for functionality and drug stocks are recorded. With the implementation of the QIRI tools for supportive supervision in 2007 (as outlined above), evaluation of quality of care will be done largely through QIRI, which will assess quality of care twice annually in all health facilities. Results will then feed directly into activities aimed at improving quality of care. It is expected that the programme's interventions will lead to improved malaria case-management and more rational prescription of antimalarial drugs.
Health facility attendance and availability of antimalarials
Health facility attendance data and frequency of specific diagnoses are routinely recorded by health facility staff for the health management information system (HMIS). This information is collected bi-monthly from all private and public health facilities in Ifakara (one public, one private) and the DSS area (10 public, five private) by ACCESS staff. Together with the DSS fever incidence data, which provide a community estimate, this information will make it possible to calculate the proportion of fever cases diagnosed as malaria and treated at health facilities. This proportion is expected to rise over the course of the project. In the frame of this activity, availability of antimalarial drugs is monitored in all health facilities in the DSS.
Quality of antimalarial drugs
In 2005, a study was designed to get an overview of the quality of antimalarials available in the programme's study area. For this purpose, all antimalarial selling points in the 25 DSS villages as well as in Ifakara were visited, including general shops, drug stores, pharmacies and health facilities. Samples of SP, amodiaquine and quinine products were purchased and the amount of active ingredient quantified according to the United States Pharmacopoeia (USP 24) using previously set up high-performance liquid chromatography (HPLC) methods [56,57]. In accordance with USP standards, products with less than 90% and more than 110% of the labelled amount of active ingredient were counted as failures.
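The acceptance rule just described reduces to a simple interval check on the assayed content; a minimal sketch (sample values invented):

```python
# USP 24 rule as described above: a sample fails if the measured active
# ingredient falls below 90% or above 110% of the label claim.

def usp_verdict(measured_mg: float, label_mg: float) -> str:
    pct = 100.0 * measured_mg / label_mg
    return "pass" if 90.0 <= pct <= 110.0 else "fail"

for label, measured in [(500, 525), (500, 430), (500, 560)]:  # invented assays
    print(f"label {label} mg, measured {measured} mg: "
          f"{usp_verdict(measured, label)}")
```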
Quality of services at shops selling drugs
Based on the methodology developed by Goodman et al. [58] for monitoring antimalarial drug availability, the DSS villages and Ifakara are searched annually for drug-selling shops and the shopkeepers are interviewed. A structured questionnaire is used to record current drug stock and shopkeepers' knowledge of malaria treatment. Simultaneously, the shops' locations are recorded with hand-held GPS devices. This approach allows monitoring the shopkeepers' knowledge of malaria treatment, the services and drugs offered, as well as the coverage of shops stocking drugs as a proxy for availability and accessibility.
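The GPS fixes lend themselves to simple accessibility proxies. One hedged illustration (not a computation described in the paper; all coordinates invented) is the great-circle distance from a household to its nearest drug-selling shop:

```python
# Haversine great-circle distance, used here as a toy accessibility proxy:
# distance (km) from one household to the nearest of two hypothetical shops.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

shops = [(-8.13, 36.68), (-8.30, 36.40)]   # invented shop GPS fixes
household = (-8.20, 36.60)                 # invented household location
print(min(haversine_km(*household, *shop) for shop in shops))
```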
In a second approach, "mystery shoppers" (simulated clients) buy drugs in local commercial outlets. For this purpose, local villagers are hired and instructed to go to a nearby shop and ask for treatment for fever/malaria on the basis of standard case scenarios.
Costing of implementation activities
A financial analysis of the intervention costs will be performed after the interventions have been running for at least two years. A cost-effectiveness analysis will combine measures of effectiveness (see under health impact assessment) and financial costs. For this purpose, a clear distinction is maintained at the level of IHRDC administration between the cost related to interventions and the cost related to research, monitoring and evaluation.
Assessing the impact of the introduction of Artemether-Lumefantrine
Tools developed by the ACCESS Programme will be used to monitor prospectively the health impact of the switch in first-line treatment for malaria from SP to Artemether-Lumefantrine. This assessment will be done in the frame of a related but separate project, called "Artemether-Lumefantrine in vulnerable patients -exploring health impact" (ALIVE). It will include monitoring changes in child mortality trends as well as annual community-based cross-sectional studies and an in-depth compliance study in 500 children.
Community leaders' sensitization and social marketing
Community activities started with the sensitization of community leaders followed by road shows in the 25 villages of the DSS area and in Ifakara town in 2004. The 2005 round covered an additional 56 non-DSS villages in both districts (59%) and by the end of 2006, a total of 114 (79%) villages were reached with both activities (Figure 6). On average, 40 community leaders per village (90% of those invited) attended the sensitization meetings (a total of over 5,000 in three years) and shared their views and concerns. Road shows were generally very well attended. Turnout varied considerably depending on the size of the village, ranging from a few hundred to a few thousand people during big shows such as in Ifakara. In a cross-sectional survey done in 2006 in the DSS area, 39% (95% CI 37.2 to 40.4) of the people mentioned that they had attended an ACCESS road show. Men were 2.2 (95% CI 1.9 to 2.5) times more likely to have attended such a show than women (P < 0.001) and younger people were more often exposed than older ones (Figure 7). Further, many people had been in contact with or seen promotion materials such as t-shirts and caps (48%), a vehicle displaying ACCESS slogans (46%), or billboards (35%). Community leaders' sensitization meetings reached 16% of the interviewed.
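Intervals like the "39% (95% CI 37.2 to 40.4)" above are of the kind a normal-approximation (Wald) interval for a proportion produces. The sketch below assumes the 3,654-person survey mentioned in the M&E section as the denominator; the authors' exact estimator and any design-effect adjustment are not reported, so the interval reproduced here is close but not identical:

```python
# Wald confidence interval for a simple random-sample proportion.
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# Hypothetical: 1,425 of 3,654 respondents report attending a road show.
p, lo, hi = wald_ci(1425, 3654)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 39.0% (95% CI 37.4% to 40.6%)
```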
MCH campaigns
So far, 18 special sessions for pregnant women have been carried out in MCH clinics in the DSS area, one in Ifakara and 28 in non-DSS villages of both districts. In the DSS alone, about 4,700 mothers attended the sessions, representing approximately 28% of all women of reproductive age.
Health facility intervention
Between November 2004 and April 2005, several refresher training sessions were organised in collaboration with the CHMTs of Kilombero and Ulanga. In Ulanga, 100 (89% of total) clinicians, nurses, medical aids and technicians from rural dispensaries and health centres attended the trainings. In Kilombero, 39 (93% of total) clinicians were trained. The tools for supportive supervision and quality management are currently being developed.
Accredited Drug Dispensing Outlets
The ADDO programme was targeted at the 32 existing drug stores in Ulanga and 93 in Kilombero District [59,60].
Monitoring & evaluation
The 2004 baseline population of the DSS area was 74,200 people, with a crude death rate of 11.6/1,000 person-years observed (PYO). The probability of dying before reaching the age of one year was 63.9/1,000 PYO and before the age of five 109.5/1,000 PYO. The risk of a fever episode ("homa kali") in the two weeks preceding the interview was estimated at 144/1,000 people between May and August and 119/1,000 between September and December. The risk of a degedege episode in the previous two weeks was 12/1,000 people between September and December.

[Figure: Mosquito net coverage during the main cultivation period (and peak malaria transmission season)]

[Figure 7: Coverage of social marketing campaign in 25 DSS villages: proportion of the population that has attended an ACCESS road show by age group]

Focus-group discussions revealed mainly the following malaria-related concerns:
• SP had a bad reputation in Tanzania following media coverage on severe side-effects (Stevens-Johnson syndrome) at the time of its introduction as first-line treatment in 2001 [62]. Some people feared SP although they or their children had never experienced severe side-effects, which are known to be rare [63]. People were confused about different SP brand names.
• Modern medical treatment was preferred over traditional medicine and children were treated more quickly than adults. Drug shops were often more conveniently reachable and adults would often buy paracetamol from a shop as first treatment for a fever episode.
• A majority of the people failed to resort to sources of treatment that they would otherwise prefer, such as a hospital. Factors such as cost, absence of trusted medical professionals, unavailability of diagnostic instruments, long waiting times, and distance were mentioned as important obstacles.

These findings, together with national treatment guidelines and information from other projects and surveys, were used as a basis for developing the behaviour change campaign.
Quality tests of antimalarials (SP, amodiaquine and quinine) purchased from health facilities and shops in 2005 confirmed the existence of sub-standard drugs in the study area. In total 25% of the collected tablet samples did not meet the USP specifications for the amount of active ingredient and were mostly under-dosed. 12% of them contained only minimal amounts of active ingredient. Overall, 24% of the collected SP tablets and 40% of the quinine sulphate tablets were sub-standard. All amodiaquine tablets and quinine injections contained the labelled amount of active ingredient. Sub-standard drugs were found mainly at general and drug shop level and mostly originated from Tanzania and India [57].
Discussion and conclusion
In order to develop and validate a generic framework on issues related to access to treatment [64], the ACCESS Programme took malaria as an empirical case study. Of course, access issues are also pressing with regard to most other high-burden or neglected diseases in developing countries. By focusing on malaria we chose a poverty-related disease that affects large parts of sub-Saharan Africa in terms of both disease and economic development, at a time when funding for its control is more readily available than ever before [65].
The Kilombero Valley is an area for which the malaria situation has been particularly well described thanks to numerous research activities [66][67][68][69]. The preventive use of insecticide-treated mosquito nets was advocated through the large-scale social marketing of the KINET project between 1997 and 1999. It resulted in high levels of ITN ownership and use [29,70]. However, access to prompt and appropriate treatment is still poor. A baseline study in the frame of this programme found that only 14% of young children received an effective antimalarial in the correct dose on the day of illness onset [71]. The aim is, therefore, to expand the successful approach chosen for ITNs to the crucial issue of access to treatment. The main target groups of the interventions are those most at risk in holo-endemic areas such as the Kilombero Valley: young children and pregnant women [21,22,72].
Interventions to improve the complex issue of access to malaria treatment are more likely to be successful if several working approaches are combined. Social marketing applies concepts and techniques used in commercial marketing to prompt behaviour change that benefits the target group [73]. In recent years, it has become increasingly popular in health promotion, where it has proven effective, e.g., in promoting the use of ITNs and reducing child mortality [70]. However, care has to be taken that men and women profit equally from the approach, a challenge that has to be tackled by the programme. In the frame of ACCESS, the marketed "product" is the knowledge and awareness of malaria and the concept of treating a malaria episode appropriately. The "price" to be paid by the community is the adoption of the desired care-seeking and preventive behaviour. However, inducement of behaviour change alone is not sufficient; health services which are acceptable and of good quality must be available. Hence, the behaviour change campaign is also a way of empowering the community to demand good-quality health care. Activities to improve the quality of health services thus become central components of the programme.
The major providers of malaria treatment services remain health workers. Their practices are influenced by a variety of factors and environments [20]. The Integrated Management of Childhood Illness (IMCI) strategy adopted by Tanzania is an effective step to improve health worker performance leading to a reduction in child mortality [74] and out-of-pocket expenditures by patients [75]. However, health systems often fail to implement effective guidelines in a sustainable way [76]. The challenge therefore remains to assure adherence to IMCI guidelines and to address factors not directly related to case-management (e.g. motivation or job satisfaction). Multi-faceted approaches including supervision and strengthening of district-level health management are more likely to improve performance [20]. The ACCESS Programme therefore combines training and information with the implementation of a quality-improvement process including strengthening the supportive supervision capacity of the district health management team.
As an alternative to formal health services, antimalarials can be obtained from the commercial sector. Drug shops and general stores are the most important alternative treatment sources for malaria in the study area [58,77]. In an attempt to ensure quality of services, antimalarial drug sales have recently been banned in general shops. With no alternative sources replacing general shops, this policy resulted in a decreased availability of antimalarials in the study area [47]. An alternative approach, which has worked well in Kenya, would be the training of drug vendors [17]. However, current Tanzanian legislation does not allow the selling of antimalarials in general shops. Consequently, any national strategy has to focus on improving the performance of drug stores and their dissemination to underserved areas through the ADDO project.
For the impact evaluation of ACCESS, a plausibility design had to be adopted [51]. Identifying a comparable place as control area would not have been possible and randomization of different areas for intervention would not be feasible within the frame of this programme. Supporting evidence for causally linking an observed impact with the programme's interventions will be obtained through the collection of multiple indicators on intervention delivery, coverage and potential confounders. While the limits of such a design in establishing a causal link are obvious and well known, it needs to be recognized that any large-scale implementation goes through an iterative process of measuring progress and impact while continuously adapting and improving the process. Consequently, the interpretation of results has to take into account contextual changes and external influences. Data from other DSS sites and DHS in Tanzania will be of particular importance in interpreting mortality data and putting them into perspective.
Baseline data demonstrated heterogeneity in the availability of treatment sources, unavailability of medicines and providers and serious quality problems with regard to drugs and services. This supports the basic assumption that there are several inter-linked factors influencing access to effective malaria treatment.
The comparative advantage of the ACCESS Programme is its combination of multiple interventions on different levels of the health system, including a strong evaluation and research component. With this approach, the programme also aims to contribute to the wider debate on access to appropriate health care in developing countries. Based on Penchansky and Thomas' [78] understanding of "access" as the degree of "fit" between the health system and its users, the ACCESS Programme aims at developing a more comprehensive access framework [64]. This can then inform and support public health professionals and policy-makers in the delivery of improved health services, ideally leading to better health and well-being.
Authors' contributions
MWH was responsible for the baseline surveys of the M&E component and wrote the manuscript in collaboration with the other authors. AS, BO, CL and HM conceived the programme and its components and provided technical support and supervision. AM, CM and NI were responsible for the development and implementation of the interventions. AD, SA and IM were responsible for data collection and analysis for M&E. RN is in charge of the DSS and NI of the overall project management. JDN and RAK were responsible for the IMPACT-Tz household-survey which provided social marketing coverage data. All authors read and approved the final manuscript.
|
v3-fos-license
|
2021-10-29T15:19:26.477Z
|
2021-10-29T00:00:00.000
|
240138099
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bioone.org/journals/mountain-research-and-development/volume-41/issue-4/mrd.mm267.1/Wayne-Thiebaud-Mountains--19652019-Text-by-Margaretta-M-Lovell/10.1659/mrd.mm267.1.pdf",
"pdf_hash": "deb939b0aafe0bb6867cc592382a2024a6fe0d24",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45550",
"s2fieldsofstudy": [
"Art"
],
"sha1": "3483a66ddf119889d7ee7c0e3ea555fa2c69237f",
"year": 2021
}
|
pes2o/s2orc
|
Wayne Thiebaud Mountains: 1965–2019. Text by Margaretta M. Lovell and Michael M. Thomas
Wayne Thiebaud Mountains: 1965–2019 catalogues a 2019 exhibition at Acquavella Galleries in New York City. It includes a biography of the artist; a foreword by Eleanor Acquavella, the owner of the galleries; and essays by Michael M. Thomas, a former curator at the Metropolitan Museum of Art, and Margaretta M. Lovell, a professor of American Art at the University of California, Berkeley. The 33 plates of the works exhibited boast vivid colors and high resolution, showing Thiebaud's much-lauded brushwork to great advantage. While it can be difficult to gauge the relative sizes of the different paintings, Thomas and Lovell occasionally provide a helpful sense of scale by comparing them to each other.

[Figure 1: Wayne Thiebaud, Around the Cake, 1962. Spencer Museum of Art, University of Kansas, gift of Ralph T. Coe in memory of Helen F. Spencer, 1982.0144. © 2021 Wayne Thiebaud / Licensed by VAGA at Artists Rights Society (ARS), NY]

[Figure 2: Wayne Thiebaud, Big Rock Mountain, 2004–2021/2019. © 2021 Wayne Thiebaud / Licensed by VAGA at Artists Rights Society (ARS), NY]
Wayne Thiebaud was born in Arizona and grew up in California. He started out in cartooning and commercial art, then gained recognition for his "cheerful, impossible to resist" paintings of "desserts, shoes, countertops, and other quotidian objects" (p 11; Figure 1). He began his series of mountain paintings "entirely from memory" (p 7) in the 1960s, intensifying his focus on mountain subjects in the 2000s.

In "Wayne Thiebaud's Mountains: An Appreciation," Thomas takes a conversational tone, avoiding excessive jargon and employing broad strokes, for example, "America is about independence and so is Thiebaud" (p 24). He notes the paintings' "monumentality" (p 13), "precipitous verticality" (p 14), and "general lack of human presence" (p 18), in contrast to works of other mountain painters. Through such comparisons, Thomas portrays Thiebaud as an unconventional landscape painter and primes readers for the works to come. Thomas's judicious use of quotes also conveys Thiebaud's personal charm: "I'm obviously a very influenced painter and I delight in being so" (p 18).

As a geologist, I read Thomas's discussion of Big Rock Mountain (Figure 2) with interest. In my view, it illustrates vertical exaggeration, cross section, and layer-cake geology effectively. The fact that Thiebaud "painted and repainted [it] over the past 15 years" (p 19) lends the work an appropriate sense of change over time, albeit on a human, rather than geologic, scale.

Though Thomas's essay helped me appreciate Thiebaud's enigmatic mountains in a more nuanced way, I did not feel the "special joy in verticality" (p 19) he describes. While the series may reflect the artist's joy and virtuosity, many of the works inspire disorientation and dread in this viewer.
Thomas writes that ''Mountains endure. That's the essence of their character'' (p 22), but these landscapes suggest imminent change and even destruction, from mass movement in Big Rock Mountain to apocalypse in Mountain Fire.
Lovell provides historical and geological context in the essay ''City, River, Mountain: Wayne Thiebaud's California,'' supplementing the paintings with maps and satellite imagery. Like Thomas, she extols Thiebaud's use of impasto (relief) painting, appropriate for depictions of topographic relief. Also like Thomas, she quotes effectively, from critics appalled by Thiebaud's foray into landscape painting ('''Burn these landscapes, burn your brushes, and eat the ashes, and never paint them again''' [p 27]) and from the artist himself, who is most interested in '''painting that is representational and abstract simultaneously''' (p 29).
Thiebaud's melding of abstract and representational struck me as cinematic. Ripley Ridge (Figure 3) anticipates the Escherian Paris street scene in Inception, while Laguna Rise (Figure 4) could be the meteoric city of Novi Grad in Avengers: Age of Ultron. The artist himself acknowledged the cinematic quality of his work: ''. . .people who love paintings will spend as long as hours looking at a single painting and it unfolds like film, like a motion picture'' (Mailman 2020).
Like films, Thiebaud's mountains evoke a visceral response. The looming, precarious peaks cause ''disequilibrium'' (p 52) and vertigo. In Lovell's analysis, they are ''horizonless, no governing perspective clarifies the visual field, and no foothold positions the observer steadily on the edge of the pictured world . . . a familiar kind of subject spatially disrupted and deliberately defamiliarized'' (p 39). In the Sierra Nevada paintings, vertical and overhanging monoliths ''seem to float without base, context, or resolution'' (p 45). Thiebaud's defiance of the conventions of landscape painting results in ''bizarrely original and eerily unsettling'' scenes (p 48).
Lovell delves into Thiebaud's use of color, to edifying effect. Thiebaud rejected conventions of not only visual perspective, but also atmospheric perspective, where distant dark objects appear blue due to moisture and dust in the atmosphere scattering short-wavelength blue light (Editors of Encyclopaedia Britannica 2016). Thiebaud's mountains are predominantly blue, even in the foreground; they could be seracs or icebergs. Lovell also celebrates Thiebaud's extensive use of halation, ''a halo-like effect in which light spreads beyond the edges of a bright object'' (Oxford University Press 2021), particularly with contrasting colors like blue and orange. Passages like these add to the curious reader's enjoyment of the mountains series.
Like Thomas, Lovell takes nuanced looks at particular paintings. I was intrigued to learn that Thiebaud considered the roadcut in Road Through (Figure 5) ''a heroic human achievement against great odds'' (p 30). To me, the road's vertical orientation and scarp-like roadcuts on either side suggest an ominous, artificial fault.
Lovell invokes ''the tales Americans tell . . . about themselves and their relationship to the hospitable continent they have occupied so completely'' (p 37) and alleges that Thiebaud's landscapes ''tattle on what Americans have done to the land . . . and suggest the attitude of the artist (and of Americans writ large) toward human landscapes, habitation, and an unquiet planet'' (p 36). Just what do Thiebaud's ''pictorial tall tales'' (p 39) tell us? We find a clue in his ''heroic violence'' (p 30) interpretation of Road Through.
While Lovell acknowledges California's indigenous and immigrant populations, neither the essays nor the paintings disambiguate Thiebaud's political perspective. Readers and viewers in search of more explicit political statements about Californian landscapes will find them outside the pages of this book in contemporary indigenous art, one recent, monumental example being Nicholas Galanin's Never Forget installation at Desert X (Figure 6).
Wayne Thiebaud Mountains: 1965-2019 underdevelops the artist's politics, perhaps, but not his affability. Thomas and Lovell introduce us to a humble and loyal living legend, who eschewed the New York art scene to remain a longtime Sacramento resident and professor at the University of California, Davis. We also gain an appreciation of Thiebaud's range and versatility. Art lovers who are familiar with Thiebaud's earlier work will enjoy the opportunity to get to know Thiebaud as a painter of distressing mountains as well as delectable desserts. As for the tales Americans tell themselves, readers can draw their own conclusions, or search elsewhere for more critical analyses. To quote the novelist Zadie Smith (2005: 130): ''Art is the Western myth . . . with which we both console ourselves and make ourselves.'' Serving as a source of personal meaning is not art's only function, however. The arts and humanities also help conceptualize global change in mountains. The editors of the MountainMedia section invite MRD readers (and the mountain research community as a whole) to explore and emulate such expressions across disciplines.
The Legal Geographies of Water Claims: Seawater Desalination in Mining Regions in Chile
The use of desalination has been increasing in recent years. Although this is not a new technology, its use often proceeds within ill-defined and ambiguous legal, institutional, economic and political frameworks. This article addresses these considerations for the case of Chile, and offers an evaluation of legal ambiguities regarding differences between desalinated water and other freshwater sources and the associated consequences. This discussion reviews court records and legal documents of two companies operating desalination plants, both of which have simultaneous rights granted for underground water exploitation: the water supply company in the Antofagasta Region and the Candelaria mining company in the Atacama Region. The analysis shows that issues of ambiguity and gaps in the legal system have been exploited in ways that allow these entities to continue the use and consumption of mountain water. They do so by producing desalinated water, and by entering into water transfer and diversion contracts with the mining sector. These findings highlight the importance of undefined socio-legal terrain in terms of the shifting hydro-geographies of mining territories, contributing conceptually to critical geographies of desalination, delineating legal geographies important for water governance, as well as empirically documenting the significance of this case for considering shifts in the mining sector and in water technologies and uses in contemporary Chile.
New water technologies are still inserted into a legal system that has failed to recognize how desalination can shape and be shaped by socio-natural dynamics. In particular, failure to distinguish desalinated water from other freshwater sources results in gaps and loopholes which are currently being exploited by the mining industry.
Socio-Legal Terrain in the Advance of Desalination
Desalination, as a technology serving wider political agendas (e.g., through its coupling with economic development and socio-natural pressures), has recently been attracting research interest from critical scholars in geography and allied disciplines [2,10-12,14]. Such analyses highlight that desalination is proposed as a 'fix' for contestations threatening water governance (environmental and spatial-political) across different scalar relations (regional/national and transnational) [2,10-12,14]. By tracing these hydro-social relations, some scholars have also observed that political interactions over water have been reinforced by mutual collaborations through financial agreements, but also by leaving behind contestations and dependency on water transfers [9,10,14]. Changes in power distribution, moreover, are observed to be shaping water governance and the privatization of oceans [9,10,14].
In these analyses, some scholars have reflected on the intersection of desalination's characteristics with legal and economic frameworks. One predominant assumption is that certain pillars sustaining desalination (legal, environmental, economic, etc.) have contentious characteristics [2,12]. For example in Spain, where desalination was proposed as a 'fix' for urban socio-natural conflicts, it has been argued that desalination unifies multiple and sometimes opposing interests, while at the same time raising major concerns, such as: its hegemonic role in developmental logics (tourism and agriculture), notions pertaining to legal rights over the seas (the free character of pumping seawater) and the multi-scalar strategies for financing desalination [2,12]. Some of these characteristics were referred to early on by Meerganz von Medeazza [31] as socially-induced factors, different from direct ones (i.e., brine and energy), but equally powerful in terms of their unplanned impacts. This means that in addition to the immediate impacts of the technology's uses, there are other implications derived from the ways that society makes use of the technology and the water produced [31].
As a result of the combination of undefined 'techno-legal' frameworks and 'techno-political' characteristics (colocation with infrastructures that increase desalination profit), Williams [11] identifies opportunities for private capital to (re)configure the sphere of water governance. The author demonstrates that legalities intersect with desalination in three areas: (1) industrial land zoning and land rights, in terms of suitable locations for desalination and rights to extract water, (2) permitting processes for desalination infrastructure, and (3) new Public-Private Partnership laws for public utilities management. This approach is built on the idea that social relations flow through technological solutions which ambiguous (legal-political) conditions have enabled, transforming water into a 'new' cooperative commodity [11].
A legal perspective pushes consideration beyond the conventional preoccupations of political ecologists (power, politics, inequities, ways of knowing and scale). These concerns are important, yet the analysis of power imbalances facilitated and created by legal-political maneuvers offers a new perspective for the understanding of socio-environmental-economic injustices. As Andrews and McCarthy [27] (p. 9) have argued, "a political ecology that seeks to examine the full range of contestation over human-environment relationships may, in some contexts, need to devote more attention to the formal political and policy arena and specifically legal geographies". Indeed, legal geography offers political ecology an important understanding of natural-social boundaries as defined by legal institutions and practices [20,36]. While the legal knowledge ruling desalination has been examined mainly in terms of water rights over the seas, what appears to be less developed are the gaps and ambiguities of this legal system in accounting for, and distinguishing, desalinated water from other water types/sources. This is particularly important in cases where uses of desalination intersect with other water supply sources (mountain water, sewage water and recycled water), and where there is no effort to distinguish water coming from different sources. As we explore in Chile, these legal loopholes provide opportunities for ongoing exploitation and reconfigured hydro-geographies of the mining industry.
As such, we engage insights from Budds and Hinojosa [16] (p. 129), particularly their emphasis whereby "supply-led technical solutions, proposed and constructed for mining, can significantly modify hydrological regimes and patterns and rules of access". We contend that changes in hydro-social cycles stem from what we call the legal coupling. We define this as the insertion of one legal framework into another in order to fill gaps (e.g., loopholes and unclear concepts) and so facilitate legal-spatial outcomes. This is only one of many ways in which legal and regulatory structures can be changed, deployed and reinforced. Our work suggests that, in desalination, this is enabled by its intersection with broader water legal systems. We understand the 'water legal system' as comprising the Water Code and the Sanitation Law. In doing so, the paper not only adds new dimensions to the discussion of desalination's legal features, but also to the longstanding debate on 'modern water', wherein water is reduced to its chemical composition H2O and social contexts are abstracted [37].
Legal institutions and practices can reveal new definitions of water and, more broadly, approaches to water governance [38]. As such, "With water management being a globally contentious issue, understanding the various interpretations of water underpinning policy could facilitate a critical examination of the assumptions held by policy makers and the likely material outcomes for diverse stakeholders within and across jurisdictions" [38] (p. 170). Here, our emphasis is that legal interpretations of artificial water might expand the understanding of current socio-environmental outcomes. Defining desalinated water, from the perspectives of the public trust doctrine and of international legislation aiming to protect against marine environmental impacts, became a key issue for legal scholars [32,39-41]. Examples of international norms are the United Nations Convention on the Law of the Sea (UNCLOS) and soft laws (the Montreal Guidelines, Agenda 21 and the Washington Declaration). By looking beyond how law responds to technologies in international/national commons, and instead at how socio-legal discourses can make, un-make and re-make spatial forms with corresponding legal spaces and vice-versa [26,36], the study aims to shed light on the complex socio-spatial order, of formal and informal legal instruments, as a product of social power arrangements [26,42-44]. In this sense, we situate our study in legal geography, where urban political ecology has been useful as a means to understand that water policies, environmental needs and social organizations are combining, which represents a (re)politicization of urban waterscapes that creates uneven socio-ecological conditions [12,45].
Focusing on water governance, legal geography scholars have shown how local communities are challenging national legalities through communal norms of water management and local knowledge [21,23,43]. This is identified as producing plural hydro-social territories [21]. Recently, a less anthropocentric form of water governance has been captured by reviewing court cases on the rights of nature, i.e., rivers' rights to legal defense in court (rights recognized in several Constitutions, e.g., Ecuador, Bolivia and Mexico City) [25]. Water requirements for non-humans (animals and plants) have also been proposed through a revision of watershed-scale drought plans, wherein ecological impacts were disclosed as primarily acknowledging impacts to fish [24]. Within this body of work, legal discourses have been highlighted for their particular power in the production of spaces: "The legal process demarcates the boundaries of water politics because the law determines who holds legitimate power to organize, distribute, and manage a region's physical water resources" [19] (p. 615). This means that legal discourses have additional power because the state has participated in their validation and, in its protection, has the force of law behind it [20,26]. Interestingly though, while these studies are quick to point out that these interactions are useful in gaining a better understanding of socio-environmental injustices, desalination technologies have scarcely been mentioned in water-society relations. This paper bridges legal geography with critical geography on desalination technologies. In doing so, it is suggested that it is firstly crucial to understand the existing water legal framework; to do so, we use the case of Chile. In the next section we present the methodology used and describe the case study.
Data Sources and Collection
The research presented here is based on court records, bills and legal documents connected with two different companies operating desalination plants: Aguas Antofagasta S.A., which is the water supply company in the Antofagasta Region, and the Candelaria mining company in the Atacama Region. The status of the two plants is summarized in Table 1 (notes to Table 1: 1, according to the environmental permit; 2, the plant has been functioning since 2003, but was expanded in 2014). These methods complement and expand political ecology's methodological toolkit (often composed of field-based research) [27]. Therefore, as was argued by Andrews and McCarthy [27] (p. 9), this allows us "(...) to better understand the legal and political dynamics central to the case that may not be addressed by political ecology's conventional suite of methods". The analysis is not presented as a comparative study, but is intended rather to explain the constrained spaces in the institutional and legal framework of two similar contexts dependent on the mining industry.
The data was collected from decisions gathered from the Appeal Court of Santiago (Sanitation Service Superintendence v. Council for Transparency 9347-2011; Aguas Antofagasta v. Council for Transparency 9368-2011) and the Environmental Tribunal (Environmental Superintendence v. Candelaria mining company 140-2016). Since Law 20417/2010 was enacted, the Environmental Tribunal supplements the new Chilean environmental institutions with the authority to evaluate infractions of environmental law. These documents are publicly available on each institution's website. Legal documents and bills were collected from the websites of the National Congress Library (Biblioteca del Congreso Nacional de Chile) and the National Congress. Data was triangulated with relevant information available in secondary sources, such as grey literature and newspaper articles covering the court cases.
All court decisions include the following information: (a) identification of the litigating parties (e.g., address and profession) and location of the conflict, (b) type of legal action and details of the plaintiff's and defendant's arguments, (c) detailed description of the arguments that, in the court's consideration, served as a basis for the decision, (d) legal references that support the decision, and (e) court decision and date of judgment. The emphasis of this method is oriented toward interpreting how law is experienced or 'lived', or, equally, 'law in action', which involves valuing diverse legal discourses of what is needed to achieve socio-natural and socio-economic justice [19]. Therefore, as Jepson [19] argues, the benefits are not only a better understanding of the law, but also of the discourses applied to law to naturalize social power.
Data Analysis
To unpack the legal records, a coding framework was developed which captures the following themes: actors involved, water legal system (desalination, surface and underground water), water consumption (underground and desalinated), water physical characteristics (underground and desalinated) and final water users (underground and desalinated). The assignment of passages of text to one or multiple themes allows us to compare all of the different perspectives and opinions about a common theme. Through a consideration of space as a critical element, next to social perceptions of law, we aim to dive into the legal geographies [36,44] of new technologies.
This coding scheme allows us to analyze the legal discourses about: (a) how desalination intersects the currently existing water legal framework, and (b) how desalinated water reaches parity with the characteristics (quantity and quality) of other water supply sources, making it available as a substitute for fresh water. This analysis enables the identification of gaps and failures in the water legal system in cases where companies have multiple water sources granted by the state, and of their nexus with new water claims involving desalination technologies. The next section provides a brief overview of the context of the mining-water nexus in Chile and dives into the context of both case studies. This is done in order to show the permanent interaction of the mining sector and water in Chile.
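To make this coding workflow concrete, the following minimal sketch (in Python) groups coded passages by theme so that perspectives on a common theme can be compared; the passages, actors and theme assignments shown are hypothetical illustrations, not excerpts from the actual court records.

from collections import defaultdict

# Themes from the coding framework described above.
THEMES = [
    "actors involved",
    "water legal system",              # desalination, surface and underground water
    "water consumption",               # underground and desalinated
    "water physical characteristics",  # underground and desalinated
    "final water users",               # underground and desalinated
]

# Hypothetical coded passages; in the study these would come from court records.
passages = [
    {"case": "9347-2011", "actor": "water supply company",
     "text": "freshwater has poorer quality than desalinated water",
     "themes": ["water physical characteristics"]},
    {"case": "140-2016", "actor": "environmental regulator",
     "text": "freshwater extraction exceeded the authorized limit",
     "themes": ["water consumption", "water legal system"]},
]

# Group passages by theme to compare perspectives on a common theme.
by_theme = defaultdict(list)
for passage in passages:
    for theme in passage["themes"]:
        by_theme[theme].append((passage["actor"], passage["text"]))

for theme, views in by_theme.items():
    print(theme)
    for actor, text in views:
        print(f"  {actor}: {text}")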
The Mining-Water Nexus in Chile: Water and More Water 'Desalination' for the Mining Sector
Potable water supply companies and mining industries being under the same ownership is not a new story in the mining-water nexus in Chile. In 1878, Tomas North, 'the saltpeter king', owned major mining sites and the potable water company in the Iquique Region [46]. Later on (1904), British investment was expanded to 'The Antofagasta and Bolivia Rail Way Company', which acquired the water supply company in Antofagasta. Back then water was already such a contested resource (between industries and human uses) that even the price of the water personally consumed by miners was deducted from their salaries [46].
The first solar distillation plant for mining uses, at the Las Salinas mine site (1872), also served as a water provider for its employees. Later, other mining companies started utilizing seawater in their operations: Compañía Minera Tocopilla in 1987 and the 'Michilla' desalination plant from Antofagasta Minerals in 1991 [47]. Since 2009, water used in copper mining has been increasingly obtained from ocean water [48]. Here, geographical characteristics (the high altitudes where mining sites are located) and distance from the coast directly influence the cost of desalinated water: while removing salt from seawater represents 51% (average 1.9 US$/m³) of the total cost, the energy consumed by the pumping system represents 49% (2.6 US$/m³) [35]. A different cost is associated with the desalination plant's capital investment and the volume of water treated (see Table 1). By numbers, while the cost of desalinated water represents 5.1 US$/m³, freshwater is 1.6 US$/m³ [49]. As a consequence, strategies for reducing pumping cost/energy have been explored. For example, the SWAP model (trading water sources), which in essence means desalinated water for coastal cities and mountain water for mining, is proposed in many public documents, such as 'Water management and mining in Chile 2007' by the Chilean Copper Commission (COCHILCO), 'From copper to innovation: a technology roadmap 2015-2035' by Fundacion Chile, and even in declarations from public authorities (mining ministry) [50]. In other words, the cost of desalination is not connected with desalinated water users, but instead with geography and distance to the coast: close to the coast it would be around 1 US$/m³, and in high terrain this increases to between 8 US$/m³ and 10 US$/m³ [49]. The total water consumption in the mining sector is distributed across four areas, which in 2017 represented: concentrator plant (67%), hydrometallurgy (14%), smelting and refinery (4%) and others (e.g., services and mine site) (15%) [48].
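The effect of altitude on pumping cost can be checked with a back-of-envelope energy calculation. The sketch below applies the standard hydraulic-lift formula; the pump efficiency and electricity price are illustrative assumptions, not figures taken from the sources cited above.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pumping_cost_per_m3(lift_m, pump_efficiency=0.7, usd_per_kwh=0.10):
    # Lower-bound electricity cost (US$/m^3) of lifting one cubic metre of
    # water by lift_m metres; ignores pipe friction and horizontal conveyance,
    # so real delivered costs are higher.
    energy_joules = RHO * G * lift_m / pump_efficiency
    energy_kwh = energy_joules / 3.6e6  # 1 kWh = 3.6e6 J
    return energy_kwh * usd_per_kwh

for lift in (0, 1000, 2000, 3000):
    print(f"lift {lift:>4} m -> ~{pumping_cost_per_m3(lift):.2f} US$/m^3")

Even before friction losses, lifting water 3000 m costs roughly 1.2 US$/m³ in electricity under these assumptions, which is consistent with pumping rivaling the desalting step itself at high-altitude mine sites.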
The importance of mining in the Chilean economy has been raised through statements such as 'the Chilean Miracle' and 'the Chilean Wage' [51]. In 2016, mining contributed 11.2% of GDP at the national level, while the average over the last 10 years has been 14.9% [52]. At the regional level, in the same year, it represented 47% for Antofagasta and 28% for the Atacama Region. In the Regional Strategies of both Antofagasta (ERD 2009-2020) and Atacama (ERD 2007), water scarcity is recognized, next to the importance of the mining sector and the encouragement of using desalinated water in place of freshwater. In addition to water scarcity, and the law aiming to make the use of desalination mandatory for mining purposes, local communities are currently demanding desalination projects as partial compensation or as part of corporate social responsibility efforts, e.g., the Salamanca community with the Pelambres mining company [53]. The company (owned by the Antofagasta plc. group) is planning to build a desalination plant to supply water for both mining and human consumption in the Salamanca community, Coquimbo Region. As we can see, this is the same configuration (mining companies adjusting their interests to potable water service) that arose years earlier when saltpeter was extracted, and is the same that Aguas Antofagasta experienced, which in 2002 was part of the same Antofagasta plc. In 2015 the water supply company was sold to the Colombian investment group EPM (Empresas Publicas de Medellin).
Aguas Antofagasta: Water Supply Company in the Antofagasta Region
Aguas Antofagasta (hereafter A.A.) is the water supply company (responsible for everything from providing potable water to sanitation services) in the Antofagasta Region. The company acquired the water concession in 2003 through a 30-year contract from the former ESSAN S.A. (a state-led company), under which management, operation and investment are in the private arena. Aside from natural water sources, the company operates desalination plants. Mountain water is captured from the intersection of the Loa and San Pedro Rivers. According to DGA [54], the volumes of water authorized for mountain water extraction for this company are: Lequena (550 L/s), Toconce (470 L/s) and Quinchamale (300 L/s). The Loa River's waters have been recognized by the WHO (World Health Organization) as having high concentrations of arsenic, and because of this desalination is presented as an alternative for human consumption [8], although since 1978 this situation has improved with water treatment plants [46].
According to the Environmental Impact Assessment System (SEIA) [55], the company has four desalination plants approved for providing potable water, although one of them, Aguas de Mar Antofagasta, is not yet functioning (Table 1). The A.A. website provides information about which communities are receiving desalinated water (Antofagasta, Taltal and Mejillones) and which ones are receiving mountain water, mainly from the Loa River (Antofagasta, Mejillones, Calama and Tocopilla) (see Figure 1). As was identified by Fragkou [8] (p. 77), "(this) is creating three qualitatively different parallel metabolisms of tap water within the same region (...) one part of the city is supplied with freshwater, another with desalinated water, and a third part with a mixture of these two".
In 2003, A.A. signed a commercial agreement with the mining company Doña Inés de Collahuasi (located in the Tarapacá Region, north of Antofagasta), which included water transfers from the Lequena sector, covering 500 L/s (see Figure 1). In December 2011, the project started its Environmental Impact Assessment (EIA) in order to get approval for the water transfers. This led to social mobilizations (combining NGOs and local government representatives) claiming that those water rights' uses were granted for providing potable water; ecological impacts and water as a common resource were highlighted as well [56]. Indeed, the deputy for the region has stated: "In the region, and province, there is water scarcity, water sources are exhausted, therefore I think that it is absolutely inadequate, inconvenient and risky to trade potable water with a mining company" [57] (p. 1).
Despite the water transfers to Collahuasi being canceled, similar freshwater contracts are benefiting several other mining companies, again in circumstances where those waters were adjudicated to provide potable water services [56]; the mining sites involved in those contracts are depicted in Figure 1. Two of the companies involved in the water contracts were under the same ownership as A.A.: until 2015 they belonged to the Antofagasta plc. group (El Tesoro and Esperanza) [56]. This means that the increasing water availability through desalination is strategically coupling with the mining industry, by allowing the continuity and allocation of freshwater for mining uses. As we show through our case study, water supply companies are legally authorized to sell untreated water to private sectors, with the only requirement being to guarantee water provision for human consumption in the concession area; these contracts are endorsed by the Sanitation Service Law. Alongside this, desalinated water is allocated for human uses, while freshwater is freed for continued consumption for mining purposes.
The water market in the region was identified by A.A. as composed of different actors, "on the one hand, mining companies, which are operating both as water consumers and suppliers through desalinated water or seawater without treatment and, on the other hand, water rights' holders, either by selling water rights or supplying freshwater to mining through water contracts. Finally, companies which operate sanitary services, such as Aguas Antofagasta, are participating either by selling freshwater from continental water sources, desalinated water or waste water" [58] (p. 6). Thus, the role of the mining industry is pivotal in framing different water uses and access in the region. Here, the state also plays an important role in deregulating markets, or even opening new venues, e.g., through water swaps.
The demand for public access to the contracts that A.A. signed with mining companies (data on water volumes and water sources) triggered the two companion legal cases under study. The main arguments used by A.A. for denying access to those contracts were: (1) the right to develop private contracts with untreated water (according to the Sanitation Services Law), (2) the poorer quality of freshwater (as compared to desalinated water), which allows it to hold contracts for private water provision, and (3) the non-jurisdiction of the Sanitation Service Superintendence (hereafter SISS) over private contracts. These documents offer insights into the ambiguities of desalination and the different arguments used to maintain underground water rights' uses, highlighting the water legal framework's failure in accounting for this new technology.
In the final resolution, the Appeal Court determined that the content of these water contracts must be made open to the public [59]. This decision, as was mentioned by CIPER [60] (p. 1), is a "milestone in terms of transparency (...) opens the door for the requirement of access to any document from private companies operating in a sector regulated by the state. In other words, it expands the public boundary and citizen oversight". While this process is a success story, the ambiguities of desalination remain a blurry arena in terms of its intersection with freshwater sources. The case of A.A., holding simultaneous freshwater right uses and desalinated water permits, can provide insights into new techno-legal formations sustaining desalination and how this technology is shaping water governance in mining territories. Similar formations are experienced when mining companies hold both water supply sources, as is the case of Candelaria.
Candelaria Mining in the Atacama Region
Candelaria is a Canadian mining company operating in the Atacama Region since 1995. The project is located about 20 km south of Copiapó city and comprises an open pit and underground mine extracting copper ore. The company also operates a desalination plant, which obtained its Environmental Qualification Resolution (RCA) in 2011 [61] (see Figure 2). In addition to this water source, Candelaria has been granted multiple underground water rights, both in Tierra Amarilla and Copiapó [62]. According to the Environmental Superintendent, the limit authorized for freshwater extractions is 300 L/s [63].
The Copiapó River watershed has been recognized as having, in general, a good quality, although the mining industry has influenced it with the presence of copper, iron and chromium [64]. The Copiapó and Huasco rivers are the main sources of potable water in the region and both are experiencing water deficits, affecting four communities out of the nine in the region (Copiapó, Tierra Amarilla, Caldera and Chañaral) [65]. In this vein, desalination represents a well-accepted alternative for the reduction of freshwater consumption.
Yet, in January 2014, the Environmental Tribunal received a complaint from the Municipality of Tierra Amarilla against Candelaria over environmental damage. A few days later, this complaint was retracted by the same lawyers acting on behalf of the Municipality. According to city councilors, the reason for this was the signing of a multimillion-dollar agreement between the company and the Municipality [66]. Despite this agreement, the Environmental Superintendent continued with a sanction process against Candelaria. One of the core arguments in this sanction was the company's failure to reduce natural freshwater consumption [63]. By numbers, over a span of 32 months, Candelaria was selling water to other mining companies (Minosal and CMP), and in 16 of those months water was sold at a rate of more than 50% of Candelaria's freshwater extraction volume; this includes the years 2013-2014, when the desalination plant was operational. Additionally, during the same time frame (2013-2014), its freshwater consumption limit was exceeded several times, by 18 L/s to 45 L/s [63].
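Checks of this kind reduce to simple arithmetic over monthly records. In the sketch below, only the 300 L/s extraction limit comes from the text; the monthly extraction and sales figures are hypothetical placeholders standing in for the Superintendent's data.

FRESHWATER_LIMIT_LS = 300.0  # authorized freshwater extraction limit, L/s

# Hypothetical monthly records: (month, extraction in L/s, water sold on in L/s).
months = [
    ("2013-06", 318.0, 180.0),
    ("2013-07", 295.0, 160.0),
    ("2013-08", 345.0, 120.0),
    ("2014-01", 330.0, 200.0),
]

for month, extracted, sold in months:
    flags = []
    excess = extracted - FRESHWATER_LIMIT_LS
    if excess > 0:
        flags.append(f"limit exceeded by {excess:.0f} L/s")
    if sold / extracted > 0.5:
        flags.append(f"{sold / extracted:.0%} of extraction sold to third parties")
    print(month, "; ".join(flags) or "compliant")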
The ruling references the different water strategies adopted by Candelaria: desalinated water, recycled water and sewage water (purchased from the potable water supply company, Aguas Chañar S.A.) [67]. However, the court emphasizes that, in the EIA permit approval, the company acquired the formal commitment of diminishing water extractions (in the Copiapó River watershed) in proportion to newly incorporated water sources [67]. The court also referred to Candelaria's water trading: "water deliveries to third parties, without considering its source, have evidenced that, during the months that water deliveries were produced, Candelaria mining had more water available than was needed for its process" [63] (p. 81). In other words, desalination is increasing the water sources available for mining use, rather than reducing freshwater consumption.
The court's final decision was to fine Candelaria approximately US$ 4,254,473.613, confirming the excessive use and non-reduction of freshwater consumption, considering the alternative water sources integrated into its mining operation [67]. However, similar to the previous case (Aguas Antofagasta), the court does not further elaborate on the gaps and ambiguities of the current water legal system in accounting for new water technologies, nor on how legal frameworks might be used to continue freshwater consumption. The next section explores the legal loopholes that allow this legal-coupling (of desalination with broader water legal systems) to be pursued in order to sustain freshwater consumption and uses in Chile.
Water Legal Framework in Chile
"Our legal framework has a lack of regulation (desalination), today we use maritime concessions, but they have a different purpose ( . . . ) Water scarcity and climate change will place Chile at a crossroads." -Alfonso De Urresti, Senator [68] (p. 1) (italics add by author) Desalination projects are not new in Chile. However, with new water policies and legal frameworks aiming to confront water scarcity, this technology is likely to increase in the country. By the year 2015, Chile had 20 desalination plants already operating (11 in the mining sector, 8 for potable water and 1 for industrial use) and there are at least other 12 plants planned [35,69]. Nevertheless, to date, these projects have no clear or prescribed permitting process for desalination infrastructure [34,68,70]. Some gaps in the new water framework are identified by the Organization for Economic Cooperation and Development (OECD) [71] as: a) no current land-use planning strategy in relation to the coastline, and b) lack of regulation and institutions to oversee the management and use of the water produced through desalination technologies, etc. As this paper contends, additional gaps appear by paying attention to the intersection of desalination with the current water legal system. Firstly, it is not clear how desalination releases previously granted water rights/uses (surface water and groundwater), nor the final use that would be destined for those waters (e.g., ecosystem, human consumption, industries), and secondly, it is ambiguous how desalination water flows would be accounted for [72,73]. A core question here is: does desalinated water become groundwater, when its uses involve, for example, filling aquifers or reservoirs? [34] (p. 125).
Ongoing legislative changes, in countries such as Spain, are trying to cover some of these gaps by declaring desalinated water public property (since 2005), while the US Supreme Court considers it under the 'public goods inalienability' principle [74]. Nevertheless, for critical geographers what remains in question is the management and use of desalination plants and the water produced. This practice has been open to contracts or licenses and, more recently, to forms of Public-Private Partnership, e.g., in California and Singapore [10,11].
In Chile, legislative ambiguities and gaps have been somewhat addressed through broad legislation. For example, the right to use seawater has been coupled with maritime concessions (the Maritime Concessions Law DFL 340/1960 and regulation 002/2005), which were created for non-consumptive uses of seawater (e.g., aquaculture), but not for consumptive uses (either of the natural seawater or of the derived desalinated water) [34,72]. In other words, desalination projects are coupling their approvals with procedures established for seawater uses that were not framed in terms of technological uses and, more specifically, of water extractions. Complementary regulation, although not strictly connected with desalination, is also used as a guideline for these projects, e.g., coastline use and zoning (the Inter-communal Regulatory Plan for the coastline) and environmental permits (EIA) [72].
In an attempt to fix these gaps, multiple draft bills are being debated in the Chilean Congress. Besides the draft bill that proposes to regulate desalinated water uses for mining projects [75], there are two other main proposals for this technology: (1) granting the State the authorization for the construction and management of desalination plants [76], and (2) regulating seawater uses for desalination [72]. From these documents, and the current legal system, key issues in desalination can be inferred from the legal community (e.g., senators, deputies and lawyers). Here we identified three central contradictions.
Ownership
If desalinated water is no longer seawater, does it cease to be public property? The process of producing artificial water, assumed to be an extension of maritime concessions, has come with gaps and ambiguities, one of which refers to ownership [72]. Referring to this, Senator Galilea mentions: "desalinated water through an industrial process isn't natural water, it is the outcome of an industrial process, and therefore telling a company, which is investing, that this is a national good of public interest, is a conceptual mistake" [68] (p. 1).
Further discussions over ownership refer to water management. This means that even if it is agreed that seawater is in the public domain [34], through its management it is becoming amenable to private ownership (e.g., public-private partnerships) [72]. As Swyngedouw and Williams [12] have argued, the free pumping of seawater has already opened debates in terms of legal rights over the seas, and with the privatization of the oceans this discussion is likely to increase.
Desalination Uses and Water Flows
"There is no public definition in terms of guidance and priorities for sea water uses ( . . . )" [72] (p. 7). This declaration, made by a group of Senators, seeks to avoid the replication of current mistakes in the surface water and groundwater regimes, and instead prioritize water for human consumption and aquifer replenishment [72]. Furthermore, this new approach is also highlighting the need for a direct correlation between the purpose that was intended in the desalinated water concession, and the actual final use of that water [72]. This is important in cases where desalination is approved for mining or energy services but, at the same time, is delivered/diverted for communities' uses (see for example Compania Minera del Pacifico selling water to Caserones).
In addition to desalination uses, new concerns are raised over water flows: "To date there is a lack of regulation for water flows extraction and characteristics for specific uses" [34] (p. 120). In some cases, this is read as an economic imbalance between seawater users and surface water and groundwater users [34]. A different reading is expressed by Senator Muñoz: "if there is seawater in excess (that's why we emphasize establishing quantity and purpose), it may happen that free access to water results in that water being sold back to the state for human consumption (...)" [68] (p. 1).
Desalination and Granted Water Rights' Uses
Desalination is often bound to the idea of restricting legal water rights' uses and releasing water for human consumption and the ecosystem [72]. Nevertheless, the draft bill that regulates desalinated water for mining uses is ambiguous on how it would reach that goal [73]. The legal framework does not specify how desalinated water might be separated from the current water permits granted for surface water and groundwater uses [73]. Additional concerns refer to how desalination would release water rights' uses and the final use that would be given to those waters [73]. In summary, there is no clear legal guidance in terms of distinguishing freshwater from desalinated water in scenarios where companies are simultaneously using both water sources. The draft bill reforming the Water Code attempts to address some of these issues by establishing that water for human consumption will have priority over other water rights' uses (see draft bill 7543-12). Beyond these existing assessments, we identified in our case studies new ambiguities emerging in terms of how desalination reaches parity with other water supply sources.
In the following section, we show that legal gaps in the intersection of desalination with freshwater sources have been addressed by a legal-coupling with the Water Code and the Sanitation Services Law, with the main purpose of enabling the maintenance of groundwater consumption in support of the mining sector. Given that there is a move to make desalination mandatory, our case studies might offer insights about the role of desalination in mining territories.
Discussions in the Understanding of Desalinated Water in the Context of Water Law and Mining Regions in Chile
When desalination legalities started being discussed in the legal community, ambiguities and gaps were raised mainly in notions pertaining to its permitting process and the free access to seawater. These debates later evolved to consider how desalination intersects with the current water legal system, for example, by considering water flows, water allocations (filling aquifers) and how it alters previously granted water uses. In this section, we show that some of the loopholes of desalination have been somewhat addressed by wide water legal frameworks, such as the Water Code and the Sanitation Services Law (both legacies of the Pinochet regime).
Here we disclose that this legal-coupling is enabling the maintenance of groundwater consumption in support of the mining sector. These issues are identified not only in the mining sector (Candelaria), but also in desalination for potable water services (Aguas Antofagasta). The case studies reveal two gaps: (1) how desalination alters existing water rights, and (2) how desalination matches up against the purity and quantity of other freshwater sources. The implications of these ambiguities demonstrate the importance of legal and institutional frameworks for how desalination works, or fails to work, under its sustainable promise.
Desalination in Aguas Antofagasta: Changing Perspectives on Freshwater
Potable water uses of desalination, in addition to environmental permits, must function according to the Sanitation Services Law (1989) and the water quality regulation act (NCH 409/1.OS. 2005). This framework guarantees adequate sanitary services and recognizes desalination as part of them: "sea water will be admissible as a water supply source, through desalination" [77] (Article 15). Nevertheless, as we show, the framework's primary focus on the high quality of desalinated water is affecting perceptions of freshwater supply sources, at least among desalination plant operators. In other words, while this framework recognizes that desalination can be used to supply these services and must meet the strict potable water quality regulations, we contend that it is failing in: (1) prioritizing water supply sources, and (2) releasing water rights. Thus, desalination is allocated for potable water uses while freshwater consumption is maintained in support of mining industries.
Desalination and Water Supply Priorities
The law's exclusive attention to water quality is exploited (by both the water supply company and the state water agency, SISS) to justify freshwater transactions with the mining sector, under the assumption that freshwater has poorer quality in comparison with desalinated water [59]. Indeed, the artificial character of desalinated water, in terms of being producible at any quantity and quality ('designer water' [11] (p. 35)), is changing the perspective on, and priority uses of, freshwater. The outcome has been to prioritize desalination for human consumption. The representative of OLCA (the Latin American Environmental Conflicts Observatory), observing this 'game changer' perspective of desalination, mentions: "50% of potable water in Antofagasta is provided by desalination, because in that region, and in particular in that city, mining is the main economic driver, and so they preferred to give fresh water to mining companies rather than to the population" [78]. In her recent study of the social impacts of desalination at the household level in the Antofagasta Region, Fragkou [8] found that freshwater is perceived as having a higher quality in comparison with desalinated water. This means that desalination operators and water consumers have different perceptions of desalinated water quality.
With this in mind, and in addition to what many legal scholars have found as a consequence of focusing solely on regulating the high quality of desalinated water (e.g., ignoring environmental implications such as cross-border pollution [34,41]), the A.A. case shows how the economic power involved in desalinated water management can prioritize uses of freshwater/desalinated water. As such, legal ambiguities in desalination are being maneuvered to determine the flows of desalinated water as well as of freshwater. In this sense, the use of the Sanitation Service Law raises the issue of how water laws can handle the ambiguities of desalination.
Desalination and Water Rights
The legality of maintaining water rights' uses for purposes other than potability treatment is rooted in the law that regulates tariffs in the water sector (DFL 70/1988). This law states [79] (Article 24): "if the provider (public service company) wants to supply non-mandatory services, it may freely determine payments or compensations with the interested parties" (italics by the author). As we can see, this prescription failed to anticipate how desalination may increase water supply flows, how to tally them, and how to prioritize final water users. The additional water volumes have resulted in A.A. now having 49 non-regulated customers, mainly mining companies [80]. Both A.A. and SISS refer to non-regulated customers as private businesses, not regulated by the Superintendence of Sanitation Services (SISS), and therefore outside its control and jurisdiction [58].
The permissive right to provide non-mandatory services is used to facilitate economic development through the water network [59]. The argument is that selling freshwater to mining companies is not regulated by the sanitation legal framework; instead, the transactions operate within the private space boundary. The price at which freshwater is sold to mining companies varies in relation to water flows, distance, etc. For example, for 342,144 m³/year (contract between A.A. and Cerro Dominador) the annual price is US$ 272,950.18, and for 1,399,680 m³/year (contract between A.A. and Sierra Miranda) the annual price is US$ 3,343,402 (for a complete analysis of water contracts see González [81]). What is evident from these water transactions is that desalination operators can account for volumes of water rights granted (freshwater) as distinct from desalinated water flows, which is useful in terms of increasing, and accumulating, water sources and private water provision contracts. Major implications of these contracts are changes in urban water cycles (consuming desalinated water instead of freshwater) and increases in water markets [8,33].
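Dividing each contract's annual price by its annual volume yields the implied unit prices; the short sketch below uses only the figures quoted in this paragraph.

# Annual volume (m^3/year) and annual price (US$), as quoted above.
contracts = {
    "Cerro Dominador": (342_144, 272_950.18),
    "Sierra Miranda": (1_399_680, 3_343_402.00),
}

for name, (volume_m3, price_usd) in contracts.items():
    print(f"{name}: {price_usd / volume_m3:.2f} US$/m^3")

# Cerro Dominador: ~0.80 US$/m^3; Sierra Miranda: ~2.39 US$/m^3.

The roughly threefold spread in per-unit prices between the two contracts illustrates how loosely these private freshwater transactions track any single reference price.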
Additionally, the importance of connecting these services (non-regulated and regulated) relies on the price paid by the final customer [59]. Yet, as a community member has argued, there is a major issue: "(...) those waters were originally for Antofagasta and now, since they are desalinating, Aguas Antofagasta wants to sell them" [82] (p. 376). This suggests that what is at stake is the practice of economic 'coupling' (keeping Usher's term) [10], the mining sector sharing the infrastructure built for sanitary services, with further effects in determining not only water tariffs, but also water access and, more broadly, water flows.
Responses from the Council for Transparency privileged public access to private water contracts (which might involve either freshwater or desalinated water), and this approach was confirmed by the Appeal Court of Santiago. The court made a landmark decision: the right to public information prevails over economic interest, especially when sanitation services are affected [59]. While legal authorities agreed that new water contracts could be forced into the public arena, there was little consideration of how artificial water produced through desalination is enabling the emergence of new water contracts and water accumulation, and of how it has been accounted for and prioritized in relation to freshwater. As Larson [41] argues, one of the greatest challenges of environmental law is to respond to emerging technologies. In line with this thinking, this case shows that not only environmental laws, but also water laws, are becoming outdated in relation to more recent technologies.
Desalination in Candelaria: Tailoring the Legalities of Water Flows
Candelaria's strategies for reducing freshwater consumption include recycled water, sewage water and desalination. In numbers, the total water consumption for the year 2014 was 30,095 L/s [63]; of that, desalination represented 3837 L/s, sewage water 1272 L/s, freshwater 115 L/s and recirculated water 5195 L/s. When calculating the limits of freshwater consumption, sewage water and desalinated water are counted as freshwater equivalents. Between 2013 and 2014 the freshwater limit was exceeded in 9 of the 12 months [63]. Thus, the water solutions mobilised do not involve reductions of water exploitation; rather, they sustain the extractive mining sector. Enabling this result, we contend, is the still unclear water legal system. The characteristics of the water model are broadly explained by Bauer [83] and Budds [84] in terms of economic and market features (e.g., property rights, minimum state intervention and the freedom to trade water rights). However, the contemporary practices of desalination are revealing new failures of this system. The paradox is that, while the Water Code explicitly excludes seawater, it is evident that desalinated water is altering major hydraulic infrastructures (such as reservoirs and water pipelines) [34]. As we will show, water reductions are usually read in connection with the EIA; however, ambiguities in the legal system are exploited in terms of 'tailoring' freshwater consumption. These strategies cover: (1) how desalinated water flows are accounted for, and (2) whether desalination releases water rights.
Desalination and Water Flows
When Candelaria expanded its operation in 1997, the limit authorized for freshwater extraction was 300 L/s. In 2011, the same water exploitation level (300 L/s) was approved for its desalination capacity, with a possibility of expansion (500 L/s) [67]. The EIA granted to Candelaria mining states: "to the extent that Candelaria incorporates desalinated water, there is to be a proportional reduction in water extraction ( . . . ) Mountain water will still be used in case of emergencies (natural events) and operational contingencies" [67] (p. 83). Although the rule may seem straightforward, in practice desalination flows can be tricky to define. This brings up the issue of how desalination flows intersect with, and should be counted in relation to, freshwater flows: by annual average or by monthly maximum flow [67]. These temporal scales mean that the company can 'play' with monthly ratios of consumption between water supply sources.
These legal gaps have been filled by recourse to the Water Code. This is inferred from Candelaria's statement when accounting for water flows: "this is related with groundwater rights grants in aquifers, wherein consumption levels are granted by annual volumes" [67] (p. 77). Under this method, annual tallies allow the mining company to 'play' with monthly ratios of consumption between water sources, and thus to justify only a partial reduction of freshwater consumption during certain periods of time.
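The accounting loophole can be illustrated with a minimal sketch (hypothetical monthly figures; only the 300 L/s authorized limit is taken from the case): an annual tally can show compliance even when the monthly limit is repeatedly exceeded.

```python
# Hypothetical monthly extraction rates (L/s) against the authorized
# 300 L/s limit: heavy use in six months, compensated in the other six.
MONTHLY_LIMIT = 300.0
extraction = [450, 450, 450, 450, 450, 450,
              150, 150, 150, 150, 150, 150]

annual_average = sum(extraction) / len(extraction)
months_over = sum(1 for flow in extraction if flow > MONTHLY_LIMIT)

print(f"annual average: {annual_average:.0f} L/s (limit: {MONTHLY_LIMIT:.0f})")
print(f"months over the monthly limit: {months_over} of 12")
# annual average: 300 L/s -> compliant under annual tallies,
# despite 6 of 12 months exceeding the monthly limit.
```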
According to the Environmental Superintendent, no reduction of freshwater consumption occurred between the years 2000 and 2014, even though the desalination plant has been functioning since 2013 [67]. Within this governmental institution there is a different understanding of how to count water volumes: "it is not about increasing water sources, but reducing water extractions to the extent that they incorporate different water sources" [67] (p. 99). As such, it has accounted for desalinated water by monthly volumes. While the court decision implies a reduction of freshwater consumption, to date there are no clear guidelines on how to account for new water flows, nor any specification of the final use to be given to the released water and the water rights granted.
Desalination and Water Rights
The Copiapó River watershed is well known for being a zone of prohibition for new water exploitations. In fact, since 1993 a legal resolution has indicated that water sources in that watershed must be protected (DGA Resolution 193/1993) [67]. However, this resolution is not as straightforward as it seems at first glance. The legalities of water rights uses are being exploited to sustain water consumption, and the ambiguity over how desalination intersects with this water source provides no guidance on further reductions of freshwater consumption.
The legality of maintaining water rights uses is claimed by Candelaria through a legal-coupling with the Water Code: "it is a reality that fresh water extraction ( . . . ) has affected water levels (Copiapó River), nevertheless, it is a legitimate extraction that corresponds to granted water rights (Water Code). In consequence, it is not an illegal act" [67] (p. 91) (italics by the author). Effectively, neither the Water Code nor the desalination legal system has prevented this situation. In 2018, the court ruled that the 'legality' of an act cannot be used to justify environmental damage [67]. Candelaria expands its argument even further by attributing the diminution of water resources to the water legal framework itself, characterized by the overexploitation of water rights and weak institutional control [67]. This suggests that what is at stake is not simply water management under the Water Code, but rather how desalination is intersecting with and expanding this framework [33].
As the court ruled in this case, there is no discernible legal category which specifies how desalination intersects with other water sources and the release of water rights granted. As such, the mining company has exploited this loophole to continue its freshwater uses. The ruling goes even further by acknowledging that more anthropogenic intervention is needed, in terms of new public policies and regulations, to repair the environmental damage [67]. What is remarkable is that this measure does not account for desalination's uses and their socio-environmental implications, which increase water consumption and accumulation rather than securing water needs. These responses converge with the Aguas Antofagasta case in acknowledging that economic development is facilitated through the water network, and in avoiding the ambiguities that allow the continuation of freshwater extraction in cases where desalination plants are operating.
Conclusions
The use of desalination is increasing dramatically worldwide [4]. Nevertheless, its legal and political dimensions have only recently begun to be evaluated, and concerns about ownership and management are attracting much interest [2,10,11,31,32]. In particular, while the technology is not new, it articulates uneasily with existing social and political frameworks. This in turn leads to legal loopholes, which are exploited through the ways in which society accesses legal knowledge and makes use of both the technology and the water produced. Legal gaps have been maneuvered through, for example, in both the USA and Singapore, with new Public-Private Partnership laws for public utilities management, which in turn are offering opportunities for private capital to (re)configure the sphere of water governance [10,11]. As we see from the Chilean experience, legal loopholes are opening opportunities for the continuity of freshwater consumption to benefit the mining industry. As shown in this paper, additional dimensions for the discussion of desalination's legal gaps are characterized by: (1) how desalination alters existing and parallel water rights/uses, and (2) how desalination reaches parity with the characteristics (quantity and quality) of other water supply sources. The particular attribute of desalination, being able to produce water at any quantity and quality, must be taken into account in any critical analysis of the technology [11]. In this way, these cases build on existing critical studies of desalination, which have demonstrated that the political formations sustaining the 'desalination factory' [11] (p. 35) permeate the logics of economic development and the privatization of nature [2,10].
The case of desalination plants operating in mining regions in Chile highlights the fact that desalination (in quantity and quality) is changing perspectives on other water supply sources. Legal geography pushes for consideration of how desalination legal frameworks intersect with the extant legal and political system in ways that provide a tool for spatial interventions. In this context, the articulation of this technology with existing water laws and legal practices, what we defined as legal-coupling, enables the continued use and consumption of mountain water in support of mining development. We note that in some cases companies might have different and parallel water sources for their operations, which are often articulated and contested within the realm of formal law and policy. Broad discussions of the Chilean desalination legal framework show ambiguities not only in the permitting process, but also in how this new water source is to be accounted for in relation to other water sources' uses. Without a clear legal reference, ambiguities and gaps have so far been only partially addressed through broad legislation: the Maritime Concessions Law, the Water Code and the Sanitation Services Law. Thus, this analysis also complicates recent efforts and calls to make the use of desalinated water by mining companies mandatory. Here, we see that desalination is not necessarily tied to reductions of freshwater exploitation; ambiguous laws and geography (pumping water to high altitudes) are exploited to change water flows.
The paper highlights two main gaps in cases where companies operating desalination plants hold simultaneous water rights/uses for underground water exploitation. First, the laws' exclusive attention to water quality for potability is exploited to argue that freshwater does not meet the requirements for human consumption, whereas desalination can reach higher quality levels. Here we can see how water has been reduced to its chemical composition, H2O, and abstracted from its social context [37]. It is, therefore, a 'game changer' for maintaining the use of freshwater in mining and reserving desalinated water for communities. Second, the laws' ambiguity over how to count desalination flows allows the mining company to report only annual volumes. This means that it can 'play' with monthly ratios of consumption between water supply sources and can therefore consume more freshwater during certain periods of time, achieving only a partial reduction. In other words, the attention is directed towards augmenting water supplies. As we see, an additional implication of these processes is that the owners of desalination plants are able to hold contracts as water suppliers for mining companies in the region.
Given that there is a movement to regulate desalination, it is important to investigate the role of, and the issues facing, this technology in both legal and geographical contexts. The Chilean case demonstrates the importance of both in how desalination works, or fails to work, in terms of socio-environmental implications. The paper's findings matter for the growing debates about desalination among both academics and policy makers. On the policy side, the paper shows how legal discourses of nature allow spatial configurations to be maintained or changed, and how they are articulated and contested through legal-coupling. It therefore highlights the importance of having clear rules about how desalination matches up against the purity and quantity of other freshwater sources, and about the pitfalls of releasing previously granted water rights/uses, while showing how water uses (desalinated and freshwater) are being prioritized. On the academic side, the paper expands debates on the dimensions of desalination's legal features and their implications for supporting economic development through changes in water consumption. Legal practices and legal knowledge move the critical analysis of desalination towards an understanding of how natural-social boundaries are defined by legal institutions and practices [20,36]. Indeed, access to legal knowledge is often a tool at the service of spatial-political interventions.
The bZIP Transcription Factor PERIANTHIA: A Multifunctional Hub for Meristem Control
As sessile organisms, plants are exposed to extreme variations in environmental conditions over the course of their lives. Since plants grow and initiate new organs continuously, they have to modulate the underlying developmental program accordingly to cope with this challenge. At the heart of this extraordinary developmental plasticity are pluripotent stem cells, which are maintained during the entire life-cycle of the plant and are embedded within dynamic stem cell niches. While the complex regulatory principles of plant stem cell control under artificially constant growth conditions begin to emerge, virtually nothing is known about how this circuit adapts to variations in the environment. In addition to the local feedback system constituted by the homeodomain transcription factor WUSCHEL (WUS) and the CLAVATA signaling cascade in the center of the shoot apical meristem (SAM), the bZIP transcription factor PERIANTHIA (PAN) not only has a broader expression domain in the SAM and flowers, but also carries out more diverse functions in meristem maintenance: pan mutants show alterations in environmental response, shoot meristem size and floral organ number, and exhibit severe defects in the termination of floral stem cells in an environment-dependent fashion. Genetic and genomic analyses indicate that PAN interacts with a plethora of developmental pathways, including light, plant hormone and meristem control systems, suggesting that PAN is an important regulatory node in the network of plant stem cell control.
INTRODUCTION
In contrast to most animals, plants continue to form new organs throughout their lives. This remarkable capacity is dependent on the continuous presence of undifferentiated and self-renewing stem cells over long periods of time. These stem cells reside at the growing points of a plant, the tips of roots and shoots, and are embedded into specialized structures called meristems (Barton, 2010).
Several genes affecting meristem and stem cell function have been identified by mutant screens in Arabidopsis thaliana. Most notably, WUSCHEL (WUS) and SHOOTMERISTEMLESS (STM) are required for the maintenance of the shoot meristem (Barton and Poethig, 1993; Laux et al., 1996; Long et al., 1996; Mayer et al., 1998). Their inactivation causes premature differentiation and the eventual exhaustion of the stem cell pool, leading to the termination of the shoot meristem. Another group of genes, the CLAVATA (CLV) genes, have an opposite effect on meristems and, if defective, shoot meristems overproliferate and expand inappropriately (Clark et al., 1993, 1995; Kayes and Clark, 1998).
With the exception of CLV2, all genes mentioned above are expressed in small domains in the shoot apical meristem (SAM). Elegant genetic studies have shown that WUS and CLV3 are connected by a negative feedback loop to control the size of the stem cell pool. WUS, which is expressed in the organizing center, induces the expression of CLV3 in the overlying true stem cells, which in turn signals back to the organizing center to keep WUS expression in check (Brand et al., 2000; Schoof et al., 2000). In addition to these local regulatory interactions, meristem function is affected by global hormone signaling pathways, including auxin and cytokinin circuitries. While STM mediates cytokinin biosynthesis (Jasinski et al., 2005; Yanai et al., 2005) to allow cell proliferation in the meristem, its expression is repressed by auxin (Furutani et al., 2004), which in turn allows organ initiation on the flanks of the SAM. In contrast, WUS does not interfere with cytokinin biosynthesis, but directly regulates A-type ARABIDOPSIS RESPONSE REGULATORS (ARRs; Leibfried et al., 2005; Busch et al., 2010) that act in the negative feedback regulation of cytokinin response. This feedback system of cytokinin signal transduction is also connected to auxin signaling, and ARR7 and ARR15 are directly repressed by the AUXIN RESPONSE FACTOR5/MONOPTEROS transcription factor. A-type ARRs execute important meristematic functions (Leibfried et al., 2005; Buechel et al., 2010; Zhao et al., 2010) by so far undiscovered mechanisms (Leibfried et al., 2005; Zhao et al., 2010).
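To make the logic of this feedback explicit, the following toy simulation (our own illustration; the equations and parameters are arbitrary assumptions, not taken from any of the cited studies) shows how WUS-driven activation of CLV3 combined with CLV3-mediated repression of WUS settles into a stable steady state from different starting points:

```python
def simulate(wus0, clv3_0=0.0, hours=500.0, dt=0.01):
    """Euler integration of a toy WUS-CLV3 negative feedback loop."""
    wus, clv3 = wus0, clv3_0
    for _ in range(int(hours / dt)):
        dwus = 1.0 / (1.0 + clv3 ** 2) - 0.5 * wus        # CLV3 represses WUS
        dclv3 = wus ** 2 / (0.5 + wus ** 2) - 0.5 * clv3  # WUS activates CLV3
        wus += dt * dwus
        clv3 += dt * dclv3
    return round(wus, 3), round(clv3, 3)

# Different initial WUS levels converge to the same (WUS, CLV3) fixed point,
# the self-stabilizing behavior expected of a negative feedback loop.
print(simulate(wus0=0.1))
print(simulate(wus0=2.0))
```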
Cells that leave the shoot meristem during the initial, vegetative phase of the life-cycle give rise to leaves and meristems of axillary shoots. After the transition to the reproductive phase, meristems that newly arise at the flanks of the SAM will develop into flowers instead. This is due to the redundant activity of meristem identity genes such as LEAFY (LFY) and APETALA1 (AP1). In contrast to the shoot apex, which is indeterminate, flowers are determinate and stem cell activity ceases after a fixed number of organs have been formed. In plants that lack LFY activity, flowers are converted into partially indeterminate shoot-like structures (Weigel et al., 1992).
One set of genes that is directly controlled by the LFY transcription factor includes homeotic genes that specify the fate of the different floral organs (Parcy et al., 1998; Busch et al., 1999). We have previously shown that LFY acts together with WUS, which also encodes a transcription factor, to contribute to the transcriptional activation of the homeotic gene AGAMOUS (AG) in the center of young flowers. AG, in turn, not only specifies the fate of the floral reproductive organs, but also terminates stem cell maintenance by negative feedback on WUS expression (Lohmann et al., 2001). The bZIP transcription factor PERIANTHIA (PAN) is expressed in the SAM, as well as in developing flowers, where it overlaps with STM, WUS, the CLV transcripts and AG, respectively (Chuang et al., 1999). Loss of PAN function leads to an increase in the number of perianth organs, the sepals and petals, while on a gross morphological level the SAM seems unaffected (Running and Meyerowitz, 1996). In flowers, PAN genetically interacts with ABC homeotic genes; however, these interactions appear mostly additive (Running and Meyerowitz, 1996). PAN protein expression was shown to be independent of the meristematic regulators CLV1 and CLV3, as well as of floral meristem identity genes such as LFY or AP1, demonstrating that PAN also acts in parallel to these factors (Chuang et al., 1999). It has been shown that PAN interacts with the NPR1-like proteins BLADE ON PETIOLE 1 (BOP1) and BOP2 in yeast and that bop mutants share some pan mutant features (Hepworth et al., 2005). However, their expression domains only overlap marginally, suggesting that PAN primarily acts together with other co-factors. It was shown that PAN plays important roles in the activation of AG (Das et al., 2009; Maier et al., 2009), roles which are strikingly modified in various day-length settings. While PAN brings about the termination of floral stem cell fate by the direct transcriptional activation of AG, its function in the SAM, where it is also strongly and specifically expressed, remains poorly understood.
RESULTS AND DISCUSSION
Since we had noted before that the floral functions of PAN are strongly dependent on the environment (Maier et al., 2009), we carefully analyzed vegetative phenotypes of wild-type Columbia and pan mutant plants under various growth conditions and found that day-length had a substantial impact on the penetrance of pan related defects. In contrast to the reproductive phase, where pan mutants showed the most dramatic aberrations under short-day conditions, pan plants at the early vegetative stage were largely indistinguishable from wild-type in short days (SD; Figures 1A,D). Conversely, pan mutants exhibited pleiotropic phenotypes when exposed to long days (LD), including elongated petioles, curled leaves, and a twisted rosette (Figures 1B,E). Under continuous light (CL), Col and pan phenotypes were less distinct, but pan plants continued to show more extreme leaf-curling and rosette twisting. In addition to the morphological traits, we observed that pan mutants flowered slightly earlier and on average formed 1.5 or 2.5 fewer rosette leaves than wild-type under LD or CL, respectively (Figure 3A; n = 50). Furthermore, we realized that pan mutants are extremely sensitive to variations in diverse environmental conditions, including water and nutrient availability, as well as biotic and abiotic stress (data not shown). Taken together, these phenotypes indicated that PAN might act to stabilize the developmental program of the shoot apex and thus buffer the impact of diverse environmental inputs.
Since the activity of the SAM is mainly determined by the WUS-CLV feedback system, which acts on the stem cell population, as well as by the repression of differentiation throughout the meristem provided by STM, we investigated their regulatory and genetic interaction with PAN. Using in situ hybridization on serial histological sections, we first analyzed in detail the mRNA expression patterns of PAN in the inflorescence meristem and found that, consistent with a buffering function, PAN mRNA is most highly expressed in a ring-shaped domain surrounding the stem cells (Figures 2A-D). We detected weaker signals throughout the center of the SAM, suggesting that PAN might execute slightly different functions depending on expression levels. Similar to the situation identified for WUS, which was shown to bind to distinct cis-regulatory motifs with different affinity (Busch et al., 2010), these functions could be mediated by distinct sets of PAN downstream targets. However, in situ detection of PAN protein on sections of the SAM did not show the ring domain, but rather suggested that PAN is found throughout the meristem (Chuang et al., 1999). Unfortunately, we were unable to resolve whether these differences were of a technical nature, or reflected relevant biology.
Hybridizations on cross sections demonstrated that PAN mRNA is strongly reduced even in early organ primordia (Figures 2E-H). We next investigated how the SAM regulatory system is affected by the loss of PAN function. First, we noticed that the SAM was significantly increased in size (Figures 2I,M) and that the WUS expression domain is substantially wider compared to the wild-type situation (Figures 2J,N). Interestingly, the stem cell domain marked by CLV3 expression remained largely unaffected despite the expanded stem cell niche (Figures 2K,O), suggesting that the regulatory interaction between WUS and CLV3 is partially uncoupled in pan mutants. In line with the enlarged meristem, we found expanded STM expression in pan apices (Figures 2L,P), and the absence of STM transcripts from emerging organ primordia was less pronounced in pan when compared to wild-type. Taken together, these results demonstrate that PAN function is required for normal SAM development, which might be mediated by its effects on the expression of the canonical meristem regulators.
To address how PAN is integrated into the regulatory network of the SAM, we analyzed its expression in wus and clv3 mutants, which represent the extremes in meristem dis-regulation. Since wus mutants rarely form inflorescence meristems, we focused our analysis on the seedling stage and found accumulation of PAN mRNA mostly in the center of the SAM in wild-type. In addition, we detected weaker signals on the periphery of the meristem and at the adaxial sides of young leaves (Figure 2Q). Consistent with the loss of a fully developed SAM in wus, we were unable to detect PAN transcripts in the central tissue of this mutant; however, strong expression was found in leaf-primordia and young leaves (Figure 2R). While Chuang et al. (1999) had reported that PAN protein expression is mostly independent of CLV3, we observed that PAN transcripts accumulated throughout the SAM, with a ring of strong expression toward the base and weaker signals toward the top of the expanded clv3 meristem (Figures 2S,T).
Having shown that PAN is more tightly connected to the regulatory system of the SAM than previously anticipated, we extended our analysis to test the functional interaction of PAN with CLV3, WUS, and STM using genetics. Plants that carry mutations in CLV3 are characterized by an enlarged SAM, an increase in the number of lateral organs developing from the SAM and over-proliferation of floral meristems. When we combined the clv3-7 loss-of-function allele with pan, we observed a substantial enhancement of the clv3 phenotype (Figure 3A). Compared to clv3 single mutants, SAMs of pan clv3 double mutants were even further enlarged (arrowheads in Figures 3C,D) and developed even more lateral organs (Figures 3C,D). Consistent with an enhancement of meristem phenotypes by the pan mutation, we observed a drastic reduction in SAM function when we combined wus and pan (Figure 3F). In contrast to wus mutants, which develop a bushy stature because of the stop-and-go phenotype of the meristem (Laux et al., 1996), stem cell activity in wus pan double mutants ceased after the formation of leaves, and elongated shoots were never formed. Since CLV3 and WUS act in the same pathway and both showed synergistic genetic interactions with PAN, we next wondered how PAN would interact with STM, whose activity is independent of the WUS-CLV system. To our surprise, we found that the stm phenotype was partially suppressed in pan stm double mutants, which developed a substantially larger number of lateral organs and shoots compared to stm plants (Figure 3G). In some cases we even observed flowers with a regular arrangement of floral organs; however, these flowers remained sterile. Thus, while in the case of WUS and CLV3 PAN behaved as a molecular buffer, which is able to stabilize SAM function in the absence of other meristem regulators, this function was not observed when pan was combined with stm, suggesting that they have antagonistic activities.
To elucidate some of the mechanisms that could underlie these complex meristematic functions of PAN, we recorded the molecular phenotype of pan single mutants by transcript profiling. Wild-type and pan mutants were grown in LD for 25 days before we sampled two independent pools of 50 inflorescence meristems of each genotype by removing developing flowers older than stage 8. After Affymetrix Ath1 profiling, we applied GC-RMA to normalize the data and derive expression values (Wu et al., 2004), followed by Rank Products to identify differentially expressed genes at a false discovery rate of 0.05 (Breitling et al., 2004). One hundred sixty transcripts showed increased abundance (Table 1), while 120 mRNAs were found to be significantly reduced in inflorescence apices of pan mutants compared to wild-type (Table 2). To obtain a first insight into the potential function of PAN downstream genes, we used Gene Ontology (GO) analysis on the level of the annotation of biological function, as well as using molecular function as a readout. Interestingly, we found the "response to stimulus" category highly enriched among the genes with increased as well as reduced expression. Among the increased mRNAs we found diverse functional sub-categories indicating that PAN plays a role in stress and environmental response (Figure 4). A prominent example was GIGANTEA (GI), whose expression is controlled by the circadian clock and whose activity is necessary for normal clock function and promotion of flowering under LD (Fowler et al., 1999; Park et al., 1999). To test if GI plays a relevant role as a PAN downstream gene, we created pan gi double mutants and compared them to the respective parental genotypes. Strikingly, we found that loss of PAN function was able to fully suppress the late flowering phenotype of gi mutants in LD (Figure 5), demonstrating that GI and PAN act in the same pathway.
In contrast to the rather diverse GO categories observed in the list of genes with increased expression, the reduced transcripts revealed a much more specific developmental signature. Among them we identified a substantial overrepresentation of genes with annotated functions in hormone signaling, specifically for gibberellin, ethylene, auxin and, most prominently, cytokinin response (Figure 6). This developmental signature was also apparent in the GO analysis for molecular functions, with "transcription regulator activity" and "two-component response regulator activity" as the most overrepresented annotation terms (Figure 7). Two-component response regulators build the backbone of cytokinin signal transduction and response, with B-type ARRs acting as cytokinin-dependent transcription factors directly upstream of A-type ARRs, immediate early cytokinin response genes with roles in negative feedback regulation (Werner and Schmülling, 2009). Strikingly, only the expression of A-type ARRs was affected in pan mutants, and ARR4, ARR5, ARR6, ARR7, ARR15, and ARR16 were among the transcripts with significantly reduced abundance, a result which we independently confirmed using quantitative real-time RT-PCR (data not shown). In addition to cytokinin response genes, we identified two cytokinin oxidases, CKX3 and CKX5, as genes with reduced expression. Since CKX proteins irreversibly degrade cytokinin (Mok and Mok, 2001; Werner et al., 2003) and because A-type ARRs counteract cytokinin signaling, a reduction of their expression in pan mutants suggests that PAN acts to limit cytokinin activity in the SAM. This interpretation is consistent with the finding that SAM size is increased in pan mutants, reminiscent of plants with increased cytokinin levels (Bartrina et al., 2011). In addition, we had previously identified ARR5, ARR6, ARR7, and ARR15 as direct transcriptional targets of WUS, connecting these cytokinin response genes to the core regulatory system of the SAM. While from the list of genes with reduced expression an antagonistic interaction of PAN and cytokinin could be deduced, it also suggested that PAN acts to stimulate auxin signaling, since it contained YUCCA1 and YUCCA4, two genes coding for important auxin biosynthesis enzymes (Zhao et al., 2001). Since auxin directly represses transcription of ARR7 and ARR15 via the Auxin Response Factor MONOPTEROS in the SAM, PAN could act on the expression of A-type ARRs in multiple independent pathways. Strikingly, WUS was identified among the transcriptional regulators with reduced expression, confirming that PAN is intimately connected to the SAM regulatory network.
Having identified cytokinin and auxin signaling as major downstream effector pathways of PAN, we next addressed the functional relevance of these regulatory interactions using genetics. We focused our analysis on ARR7 and ARR15, since both of them were shown to have important meristematic functions (Leibfried et al., 2005; Zhao et al., 2010), and combined these mutants (Figures 8D,E) with pan (Figure 8B) and clv3 (Figure 8C) in double and triple mutant combinations. While single A-type arr mutants have no phenotypes or very mild ones (Figures 8D,E; To et al., 2004), combination of arr7 and arr15 with pan led to severe growth retardation (Figures 8G,H). Interestingly, while removing CLV3 function in the pan background led to massive over-proliferation and meristem expansion beyond the regular clv3 defect (Figures 3B-E), this phenotype was completely suppressed in the pan clv3 arr7 combination (Figures 8F-I). However, the growth retardation was only transient, and pan arr15 as well as pan arr15 clv3 plants recovered after about 2 weeks and developed into plants with pentameric flowers, which closely resembled pan clv3 mutants. This capacity to overcome A-type ARR related developmental defects was also observed in plants carrying an over-activated form of ARR7 (Leibfried et al., 2005) and suggests that the cytokinin signaling system has a strong ability to adapt to perturbations. Mutation of multiple A-type ARRs, such as in an arr7 arr15 double mutant, did not cause the phenotypes observed in the pan arr combinations (Figure 8J), underlining the important role of PAN in the SAM. Having observed a strong genetic interaction of PAN with components of the cytokinin response, we next tested its ability to modify auxin related defects. To this end we analyzed the interaction of PAN with PINFORMED-1 (PIN1), the major auxin efflux carrier responsible for generating local auxin maxima at the periphery of the SAM and thus organ initiation during shoot development (Gälweiler et al., 1998; Reinhardt et al., 2000). While pin1 mutants rarely developed flowers under our growth conditions (Figures 9A,C), pin1 pan double mutants exhibited a significantly increased number of flowers (Figures 9B,C), which were deformed and generally sterile. Again, as in the case of cytokinin signaling, these results demonstrated that PAN is able to modulate auxin dependent developmental functions, in line with the hypothesis that PAN might act as a multifunctional hub for diverse meristematic functions.
SUMMARY AND OUTLOOK
Taken together, we have shown here by molecular phenotyping and genetics that PAN is connected to a plethora of diverse input pathways and may act as an integrator to buffer shoot meristem activity. PAN inputs include pathways for environmental sensing, such as day-length and other abiotic factors, as well as hard-wired developmental circuitries, such as the WUS-CLV system. Strikingly, the same holds true for the PAN output network, which we found to include components of the circadian clock and stress response as examples for modulating environmental interactions. Furthermore, PAN downstream genes showed a strong developmental signature, which was most apparently represented by a number of plant hormone signaling systems. Based on our results we suggest that PAN might act as a node between cytokinin and auxin signaling pathways, with cytokinin outputs being repressed and auxin activity being induced by PAN. PAN is a member of the D-class of bZIP transcription factors (Jakoby et al., 2002) and thus groups with the TGA regulators, which are involved in mediating pathogen defense (Zander et al., 2010). The sequence similarity of PAN and TGA pathogen response regulators suggests that PAN function might have evolved from an environmental surveillance activity, which was enhanced to include developmental roles to give rise to an integrated buffering system.
MICROARRAY EXPERIMENTS
Pools of 50 microscopically dissected inflorescence apices of pan mutants and wild-type, both carrying the KB14 AG::GUS reporter gene (Busch et al., 1999; Lohmann et al., 2001), were grown for 25 days in LD conditions and profiled in duplicate using the Affymetrix ATH1 platform. RNA extraction and microarray analyses were performed as described (Schmid et al., 2005; Buechel et al., 2010). Expression estimates were derived by GC-RMA (Wu et al., 2004) at standard settings implemented in R. We determined significant changes on a per-gene level by applying the Rank Products algorithm (Breitling et al., 2004) using 100 permutations and a false discovery rate cut-off of 5%. GO analysis was carried out using AgriGO (Du et al., 2010).
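The analysis above used the R implementations of these methods; purely as an illustration of the Rank Products idea (Breitling et al., 2004), a simplified reimplementation for up-regulated genes might look as follows (a sketch, not the code used in the study; gene-wise permutation and the pfp estimate follow the original description):

```python
import numpy as np

def rank_products_up(wt, mut):
    """Rank product for up-regulation in mut vs wt.
    wt, mut: (genes x replicates) log-scale expression matrices.
    A small RP means a gene sits consistently near the top of every
    pairwise fold-change ranking."""
    n_genes = wt.shape[0]
    ranks = []
    for i in range(wt.shape[1]):            # all pairwise replicate
        for j in range(mut.shape[1]):       # comparisons between groups
            fold_change = mut[:, j] - wt[:, i]
            rank = np.empty(n_genes)
            rank[np.argsort(-fold_change)] = np.arange(1, n_genes + 1)
            ranks.append(rank)
    return np.exp(np.log(np.array(ranks)).mean(axis=0))  # geometric mean

def pfp_up(wt, mut, n_perm=100, seed=0):
    """Permutation-based percentage of false prediction (pfp) per gene."""
    rng = np.random.default_rng(seed)
    rp_obs = rank_products_up(wt, mut)
    exceed = np.zeros(wt.shape[0])
    for _ in range(n_perm):
        # shuffle gene labels independently within each replicate column
        rp_perm = rank_products_up(rng.permuted(wt, axis=0),
                                   rng.permuted(mut, axis=0))
        exceed += np.searchsorted(np.sort(rp_perm), rp_obs)
    expected_fp = exceed / n_perm               # E[# permuted RPs <= RP_g]
    position = rp_obs.argsort().argsort() + 1   # rank of each gene by RP
    return rp_obs, expected_fp / position       # pfp ~ FDR at each gene's RP
```

Genes with pfp below 0.05 would correspond to the 5% false discovery rate cut-off used above; the down-regulation case is symmetric (rank ascending fold changes instead).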
QUANTITATIVE REAL-TIME PCR
Total RNA was extracted from apices of plants grown in an independent experiment using RNeasy Mini columns with on-column DNAse digestion (Qiagen). Reverse transcription was performed with 1 μg of total RNA, using a Reverse Transcription Kit (Fermentas). PCR amplification was carried out in the presence of the double-strand DNA-specific dye SYBR Green (Molecular Probes) using intron spanning primers (Andersen et al., 2008). Amplification was monitored in real-time with the Opticon Continuous Fluorescence Detection System (MJR). BETA-TUBULIN-2 transcript levels served to normalize mRNA measurements.
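The quantification formula is not spelled out above; assuming the common 2^-ddCt approach for such normalizations, the computation reduces to a few lines (the Ct values below are invented for illustration):

```python
def relative_expression(ct_target, ct_tub, ct_target_ref, ct_tub_ref):
    """Relative expression via the standard 2^-ddCt method: the target gene
    is normalized to BETA-TUBULIN-2 and then to a reference sample.
    Assumes roughly 100% primer efficiency (one doubling per cycle)."""
    delta_ct = ct_target - ct_tub              # normalize to tubulin
    delta_ct_ref = ct_target_ref - ct_tub_ref
    return 2.0 ** -(delta_ct - delta_ct_ref)

# Hypothetical Ct values for an A-type ARR in pan vs wild-type apices:
print(relative_expression(26.5, 20.0, 24.8, 20.1))  # ~0.29, i.e. reduced
```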
Orais and STIMs: physiological mechanisms and disease
Abstract The stromal interaction molecules STIM1 and STIM2 are Ca2+ sensors, mostly located in the endoplasmic reticulum, that detect changes in the intraluminal Ca2+ concentration and communicate this information to plasma membrane store-operated channels, including members of the Orai family, thus mediating store-operated Ca2+ entry (SOCE). Orai and STIM proteins are almost ubiquitously expressed in human cells, where SOCE has been reported to play a relevant functional role. The phenotype of patients bearing mutations in STIM and Orai proteins, together with models of STIM or Orai deficiency in mice, as well as in other organisms such as Drosophila melanogaster, have provided compelling evidence on the relevant role of these proteins in cellular physiology and pathology. Orai1-deficient patients suffer from severe immunodeficiency, congenital myopathy, chronic pulmonary disease, anhidrotic ectodermal dysplasia and defective dental enamel calcification. STIM1-deficient patients showed similar abnormalities, as well as autoimmune disorders. This review summarizes the current evidence that identifies and explains diseases induced by disturbances in SOCE due to deficiencies or mutations in Orai and STIM proteins.
Introduction
Changes in the cytosolic-free Ca2+ concentration ([Ca2+]c) are a universal signal that regulates a diversity of cellular functions, from short-term responses, such as secretion, contraction or aggregation, to long-term responses, including cell proliferation [1]. Physiological agonists increase [Ca2+]c, and this increase consists of two components: the release of Ca2+ from the intracellular organelles and Ca2+ entry through plasma membrane (PM) channels. Ca2+ release from intracellular Ca2+ stores is a mechanism regulated by agonist-generated second messengers, including inositol 1,4,5-trisphosphate (IP3), cyclic ADP-ribose, nicotinic acid adenine dinucleotide phosphate (NAADP) or sphingosine-1-phosphate [2-6]. However, Ca2+ release from finite intracellular Ca2+ stores is sometimes insufficient to induce full activation of cellular processes and, to maintain Ca2+ signals as well as to refill the intracellular stores, Ca2+ entry plays a relevant role. Ca2+ entry might be achieved by different mechanisms, including voltage-operated
Ca2+ entry through voltage-sensitive Ca2+ channels, and receptor-operated Ca2+ entry following receptor occupation by means other than a change in membrane potential [7]. The latter may take several forms and is conducted by different types of channels: receptor-operated channels (ROC), formed by subunits of the receptor protein itself; second messenger-operated channels (SMOC), gated by a diffusible messenger generated as a consequence of receptor occupation; and, finally, store-operated (or capacitative) Ca2+ channels (SOC/CRAC), activated when the luminal Ca2+ concentration in the intracellular Ca2+ stores is reduced as a result of receptor occupation and the subsequent generation of a Ca2+-mobilizing second messenger [8,9]. In non-excitable cells, and also in certain excitable cells, store-operated Ca2+ entry (SOCE) is a major mechanism for Ca2+ influx [10].
It has been established that SOC channels comprise a family of Ca2+-permeable channels with different biophysical properties. The first identified store-operated current, ICRAC, was revealed in electrophysiological studies of mast cells [11]. The channel conducting ICRAC was found to be non-voltage activated, inwardly rectifying and highly Ca2+ selective [12]. In addition to ICRAC, other store-operated currents of greater conductance and lower Ca2+ selectivity, commonly named ISOC, have been described [10]. The nature of the SOC components that mediate ICRAC and ISOC has been an issue of intense debate over the last decades. In 2006, Orai1 was presented as the candidate to mediate ICRAC [13-15]. Members of the Orai family (Orai1-3) are highly conserved Ca2+ channel-forming subunits consisting of four transmembrane (TM) domains located in the PM, with both the N- and C-termini located in the cytosol (Fig. 1; Refs. [15,16]). The C- and N-terminal regions of Orai interact with STIM1. The N-terminal region is critical for STIM1-mediated gating [17-20], and also contains a putative calmodulin (CaM)-binding domain, suggesting a possible role of CaM as a regulator of Orai channel function [17,21].
The role of Orai1 in ICRAC was identified by gene mapping in patients with hereditary severe combined immunodeficiency (SCID) attributed to the loss of ICRAC in T cells [22]. As described for ICRAC, Orai1 shows an extraordinarily high selectivity for Ca2+ over monovalent cations [23]. In addition to Orai1, its homologues Orai2 and Orai3 have been reported to be able to form SOC channels. Overexpression of all Orai homologues produced or enhanced SOCE, although with different efficiencies, being greater for Orai1 than for Orai2 and Orai3 [24].
In addition to Orai proteins, the mammalian homologues of Drosophila melanogaster transient receptor potential (TRP) channels have been presented as SOC candidates. The involvement of TRPs in SOC channel formation and in conducting the cationic current ISOC remains controversial, with a number of laboratories providing evidence for or against this possibility. Particular attention has been paid to the canonical TRP (TRPC) subfamily members, which have been suggested to be activated by store depletion using different approaches, from overexpression of specific TRP proteins to knockdown of endogenous TRPs and pharmacological studies (for a review see Ref. [9]).
Fig. 1 (caption): Orai protein family. Representation of the domain organization of human (h) and mouse (m) Orai proteins. Mouse Orai1 shares 90% identity with human Orai1 in the amino acid sequence, according to the pairwise alignment generated by BLAST (http://blast.ncbi.nlm.nih.gov/). Their domain structure is highly conserved. N and C represent the amino- and carboxyl-terminus, respectively. Coloured boxes represent different domains. Numbers above and below the domains indicate their boundaries and the amino acid position. Boundaries of mouse Orai1 were predicted by clustal protein alignment (http://www.ebi.ac.uk/Tools/msa/clustalw2/). Modified version of the figure taken from [148].
Under resting conditions, a low [Ca2+]c is maintained by Ca2+-ATPases, such as the secretory pathway Ca2+-ATPase (SPCA) in the Golgi compartments or the sarco/endoplasmic reticulum Ca2+-ATPase (SERCA), which pumps Ca2+ continuously towards the endoplasmic reticulum (ER) lumen, opposing the Ca2+ leak that occurs through the ER membrane, probably via the translocon [25]. During agonist stimulation, however, dramatic changes in [Ca2+]c occur due to the opening of Ca2+ channels located in intracellular organelles, such as the IP3, ryanodine or NAADP receptors, which allow Ca2+ efflux from the intracellular stores, and to Ca2+ entry through plasma membrane Ca2+-permeable channels. Once agonist stimulation is terminated, [Ca2+]c is returned to the resting level through the collaborative work of Ca2+-ATPases and exchangers [26].
Intracellular Ca2+ stores
The advances in the understanding of Ca2+ signalling mostly occurred in parallel with the investigation of the intracellular Ca2+ stores, which are able to accumulate significant amounts of Ca2+.
Although the resting [Ca2+]c is between 20 and 100 nM, depending on the cell type investigated, the Ca2+ concentration in the intracellular Ca2+ stores is within the micromolar range [27]. Intracellular Ca2+ stores include the ER, the mitochondria, the Golgi apparatus, the nuclear envelope and the acidic lysosomal-like organelles. The ER is the major source of the intracellularly released Ca2+. The Ca2+ content of the ER is tightly regulated by SERCA, which pumps Ca2+ back against a Ca2+ gradient across its membrane [28]. Ca2+ efflux from the intracellular stores has been reported to occur via occupation of the IP3 receptors (IP3R) or ryanodine receptors (RyR) [4]. Functional heterogeneity of the ER Ca2+ pool has been reported on the basis of the heterogeneous expression of SERCA isoforms and the different sensitivity of ER Ca2+ compartments to distinct SERCA inhibitors. The presence of different ER Ca2+ compartments might have functional relevance, with function-specific Ca2+ compartments associated with discrete cellular mechanisms triggered through the occupation of different membrane receptors by agonists [29]. Intimately connected to the ER is the nuclear envelope, a small intracellular Ca2+ store with an intraluminal Ca2+ concentration of approximately 100 µM [30]. Ca2+ release from the nuclear envelope has been reported to be mediated by NAADP, as well as by IP3 and cyclic ADP-ribose [31], which act on specific Ca2+ release channels present in the inner nuclear membrane [30], thus leading to transient rises in the nucleoplasmic Ca2+ concentration, which could be important for the control of specific types of gene expression [30,31].
Recently, particular attention has been paid to the acidic organelles, including lysosomes and lysosomal-like organelles such as secretory granules. These organelles show a proton gradient across their membranes maintained by the vacuolar proton-ATPase (V-ATPase), which provides the driving force for Ca2+ uptake by a complex H+/Ca2+ exchange [32]. In human platelets, we have found that Ca2+ uptake into the acidic organelles involves the V-ATPase, which provides the driving force solely for the maintenance of stored Ca2+, and a SERCA3 isoform involved in Ca2+ store refilling [33].
Mitochondria initiate, transduce and modulate a variety of Ca2+ signals, regulating the spatiotemporal dynamics of cellular Ca2+ signals [34]. This organelle might modulate Ca2+ signalling either directly, by Ca2+ uptake via the mitochondrial Ca2+ uniporter or by releasing accumulated Ca2+ into the cytosol by means of Na+/Ca2+ or H+/Ca2+ exchangers, or indirectly, by regulating the concentrations of ATP, NAD(P)H and reactive oxygen species, molecules that influence the activity of different pumps, exchangers and channels involved in the Ca2+ signalling machinery [35]. Among the roles of mitochondria in Ca2+ signalling, this organelle has been reported to play an essential role in controlling the extent and duration of SOCE by buffering sub-plasmalemmal Ca2+ [36], and has also been found to contribute to ER Ca2+ refilling in the presence of IP3-generating agonists [37].
The Golgi apparatus has also been reported to act as an agonist-releasable intracellular Ca2+ store. Agonist-induced Ca2+ release from the Golgi apparatus was described in HeLa cells stably expressing aequorin targeted to this compartment and, like the ER, the Golgi apparatus was found to contribute to the rise in [Ca2+]c upon agonist stimulation [38]. In HEK293 cells, menthol causes Ca2+ release from both the ER and Golgi compartments [39]. Ca2+ transport into the Golgi pool is mediated by the secretory-pathway Ca2+-transport ATPases (SPCA), which supply the Golgi apparatus with both Ca2+ and Mn2+, thus playing a relevant role in cellular Ca2+ and Mn2+ homeostasis [40-42].
Abnormal intracellular Ca2+ homeostasis and disease
Physiological agonists are known to induce typical Ca2+ signals that specifically regulate cellular functions; among the Ca2+ signals generated by agonist stimulation, Ca2+ oscillations play a relevant physiological role. Ca2+ oscillations consist of the cyclical release and re-uptake of intracellularly stored Ca2+ and play an important role in the regulation of cellular functions. Current evidence suggests that Ca2+ influx across the PM plays a relevant role in the maintenance of Ca2+ oscillations, as well as in their localization within the cell [43]. Deregulation of cellular Ca2+ homeostasis leads to the development of a number of cellular dysfunctions that underlie a variety of disorders. An example of the pathophysiological relevance of intracellular Ca2+ signalling is cardiac disease. In heart failure, the insufficient myocyte contraction is attributed to an insufficient increase in [Ca2+]c as a result of reduced Ca2+ accumulation into the ER due to abnormal expression of SERCA [44]. Abnormal ER Ca2+ homeostasis associated with presenilin-1 mutations has also been reported to contribute to the dysfunction and degeneration of neurons observed in Alzheimer's disease [45]. Among other examples, SPCA mutations leading to the loss of one functional copy of the human SPCA1 gene (ATP2C1) cause Hailey-Hailey disease, a rare skin disorder characterized by recurrent blisters and erosions in the flexural areas [46]. Specific mutations in RyRs result in enhanced sensitivity of RyR1 to activating Ca2+ concentrations and also to the exogenous and diagnostically used ligands caffeine and 4-chloro-m-cresol, thus leading to malignant hyperthermia, a skeletal myopathy in which exposure to certain volatile anaesthetics and depolarizing muscle relaxants, commonly used in anaesthesia, triggers an abnormally high release of Ca2+ from the sarcoplasmic reticulum [47]. Finally, the discovery of a number of channelopathies has shed new light on the pathogenesis of a wide range of human diseases. Defects in L-type Ca2+ channels resulting in structural aberrations within their pore-forming region lead to a number of neurological disorders [48]. Homozygous expression of Orai1 bearing the R91W mutation results in impairment of SOCE leading to SCID [22]. Defects in cation-permeable members of the TRP channel family have also been involved in human diseases such as hypomagnesemia with secondary hypocalcaemia, mucolipidosis type IV, autosomal-dominant polycystic kidney disease, familial focal segmental glomerulosclerosis and certain forms of cancer [49,50]. The number of Ca2+ signalling dysfunctions underlying human diseases is growing, which highlights the key role that Ca2+ homeostasis plays in cellular physiopathology.
Sensing Ca2+ stores
Intracellular Ca2+ stores not only provide a source for agonist-induced Ca2+ mobilization but also control a major Ca2+ influx pathway in non-excitable cells, SOCE, which is also present in excitable cells. SOCE was identified by Putney in 1986 as a mechanism by which the depletion of the intracellular Ca2+ stores activates Ca2+ entry through SOC channels [51]. The nature of the signal linking the Ca2+ content of the intracellular Ca2+ stores to the PM SOC channels became a matter of intense investigation immediately after the discovery of SOCE. In 2005, Dr. Cahalan's group reported that STIM1, a protein ubiquitously expressed in mammalian tissues, plays an essential role in SOCE and in ICRAC, the best characterized capacitative current (Fig. 2; Refs. [52,53]). The authors used an RNA interference (RNAi)-based screen to identify genes that impair Ca2+ entry evoked by thapsigargin, a specific inhibitor of SERCA that stimulates SOCE, in D. melanogaster S2 cells. Among 170 screened genes, they found that ICRAC was suppressed in STIM knockdown S2 cells. Similarly, knockdown of the human homologue of D. melanogaster STIM1 significantly reduced ICRAC in Jurkat T cells and thapsigargin-evoked SOCE in HEK293 or SH-SY5Y cells [52], suggesting an essential role for STIM1 in the mechanism of activation of SOCE. Later on, the same group reported compelling evidence for a role of STIM1 as an ER Ca2+ sensor by using a STIM1 EF-hand mutant that, being unable to sense the intraluminal Ca2+ concentration, mimics Ca2+ store depletion, initiating translocation and activation of ICRAC [53]. Since 2005, a growing number of studies have provided evidence supporting a role for STIM1 as the ER Ca2+ sensor that communicates information concerning the Ca2+ content of the stores to PM SOC channels, both in transiently or stably expressing cells and in native cells [24, 54-57]. STIM1 has a single TM domain with an EF-hand motif near the N-terminus, which is located in the lumen of the ER. In addition to the canonical EF-hand domain, the intraluminal N-terminus contains a hidden EF-hand motif and a sterile-α motif (SAM) that is important for STIM1 oligomerization (Fig. 2; Refs. [58,59]). The cytosolic C-terminus includes two coiled-coil domains, which overlap with an ezrin-radixin-moesin-like domain, a serine/proline region and a lysine-rich region [60]. In addition, different research groups have identified a cytoplasmic STIM1 region essential for the activation of Orai1, termed the STIM1 Orai-activating region (SOAR; Ref. [61]), Orai-activating small fragment (OASF; Ref. [62]) or CRAC-activating domain (CAD and CCb9; Refs. [63,64], respectively). A decrease in ER luminal Ca2+ concentration results in dissociation of Ca2+ from the EF-hand motif, which, in turn, leads to STIM1 oligomerization and dissociation between coiled-coil domain 1 and SOAR; the positive charges located in the SOAR domain are then free to interact with the acidic domain within the C-terminal region of Orai1 and activate the channel (Fig. 2; Ref. [65]).
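The sensing step can be illustrated with a simple single-site binding calculation (a sketch only; the dissociation constant below is an illustrative assumption in the high-micromolar range described for the low-affinity STIM EF-hand, not a value measured in this review):

```python
def ef_hand_occupancy(ca_um, kd_um=300.0):
    """Fractional Ca2+ occupancy of a single low-affinity EF-hand site."""
    return ca_um / (kd_um + ca_um)

# Luminal Ca2+ dropping from a replete ER towards depletion (values in uM):
for ca in (600.0, 400.0, 100.0):
    print(f"[Ca2+]ER = {ca:4.0f} uM -> occupancy = {ef_hand_occupancy(ca):.2f}")
# Occupancy falls from ~0.67 to 0.25 as the store empties, shifting STIM1
# towards its Ca2+-free, oligomerization-prone conformation that activates Orai1.
```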
The STIM1 homologue STIM2 has a similar structure (Fig. 2). In the presence of Ca2+, the STIM2 EF-SAM domain is monomeric and well-folded, as previously reported for STIM1 EF-SAM; although this domain shows a similar Ca2+-binding affinity in both STIMs, it is more stable in STIM2, which has been suggested to account for the different cellular functions of the two proteins [66]. The function of STIM2 has not been completely elucidated. Early studies reported that, in contrast to the role of STIM1 in SOC activation, STIM2 suppressed this process, interfering with STIM1-mediated SOC activation, as a coordinated mechanism to regulate SOC-mediated Ca2+ entry [67]. Later on, STIM2 was shown to maintain basal cytosolic and ER Ca2+ concentrations and to activate Ca2+ influx upon small changes in luminal ER Ca2+ content [68]. A role for STIM2 in the activation of SOC channels, either in a store-operated mode activated through depletion of ER Ca2+ stores by IP3 or via a store-independent mechanism mediated by cell dialysis during whole-cell perfusion, has been reported [69]. STIM2 has also been reported to play an essential role in SOCE in mouse neurons [70]. STIM isoforms have been widely recognized as the ER Ca2+ sensors [71]. However, we have recently reported that STIM1, and also STIM2, are located in the acidic Ca2+ stores. In human platelets, we detected STIM1 and STIM2 in isolated lysosomal compartments and dense granules. We found association of STIM2 with STIM1, as well as between these proteins and Orai1, upon selective discharge of the acidic Ca2+ stores using the vacuolar H+-ATPase inhibitor bafilomycin A1. Suppression of the association of STIM1 with Orai1 attenuates SOCE controlled by the acidic Ca2+ stores, suggesting a functional role for this interaction in SOCE in human platelets [72].
Importance of Orais and STIMs in tissues
Orai and STIM proteins are almost ubiquitously expressed in human and mouse tissues (Table 1; Refs. [73-76]). In humans, the strongest Orai1 expression has been found in a subset of cells located in primary and secondary lymphoid organs such as thymus and spleen, which is consistent with T cell expression. Other tissues showing Orai1 expression are endocrine and exocrine glands, hepatocytes, skeletal and cardiac muscle, skin, vascular endothelium, cells of the gastrointestinal tract, pneumocytes in the lung and kidney tubules [70,73-75]. Interestingly, Orai1 staining is almost absent in brain, while Orai3 seems to be the only isoform that is strongly expressed in this organ, at least at the RNA level [70,73-75]. Orai3 transcripts are also widely expressed in human tissues, showing a lower abundance in spleen and colon [74,75]. In contrast, Orai2 transcripts are prominently expressed in kidney, lung and spleen (Table 1; Refs. [74,75]). In mouse, Orai transcripts exhibit an expression pattern similar to that in humans (Table 1; Refs. [70,74-76]). A weak expression of Orai1 transcripts was detected in murine testis and brain, while expression was completely absent in cortical neurons [70], indicating that other brain cells might express Orai1. Instead, a strong expression of Orai2 was detected in these cells compared with the weak expression of Orai3, indicating that Orai2 might be the predominant isoform in murine cortical neurons (Table 1; Ref. [70]). STIMs are also ubiquitously expressed in human tissues (Table 1; Refs. [70,77]). STIM1 transcripts are mainly expressed in lymphocytes [78], skeletal muscle, heart, brain, pancreas and placenta, and are almost absent in kidney and lung. STIM2 is strongly expressed in brain, pancreas, placenta and heart, and almost absent in skeletal muscle, kidney, liver and lung (Table 1; Ref. [77]). Despite some variations in STIM isoform abundance in certain tissues, a similar expression pattern was observed in mouse (Table 1; Refs. [56,70,76,79,80]). STIM1 is mainly expressed in murine skeletal muscle, cerebellum, spleen, thymus, lymph nodes and, additionally, platelets, while it is almost absent in brain and completely absent in kidney. STIM2 is mainly expressed in skeletal muscle, liver, spleen and lymph nodes, while it is completely absent in kidney (Table 1; Ref. [70]). Densitometric analysis of protein abundance revealed that STIM2 is the predominant isoform in murine brain, while the ratio of STIM1 to STIM2 abundance is reversed in T cells [70,76,79]. In the brain, the STIM isoforms seem to be distributed separately to certain areas, such as the cerebellum or hippocampus. Given the different properties exhibited by STIM1- and STIM2-mediated ICRAC currents and SOCE [81,82], this separate distribution suggests different mechanisms and requirements of SOCE in these brain areas [76,79].
[Figure 2 legend: Mouse STIM2 shares 92% identity with human STIM2, while mouse STIM1 shares up to 97% identity in the amino acid sequence according to the pairwise alignment generated by BLAST (http://blast.ncbi.nlm.nih.gov/). Their domain structure is also highly conserved. The amino- and carboxyl-termini are represented as N and C, respectively. Coloured boxes represent different domains; numbers above and below the domains indicate their boundaries and the amino acid position. (CC) pair of highly conserved cysteines; (G) glycosylation sites. Boundaries of the EF-hand and SAM motifs in hSTIM were biophysically characterized [66,149,150], while the transmembrane, coiled-coil and Ser/Pro/His/Lys regions were predicted by computer models and Clustal protein sequence alignment (http://www.ebi.ac.uk/Tools/msa/clustalw2/). Boundaries of mSTIM were predicted by Clustal protein alignment.]
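The percent-identity figures quoted in the legend above can, in principle, be reproduced with any pairwise alignment tool. The toy calculation below only illustrates the arithmetic on two made-up, pre-aligned fragments; they are not the real STIM2 sequences, which would come from UniProt/NCBI and a proper aligner such as BLAST.

```python
# Toy percent-identity calculation over two equal-length, pre-aligned
# sequence fragments. The strings are placeholders, not the actual
# mouse/human STIM2 sequences.
mouse_fragment = "MSAEQSLHAKISRE"
human_fragment = "MSAEQSLHAKLSRE"

assert len(mouse_fragment) == len(human_fragment)
matches = sum(a == b for a, b in zip(mouse_fragment, human_fragment))
identity = 100.0 * matches / len(human_fragment)
print(f"Percent identity: {identity:.1f}%")  # 92.9% for this toy pair
```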
[Table 1 legend: Expression levels refer to differences in mRNA or protein abundance across tissues and are not comparable among isoforms. Symbols (garbled in extraction) denote high, medium, low, absent, and unknown/unreported expression. Refs. [74,143].]
Participation of Orai and STIM in human diseases
A few papers published over the last six years have reported patients carrying homozygous point mutations in the Orai1 or STIM1 genes [22,73,78,83-87]. These patients suffered from pathologies that started early in infancy, owing to the absence of functional Orai1 or STIM1 proteins, indicating the participation of altered SOCE in certain human diseases. The prognosis of these patients was poor, with fatal consequences mainly due to immune-response failure unless they were treated with haematopoietic stem cell transplantation, indicating a major role of Orai1- and STIM1-mediated SOCE in cells of the immune system. In contrast, heterozygous carriers of mutated alleles did not present any alteration affecting their normal lives [22,73,78,88-90], indicating that a single wild-type allele is sufficient to sustain functional, although reduced, SOCE. The low frequency of such genetic alterations documented to date, and the severity of losing these proteins, underline the importance of Orai1- and STIM1-mediated SOCE for normal life. Diseases caused by altered Orai2, Orai3 or STIM2 function have not yet been reported in humans. This section aims to highlight the most relevant data from the study of these patients (Table 2), which have been extensively summarized in the following excellent reviews [88,89,91].
Orai1-deficient function and human disease
Different Orai1-mutated alleles were reported by the Rao and Lewis groups in patients presenting a clinical phenotype characterized by an immunodeficiency similar to that observed in SCID patients (Table 2; Refs. [22,83,84]). These alleles carried single point mutations that abrogated Orai1 function, either through expression of a defective Orai1 protein (mutant R91W) [22] or through impaired Orai1 gene expression (mutants A88EfsX25, A103E and L194P; Refs. [73,86,87]). Orai1-deficient patients also suffered from congenital myopathy, chronic pulmonary disease, anhydrotic ectodermal dysplasia and a defect in dental enamel calcification, which initially were not a threat to the patient's life [73,88,89,91]. The most relevant phenotype in Orai1-deficient patients was the severely compromised immune response, similar to that of SCID patients, as a consequence of abrogated SOCE in peripheral T cells, resulting in impaired T cell activation, impaired cytokine production and absent proliferative responses in vitro (Table 2; Refs. [22,73,86-89,91-93]). Impaired SOCE was also observed in B cells, natural killer (NK) cytotoxic cells and fibroblasts isolated from these patients [22,73,88,89,91,94]. Normal immunoglobulin (Ig) levels were found in blood serum despite the absence of SOCE in these cells. However, Orai1-deficient patients failed to mount antigen-specific antibody responses upon vaccination or infection [89,91]. In contrast to most SCID patients, Orai1-deficient patients presented normal numbers of B cells and CD4+ or CD8+ T cells in peripheral blood, indicating normal development of these cells in the absence of Orai1-mediated SOCE [88,89,91].
The absence of Orai1 also led to defects in skeletal muscle, characterized by global muscular hypotonia with decreased head control, delayed ambulation, and reduced muscle strength and endurance (Table 2; Refs. [73,88,89]). The Orai1-R91W mutation resulted in a predominance of type I fibres and atrophic type II fibres in these patients, suggesting a defect in fast-twitch muscle fibre differentiation [73]. This defect might be explained by the requirement for SOCE during differentiation of human myoblasts, the precursors of adult skeletal muscle [95,96]. Chronic pulmonary disease was also reported as a consequence of defective respiratory muscle function [73,88,89,91]. The anhydrotic ectodermal dysplasia in Orai1-deficient patients was characterized by impaired sweat production, which results in dry skin and heat intolerance with recurrent fever [73,88,89,91]. Ca2+ influx upon thapsigargin stimulation is required for secretion in sweat gland cells, indicating an important role of SOCE in sweat gland function [97,98]. The absence of Orai1-mediated SOCE could therefore alter the normal function of sweat glands in these patients.
In summary, the clinical phenotype associated with Orai1 deficiency in patients was limited to certain tissues and associated with impaired SOCE, indicating a predominant role of Orai1-mediated SOCE in a reduced number of cell types and tissues. Despite its severity, the limited clinical phenotype found in these patients contrasts with the wide Orai1 expression across cell types and tissues (Table 1). This could be explained by a minor relevance of SOCE for Ca2+ entry in the unaffected cell types and tissues, or by the presence of additional molecules that might compensate for the lack of Orai1 or play a more relevant role in SOCE regulation [73,88,89,91].
STIM1-deficient function and human disease
Different STIM1-mutated alleles were reported by the Rao and Lewis groups in patients presenting a clinical phenotype very similar to that found in Orai1-deficient patients (Table 2; Ref. [78]). This was expected, because both genes act in the same signalling pathway according to data previously obtained in transgenic mouse [99-101] and in vitro models [6,15,102,103]. Homozygous single point mutations in the STIM1 gene impaired STIM1 function through a lack of STIM1 protein (mutants E136X and 1538-1G>A) [78,85]. As a consequence, SOCE was severely impaired in cells from these patients.
The clinical phenotype was observed very early during infancy and was also limited to certain tissues, similar to those affected in Orai1-deficient patients. It was characterized by immunodeficiency together with autoimmune disease, congenital myopathy and ectodermal dysplasia (Table 2; Refs. [78,85,88,89,91]). SOCE was absent in T, B and NK cells, which led to severely compromised T cell function, defective T cell proliferation and reduced cytokine production. However, normal numbers of these cells were found in peripheral blood, indicating normal cell development in the absence of STIM1-mediated SOCE [78,94]. Ig titres were normal for all subtypes in blood serum; in contrast, strongly reduced IgG titres were found in one patient owing to nephrotic syndrome [78,88,89,91].
[Table 2 fragments: defective Th17 cell function [113]; defective blood platelet function [106,107]; lymphoproliferative disease [99]; defective neuronal function [70]; defective blood platelet function [108,117]. Cancer cells: resistance to apoptosis in Pca cells [140]; arrested MCF-7 cell cycle and proliferation [139]. Diabetes: impaired association of human STIM1 with Orai1, TRPC1 and TRPC6 [103]. Main clinical phenotype (human): SCID-like immunodeficiency [22,83,84]; immunodeficiency [78,85]; global muscular hypotonia [73]; autoimmune thrombocytopenia [78]; chronic pulmonary disease [73]; lymphoproliferative disease [78]; anhydrotic ectodermal dysplasia (impaired sweat production) [73]; ectodermal dysplasia [78]. (Table continued.)]
STIM1-deficient patients also presented lymphoproliferative disease and an autoimmune response against blood platelets, which developed into thrombocytopenia (Table 2). The reduced numbers of regulatory T (Treg) cells found in peripheral blood might explain the immune thrombocytopenia observed in STIM1-deficient patients, because those cells regulate autoimmune responses [78,88,89,91]. Taken together, the severe immunodeficiency observed in STIM1-deficient patients is very similar to that observed in Orai1-deficient patients, with the exception of the autoimmunity and reduced numbers of Treg cells. STIM1-deficient patients also suffered from ectodermal dysplasia and congenital myopathy, similar to those observed in Orai1-deficient patients (Table 2). The myopathy was characterized by non-progressive global muscular hypotonia and partial iris hypoplasia. In contrast to Orai1-deficient patients, histological abnormalities were not observed in the skeletal muscle of STIM1-deficient patients [78,88,89,91].
Orai1 and STIM1 in human diabetic platelets
The contribution of SOCE to platelet activation and the nature of the SOC channels in these cells have remained controversial. Bleeding times in Orai1- and STIM1-deficient patients were only moderately prolonged or normal, and the patients lacked signs of an enhanced bleeding diathesis [73,78,88,89,91]. Recently, we reported reduced SOCE in platelets from type 2 diabetic patients, likely mediated by impaired association of STIM1 with the channel subunit Orai1, but also with hTRPC1 and hTRPC6, which might be involved in the pathogenesis of the altered platelet responsiveness observed in diabetic patients [104].
In summary, the clinical phenotypes found in Orai1- and STIM1-deficient patients indicate that Orai1- and STIM1-mediated SOCE plays very important roles mainly in cells of the immune system, skeletal muscle and some ectoderm-derived tissues such as sweat glands. Orai2, Orai3 and STIM2 co-exist with Orai1 or STIM1, together with other non-SOCE Ca2+ entry pathways, in many other tissues, which might compensate for or minimize the lack of functional Orai1 and STIM1 in the unaffected tissues. So far, no Orai2-, Orai3- or STIM2-deficient patients have been identified. Further study of the Orai1- and STIM1-deficient clinical phenotype might give insights into additional roles of these proteins in other cell types or tissues.
Orai and STIM mutant mouse as models of disease
The mouse has proven to be an invaluable model organism for studying mechanisms of human disease, because mice are very similar to humans in both genetic and physiological terms. Genetically engineered mice lacking the function of known or unknown genes are one of the most efficient ways to reveal gene function in vivo.
Mice lacking Orai1, STIM1 and STIM2 expression have been generated in recent years by a number of laboratories [70,100,101,105-108]. Comparison of the human and mouse Orai1- or STIM1-deficient phenotypes revealed interspecific similarities and discrepancies (Table 2; Refs. [88,89,91]). The analysis of these mouse models, together with the abundant earlier in vitro data, helped to elucidate the cellular function of these proteins and contributed to understanding the clinical phenotype of patients lacking them. The data obtained in these mouse models can also guide further analysis of related human diseases to reveal the underlying mechanisms of pathogenesis.
Sudden and perinatal mortality
Mice lacking the expression of functional Orai1 and STIM1 proteins die during the perinatal and early postnatal periods [56,101,105,107,108].
[Table 2 (continued): summary of the most important molecular and phenotypic alterations in the absence of Orai/STIM function in humans and mice. Mouse: perinatal death [101,107,151]; perinatal death [56,99,100,108]; sudden death [70]; immunodeficiency [101,151]; immunodeficiency [93,94,102]; altered spatial memory [70]; reduced procoagulant activity and thrombus formation [106,107]; reduced muscle cross-sectional area and mitochondriopathy [56]; reduced procoagulant activity and thrombus formation [108,117]. Reviewed in Refs. [88,89,91].]
Starting at 8 weeks after birth, sudden death of STIM2-deficient mice was observed, and only ~10% of the animals reached the age of 30 weeks [70]. The precise cause of death is unclear in all cases. In contrast, spontaneous abortion, perinatal mortality or early neonatal death was not reported among the families of Orai1- and STIM1-deficient patients [88,89,91]. However, the low number of patients identified so far makes it impossible to determine the prevalence of perinatal mortality in these cases. Indeed, the data obtained from these mouse models indicate that altered function of Orai1, STIM1 or STIM2 could be a potential determinant of sudden, perinatal or early postnatal mortality in humans, which might be important to investigate.
Immunodeficiency
In line with the phenotype observed in STIM1- and Orai1-deficient patients, TCR-dependent and -independent T cell activation, as well as B cell activation, was severely impaired, while the numbers of T and B cells were normal in the bloodstream of Orai1- and STIM1-deficient mice (Table 2; Refs. [78,86,87,92]). Analysis of the primary and secondary lymphoid organs of mutant mice suggested a possible explanation. Normal numbers of T and B cells were found in murine thymus and bone marrow [99-101,105], which indicates unaltered T cell development. This finding was surprising, because TCR-induced Ca2+ signals and SOCE have been considered necessary for T cell development [109,110]. Further analysis of T cell development in these murine models will be crucial to clarify this point. Cytokine expression was substantially reduced in Orai1-deficient patients [92,93], similar to the multiple cytokine expression defect found in T cells from Orai1- and STIM1-deficient mice, which involved reduced interleukin (IL)-2, interferon-γ (IFN-γ), IL-4 and IL-10 production [100,101]. Deeper analysis of these mice suggested, as a possible explanation, an impaired SOCE-dependent nuclear translocation of the transcription factor NFAT, which is necessary for cytokine production [89,91,93,100,111]. In addition, impaired development of functional CD4+ Foxp3+ Treg cells was observed in STIM1-deficient patients [78] and in STIM1/STIM2 double-deficient mice [78,100]. Further analysis of mutant mice offered a potential explanation [100,111]: the absence of both STIM1 and STIM2 in naive T cells abrogated the sustained Ca2+ influx required for nuclear translocation of NFAT, which, in turn, impaired NFAT-dependent induction of Foxp3 expression and the formation of an NFAT/Foxp3/DNA-binding complex. This complex has been proposed to be important for the initiation of Treg differentiation and the regulation of Treg function [89,91,100,111]. Finally, STIM1-deficient patients and STIM1/STIM2 double-deficient mice developed an autoimmune, lympho-myeloproliferative phenotype characterized by hepatosplenomegaly and lymphadenopathy [78,100]. Beyersdorf et al. also reported a lymphoproliferative disease in STIM1-deficient mice [99]. This phenotype was prevented when wild-type Treg cells were transferred into STIM1/STIM2 double-deficient mice, indicating that the lympho-myeloproliferative disease is mainly caused by decreased Treg regulatory function. In agreement with this, Orai1-deficient patients and mice showed normal numbers of Treg cells, and signs of autoimmunity and lymphoproliferation were not observed [22,101]. This might be explained by the residual SOCE detected in their T cells, which could allow normal Treg differentiation and regulatory function. This residual SOCE is presumably mediated by other SOC channels expressed in T cells, such as Orai2 or Orai3 [22,88,89,91,101].
Autoimmune and inflammatory diseases
Human patients lacking STIM1 expression presented autoimmune haemolytic anaemia (AIHA) and thrombocytopenia [78], probably produced by an autoimmune response and functional macrophage-mediated phagocytosis of red blood cells and platelets. In contrast, STIM1-deficient mice injected with auto-antibodies against platelets or red blood cells were protected from thrombocytopenia and anaemia, which might be explained by the severely compromised Fc-gamma receptor (FcγR)-mediated SOCE and the abrogated function of STIM1-deficient macrophages and Kupffer cells observed in these mice [112]. These results indicate interspecific differences in STIM1 function in macrophages: STIM1 does not seem to be essential for the FcγR-mediated response in humans, while its absence severely impairs macrophage function in the mouse [88,89,91].
The analysis of Orai1 and STIM deficiency in murine models has already revealed additional potential roles of these proteins in autoimmune and inflammatory responses. A crucial function of STIM1 and STIM2 as regulators of autoreactive T cell activation has been reported in a murine model of myelin-oligodendrocyte glycoprotein (MOG(35-55))-induced experimental autoimmune encephalomyelitis (EAE) [113]. STIM1 deficiency significantly impaired autoimmune responses mediated by Th1/Th17 cells against neuronal tissue in vivo, resulting in complete protection from EAE. Mice lacking STIM2, in turn, developed an ameliorated EAE disease. STIM2 deficiency was associated with reduced IFN-γ/IL-17 production by neuroantigen-specific T cells, which might explain the reduced clinical peak at early stages of the disease [113].
On the other hand, mast cells derived from Orai1-deficient mice showed severely impaired SOCE, degranulation and cytokine secretion upon Fc-epsilon receptor I (FcεRI) stimulation, and allergic reactions elicited in vivo were inhibited in these mutant mice [105]. Taken together, these findings establish Orai1 and STIM as attractive new molecular therapeutic targets for the treatment of inflammatory and autoimmune disorders. Indeed, beyond their roles in SOCE, further relevance of Orai and STIM proteins to autoimmune and inflammatory diseases may well emerge in the near future.
Skeletal muscle
The skeletal muscle defect in mice matches the congenital myopathy observed in Orai1- and STIM1-deficient patients (Table 2; Refs. [22,78]). In addition, haematopoietic stem cell transplantation corrected the immunodeficiency in surviving STIM1-deficient patients, who nevertheless still exhibited muscular hypotonia [78], suggesting that the myopathy is not secondary to autoimmunity. STIM1-deficient mice showed reduced muscle cross-sectional area and mitochondriopathy [56]. The mechanism by which abrogated SOCE contributes to the pathogenesis of these myopathies is unclear, but it most likely involves short-term Ca2+ responses such as muscle contraction, altered Ca2+-dependent signalling pathways leading to altered gene expression (such as NFAT-dependent gene regulation), disorders of metabolism and adverse remodelling [56,88,89,91]. Contraction of skeletal muscle fibres requires Ca2+ release from the sarco/endoplasmic reticulum (S/ER) through RyR (reviewed in Ref. [114]). The absence of STIM1 abrogated SOCE, impaired refilling of the S/ER and conferred reduced tetanic force and increased susceptibility to fatigue in adult STIM1-deficient mice [56]. Thus, STIM1 was required to refill the internal S/ER Ca2+ stores of myofibres subjected to repeated stimulation and increased motor nerve stimulation [56]. Although distinct mechanisms control myogenesis and muscle formation, additional studies reported that postnatal myogenesis critically relies on RyR-mediated store depletion [115] and Ca2+ influx through SOCE [95,96,116]. Therefore, STIM1 might also be required to refill internal S/ER Ca2+ stores in response to signals associated with muscle development, and the absence of STIM1 function could lead to defective muscle differentiation [88,89,91].
Thrombosis and haemostasis
Studies in Orai1-and STIM1-deficient mice models showed that Orai1-and STIM1-mediated SOCE are essential for platelet activation, glycoprotein VI-and thrombin-dependent procoagulant activity in vitro and thrombus formation in vivo [106][107][108]. However, bleeding times were normal or moderately prolonged in Orai1-and STIM1-deficient patients, and they lacked signs of an enhanced bleeding diathesis [73,78,91,117]. Similar results were observed in the Orai1 mutant R93W knock-in (similar to the mutant R91W Orai1 gene in humans), Orai1-and STIM1-deficient mice after mechanical injury [106][107][108]. However, murine Orai1-and STIM1deficient platelets were unable to form stable thrombus in mice and failed to promote artery occlusion after chemical injury in arterial walls. These mutant mice were in turn significantly protected against ischaemic brain infarction or pulmonary thromboembolism [107,108]. These results established therefore, an important role of STIM1 and Orai1 in mechanisms underlying arterial thrombosis, but not haemostasis upon mechanical injury in mice. The impairment in thrombus formation can be partially explained by the reduced glycoprotein VI-and thrombin-dependent surface exposure of phosphatidylserine (PS) observed in these mutant mice, which accomplishes platelet procoagulant activity [117]. Despite the fact that STIM2 is expressed in these cells, STIM2-deficient platelets did not show defects in SOCE, procoagulant activity or thrombus formation [117]. The absence of studies concerning procoagulant activity in Orai1-and STIM1-deficient patients makes impossible to confirm the presence of similar mechanistic differences in humans, but gives a clue for further investigation in human platelets. Taken together, the results obtained in mice establish STIM1 and Orai1 as an important mediator in the pathogenesis of ischaemic cardio-and cerebrovascular events and potential targets for the design of novel anti-thrombotic therapies.
Neuronal system
SOCE is a major mechanism for Ca2+ influx in non-electrically excitable cells. However, reports of SOCE and of Orai and STIM function in electrically excitable cells such as skeletal muscle cells and neurons, obtained in genetically engineered mice and in vitro models, have offered an expanded view of the function of SOCE in cell physiology. Orai1- and STIM1-deficient patients did not show an altered cognitive or neuronal phenotype [22,73,78,88,89,91], which matches the low expression or specific localization of these proteins reported in human neuronal tissues (Table 1; Refs. [79,118,119]). This might indicate a minor function of Orai1- and STIM1-mediated SOCE in neuronal physiology. SOCE has also been observed in neuronal cells [10,120,121], and STIM2 is the predominant isoform in murine cortical neurons [70]. STIM2-deficient neurons isolated from mutant mice showed severely abrogated SOCE and decreased basal Ca2+ levels in the cytosol and intracellular Ca2+ stores [70]. In contrast to what is observed in cells of the immune system, no significant changes in SOCE were reported in murine Orai1- and STIM1-deficient neurons [70]. These data suggest that STIM2 is the main mediator of SOCE in these cells. STIM2-deficient mice showed impaired spatial learning similar to that observed after blockade of NMDA ionotropic glutamate receptors [70,122], which might be related to altered neurotransmitter release and synaptic plasticity [123]. Therefore, altered STIM2 function might be expected in some patients showing familial forms of mental disorders affecting cognitive functions, for instance memory processing. Moreover, STIM2 deficiency protected mice from neuronal damage after cerebral ischaemia, similar to the protection observed in STIM1- and Orai1-deficient mice [70]. However, while STIM1 and Orai1 deficiency conferred protection through deficient platelet activation and impaired thrombus formation, which abrogated cerebral artery occlusion [107,108], the lack of STIM2 conferred protection to neurons against ischaemic neuronal death, which prevented ischaemic brain damage [70]. Cytotoxic Ca2+ overload of the cell is considered the main factor in neuronal death under ischaemic conditions. The existing literature describes a reduction of SERCA re-uptake [124,125] and active Ca2+ release from the ER through IP3R and RyR channels associated with the increased intracellular Ca2+ levels observed under these conditions. Such Ca2+ release from the ER is crucial for cellular Ca2+ damage, as evidenced by the protection of neurons against excitotoxic injury through blockade of IP3R or RyR [126,127]. These events might lead to store depletion and Ca2+ accumulation in the cytosol, the former inducing an additional Ca2+ load into the cytosol via SOCE. SOCE may, in turn, increase the release of glutamate and trigger additional Ca2+ influx by activation of ionotropic glutamate receptors [128]. Both SOCE and glutamatergic Ca2+ entry might rapidly push the cytosolic Ca2+ concentration to damaging levels. In the same line, STIM2-deficient neurons might be less sensitive to apoptosis owing to the absence of SOCE and the lower Ca2+ content observed in the cytosol and in the intracellular stores, which critically depends on functional SOCE [10,70]. The decreased store content could limit the initial Ca2+ release and might help to better utilize the remaining Ca2+ sequestration capacity of SERCA during the ischaemic event.
It is not clear why neurons use STIM2 instead of STIM1 to regulate SOCE, probably because of different requirements in terms of Ca2+ influx dynamics, which might depend on the cell type. Indeed, SOCE and ICRAC currents exhibit different properties depending on which STIM isoform regulates the process [81,82]: STIM1 enhances both Orai1-mediated SOCE and constitutive coupling to activate Orai1 channels, while STIM2 attenuates Orai1-mediated SOCE and drastically slows store-induced Orai1 channel activation. Additional studies have reported a predominant function of STIM2 in other tissues [129,130]. Knockout models for other proteins related to SOCE also suggest an important role of this mechanism in neuronal function. For instance, the absence of PLCβ1 led to epileptic-type seizures in mice [131], indicating an involvement of PLCβ1 in the development and control of brain inhibitory pathways. IP3R type I null mice exhibited severe neurological symptoms, including ataxia and epilepsy [132]. This body of evidence, together with STIM2 function in neurons, suggests an unexpectedly important role of SOCE in electrically excitable cells such as neurons. These findings may serve as a basis for the development of novel neuroprotective agents for the treatment of ischaemic stroke and other neurodegenerative disorders in which disturbances in cellular Ca2+ homeostasis are considered a major pathophysiological component [133,134].
In summary, despite certain interspecific discrepancies in the Orai1- and STIM1-deficient phenotypes, mouse models have proven important for understanding Orai1 and STIM1 function in cell physiology and disease, and they are suitable for investigating novel therapies that seek to modulate SOCE for the treatment of disorders related to disturbances in cellular Ca2+ homeostasis. Certainly, more functions of SOCE will emerge from the study of Orai1- and STIM-deficient mice in the future.
Emerging studies of Orai and STIM in cancer and cell cycle
As mentioned earlier, SOCE mediated by STIM/Orai proteins is a ubiquitous pathway that controls a variety of important cell functions. Initial studies considered STIM1 a molecule involved in growth arrest and degeneration in the human G401 and RD cancer cell lines, suggesting a role in the pathogenesis of rhabdoid tumours [135,136]. However, the discovery of its function as a Ca2+ sensor in the ER eclipsed further studies in this field. Current evidence supports a role for STIMs and Orai1 in cell proliferation, with some differences depending on the cellular model investigated. In endothelial cells, knockdown of STIM1, STIM2 or Orai1 attenuated cell proliferation and induced cell cycle arrest at the S and G2/M phases [54]. However, in HEK293 cells STIM1 has no role in cell proliferation, while silencing of Orai1 or STIM2 using siRNA resulted in SOCE inhibition and an increased cell population doubling time, suggesting that Orai1 and STIM2 are important for proliferation in these cells [82].
In addition to the involvement of SOCE in the regulation of cellular functions, emerging evidence suggests the involvement of the STIM/Orai pathway in certain types of cancer. A recent study showed that STIM1 gene expression is regulated by potential oncogenes such as Wilms tumour suppressor 1 (WT1) and early growth response 1 (EGR1) in human G401 rhabdoid tumour cells, thereby providing a molecular link between Ca2+ signalling and cancer [137]. WT1 and EGR1 protein can bind to putative regulatory elements located upstream of the STIM1 gene, and overexpression of WT1 or down-regulation of EGR1 induced both reduced STIM1 expression and decreased SOCE [137]. Trebak's group reported differences in SOCE and ICRAC between estrogen receptor-positive [ER(+)] and estrogen receptor-negative [ER(−)] breast cancer cell lines: in ER(+) breast cancer cells, capacitative currents require STIM1/2 and Orai3, while SOCE in ER(−) breast cancer cells is mediated by the STIM1/Orai1 pathway [138]. In addition, isolated breast cancer tumours whose cells displayed higher STIM1/STIM2 ratios had a significantly poorer prognosis [129]. Orai3 expression has been reported to be higher in breast cancer tissues and the MCF-7 breast cancer cell line than in normal tissues or mammary epithelial cell lines, providing evidence for a significant effect of Orai3 on breast cancer cell growth [139]. In support of this hypothesis, down-regulation of Orai3 by siRNA has been reported to attenuate MCF-7 cell proliferation and arrest the cell cycle at the G1 phase [139]. Another study suggested that the resistance to apoptosis shown by human androgen-independent prostate cancer (Pca) cells is associated with their decreased Orai1 expression and SOCE [140]. Overexpression of Orai1 re-established SOCE and restored the normal rate of apoptosis in these cells, indicating a critical role of down-regulated Orai1 function in the establishment of apoptotic resistance in Pca cells [140]. The involvement of components of the SOCE pathway in cancer highlights a possible role of STIM/Orai as therapeutic targets in cancer therapy.
Concluding remarks
Great advances in the understanding of SOCE have been made over the last years. The discovery of the Orai and STIM isoforms as essential players in SOCE, a mechanism in which the participation of TRPC proteins and IP3R has also been described (Refs. [141,142]; Fig. 3), helped to unravel the function of this mechanism in cell physiology. The phenotypic analysis of patients lacking these proteins showed a major function of Orai1- and STIM1-dependent SOCE in cells of the immune system, skeletal muscle and some ectoderm-derived tissues such as sweat glands and teeth. Interestingly, cardiomyopathies were not reported in these patients, indicating a more prominent role of Orai1- and STIM1-dependent SOCE in skeletal muscle fibres than in cardiomyocytes. Studies in Orai1- and STIM1-deficient murine transgenic models performed in parallel complemented our knowledge of the mechanisms underlying disease in the absence of these proteins. The similar phenotypes found in mouse and humans indicate that transgenic models could be suitable for investigating novel therapies based on Orai, STIM and SOCE modulation. In addition, these models provided insights into new functions of SOCE in other tissues and pathological events, such as ischaemic stroke and autoimmune diseases. The in vivo roles of the homologues Orai2, Orai3 and, to a lesser extent, STIM2 are still unclear, as they have overlapping functions with their respective isoforms in vitro. A major role for STIM1 and Orai1 in all tissues is unlikely, given the presence of SOCE in many cell types that did not show altered function in patients or mice lacking functional Orai1 and STIM1. Current evidence points to these molecules as new therapeutic targets, especially for immune disorders, severe T cell-dependent inflammatory diseases and cancer. Existing studies revealed that, compared with current treatments (FK506, CsA and OKT3), Orai1 inhibitors could have the potential for higher efficacy without the need for expensive and side-effect-prone co-administration of additional immunosuppressants such as glucocorticoids (reviewed in Ref. [143]). However, the presence of immune-unrelated pathologies in Orai1- or STIM1-deficient patients and the ubiquitous expression pattern of these molecules are issues that still have to be addressed for complete validation of these proteins as suitable therapeutic targets. In addition, members of the TRPC family were recently found to interact directly or indirectly with both STIM1 and Orai1 [55,144] (reviewed in Ref. [145]), indicating that such TRPC members could participate as SOCE components, or that Orai and STIM1 could also be involved in the regulation of TRPC-dependent Ca2+ entry [144]. Because TRP channels are involved in a variety of physiological processes, such as stress responses to noxious stimuli or thermo- and vasoregulation [146] (reviewed in Ref. [147]), the possibility that Orai or STIM inhibitors could elicit significant unwanted side effects through co-inhibition of other Orai- or STIM-interacting channels must be addressed as well [143].
[Figure 3 legend (fragment): ... of the Ca2+ compartments to the store-operated channels in the plasma membrane, mostly consisting of Orai subunits and TRPC subfamily members; the latter have been reported to associate with IP3Rs, which regulate both Ca2+ release and entry [141,142]. ER: endoplasmic reticulum; ERM: ezrin/radixin/moesin motif; SAM: sterile alpha motif; CIRB: calmodulin- and IP3-receptor-binding region.]
Magnetic resonance imaging with optical preamplification and detection
Magnetic resonance (MR) imaging relies on conventional electronics that is increasingly challenged by the push for stronger magnetic fields and higher channel count. These problems can be avoided by utilizing optical technologies. As a replacement for the standard low-noise preamplifier, we have implemented a new transduction principle that upconverts an MR signal to the optical domain and imaged a phantom in a clinical 3 T scanner with signal-to-noise comparable to classical induction detection.
Magnetic resonance (MR) imaging is a well-established and non-invasive tool for routine clinical diagnostics and basic research. Its utility would benefit from enhanced signal-to-noise and acquisition speed, the key performance parameters for MR imaging, and these are the focus of considerable research efforts. Two trending developments are higher magnetic fields and more detection coils in arrays. Yet most technical innovation relies on the same principle of matching a detection coil to a low-noise preamplifier in the receiver chain of the scanner [1], similar to the diagram in Fig. 1 (lower right), a common practice that testifies to the extraordinary performance of conventional electronics obtained through decades of optimization. However, the electronics is continuously challenged by the ever-increasing magnetic field strengths in MR imaging [2]. And with the push for more coils, the standard approach leads to problems like cross-talk and electrical interference between the individual preamplifiers and cables for each coil, not to mention that dense arrays demand a miniaturization of the corresponding circuits [3]. These issues may be alleviated by leveraging optical communication and sensing. In particular, signal transmission over optical fiber is low-loss, compatible with high magnetic fields, immune to electrical interference, and considered MR safe [4].
Converting the MR signal into an optical modulation has been done before [5], and is even commercially available (Philips dStream, for instance), but those approaches still use the standard electrical preamplifier first and then convert the amplified signal. All-optical MR spectroscopy and imaging have also been done with magnetometers based on atomic vapor cells [6-8] and nitrogen-vacancy centers in diamond [9,10]: the vapor cells feature a high fundamental sensitivity and bandwidth for low-field MR, but standard induction detection with similar dimensions reaches a higher sensitivity for frequencies above 50 MHz [11]; the diamond-based magnetometers feature a spatial resolution on the order of nanometers, but they are not ideal for imaging larger objects. In contrast, this work uses an optical detection scheme based on a micro-electro-mechanical system that is compatible with industry-standard induction detection. Unlike the mechanical detection used in MR force microscopy [12], we replace the conventional electrical preamplifier with a transducer [13-15] that up-converts the MR signal onto an optical carrier. The signal can then be analyzed at the other end of a long and low-loss optical fiber, thus moving the receiver electronics away from the coils and the high magnetic field near the scanner. The idea is sketched in Fig. 1, where a signal induced in an MR coil is combined with a radio-frequency (RF) bias, and together they cause a micro-mechanical element to vibrate which, in turn, modulates the amplitude of reflected laser light. This interaction between signal and bias acts similarly to the magnetic pump-field in MR force microscopy [16]. The transduction scheme can have a low noise-temperature [13], can target any signal frequency, is compatible with telecom optical wavelengths, and has been successfully used to detect a nuclear magnetic resonance signal [14,17]. With proper circuit design and protective steps, we here show for the first time that it can be integrated into a commercial, clinical MR scanner to acquire and reconstruct an MR image.
Results
Basic methods. Figure 1 shows a simplified version of our detection setup; a detailed diagram is in the methods section. Inside an MR imaging scanner (3 T GE MR750), a typical MR RF coil formed a resonant circuit together with a mechanically compliant capacitor, the transducer. The circuit was tuned to resonate at the Larmor precession frequency of the nuclear spins aligned along a strong magnetic field. In our case, the setup specifically involved 13C nuclear spins and a 3 T magnetic field, i.e. a precession at 32 MHz. The scanner applied an RF pulse to excite the nuclear precession (the transmit pulse). Furthermore, it applied gradient magnetic fields that enable selective excitation of atoms and encode their phase through spatially dependent time evolution. As the spins precess, they create an oscillating magnetic flux through the coil, thus inducing a voltage in the RF coil. That voltage is usually electronically amplified and processed to generate the MR image. Our new development consists of replacing the standard preamplifier with an optomechanical transducer that converts the MR signal to an amplitude modulation of laser light. The transducer was connected directly in parallel to the coil and consisted of a freely suspended membrane, equivalent to a tiny drum-skin, that simultaneously constitutes one side of a capacitor and one mirror of an optical cavity. Charges on the capacitor exert a force on the membrane and cause it to move; this motion then changes the capacitance and thus also the charges that affect the motion. Hence, the mechanical and electrical resonances are parametrically coupled [16]. As the MR signal induces charges on the capacitor, the resulting motion proportionally changes the reflection of light from the optical cavity. This transduction is most sensitive when the membrane is driven near resonance, which a signal at any frequency can do in unison with an RF bias if the beatnote between the signal and the bias is at the membrane's resonance [18]. In our case, that meant the bias frequency had to be the difference between the Larmor precession and the membrane resonance, at 32 MHz and 1.4 MHz respectively. Note that the bias and signal are non-degenerate, which means interference and cross-talk between them can be removed through filtering, and the mechanical response helps with that.
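As a sanity check of these numbers, not spelled out in the text: the force on the membrane is quadratic in voltage, so the product of signal and bias contains a difference-frequency term that can drive the membrane, which fixes the bias frequency. The quoted 32 MHz also follows from the textbook reduced gyromagnetic ratio of 13C (about 10.71 MHz/T, a standard value assumed here).

```latex
% Mixing of signal (f_s) and bias (f_b) produces a difference-frequency
% force component that can drive the membrane at its resonance f_m:
\[
  \cos(2\pi f_s t)\,\cos(2\pi f_b t)
  = \tfrac{1}{2}\cos\!\big(2\pi (f_s - f_b)\,t\big)
  + \tfrac{1}{2}\cos\!\big(2\pi (f_s + f_b)\,t\big),
\]
\[
  f_b = f_s - f_m = 32~\mathrm{MHz} - 1.4~\mathrm{MHz} = 30.6~\mathrm{MHz}.
\]
% Consistency check of the quoted Larmor frequency for 13C at 3 T:
\[
  f_s = \bar{\gamma}\,B_0 \approx 10.71~\mathrm{MHz/T} \times 3~\mathrm{T}
      \approx 32~\mathrm{MHz}.
\]
```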
After the transduction, we downconverted the detected signal from optical modulation to the RF domain simply by measuring the reflected optical power with a detector. Although this output was frequency-shifted to the mechanical frequency, it corresponded to the voltage induced in the coil by the MR sequence and was consequently sufficient to reconstruct the MR image through post-processing. However, we chose, out of convenience, to leverage the scanner's dedicated imaging software and fed the transduced signal back to the scanner's receiver channel. That required the signal to be shifted back to the signal frequency expected by the scanner, which we did by mixing the transducer signal with the RF bias; see the methods section for details.
The transducer's membrane was circular and made of alumina and aluminum under tensile stress, attached on top of a partially reflective mirror; see the methods section and Ref. [15] for details. Because aluminum reflects, the membrane and mirror formed an optical cavity with a length determined by the cleanroom fabrication procedure. We designed the stack such that the fixed wavelength of our laser was about one cavity linewidth (half width at half maximum) away from the resonance of the optical cavity. Laser light was routed to the sample by an optical network and coupled directly from fiber into the cavity through focusing optics. The same fiber also collected the reflection, and the optical network then routed the signal to a detector. To electrically connect the transducer and the circuit, we mounted and wirebonded the membrane chips to an 8-pin integrated-circuit socket. Because the transducer needs vacuum to operate, we placed the assembly inside a vacuum chamber made of glass-fiber that could hold a pressure below 1 × 10⁻³ mbar throughout the measurement series; see the methods section. The chip and the circuit were then connected with a short RF cable via a vacuum feedthrough.
[Figure 1 caption: In the sketch of the detection scheme, an external frequency bias powers the upconversion to light, while optical fibers route the light from laser to transducer (inset, lower left) and from transducer to analysis. The coil can be any standard RF coil used for MR detection. In the standard MR detection scheme (lower right), the RF coil is matched to a low-noise preamplifier; a DC voltage source powers the amplifier, and electrical cables carry the amplified signal out of the scanner for further processing.]
We protected the transducer from electrostatic discharges with a switch (see Fig. 1, lower left) that could short-circuit the capacitor pins. This was particularly important whenever we connected the transducer to circuitry or transported it. In addition, we took several protective steps beyond Refs. [13-15,17,19] to prevent the membrane from collapsing during the transmit pulse. First, two crossed diodes were added in parallel to the transducer to short it if the induced voltage exceeded their threshold. Such protection is a common way to protect standard preamplifiers, but it did not sufficiently protect the transducer on its own. It also, unfortunately, limits the RF bias that can be applied, but we could nevertheless obtain satisfactory transduction performance. Second, the detection resonance was detuned to minimize the voltage and current induced in the coil by the transmit pulse. This is another standard technique in MR imaging [1], and the scanner supplied the trigger signal that controlled the detuning. For this purpose, the coil loop had a segmenting capacitor and additional components that together created a tank circuit when the trigger exceeded a PIN diode's forward-voltage threshold. The full circuit diagram is described in the methods section. Third, the RF bias was switched off by pulse-modulating the bias amplitude with the same trigger that detuned the circuit. The full setup is described and depicted in the methods section. Note that the RF bias and spin-flip pulse overlap a little because the long (8 m) RF cables to and from the scanner add a time delay (~100 ns), but this was not a problem with everything else in place.
In its first implementation, the transduction scheme had problems with excess noise, which we addressed in the following ways. First, the RF bias was filtered twice to reduce its sideband noise at the detection frequency: once through an external filter and then through a filter built into the detection circuit. Such sideband noise can limit sensitivity [14,17]. Second, there was considerable added noise when flexible cables connected the setup inside the scanner room with the setup outside. We believe the poor shielding of the cables allowed ambient noise to leak into the setup; using semi-rigid cables with better shielding instead reduced the noise significantly. Third, electrical noise at the membrane frequency could drive the mechanical motion because our sample had trapped charges [15]. We cancelled these out with a DC bias on the transducer, which gave a significant reduction in the overall transducer noise. With those noise-eliminating steps in place, we obtained the data shown in Fig. 2 with the optomechanical transduction.
MR image data. We imaged a phantom (Fig. 2a) consisting of a bottle filled with ethylene glycol (purity 99.8%; 13C at natural abundance, 1.1%; a triplet with J_CH of 142 Hz). The recorded image (Fig. 2c) shows the spatial density of 13C atoms in a cross-section through the bottle. As expected, the image does not have a uniform signal-to-noise throughout the volume, because we used a surface coil, for which the detected signal decays with increasing distance from the coil (Fig. 2b). This effect is not corrected in the image processing.
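For reference, the quoted J_CH implies the usual 1:2:1 carbon triplet, with adjacent lines separated by the coupling constant (the n+1 multiplet rule for coupling to two equivalent protons). A minimal sketch of the expected line positions, with frequency offsets relative to an arbitrary carrier, follows.

```python
# Expected 13C triplet of ethylene glycol: coupling to two equivalent
# protons splits the carbon line into three components with 1:2:1 weights.
J_CH = 142.0   # coupling constant in Hz (from the phantom description)
f0 = 0.0       # carbon resonance offset in Hz (arbitrary reference)

lines = [(f0 - J_CH, 1), (f0, 2), (f0 + J_CH, 1)]
for freq, weight in lines:
    print(f"offset {freq:+7.1f} Hz  relative intensity {weight}")
```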
To each image voxel corresponds a spectrum obtained from the spin precession (Fig. 2d). For ethylene glycol, that spectrum has a characteristic triplet, as shown in the reference measurement (Fig. 2d), obtained with a commercial coil but with the same phantom and scanner. The spectrum obtained with the transducer (Fig. 2e) shows the same triplet. Additionally, the two detection schemes have different background noise: for the standard electronic amplifier the noise is flat in a broad window, but for the transducer it has a distinctive narrow Lorentzian lineshape plus an offset (Fig. 2e).
We believe the Lorentzian feature comes from the membrane's spectral response, because the Lorentzian peak frequency and linewidth changed with the bias power, as we expect from the electromechanical interaction [13,18]. Additionally, moving the bias frequency also shifted the detected signal with respect to the noise peak and scaled its amplitude following the peak shape. This is the expected behaviour if the MR signal drives the membrane's motion, and it therefore further corroborates that the spectral shape of the noise is related to the mechanical motion. Furthermore, the transducer's flat noise background scales with optical power, which suggests that it originates from amplitude noise of the laser, most likely due to shot noise of light. Finally, we have not found any other spectral feature in the circuit response, nor any spurious noise in the setup, that is consistent with the observed Lorentzian peak.
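A minimal model of this noise background, assuming the Lorentzian-plus-offset form described above, is sketched below; the peak sits near the 1.4 MHz mechanical resonance, while the linewidth and amplitudes are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

def noise_psd(f, f_m, gamma, a, b):
    """Lorentzian peak from the driven membrane plus a flat optical
    (shot-noise) floor. f_m: peak frequency, gamma: half width at half
    maximum, a: peak amplitude, b: flat offset."""
    return a / (1.0 + ((f - f_m) / gamma) ** 2) + b

# Illustrative numbers only; the paper quotes the mechanical resonance
# near 1.4 MHz but not the linewidth or noise amplitudes.
f = np.linspace(1.39e6, 1.41e6, 1001)
psd = noise_psd(f, f_m=1.4e6, gamma=200.0, a=1.0, b=0.1)
```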
Discussion
In summary, we have implemented MR imaging with direct optical detection and amplification of the MR induction signal, thus bringing the technical benefits of optical signal processing to the receiver chain of an MR scanner. To the best of our knowledge, this work represents the first successful implementation of direct electro-mechano-optical transduction in an MR system. We clearly see the expected image and spectrum for our phantom and coil, and a noise background compatible with the mechanical motion's Lorentzian linewidth plus an offset from optical noise. The transducer's spectrum shows a signal-to-noise comparable with a commercial system, although still not as high, and a transduction bandwidth that is narrow compared with standard preamplifiers. However, both features can be improved greatly with straightforward steps such as increasing the RF bias amplitude, reducing the capacitance in parallel with the transducer, and especially decreasing the membrane-capacitor gap [14]. The noise can be reduced by cooling the system, increasing the mechanical quality factor, and using a lighter membrane [17]. Bandwidth can be increased by using multiple mechanical modes [19]. Improvements of the optical cavity will also benefit performance. Incidentally, our sample had a suboptimal cavity length for the specific bias voltage, so we can expect to reach better sensitivity using the present platform with some fabrication optimization. Note that we deliberately reduced the signal-to-noise slightly by upconverting the detected signal in order to send it back to the scanner; this step was not necessary, but convenient.
Our future work will aim to address the trapped charges in transducer fabrication in order to eliminate the DC bias, and to address the transducer's need for vacuum with cleanroom packaging. The required vacuum should be achievable with available techniques [20]. Furthermore, while the RF bias is necessary to power the transduction scheme, supplying it with cables is not a necessity. The bias could instead be delivered wirelessly [21], potentially by the same coil that generates the transmit pulses, and collected by the RF coil. Such a scheme would eliminate all electrical connections to the RF coil, leaving the optical fiber as the only physical connection to the circuitry. This would enable a great increase in the density of elements in MR array coils, avoiding altogether the performance and safety issues that stem from the large number of electrical connections present in MR arrays based on current technology.
Methods
Transducer fabrication. We fabricated the transducer in a cleanroom using standard techniques and a process described in detail elsewhere [15]. Basically, the starting substrate was a fused silica wafer, 100 mm in diameter, with a partially reflective dielectric mirror on top. Alumina layers both protected the mirror and supported the membrane, while aluminum defined the top and bottom electrodes of the membrane-capacitor. The membrane consisted of ~70 nm alumina and ~90 nm aluminum. We added tensile stress to the aluminum by annealing the wafers before releasing the membrane; alumina acquired its tensile stress during deposition. A sacrificial silicon nitride layer lay between the electrodes, with a thickness designed to realize the desired cavity length; this layer also determines the gap between the capacitor electrodes. At the end of the fabrication, the layer is etched away to release the membrane.
The bottom electrode contained a hole, aligned to the center of the membrane, through which light could pass. Laser light was coupled into the cavity through this hole using a gradient refractive-index lens attached to the backside of the transducer chip. We achieved optical alignment by first centering the membrane with respect to a silicon chip attached to the backside of the transducer chip. The silicon die had a large, square, centered hole that guided the lens and ensured alignment to the cavity. Light was supplied through a fiber pigtail terminated in a glass ferrule, with the ferrule and lens aligned by a glass tube that snugly fitted both. By changing the distance between ferrule and lens, we maximized the light delivered by the fiber, focused through the lens, reflected from the sample, and collected back into the input fiber. Finally, all components were fixed permanently with glue.
Circuit. The full circuit used to detect the imaging signal is shown in Fig. 3a and 3b. The RF coil was a flat spiral coil with four windings and an outer diameter of 50 mm, wound with a 1.6 mm diameter silver wire. Its inductance was estimated to be 490 nH from simulation. A segmenting capacitor was inserted [22] to form a trap circuit together with the parallel circuitry. The trap detuned the detection coil when activated by a PIN diode, i.e. when the diode's forward-bias threshold was exceeded by the transmit trigger voltage. The main circuit board also included a bandpass filter at the RF bias frequency. Its purpose was to filter the RF bias' sideband noise at the MR signal frequency and to prevent the RF bias from loading the detection resonance. Notably, the filter was designed to allow a DC offset on the bias to pass through. Figure 3c shows the full setup diagram explained here. We mounted the RF coil and detection circuit right outside a cryostat made from glass-fiber. Although the cryostat can cool the transducer and circuit, we only used it as a vacuum chamber. It could maintain a vacuum below 1 × 10⁻³ mbar for up to five hours after disconnecting from its pump because it contained two molecular sieves (activated charcoal and sodium aluminum silicate), both cooled to liquid-nitrogen temperature (77 K). When mounting the transducer in the cryostat, we tried to align the membrane perpendicular to the main magnetic field; without this alignment, the mechanical linewidth broadens, likely due to the Lorentz force on the charges in the membrane. Laser light came to the transducer through a custom fiber feedthrough [23] for the cryostat, an optical 90/10 splitter, and a long single-mode fiber connected directly to the fiber-coupled 1064 nm laser outside the scanner room. The splitter routed only 10% of the input light to the chip and dumped the remaining power. Light reflected from the transducer's cavity went back into the same splitter, which then distributed 90% into another long single-mode fiber that led back outside and connected to a custom-built detector.
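A back-of-the-envelope check, assuming a simple LC resonance: the total capacitance needed to bring the quoted 490 nH coil to the 32 MHz Larmor frequency comes out near 50 pF. In the real circuit this is split among the segmenting capacitor, the transducer and stray capacitance, so this is only an order-of-magnitude estimate.

```python
import math

# Total capacitance to resonate the 490 nH coil (value quoted above)
# at the 32 MHz Larmor frequency: C = 1 / (L * (2*pi*f0)^2).
L = 490e-9   # coil inductance in henries
f0 = 32e6    # target resonance in hertz

C = 1.0 / (L * (2.0 * math.pi * f0) ** 2)
print(f"Required total capacitance: {C * 1e12:.0f} pF")  # ~50 pF
```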
MR setup.
Inside the scanner room, we connected the trigger signal directly to the detuning trap on the circuit and also sent the trigger outside to modulate the RF bias. That same cable also carried the transduced signal back into the scanner. Two bias-tees handled the routing by frequency-discriminating the low-frequency trigger and the high-frequency signal. A separate cable carried the RF bias. Outside the scanner room, the RF bias had its pulse-modulation input connected to the trigger and its output split 50/50. One part became the bias drive, which we amplified before sending it to the detection circuit through an external filter. The other part became the local oscillator in a mixer that frequency-shifted the optically detected signal back to the MR signal frequency. The output of this mixer was the input to the scanner's receiver channel. We added a DC bias onto the RF bias drive with another bias-tee placed right after the first bandpass filter and before the long cable going into the scanner room.
Fasting remnant lipoproteins can predict postprandial hyperlipidemia
Background Hypertriglyceridemia and postprandial hyperlipidemia are thought to play an important role in atherosclerosis, but selecting patients at high risk for cardiovascular disease on the basis of triglycerides (TG) alone is difficult. Methods To predict postprandial hyperlipidemia without inconvenient test meal loading, we examined lipid concentrations before and after test meal loading, as well as fasting adiponectin, in 45 healthy individuals (men:women, 26:19) and investigated which fasting values other than TG were significant predictors. Results TG, remnant-like particle-cholesterol and -triglyceride (RemL-C, RLP-C, and RLP-TG), and TG/apolipoprotein(apo)B were significantly elevated after loading, and fasting values significantly and positively correlated with the incremental area under the curve (iAUC) (r=0.80, r=0.79, r=0.63, r=0.58, r=0.54; p<0.0001). Fasting adiponectin positively correlated with fasting high-density lipoprotein-cholesterol (r=0.43, p<0.005) and apoA-I (r=0.34, p<0.05), and negatively correlated with the iAUCs of TG, RemL-C, RLP-C, RLP-TG, and TG/apoB (r=−0.37, r=−0.41, r=−0.37, r=−0.36, r=−0.37; p<0.05). We constructed a multivariable linear regression model without fasting TG. In the sex-, BMI-, age-, and waist circumference-adjusted analysis of postprandial TG elevation 2 h after test meal loading in all participants, RemL-C, RLP-C, RLP-TG, and TG/apoB were significant factors, but adiponectin was not. Conclusion Fasting triglyceride-rich lipoprotein-related values, especially RemL-C, RLP-C, RLP-TG, and TG/apoB, are useful predictors of postprandial hyperlipidemia in young healthy individuals. Although fasting adiponectin concentration correlated with the iAUCs for TG, RemL-C, RLP-C, RLP-TG, and TG/apoB, it was not a significant predictor of postprandial hyperlipidemia in multivariable linear regression analysis.
Background
Epidemiological studies have recently shown that hypertriglyceridemia is associated with atherosclerosis, but the independence of the serum triglyceride (TG) concentration as a causal factor in promoting cardiovascular diseases (CVD) remains debatable, and selecting patients at high risk for CVD with TG alone is difficult [1][2][3]. Individuals with mild hypertriglyceridemia without other metabolic disorders, or with severe hypertriglyceridemia such as primary chylomicronemia, rarely have CVD.
Postprandial hyperlipidemia is thought to play an important role in atherosclerosis, and concentrations of non-fasting TG are superior to those of fasting TG for predicting CVD [4][5][6][7]. Many studies have revealed that triglyceride-rich lipoproteins (TRL), especially chylomicron and very-low-density lipoprotein (VLDL) remnants, are atherogenic and that delayed removal of chylomicron remnants from the bloodstream induces postprandial hyperlipidemia [8][9][10]. However, screening large numbers of individuals using fat loading tests is inconvenient, and neither a definition nor a standard method for predicting postprandial hyperlipidemia besides postprandial TG elevation has been established. These circumstances present a challenge in terms of how to distinguish patients at high risk of CVD based on fasting blood samples. We reported that fasting serum concentrations of remnant lipoproteins and apolipoprotein B-48 (apoB-48), besides TG, might indicate postprandial hyperlipidemia even among normolipidemic individuals [11].
Adipocytes secrete adiponectin, a 224-amino-acid plasma protein [12,13]. Serum adiponectin concentrations are paradoxically reduced in individuals with a large visceral fat mass, and adiponectin plays a significant role in glucose and lipid metabolism [13][14][15]. However, few studies have examined the relationship between serum adiponectin concentrations and postprandial hyperlipidemia. Rubin et al. demonstrated that postprandial TG concentrations correlate with fasting adiponectin concentrations in 45–65-year-old individuals, including those with metabolic syndrome [16]. Maruyama et al. reported that serum concentrations of high-molecular-weight (HMW) adiponectin are associated with those of TG and remnant-like particle-triglyceride (RLP-TG) in individuals with type 1 diabetes before and after test meal loading [17].
Thus, the significance of adiponectin in postprandial hyperlipidemia needs to be clarified. Here, we used multivariable linear regression analysis to identify which factors among remnant lipoproteins, adiponectin and other particles during fasting are significant for predicting postprandial hyperlipidemia.
Participants and physical examination
We recruited 45 healthy individuals (men/women, 26/19) who had never been treated for diseases or taken drugs for at least 3 months prior to the study. We used data from 24 participants that we had previously analyzed [11] and added another 21 individuals for the present study. All recruits provided written informed consent to participate in the study. The institutional ethics committee of Kobe Gakuin University (HEB080701-1) approved the study protocol, which proceeded according to the Declaration of Helsinki. We measured the waist circumference of all participants and computed their body mass index (BMI) by dividing body weight by the square of height (kg/m²).
Study protocol
The study proceeded as previously described [11]. Test meal A was developed by the Japanese Diabetes Society to assess both postprandial hyperglycemia and hyperlipidemia. This meal consisted of cream of chicken soup, a biscuit, and custard pudding. The total 450 kcal of energy was derived from 57.6 g of carbohydrate (51.4% of the energy balance), 17.2 g of protein (15.3%), and 16.6 g of fat (33.3%), which is a slightly higher ratio of fat than that found in a typical Japanese breakfast (20%–25%). Blood samples were obtained between 9 and 10 AM after 12 h of fasting and at 1, 2, 4, 6 and 8 h after test meal loading.
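As a quick consistency check on the stated composition (our illustration, not part of the original protocol), the macronutrient energies can be reproduced with the standard Atwater factors of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat; the use of these factors here is an assumption.

```python
carb_g, protein_g, fat_g = 57.6, 17.2, 16.6   # test meal A composition (g)

# Standard Atwater energy factors (kcal/g); assumed, not stated in the paper.
kcal = {"carbohydrate": carb_g * 4, "protein": protein_g * 4, "fat": fat_g * 9}
total = sum(kcal.values())                     # ~448.6 kcal, i.e. ~450 kcal

for name, e in kcal.items():
    print(f"{name}: {e:.1f} kcal ({100 * e / total:.1f}% of energy)")
# carbohydrate: 230.4 kcal (51.4%), protein: 68.8 kcal (15.3%), fat: 149.4 kcal (33.3%)
```

The computed percentages match the paper's stated energy balance to the quoted precision.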
Statistical analysis
Values are expressed as means ± SD. Data below the detection threshold of RLP-C (<2.0 mg/dL) or RLP-TG (<15 mg/dL) were treated as 2.0 or 15 mg/dL, respectively. TG, RemL-C, RLP-C, RLP-TG, TG/apoB, and adiponectin were transformed into logarithmic values. Statistical significance was evaluated using Welch's t-test or repeated-measures ANOVA with Dunnett's test. Postprandial changes in TG, RemL-C, RLP-C, RLP-TG, or TG/apoB were quantified by calculating the incremental area under the curve (iAUC), estimated for each factor or parameter as the area under the concentration curve minus the area defined below the baseline (fasting) concentration, from 1 h to 8 h after test meal loading.
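To make the iAUC definition above concrete, here is a minimal Python sketch using the trapezoidal rule; the concentration values in the usage example are hypothetical, and the exact numerical scheme used by the authors is not stated in the paper.

```python
import numpy as np

def incremental_auc(times_h, values, baseline=None):
    """Incremental area under the curve (iAUC) via the trapezoidal rule:
    total area under the concentration curve minus the rectangle defined
    by the fasting (baseline) value over the same time interval."""
    t = np.asarray(times_h, dtype=float)
    y = np.asarray(values, dtype=float)
    base = y[0] if baseline is None else baseline
    return np.trapz(y, t) - base * (t[-1] - t[0])

# Hypothetical TG concentrations (mg/dL) at the paper's sampling times:
times = [0, 1, 2, 4, 6, 8]          # hours after test meal loading
tg = [80, 120, 150, 130, 100, 85]   # illustrative values only
print(incremental_auc(times, tg))   # iAUC in (mg/dL) * h
```

Values above the fasting baseline contribute positively, so a larger iAUC indicates a stronger postprandial excursion.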
Correlations between adiponectin and lipids, as well as other factors, were calculated using Pearson's correlation coefficient.
Independent predictors of TG at 2 h after test meal loading were identified by multivariable linear regression analysis adjusted for sex, BMI, age, and waist circumference. Candidate predictors (RemL-C, RLP-C, RLP-TG, TG/apoB, and adiponectin) were analyzed separately because of correlations among them. The purpose of our study was to elucidate the factors that predict postprandial hyperlipidemia besides TG, because it is difficult to select high-risk patients with TG alone; therefore, we did not include fasting TG as a candidate predictor in the regression model. Data were statistically analyzed using SPSS Statistics 17.0 (IBM, Somers, NY, USA) and R version 2.12.1 (R Foundation for Statistical Computing, Vienna, Austria). Two-tailed values of p < 0.05 were considered statistically significant.

Table 1 shows the characteristics of the 26 men and 19 women participants, except for LPL mass (total 21; 15 men and 6 women). Although all factors were within normal limits, the values of height, body weight, waist circumference, and hs-CRP concentration were higher in men than in women. On the other hand, adiponectin concentration was higher in women than in men.
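A sketch of the regression specification described in the statistical-analysis paragraph above is shown below in Python (the authors used SPSS and R; this translation, the file name "participants.csv", and all column names are hypothetical placeholders for illustration).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per participant; column names are
# illustrative, not the study's actual variable names.
df = pd.read_csv("participants.csv")
df["log_RemLC"] = np.log(df["RemLC"])   # TRL-related predictors are log-transformed

# One model per candidate predictor (they are mutually correlated, so they
# were analyzed separately), adjusted for sex, BMI, age and waist
# circumference; the response is TG at 2 h after test meal loading.
model = smf.ols("TG_2h ~ log_RemLC + C(sex) + BMI + age + waist", data=df).fit()
print(model.summary())                  # coefficients, p-values, adjusted R^2
```

Fitting one such model per candidate predictor, rather than one joint model, avoids the collinearity among the TRL-related values noted in the text.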
Fasting and postprandial concentrations of lipids and their parameters
Table 2A (all participants) shows the changes in lipid, glucose, and insulin concentrations before and after test meal loading. Since some metabolic factors in Table 1 differed between men and women, we also compared these data in Tables 2B (men) and 2C (women).
The concentrations of all parameters were within normal limits or at low ranges during the fasting period (time 0). Fasting values of TG*, RemL-C*, RLP-C†, RLP-TG†, non-HDL-C/HDL-C*, LDL-C/HDL-C*, and apoB/apoA-I* were significantly greater in men than in women (*p < 0.05; †p < 0.005, Welch's t-test). Fasting RLP-TG concentrations were undetectable (<15 mg/dL) in all women, and we therefore set the fasting RLP-TG concentration to 15 mg/dL in Table 2C. On the other hand, HDL-C concentrations were significantly lower in men than in women (p < 0.05, Welch's t-test).
Serum concentrations of TG, RLP-C, RLP-TG, and insulin and plasma glucose concentrations were significantly elevated after loading, compared with before, in all participants (Table 2A). Serum RemL-C concentrations peaked at 1 h and were restored within 4 h, but this elevation was not significant. On the other hand, TC, LDL-C, HDL-C, sd-LDL-C, oxidized LDL, apoA-I, apoA-II, apoB, apoC-II, apoC-III, and apoE concentrations were not elevated, which is consistent with our previous findings [11].
We also analyzed several atherogenic indicators and found that the values of non-HDL-C, non-HDL-C/HDL-C, LDL-C/HDL-C, apoB/apoA-I, and non-HDL-C/apoB were not significantly altered, whereas the value of TG/apoB was significantly elevated. Serum concentrations of TG, RLP-C, RLP-TG, TG/apoB, and insulin, analyzed separately in men and in women (Tables 2B and 2C, respectively), were significantly elevated after test meal loading compared with before.
Fasting serum values and the iAUC of each factor and parameter correlated significantly and positively in all participants (Fig. 1). Fasting serum values for TG, RemL-C, RLP-C, RLP-TG, and TG/apoB in men (n = 26) also significantly and positively correlated with the iAUC (r = 0.78, p < 0.0001; r = 0.78, p < 0.0001; r = 0.60, p < 0.005; r = 0.54, p < 0.005; r = 0.81, p < 0.0001, respectively). Fasting serum values for TG, RemL-C, and TG/apoB in women (n = 19) significantly and positively correlated with the iAUC (r = 0.77, r = 0.76, r = 0.79, respectively; all p < 0.0001), but the relationship between fasting RLP-TG and the iAUC could not be estimated because fasting RLP-TG was undetectable in all of the women. Despite the small number of female participants, these results indicate that fasting values of RemL-C, RLP-C, RLP-TG, and TG/apoB, in addition to TG, are useful markers of postprandial hyperlipidemia in men and women.
Correlation between fasting adiponectin concentration and fasting lipids and their parameters
There was no significant correlation between adiponectin and several factors (Table 3). As for LPL mass, although we had data from only 21 subjects (15 men, 6 women) in the fasting period, a significant correlation with adiponectin was observed in all subjects. We examined the correlation between fasting adiponectin concentration and fasting lipids and their parameters (Table 4). We also examined the relationship between adiponectin and the iAUC of lipids and their parameters; fasting adiponectin negatively correlated with the iAUCs of TG, RemL-C, RLP-C, RLP-TG, and TG/apoB.

[Table abbreviations: Apo, apolipoprotein; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; OxLDL, oxidized LDL; RemL-C, remnant lipoprotein cholesterol measured using "MetaboLead RemL-C"; RLP-C, remnant-like particle-cholesterol measured with "JIMRO II"; RLP-TG, remnant-like particle-triglycerides; sd-LDL-C, small, dense LDL cholesterol; TC, total cholesterol; TG, triglycerides. TG, RemL-C, RLP-C, RLP-TG, and TG/apoB were transformed into logarithmic values. Values are expressed as means ± SD. *p < 0.05, †p < 0.005, ‡p < 0.0001, §p < 0.01, ‖p < 0.001, vs. time 0 (repeated ANOVA with Dunnett's test).]
Multivariable linear regression analysis for postprandial TG elevation
Because TG strongly correlated with TRL-related values such as RemL-C, RLP-C, RLP-TG, and TG/apoB (r = 0.90, r = 0.80, r = 0.80, r = 0.83; p < 0.0001, respectively), we constructed the multivariable linear regression model without fasting TG, adjusted for sex, BMI, age, and waist circumference, as described in Methods. RemL-C, RLP-C, RLP-TG, and TG/apoB were significant factors, but adiponectin was not (Table 5). In particular, the adjusted R-squared values for RemL-C and TG/apoB were 0.63 and 0.55, respectively. Considering the correlation between fasting adiponectin and fasting lipids, we also performed multivariable linear regression analysis without lipids and their parameters; however, we could not uncover a significant relationship between fasting adiponectin concentrations and postprandial TG elevation (p = 0.09).
Discussion
The present study demonstrated that fasting TRL-related values of TG, RemL-C, RLP-C, RLP-TG, and TG/apoB are useful tools for predicting postprandial hyperlipidemia, and that fasting adiponectin concentrations correlated with the fasting values of these lipids and parameters in young healthy individuals, although adiponectin was not a significant predictor in multivariable linear regression analysis. Many studies have examined the relationship between adiponectin and dyslipidemia. Baratta et al. reported that the relationship between adiponectin and fasting lipid values is independent of body fat mass [21]. Kazumi et al. and Heliövaara et al. demonstrated that hypoadiponectinemia is more closely related to adiposity and fasting dyslipidemia than to insulin resistance in young healthy men [22,23]. The present study demonstrated that fasting adiponectin concentration positively correlated with fasting HDL-C and apoA-I concentrations, and negatively correlated with fasting and postprandial values of TG, RemL-C, RLP-C, RLP-TG, non-HDL-C/HDL-C, LDL-C/HDL-C, and TG/apoB. These are new findings compared with our previous study [11]. However, multivariable linear regression analysis showed that adiponectin was not significant for predicting postprandial hyperlipidemia, which is difficult to explain. Adiponectin participates in the metabolism of visceral fat mass, glucose, and lipids, but it might not reflect dynamic changes such as the lipid concentrations that rise after test meal loading. In addition, the small number of samples, derived only from healthy subjects, might have affected this finding. Patients with hyperlipidemia, diabetes, or metabolic syndrome often have delayed TRL clearance [10]. Although we might find other results if we included older subjects or these patients, it is also necessary to elucidate the significant predictive factors in young healthy subjects, because the purpose of the present study was to elucidate significant predictors of postprandial TG elevation. On the other hand, since adiponectin concentrations are considerably higher in women than in men, the significance of adiponectin needs to be analyzed separately in men and women.
A definition or standard method other than TG elevation has not been established for predicting postprandial hyperlipidemia. This complicates resolving which factor among those associated with postprandial hyperlipidemia is the most useful for predicting CVD. Some studies have demonstrated the superiority of non-fasting over fasting TG concentrations for predicting CVD [6,7]. Notably, Oka et al. demonstrated that waist circumference is more closely related to postprandial than to fasting TG [25]. Moreover, the lipid profile in metabolic syndrome includes elevated TG and remnant lipoproteins, decreased LDL particle size, and low HDL-C concentrations [10]. Thus, not only fasting TG values but also other parameters might be required to assess CVD risk. Remnant lipoproteins play an important role in atherogenesis, and their concentrations are useful for understanding metabolic disorders and predicting CVD [10,26,27]. Ai et al. reported that RLP-C and RLP-TG, but not the TG response to an oral fat load, are significantly increased in hyperinsulinemic patients with type 2 diabetes [28]. We also found that fasting serum concentrations of remnant lipoproteins might be useful to detect postprandial hyperlipidemia even in normolipidemic individuals [11]. Here, we confirmed that remnant lipoproteins during the fasting period can predict postprandial hyperlipidemia. The purpose of our study was to elucidate the factors that predict postprandial hyperlipidemia besides TG, because it is difficult to select high-risk patients with TG alone. However, TG strongly correlated with TRL-related values including RemL-C, RLP-C, RLP-TG, and TG/apoB, so we constructed the regression model without fasting TG. Multivariable linear regression analysis identified RemL-C, RLP-C, RLP-TG, and TG/apoB as significant predictors of postprandial TG elevation. Taken together, these TRL-related values might be clinically useful for predicting postprandial hyperlipidemia. Another novel outcome of the present study is that the amount of postprandial lipid elevation differs between men and women. Fasting values of TG, RemL-C, RLP-C, RLP-TG, non-HDL-C/HDL-C, LDL-C/HDL-C, and apoB/apoA-I were significantly greater in men than in women (Tables 2B and 2C). Furthermore, the iAUCs of TG, RemL-C, RLP-C, RLP-TG, and TG/apoB were also significantly greater in men than in women. These results suggest that men are more susceptible to postprandial hyperlipidemia. The difference in lipoprotein metabolism between men and women may be caused by several mechanisms, including lipoprotein lipase and lecithin-cholesterol acyltransferase (LCAT) activities and gender-specific hormonal effects [8][9][10][29]. Higher adiponectin concentrations in women may also influence TRL metabolism (Table 1) [14,15,[21][22][23][24]. Although we did not check enzyme activities or the menstrual cycle of each woman in the present study, these differences in fasting and postprandial TRL-related lipid concentrations between men and women should be considered when identifying and predicting individuals at high risk for postprandial hyperlipidemia.
We measured the serum concentrations of several lipid markers. Postprandial accumulation of TRL was strongly associated with the increased prevalence of sd-LDL in patients with myocardial infarction [30]. However, sd-LDL concentrations were decreased in the present study of young healthy individuals. Ogita et al. demonstrated that serum sd-LDL concentrations decrease after meals, increase during the night and peak just before breakfast [31]. Hirayama et al. also demonstrated a similar decrease in the sd-LDL concentrations after breakfast; they speculated that sd-LDL permeates the vascular walls more easily, and might be more susceptible than buoyant LDL to entrapment in vascular subendothelial spaces [32]. The precise mechanism should be addressed in future studies. Regardless, postprandial changes in sd-LDL might differ between healthy individuals and patients with CVD.
We also found that the oxidized LDL concentration did not change significantly. Although oxidized LDL is postprandially elevated in patients with CAD [33], concentrations in healthy individuals have not been investigated in detail. One study found that oxidized LDL concentrations are not elevated in individuals with normal glucose tolerance during an oral glucose tolerance test [34].
Postprandial changes in oxidized LDL concentrations should be examined and compared with those of patients with CAD.
Non-HDL-C is an excellent predictor of atherosclerotic risk [35,36], and it is free of dietary variation [36,37]. Ogita et al. also demonstrated that serum concentrations of TC, HDL-C, and LDL-C do not change remarkably in healthy individuals [31]. The present study likewise did not identify significant changes in TC, HDL-C, LDL-C, and non-HDL-C concentrations among young healthy individuals. Some studies have demonstrated that LDL-C and HDL-C concentrations decrease during the day in a rhythmic circadian manner [38,39]. We also found decreased LDL-C and HDL-C concentrations, but the changes were not statistically significant. Thus, TC and non-HDL-C concentrations were not significantly altered, although the postprandial value of remnant cholesterol was increased.
This study has some limitations. Although others have associated adiponectin concentrations with body weight, waist circumference, BMI, HbA1c and hs-CRP [40,41], we found no significant correlations because we measured these values in young individuals without metabolic syndrome or diabetes.
The RemL-C concentrations increased after test meal loading, albeit without significance, which is consistent with previous findings [11]. Lipoproteins targeted by both RLP-C and RemL-C include remnants of both chylomicrons and VLDL [10]. Concentrations of RemL-C and RLP-C closely correlate in patients with coronary artery disease, but the sensitivity for detecting chylomicron remnant (exogenous) and VLDL remnant (endogenous) lipoproteins might differ between analytical methods [10]. Using detailed analysis by high-performance liquid chromatography, Yoshida et al. found higher concentrations of chylomicron cholesterol in serum samples with RemL-C < RLP-C, but high concentrations of intermediate-density lipoprotein (IDL) cholesterol (VLDL remnant cholesterol) in samples with RemL-C > RLP-C [42]. Similarly, we and others have reported that methods for measuring RLP-C and RLP-TG might be more sensitive to chylomicron remnant-cholesterol and -triglycerides, whereas those for RemL-C might be more suitable for IDL cholesterol [11,20,43]. This may explain the differences between RemL-C and RLP-C in elevation after test meal loading and in the multivariable linear regression analysis in the present study.
We did not evaluate the effect of activities such as aerobic exercise that can decrease TG and remnant concentrations and increase adiponectin concentrations [44]. Factors that can predict postprandial hyperlipidemia should be investigated in larger populations including individuals with metabolic syndrome, diabetes and CVD.
AI-enabled image fraud in scientific publications
Summary Compromising image integrity in scientific papers may have serious consequences. Inappropriate duplication and fabrication of images are two common forms of misconduct in this regard. The rapid development of artificial-intelligence technology has brought promising image-generation models that can produce realistic fake images. Here, we show that such advanced generative models threaten the publishing system in academia, as they may be used to generate fake scientific images that cannot be effectively identified. We demonstrate the disturbing potential of these generative models in synthesizing fake images, plagiarizing existing images, and deliberately modifying images. Images generated by these models are very difficult to identify by visual inspection, image-forensic tools, and detection tools because of the unique paradigm by which generative models process images. This perspective reveals vast risks and aims to raise the vigilance of the scientific community regarding fake scientific images generated by artificial intelligence (AI) models.
INTRODUCTION
Inappropriately duplicating and fabricating images in scientific papers can have serious consequences. Editors and reviewers may be deceived, scientific communities may be misled, and research resources may be wasted. To prevent this type of misconduct, people are motivated to search for efficient detection and forensic strategies. Recently, there has been high expectation that artificial intelligence (AI) may bring new techniques for the automatic inspection of image fraud in academic publications. Despite controversies and difficulties, progress in this area is being made. 1 However, the whirlwind of progress in AI has not only produced a steady stream of advanced image-retrieval and fraud-detection techniques but has also brought about powerful image-editing and -generation tools. [2][3][4][5][6][7] These tools can generate images that are increasingly indistinguishable for automated checking systems and even human judgment. A successful representative of image-generation techniques is the generative adversarial network (GAN). 8 A GAN trains two deep neural networks (a generator and a discriminator) against each other and can then automatically generate high-fidelity images from scratch. Advanced generative models may potentially be applied in many fields. When they are widely used, "seeing is believing" may no longer hold true. 9 It is not news that generative models are abused on a large scale and pose a threat to society. A typical example is Deepfake, 10 an algorithm that generates realistic fake images and videos in which a person in an existing image is replaced with someone else. News and videos produced by Deepfake can have tremendous implications. As more fields become involved, the threats brought about by these new technologies cannot be ignored. An important issue that we need to be alert to is that intelligent generative models may be used to forge images of scientific evidence and thus threaten academic integrity in publishing. Although it has not been formally reported, given the effectiveness and easy accessibility of these advanced technologies, such forgeries, some of which are not detectable at all, may become disturbingly common.

THE BIGGER PICTURE This perspective reports on the vast and so far neglected risk of potential image fraud based on artificial intelligence (AI) generative technologies in academic publications. It discusses the scenarios, capabilities, and effects of AI algorithms used in academic fraud. The issue described here is not only relevant to computer scientists. As members of the scientific community, each of us will be deeply involved in the peer-review process, and each of us may be deceived by the AI image-fraud methods described in this article. Although algorithm development itself belongs to the field of computer science, its impact, as discussed in this perspective, reaches a much wider range of scientific fields, such as biology, medicine, and natural science. Drawing the community's attention to this threat is a necessary condition for resisting it. Combined with state-of-the-art AI research, this perspective also discusses possible preventive measures to respond to this potential threat.
In this perspective, we reveal with examples how these advanced generative models might be abused for scientific image fraud. We also evaluate how accurately such fraud can be identified by both human experts and AI techniques. Our examples and identification results show troubling signs that this type of image fraud is efficient and covert and can be expected to pose a threat to academic publishing. Finally, we explore possible responses to this threat. We anticipate that our article will attract the scientific community's attention and prompt discussions of this emerging issue so that better responses can be developed and implemented.
SCIENTIFIC IMAGE FRAUD USING GENERATIVE MODELS
Although the criteria for detecting misconduct in the scientific community are not uniform, the following three situations are acknowledged as severe cases: (1) fabrication of non-existent images, (2) falsification or manipulation of existing images, and (3) plagiarism. Among the cases that have been revealed, inappropriate duplication and editing are the most common means of committing these misconducts. 11 The duplication of images includes using multiple identical images to represent different experimental results, reusing or plagiarizing images from previous publications as new experimental evidence, or creating images by synthesizing existing ones through rotating, scaling, cropping, and splicing. The editing of images involves using image-processing software to modify or tamper with images to meet authors' expectations. However, both duplication and modification leave traces, such as repetitions too coincidental to appear naturally or traces of modification revealed by image-forensic tools such as inverse or false-color views.
In contrast to the above "traditional" methods, generative models generate images from scratch or regenerate existing images. The following scenarios show how generative models can be misused. Experienced researchers may first collect many scientific images in a specific field. The most general paradigm of generative models is to capture the underlying patterns in these scientific images and fit the distribution of the target data. Sampling from the trained generative model can then produce fake images that follow patterns similar to the real images. Images generated by these models are visually realistic and even scientifically self-consistent (see Figure 1A). These images are meaningless in science, but one may use them as evidence to report experiments that were never conducted. In any field where a large amount of image data can be obtained, such generated images may become a source of fake scientific images. In contrast to the above cases, in which the models need to be trained on a large image dataset, another novel generative paradigm allows the model to be trained with a single image. The trained model can then be used for image resampling or manipulation. SinGAN is an example of this paradigm. 13 It learns the patch distribution hierarchically at different scales of an image and then regenerates high-quality, diverse images with the main style or content unchanged. The regenerated images preserve the statistical characteristics of the original but differ in local details (see Figure 1B). This technique can be used to plagiarize published images or reuse existing images, for example to report non-existent control-group experiments.
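To ground the training paradigm sketched above (collect images, fit their distribution, sample fakes), here is a minimal PyTorch sketch of one GAN training step. The toy fully connected networks and 784-pixel flattened images are illustrative assumptions, far smaller than models such as StyleGAN used in the experiments described later.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator; real models are deep convolutional networks.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784) image batch
    batch = real.size(0)
    z = torch.randn(batch, 64)             # random noise drives generation
    fake = G(z)

    # Discriminator step: label real images 1 and generated images 0.
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator output "real".
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The adversarial alternation is the key point: because each generated image is driven by a fresh noise vector, no two fakes repeat each other, which is exactly what defeats duplication-based screening.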
Apart from the deliberate use of generated fake images, generative modifications may also be used to produce images that meet authors' expectations in experiments. Generative models manipulate images by directly generating images with a similar appearance but modified content. 13,14 For example, one may remove some cells from an image with an inpainting generative model or add new cells with an image-harmonization model (see Figure 1C). In some cases, generative models are even more remarkable for their ability to create images of things that may not exist at all. A generative model may disentangle features of images during the training phase; based on this, the model may mix these features and synthesize images that do not conform to the natural distribution of the data, e.g., proteins appearing in a cell image where they should not appear.
RISKS OF AI-ENABLED IMAGE FRAUD
The dangers of the fraud methods described above arise in several ways, of which their difficult-to-detect nature is the most important. First, it is difficult for editors and reviewers to find such frauds through visual inspection during the peer-review process. A user study indicates that scientific images generated by generative models are likely to deceive the judgment of human experts (see Figure 2). The distribution of collected human ratings shows interesting patterns. Humans tend to be more confident in judging natural images, which is reflected in the fact that most ratings are either "definitely real" or "definitely fake." Scientific images, by contrast, have a relatively simple structure that makes them easier for generative models to learn. The difference between real and generated scientific images is more subtle and imperceptible, so the average rating is biased toward "real," and the ratings are also less confident. Second, the image-generation process is controlled by random noise, and different noise vectors create different images. The unnatural repetition between generated fake images no longer exists, which renders duplication inspection based on retrieving and comparing image details invalid. Third, because image generation is an end-to-end integrated process, there are no intrinsic irregularities of modification that existing image-forensic tools can detect. Detection of such generated images relies on features or fingerprints left by the generative model, which introduces very large uncertainties and difficulties for detection.
In response to the threats posed by fake scientific images, research on quality and integrity in the scientific literature has attracted significant attention. 15,16 Current forensic methods for scientific image fraud rely on unnatural repetitions found through visual inspection 11 or intrinsic irregularities visualized through forensic tools. On the research front, AI is also expected to bring tools for efficient automatic image-fraud detection to address the difficulty of detecting such fraud. [17][18][19][20] Recent studies suggest that images created by generative models may retain detectable systematic flaws that distinguish them from authentic images. [21][22][23][24][25] AI forensic tools can be built to tell generated images from real ones. We tested two state-of-the-art AI forensic tools by using them to analyze the fake scientific images described above: the image classifier provided by Wang et al., 21 trained on ProGAN-generated 6 images with careful pre- and post-processing and data augmentation, and the GAN image detector proposed by Gragnaniello et al., 26 developed with a limited sub-sampling network architecture and a contrastive-learning paradigm. The results are shown in Figures 3A and 3B. Wang et al. 21 achieved only an accuracy similar to human visual inspection, and Gragnaniello et al. 26 performed generally better than Wang et al. 21 But neither method makes good enough detections, and such accuracy is not enough to mitigate the threat of image forgery based on generative models. Imperfect automated forensic tools are also highly vulnerable: a malicious user may simply select a fake image that passes the detection threshold, as a single fake image is all he or she needs to achieve the goal. The limitations of existing methods show that the detection and forensics of scientific image fraud remain open questions.
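As an illustration of how such forensic classifiers are typically applied (a hedged sketch, not the pipeline of either cited detector), the following Python snippet scores an image with a binary real/fake CNN. The ResNet-50 backbone, the checkpoint file "detector.pt", and the preprocessing are all assumptions for this example.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet50
from PIL import Image

# Hypothetical forensic classifier: a ResNet-50 with a single-logit head,
# standing in for detectors like those cited above; "detector.pt" is an
# assumed checkpoint, not a published artifact.
model = resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("detector.pt"))
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def fake_probability(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(x)).item()   # near 1.0 => likely generated

print(fake_probability("figure_panel.png"))     # hypothetical input file
```

The vulnerability noted in the text is visible here: a forger can simply keep sampling until `fake_probability` falls below whatever threshold the screening system uses.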
Another, equally dangerous aspect is that, unlike manually modifying or forging images with software, the cost of using these advanced models is close to negligible. For one thing, all of these intelligent generative technologies may be shared with anyone defenselessly; for example, all of the techniques involved in this article are easily available on the Internet. This greatly lowers the barrier to entry for anyone trying this type of technology, which, in turn, further raises the possibility of abuse. For another, many intelligent generative models can automatically process and generate images without human intervention. Making fake scientific images no longer requires complicated human labor; they can be mass produced. This has the potential to make it easier for some "paper factories" 27 to systematically produce falsified research papers.
THE FIGHT AGAINST AI-ENABLED IMAGE FRAUD
There is an urgent need for effective measures to respond to this potential threat. Most critically, people first need to be mentally prepared for the new risks brought by these technologies. Although no cases involving such intelligent image technologies have been reported, a more worrying possibility is that this kind of misconduct has already quietly occurred somewhere and simply has not yet been found. Nevertheless, a window of opportunity remains open to reduce the risks to a certain extent by improving the management system or process before such high-tech fraud pervades scientific publications.
Of all the preventive measures that may be taken for the moment, asking authors to provide more detailed, high-resolution raw image data is the most convenient. Although impressive progress has been made, generative models still struggle to generate large, high-fidelity images. The high computational resources and algorithmic complexity required to generate large fake images will raise the threshold for such fraud. In addition, we should continue to develop forensic tools for advanced image-generation and -processing models. Tools specialized for scientific images should be given particular importance, as detection accuracy is significantly better for natural images than for scientific images; an important reason for this is that existing tools were developed on natural images. Although the current situation is not optimistic, the advantage of these forensic tools lies in their ability to perform large-scale automatic screening. Finally, when developing new image-generation technology, we must consider the possible social impact of such technologies and attempt to eliminate, as much as possible, the risk of their being abused. For example, when releasing the source code of generative models that may be used for improper purposes, we may annotate generated images through encryption or steganography.

[Figure 2 caption: We conducted a human-opinion study. This figure shows the normalized histogram of votes per image type. The images used for evaluation comprise five categories: (1) natural images, (2) scanning micrographs of nano materials (nano-micrographs), (3) cell immunostaining images, (4) immunohistochemistry (IHC) images, and (5) histopathological images. In total, 800 images were involved, and each image was rated by at least ten medical experts. The voting scale was 1 to 4: 1 - definitely fake, 2 - probably fake, 3 - probably real, and 4 - definitely real. Mean scores are shown as red dots.]
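One way to realize the steganographic annotation suggested above is a simple least-significant-bit (LSB) watermark. The sketch below is a minimal illustration under that assumption; the file names and bit pattern are hypothetical, and production-grade provenance marks would use cryptographically signed, robust watermarks rather than raw LSBs.

```python
import numpy as np
from PIL import Image

def embed_mark(img_path, out_path, bits):
    """Hide a short bit string in the least-significant bits of an image.
    A minimal steganographic tag a generator could apply at release time;
    PNG output keeps the LSBs intact because it is lossless."""
    pixels = np.array(Image.open(img_path).convert("RGB"), dtype=np.uint8)
    flat = pixels.reshape(-1)
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")

def read_mark(img_path, n_bits):
    flat = np.array(Image.open(img_path).convert("RGB"), dtype=np.uint8).reshape(-1)
    return list(flat[:n_bits] & 1)

embed_mark("generated.png", "tagged.png", bits=[1, 0, 1, 1, 0, 0, 1, 0])
print(read_mark("tagged.png", 8))   # -> [1, 0, 1, 1, 0, 0, 1, 0]
```

A screening tool could then check submitted figures for such a tag, although a plain LSB mark is easily destroyed by cropping or recompression, which is why robust or learned watermarks are the subject of ongoing research.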
CONCLUSION
Our discussion demonstrates that AI-enabled image fraud may pose serious challenges to academic publishing. The difficult-to-detect nature, inexpensiveness, availability, and ease of use of advanced generative image models become major sources of threat when they are abused for scientific image fraud. We have also explored responses to this type of fraud. However, the confrontation between new technologies and the countermeasures that prevent their abuse will remain an enduring cat-and-mouse game. Perhaps by the time these advanced technologies are abused, our cost of obtaining the truth will have irretrievably increased.
Appendix A: Data acquisition
In this perspective, we discussed three methods of image fraud in the main text, namely image generation, image regeneration or resampling, and image editing. The images used for evaluation may be classified into five categories: (1) natural images, such as natural sceneries, architecture, flora, and fauna; (2) scanning micrographs of nano materials collected from the Internet; (3) cell immunostaining confocal microscope images from the Human Protein Atlas dataset; 28 (4) immunohistochemistry (IHC) images collected from clinical and Human Protein Atlas datasets; 28 and (5) histopathological images from the breast cancer histopathological dataset (BreCaHAD). 29 Two generative models based on StyleGAN 12 were trained using the cell immunostaining image dataset and the BreCaHAD histopathological image dataset. The generated images are 512 × 512 pixels. For the training of the StyleGAN generator, we followed the official suggestions. Eight NVIDIA V100 computing cards were used in the training, and the process lasted 14 days. We used SinGAN 13 for the image-regeneration experiments. For each image category, we selected 10 images and regenerated 5 times with each trained model. The regenerated images are 512 × 512 pixels. We followed the official suggestions for applying SinGAN and used an NVIDIA V100 computing card for these experiments; it takes about 5 h to process one image. For the edited images, we also employed SinGAN, which achieves image manipulation or harmonization by regenerating images from a modified input image. We demonstrated adding or removing cells or objects using cell immunostaining and IHC images.
Appendix B: User study
A total of 800 images were involved in the user study. For each image category and each image-fraud method, we prepared at least 50 images, plus 50 real images per category for comparison. Ten volunteers with rich experience in the fields of medicine and biology participated in the study. Each volunteer was asked to fill out a set of questionnaires, with each questionnaire limited to 16 questions. To prevent fatigue, the questionnaires were administered at different times during one week. In each questionnaire, volunteers saw a set of the above images. They were informed that these images might appear in scientific papers, popular-science articles, and reports, and that the images might contain an unknown number of false, edited, or forged items. Each image could appear multiple times, and the number of times an image appeared had nothing to do with its authenticity. We asked volunteers to evaluate the authenticity of each picture based on their professional knowledge and intuition. The voting scale was 1 to 4: 1 - definitely fake, 2 - probably fake, 3 - probably real, and 4 - definitely real. Volunteers were invited to choose the most suitable option.
Machine Learning Methods with Noisy, Incomplete or Small Datasets
In this article, we present a collection of fifteen novel contributions on machine learning methods with low-quality or imperfect datasets, which were accepted for publication in the Special Issue "Machine Learning Methods with Noisy, Incomplete or Small Datasets" of Applied Sciences (ISSN 2076-3417). These papers provide a variety of novel approaches to real-world machine learning problems in which the available datasets suffer from imperfections such as missing values, noise, or artifacts. Contributions in applied sciences include medical applications, epidemic-management tools, methodological work, and industrial applications, among others. We believe that this Special Issue will bring new ideas for solving this challenging problem and will provide clear examples of application in real-world scenarios.
Introduction
In many machine learning applications, the available datasets are incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, which includes unbalanced training sets, noisy labels, and other problems. Moreover, in practice, it is very common that the available data samples are not enough to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. Machine learning researchers and practitioners have been working on various strategies to correctly handle the low-quality data problem in recent years. Far from being solved, this problem still represents a fundamental and classic challenge in the artificial intelligence community.
The aim of this Special Issue was to collect novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas to solve this challenging problem, and to provide clear examples of application in real scenarios. Despite the COVID-19 crisis and lockdowns in most countries, this Special Issue attracted great attention among researchers worldwide. A total of twenty-one papers were submitted, and fifteen of them were accepted after appropriate revisions. We were pleasantly surprised by the diversity of nationalities of the contributors and the variety of problems addressed in applied sciences, ranging from medical and health applications to specific industrial case studies. The authors of the published papers are from nine countries located in Europe, America, Africa, and Asia.
In the following sections, the accepted papers and their most relevant contributions are summarized, grouped into the following categories: medical applications, epidemic-management tools, methodological papers, industrial applications, and others.
Medical Applications
Interestingly, the majority of the contributions are related to specific applications in medicine. Three papers addressed different problems or diseases in neuroscience. For example, in [1], Caiafa et al. (Argentina-Spain-Japan) reviewed recent approaches for dealing with incomplete or noisy measurements by applying signal decomposition methods and showed their usefulness in epileptic intracranial electroencephalogram (iEEG) signal classification, among other applications. Finding the epileptic focus with iEEG is usually difficult, mainly because available datasets labeled by expert medical doctors are scarce. In [2], Tong et al. (China-South Africa) proposed a few-shot learning method for the severity assessment of Parkinson's disease based on a small gait dataset. The proposed algorithm solves the small-data problem by using permutation-variable importance (PVI) and the persistent entropy of topological imprints, and by applying a support vector machine (SVM) classifier to achieve severity classification of Parkinson's disease patients. In [3], Wang et al. (China) addressed the problem of small and unbalanced datasets in functional magnetic resonance imaging (fMRI) for neuroscience studies. Their technique combines independent component analysis (ICA) for dimensionality reduction, data augmentation to balance the data, and a convolution-gated recurrent unit (GRU) network. Results on episodic memory evaluation are reported.
The other papers that addressed medical applications are described as follows. In [4], Yasutomi et al. (Japan) introduced a deep learning method based on an auto-encoder architecture to detect and remove shadow artifacts in ultrasound images. The model can be trained on unlabeled data (unsupervised) or with few pixel labels available (semi-supervised). The method has been applied to fetal heart diagnosis. In [5], Ahmad et al. (Saudi Arabia) investigated a machine learning approach to predict diabetes mellitus based on a handful of features obtained by simple laboratory tests, enabling a cost-effective and rapid screening tool. They compared different machine learning classifiers and provided a set of recommendations based on those analyses. In [6], Qiao et al. (China) proposed a method to measure the root canal length, which is crucial for effective treatment in endodontics and periapicalitis. The authors employed a neural network on multifrequency impedance measurements.
Epidemics Monitoring and Management Tools
Machine learning has been demonstrated to play an important role in dealing with infectious diseases and epidemics. In this collection, two contributions are devoted to developing tools that address aspects of the COVID-19 and dengue epidemics. More specifically, in [7], Gibert Oliveras et al. (Spain) reported the results of a project developed in Catalonia, Spain, aiming to help in the COVID-19 crisis. The project allowed quick territory screening, providing relevant information to support informed decision-making and strategy and policy design. The authors proposed a data-driven methodology to deal with small subgroups of the population while preserving statistical secrecy. In [8], Silitonga et al. (Indonesia) developed prediction models to estimate the severity level of dengue from patients' laboratory test results, using artificial neural networks (ANN) and discriminant analysis (DA) applied to very small datasets.
Methodological Articles
Four contributions proposed general methods for machine learning with low-quality datasets. In [1], the authors provided a unified review of decomposition methods, including linear decomposition, low-rank matrix/tensor factorization, sparse matrix/tensor decomposition, and empirical mode decomposition (EMD) models. This paper illustrates the ability of these decomposition models to impute missing features, denoise data, and artificially generate additional data samples (data augmentation), with examples from brain-computer interfaces (BCI) and epileptic EEG analysis, among others. In [9], Lee et al. (South Korea) developed feature extraction methods based on the non-negative matrix factorization (NMF) algorithm, applied to weakly supervised sound event detection.
The algorithm considers learning from both strongly and weakly labeled data. In [10], Gil et al. (Spain) investigated the use of optimization in the preprocessing step of time-series joining. More specifically, the authors proposed an error function to measure the adequateness of the joining and demonstrated the effectiveness of the proposed method on synthetic datasets and a real industrial process scenario. Finally, in [11], Wang et al. (China-Japan) proposed a novel multi-label feature selection approach that embeds label correlations (dubbed ELCs) to eliminate irrelevant and redundant features, also referred to as noisy features.
Applications to the Industry
This Special Issue also includes two papers studying the application of machine learning to specific practical problems in different industries: fishing and smart buildings. In [12], Marti-Puig et al. (Spain) addressed the problem of distinguishing between different Mediterranean demersal fish species that share a remarkably similar form and are also used for the evaluation of marine resources. The authors tackled both binary and multi-class classification problems based on very small datasets with unreliable labels. In [13], Ge et al. (Japan-China) proposed a unified and practical framework for knowledge inference inside smart buildings.
Other Applications
Face recognition and natural language processing are two very important machine learning problems, and both were addressed in this Special Issue for cases with low-quality datasets. In [14], Lee et al. (Korea) studied the problem of training a facial recognition system when only one sample per identity is available. The authors proposed a data augmentation technique that introduces pixel-level changes associated with facial variations by extracting a binary weighted interpolation map (B-WIM) from neutral and variational images in an auxiliary set. In [1], the EMD method was applied to remove noise from face images, thus improving the classification accuracy of a machine learning classifier. Finally, in [15], Mouratidis et al. (Greece) provided an application to natural language processing: they developed a deep learning scheme for machine translation evaluation (English-Greek and English-Italian) based on different categories of information (linguistic features, natural language processing metrics, and embeddings), using a machine learning model trained on noisy and small datasets.
Conclusions
The correct handling of noisy, incomplete, or small datasets remains an open problem in the artificial intelligence community. This Special Issue collects fifteen research papers providing general approaches to some low-quality dataset problems and clear practical examples in different applied sciences disciplines. This collection of papers represents a good reference for the current state of the art and provides an excellent starting point for developing new advanced methods in the future.
|
A new high-resolution chronology for the late Maastrichtian warming event: Establishing robust temporal links with the onset of Deccan volcanism
The late Maastrichtian warming event was defined by a global temperature increase of ~2.5–5 °C that occurred ~150–300 k.y. before the Cretaceous-Paleogene (K-Pg) mass extinction. This transient warming event has traditionally been associated with a major pulse of Deccan Traps (west-central India) volcanism; however, large uncertainties associated with radiogenic dating methods have long hampered a definitive correlation. Here we present a new high-resolution, single species, benthic stable isotope record from the South Atlantic, calibrated to an updated orbitally tuned age model, to provide a revised chronology of the event, which we then correlate to the latest radiogenic dates of the main Deccan Traps eruption phases. Our data reveal that the initiation of deep-sea warming coincides, within uncertainty, with the onset of the main phase of Deccan volcanism, strongly suggesting a causal link. The onset of deep-sea warming is synchronous with a 405 k.y. eccentricity minimum, excluding a control by orbital forcing alone, although
INTRODUCTION
A period of rapid climate change, represented initially by a transient global warming event and followed by a global cooling, occurred during the last few hundred thousand years of the Maastrichtian and may have played an ancillary role in the ultimate demise of many terrestrial and marine biota at the Cretaceous-Paleogene (K-Pg) boundary (e.g., Keller et al., 2016). The so-called late Maastrichtian warming event was characterized by a transient global ~2.5-4 °C warming in the marine realm based on benthic δ¹⁸O and organic paleothermometer (TEX₈₆ᴴ) data (e.g., Li and Keller, 1998; Woelders et al., 2017), and ~5 °C warming in the terrestrial realm based on pedogenic carbonate δ¹⁸O and the proportion of untoothed leaf margins in woody dicot plants (Nordt et al., 2003; Wilf et al., 2003). Enhanced deep-sea carbonate dissolution, most pronounced in the high latitudes (Henehan et al., 2016), and abrupt decreases in vertical temperature and carbon isotope gradients in the marine water column have also been documented (Li and Keller, 1998).
This transient warming event has previously been linked to a major pulse of Deccan Traps volcanism, centered in modern-day western India; however, until recently, the large uncertainties associated with radiogenic dating have hampered a robust correlation (e.g., Chenet et al., 2007). In recent years improvements in the precision of radiogenic dating methods have allowed for a more robust correlation between pre-K-Pg climate change and volcanism (e.g., Renne et al., 2015; Schoene et al., 2015). To complement advances in dating of the volcanic sequences, we present the highest resolution (1.5-4 k.y.), complete single species benthic stable isotope record produced to date, calibrated to an updated orbitally tuned age model, for the final million years of the Maastrichtian and the first 500 k.y. of the Danian. This allows us to much more accurately correlate the major climatic shifts of the terminal Maastrichtian with Deccan volcanism, facilitating future work investigating the link between Deccan-induced climate change and the K-Pg mass extinction.
MATERIALS AND METHODS
A stratigraphically continuous late Maastrichtian-early Danian sedimentary section was recovered at Ocean Drilling Program (ODP) Site 1262 (Walvis Ridge, South Atlantic; 27°11.15′S, 1°34.62′E; water depth 4759 m, Maastrichtian water depth ~3000 m; Shipboard Scientific Party, 2004), where the late Maastrichtian is represented by an expanded section of foraminifera-bearing, carbonate-rich nannofossil ooze with a mean sedimentation rate of 1.5-2 cm/k.y. We have constructed an updated orbitally tuned age model for this site based on recognition of the stable 405 k.y. eccentricity cycle in our high-resolution benthic carbon isotope (δ¹³C_benthic) data set, correlated to the La2010b solution of Laskar et al. (2011) and anchored to an astronomical K-Pg boundary age of 66.02 Ma (Dinarès-Turell et al., 2014). The key tie points used to create this age model are listed in Table DR2 in the GSA Data Repository. All published data presented herein have also been migrated over to the same age model for comparison (Figs. 1 and 2; detailed methods are provided in the Data Repository). We generated δ¹³C and δ¹⁸O data using the epifaunal benthic foraminifera species Nuttallides truempyi on an IsoPrime 100 gas source isotope ratio mass spectrometer in dual inlet mode equipped with a Multiprep device at the Natural Environment Research Council Isotope Geosciences Facility (British Geological Survey). The internal standard KCM, calibrated against the international standard NBS-19, was used to place data on the Vienna Peedee belemnite (VPDB) scale, with average sample analytical precision (1σ) of 0.03‰ for δ¹³C and 0.05‰ for δ¹⁸O. The complete benthic stable isotope data set is available online in the PANGAEA database (https://doi.pangaea.de/10.1594/PANGAEA.881019). Bottom-water temperatures were calculated from δ¹⁸O_benthic data by converting N. truempyi data to Cibicidoides values, then using Equation 1 of Bemis et al. (1998). Stable isotope data were graphically detrended in KaleidaGraph 4.0 using a 15% running mean, to remove long-term trends, then bandpass filtering was conducted in AnalySeries 2.0 (Paillard et al., 1996) for 405 k.y. eccentricity at 0.002467 ± 0.000700 cycles/k.y. and 100 k.y. eccentricity at 0.010 ± 0.003 cycles/k.y.
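For readers without access to KaleidaGraph or AnalySeries, the following sketch reproduces the spirit of the detrending and band-pass steps with open-source tools. It assumes an evenly resampled series and uses the band definitions quoted above, but it is only an approximate stand-in for the software actually used in the study.

```python
# Rough open-source analogue of the detrending/filtering workflow described above.
import numpy as np
from scipy.signal import butter, filtfilt

def detrend_running_mean(x, window_frac=0.15):
    """Subtract a running mean spanning ~15% of the record length (approximate at edges)."""
    n = max(3, int(len(x) * window_frac))
    trend = np.convolve(x, np.ones(n) / n, mode="same")
    return x - trend

def bandpass(x, dt_kyr, f_center, f_halfwidth, order=3):
    """Zero-phase Butterworth band-pass; frequencies in cycles/k.y.; data must be evenly sampled."""
    nyq = 0.5 / dt_kyr
    lo, hi = (f_center - f_halfwidth) / nyq, (f_center + f_halfwidth) / nyq
    b, a = butter(order, [lo, hi], btype="band")
    return filtfilt(b, a, x)

# Example with a synthetic, evenly resampled series (dt = 2 k.y.):
t = np.arange(0, 1500, 2.0)                      # age axis in k.y.
x = np.sin(2 * np.pi * t / 405) + 0.5 * np.sin(2 * np.pi * t / 100)
x_d = detrend_running_mean(x)
ecc405 = bandpass(x_d, 2.0, 0.002467, 0.000700)  # 405 k.y. band from the text
ecc100 = bandpass(x_d, 2.0, 0.010, 0.003)        # 100 k.y. band from the text
```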
RESULTS
The new stable isotope data show that relatively stable and cool temperatures persisted in the deep South Atlantic Ocean from 67.1 to 66.8 Ma, followed by the onset of a longer term gradual warming (1 °C) and decline in δ¹³C_benthic values from 66.75 to 66.5 Ma (Fig. 1). The late Maastrichtian warming event initiated at ca. 66.34 Ma, ~300 k.y. before the K-Pg boundary, with peak warming of ~+4 °C (δ¹⁸O_benthic excursion of ~0.8‰) attained between ca. 66.27 and 66.18 Ma (Fig. 1). A more gradual, step-wise cooling to pre-excursion temperatures then took place over the next 200 k.y., terminating at the K-Pg boundary (Fig. 1). Conversely, the δ¹³C_benthic record appears to show a muted response compared to the δ¹⁸O_benthic record during the warming event, with only a minor negative excursion of ~0.5‰ noted between 66.3 and 66.2 Ma (Fig. 1). The magnitude and character of the excursions in δ¹³C_benthic and δ¹⁸O_benthic data at Site 1262 are similar to those reported in lower resolution data from Deep Sea Drilling Project (DSDP) Site 525 (Li and Keller, 1998; Fig. DR3), located at a shallower paleodepth of 1-1.5 km on Walvis Ridge, suggesting a similar magnitude of warming in deep and intermediate waters of the South Atlantic. Confirming that these characteristics are global, deep Pacific stable isotope data from ODP Site 1209 also show a coeval but somewhat smaller warming pulse, and a muted response in δ¹³C_benthic values similar to those observed in the Atlantic (Fig. 2; Westerhold et al., 2011). The minor offset of Pacific δ¹³C_benthic values, by as much as −0.4‰ relative to the South Atlantic, suggests that an older water mass was bathing the equatorial Pacific site, consistent with previously reported Paleocene-Eocene trends (Littler et al., 2014; Fig. 2). The onset of the warming event in the Atlantic corresponds to a 405 k.y. eccentricity minimum, with the peak of the event occurring during a 100 k.y. eccentricity maximum but prior to a 405 k.y. eccentricity maximum. The δ¹⁸O_benthic leads δ¹³C_benthic (i.e., climate leads carbon cycle) by ~30-40 k.y. within the 405 k.y. band, consistent with late Paleocene-early Eocene trends recorded further upsection at this site (Littler et al., 2014). It is interesting that the δ¹⁸O_benthic and δ¹³C_benthic data become antiphase at the 100 k.y. frequency during the warming event, but are in phase with carbon lagging oxygen by ~10 k.y. earlier in the Maastrichtian and by ~5 k.y. during the earliest Danian (Fig. 1).
DISCUSSION
The new high-resolution benthic stable isotope data placed onto our updated orbitally tuned age model demonstrate that the late Maastrichtian warming event closely coincides with the onset of the main phase of Deccan volcanism, regardless of the radiogenic dating technique used, strongly suggesting a causal link (Fig. 1). Furthermore, both the relatively long duration of the warming event and the initiation of the warming during a minimum in the 405 k.y. eccentricity cycle suggest that a control by orbital forcing alone is unlikely, and that Deccan volcanogenic CO₂ emissions were likely to be the primary climate driver over 100 k.y. time scales. Based on the distribution of red boles (weathering horizons) within the Deccan basalts, volcanism of the pre-K-Pg Kalsubai subgroup was characterized by more frequent eruptions of a smaller magnitude, likely leading to a larger cumulative atmospheric pCO₂ increase than post-K-Pg eruptions (Renne et al., 2015; Schoene et al., 2015). By contrast, Danian eruptions had longer hiatuses between large eruptive events, allowing for partial CO₂ sequestration by silicate weathering or organic burial.
Despite strong evidence for climatic warming and some evidence for elevated atmospheric pCO₂ (Barclay and Wing, 2016; Nordt et al., 2002, 2003; Fig. 1), characteristic of many hyperthermals of the early Paleogene such as the Paleocene-Eocene Thermal Maximum (e.g., McInerney and Wing, 2011), the C isotope records and lack of evidence for significant ocean acidification at Site 1262 (e.g., reduction in %CaCO₃ or increase in Fe concentration) suggest a relatively minor carbon cycle perturbation (Figs. 1 and 2). Given the comparatively heavy δ¹³C signature (−7‰) of volcanogenic CO₂, voluminous Deccan emissions may not have created a major perturbation to the isotope composition of the global δ¹³C pool. The absence of a major negative carbon cycle perturbation suggests that sources of isotopically light carbon (e.g., biogenic methane or the oxidation of organic matter) were not destabilized and released in significant quantities during the event. This differential response between the δ¹⁸O_benthic and δ¹³C_benthic records, and the lack of evidence for significant global deep-ocean acidification (Fig. 1), may be due to the rate of volcanogenic CO₂ emission: the background-to-peak warming occurred rather slowly, over ~70-80 k.y., during the late Maastrichtian event, but was much more rapid, ~10-20 k.y., during Paleogene hyperthermals (e.g., McInerney and Wing, 2011; Zeebe et al., 2017). However, evidence for enhanced deep-sea dissolution during this event has been described from the high latitudes in %CaCO₃ records from ODP Site 690 (Henehan et al., 2016) and in orbitally tuned Fe intensity and magnetic susceptibility data from Integrated Ocean Drilling Program Site U1403 on the Newfoundland margin (Batenburg et al., 2017). These deep-sea sites may have been particularly sensitive to smaller carbon cycle perturbations during this time, with Site 690 located in the principal region of deep-water formation in the Southern Ocean and with Site U1403, at a paleodepth of ~4 km, being more sensitive to smaller fluctuations in the Maastrichtian calcite compensation depth than the shallower Site 1262 (Henehan et al., 2016). It is clear that more high-resolution pCO₂ proxy studies are urgently required to more confidently assess Deccan-induced perturbations to the global carbon cycle. The lag between the climate and carbon cycle response within the 405 k.y. band (Fig. 1), as seen throughout the Paleocene-Eocene (Littler et al., 2014), may suggest that small quantities of light carbon were released as a positive feedback to orbitally driven warming. The observed antiphase behavior between δ¹³C and δ¹⁸O within the 100 k.y. band during the warming event, but not before or after (Fig. 1), may result from the pulsed release of small amounts of isotopically light carbon superimposed on the longer (300 k.y.) scale warming imparted by the Deccan eruptions. In addition, amplified precession-scale (~21 k.y.) variability visible in the dissolution proxies (Fe and %CaCO₃) and δ¹³C records during the event also suggests increased carbon cycle sensitivity, perhaps due to generally elevated CO₂ levels from Deccan activity (Fig. 1).
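The muted δ¹³C response can be illustrated with a simple isotope mass balance. The sketch below uses illustrative round numbers (not values from this study) to show why carbon at −7‰ shifts the global δ¹³C pool far less than an isotopically light source such as biogenic methane would.

```python
# Back-of-envelope isotope mass balance. Reservoir size, initial composition and
# input masses are illustrative round numbers, not data from this study.

def delta13c_after_mixing(M_res, d_res, m_in, d_in):
    """Reservoir δ13C after mixing m_in (Gt C) of carbon at d_in (permil)
    into a reservoir of M_res (Gt C) at d_res (permil)."""
    return (M_res * d_res + m_in * d_in) / (M_res + m_in)

M, d0 = 40000.0, 2.0          # exogenic carbon pool (Gt C) and initial δ13C (illustrative)
m = 4000.0                    # hypothetical cumulative carbon input (Gt C)

print(delta13c_after_mixing(M, d0, m, -7.0) - d0)   # volcanic CO2: shift of only ~ -0.8 permil
print(delta13c_after_mixing(M, d0, m, -60.0) - d0)  # biogenic methane: shift of ~ -5.6 permil
```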
The limited available planktic stable isotope data (e.g., ODP Site 690) suggest that significant warming, ~2.5 °C, occurred in the southern high latitudes during the event (Fig. 2; Stott and Kennett, 1990). Organic paleothermometer TEX₈₆ᴴ data from the Neuquén Basin, Argentina, also suggest significant warming of surface waters of ~3 °C in continental shelf settings at mid-latitudes (Fig. 1; Woelders et al., 2017). A negative bulk δ¹⁸O excursion of 1‰ has also been resolved from the Newfoundland margin, suggesting that a pronounced surface-water warming also occurred in the mid-northern latitudes during this time, although bulk δ¹⁸O values cannot reliably be converted into absolute surface-water temperatures (Batenburg et al., 2017). By contrast, there appears to have been very little change in surface-water temperatures at lower latitudes, although this interpretation is tentative, being based on the availability of only one fine fraction data set from DSDP Site 577 (Fig. 2). A much more significant bottom-water warming at mid-low latitudes created a dramatic reduction in the surface-to-deep temperature gradient and reduced thermal stratification of the water column (Li and Keller, 1998; Fig. 2). Taken together, these data suggest a possible polar amplification of surface-water warming during the late Maastrichtian warming event; however, more single species planktic isotope records with greater latitudinal coverage are required to fully evaluate latitudinal variations in surface temperature during this event.
CONCLUSIONS
Our revised chronology for the late Maastrichtian warming event, combined with the latest radiogenic dates for Deccan volcanism, points to the synchronous onset of the main phase of Deccan volcanism with the late Maastrichtian warming event ~300 k.y. before the K-Pg boundary. The onset of the warming is unlikely to have been orbitally controlled, further supporting volcanic CO₂ as the trigger. Increased carbon cycle sensitivity to orbital precession is evident during the greenhouse event, suggesting system sensitivity to background temperature conditions. Now that the environmental effects of Deccan volcanism have been more confidently established, future work should focus on evaluating the role of these precursor climatic changes in the K-Pg mass extinction.
Figure 2. Stable isotope data across the late Maastrichtian event. A: Benthic δ¹³C and δ¹⁸O data for Ocean Drilling Program (ODP) Site 1262 (this study) plotted against benthic data from Site 1209 (equatorial Pacific; Westerhold et al., 2011) for comparison. T-temperature; S-South; K-Pg-Cretaceous-Paleogene boundary. B: Planktic δ¹³C and δ¹⁸O data from Deep Sea Drilling Project (DSDP) Site 577, equatorial (Eq.) Pacific (Zachos et al., 1985), DSDP Site 525, South Atlantic (Li and Keller, 1998), and ODP Site 690, Southern Ocean (Stott and Kennett, 1990). N.-North. Planktic and bulk δ¹⁸O data have been normalized to a baseline of 0‰ for pre-event conditions to compare the magnitude of the warming event by latitude. C: Shallow-to-deep δ¹³C and temperature gradients at Site 525 (Li and Keller, 1998).
Local structural modelling and local pair distribution function analysis for Zr–Pt metallic glass
In disordered glass structures, structural modelling and analysis based on local experimental data are not yet established. Here we investigate the icosahedral short-range order (SRO) in a Zr–Pt metallic glass using local structural modelling, which is a reverse Monte Carlo simulation dedicated to two-dimensional angstrom-beam electron diffraction (ABED) patterns, and local pair distribution function (PDF) analysis. The local structural modelling invariably leads to icosahedral SRO atomic configurations that are similarly distorted, even when starting from different initial configurations. Furthermore, the SRO configurations with 11–13 coordination numbers reproduce almost identical ABED patterns, indicating that these SRO structures are similar to each other. Further local PDF analysis explicitly indicates the presence of a wide distribution of atomic bond distances, comparable to the global PDF profile, even at the SRO level. The SRO models based on conventional MD simulation can be strengthened by comparison with those obtained by the present local structural modelling and local PDF analysis based on the ABED data.
For a disordered structure, the three-dimensional total intensity for monatomic systems containing N atoms can be written as

$$I(\mathbf{Q}) = f(Q)^{2} \sum_{j=1}^{N} \sum_{k=1}^{N} \exp\left[i\,\mathbf{Q}\cdot(\mathbf{r}_{j}-\mathbf{r}_{k})\right], \quad (1)$$

where Q is a diffraction vector and f(Q) is an atomic scattering factor. Because the intensity obtainable from conventional diffraction experiments is basically isotropic (see "global diffraction" in Fig. 1) in three-dimensional space, the intensity naturally becomes one-dimensional information,

$$I(Q) = N f(Q)^{2} \left[ 1 + \int_{0}^{\infty} 4\pi r^{2} \left(\rho(r)-\rho_{0}\right) \frac{\sin Qr}{Qr}\, dr \right], \quad (2)$$

where ρ(r) and ρ₀ are a density function and the average atomic density, respectively. Note that the diffraction vector Q in Eq. (1) is replaced by a scalar Q. In addition, for the purposes of clarity and emphasis on the macroscopic versus microscopic distinctions in the intensity analysis, Eqs. (1) and (2) are presented in a simplified form that assumes a single-element system. By subtracting a background from the total intensity, it is possible to obtain a PDF profile using a Fourier transform. Equation (2) can only be applied to global diffraction including structural information from a large number of atoms, because the intensity should be isotropic. However, for the local diffraction intensity from a small number of atoms, such as in ABED, it is necessary to use Eq. (1) to understand the spotty, directional intensity seen in the ABED data (see "ABED experiment" in Fig. 1). However, there is still no suitable method to obtain reasonable PDF profiles from the local diffraction intensity.
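To make the distinction between Eqs. (1) and (2) concrete, the following minimal numerical sketch (our illustration, not code from the study) evaluates both expressions for a small cluster, taking f(Q) = 1 for simplicity; the cluster coordinates are random demo data.

```python
# Minimal numerical sketch of Eqs. (1) and (2) for a small monatomic cluster,
# taking f(Q) = 1 for simplicity. Cluster geometry is arbitrary demo data.
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(-3.0, 3.0, size=(13, 3))      # 13 atoms inside a ~3 angstrom box

def intensity_directional(Qvec, r):
    """Eq. (1): I(Q) = |sum_j exp(i Q . r_j)|^2 for a single diffraction vector."""
    phase = np.exp(1j * (r @ Qvec))
    return np.abs(phase.sum()) ** 2

def intensity_debye(Q, r):
    """Eq. (2) in Debye form: orientation-averaged intensity via sin(Q d)/(Q d)."""
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        sinc = np.where(d > 0, np.sin(Q * d) / (Q * d), 1.0)
    return sinc.sum()

print(intensity_directional(np.array([2.5, 0.0, 0.0]), r))  # spotty, orientation-dependent
print(intensity_debye(2.5, r))                              # smooth, orientation-averaged
```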
In this work, we apply local structural modelling, similar to a reverse Monte Carlo (RMC) simulation [15] (see Fig. 1), to the SRO (atomic coordination polyhedra) of metallic glasses based on the ABED experiment, and we also propose a local PDF analysis for the resultant local structure models. This local structural modelling for the ABED data is called local RMC modelling. Since it is difficult to construct PDFs for "anisotropic" SROs including only a dozen atoms, as mentioned above, we use a kernel density estimation to solve the problem. The local PDFs for the resultant local structural models are compared with the conventionally used global PDF, which can be derived from the global diffraction data by Fourier transform. Based on the results, we discuss the structural features of icosahedral-like atomic configurations with different coordination numbers in the glass and the origin of the broadening of peaks in the PDFs. It should be noted that icosahedral atomic configurations in glasses or liquids have been extensively discussed by theoretical and experimental approaches [16-21].
Results
Structural models of metallic glasses for global diffraction data have been confirmed by the complementary use of MD and RMC in previous studies [3]. We first confirmed the degree of distortion of icosahedral atomic configurations in an MD model. Distributions of coordination numbers and Voronoi polyhedral analyses for the Zr₈₀Pt₂₀ MD model are shown in Fig. 2. As shown in Fig. 2a,b, the coordination numbers for Pt and Zr are distributed from 9 to 13 and from 10 to 15, respectively. The dominant Voronoi polyhedra are <0 0 12 0> and <0 2 8 1> for central Pt atoms, and <0 1 10 2> and <0 0 12 0> for central Zr atoms, as shown in Fig. 2c,d. Note that the Voronoi index <0 0 12 0> indicates an icosahedral atomic configuration with coordination number 12. The standard deviation of atomic bond distances for the MD model is also shown in Fig. 2e: a total of 60 icosahedral atomic clusters with Voronoi index <0 0 12 0> were randomly extracted from the Zr₈₀Pt₂₀ model for statistical analysis, and it is evident that the typical icosahedra in the MD model were severely distorted, even though their Voronoi indices are <0 0 12 0>. We also investigated the dependence on the cooling rate, but it did not significantly affect the conclusions (see Fig. S1 in the supplementary material). The trend of distortion will be verified by the local RMC modelling and local PDF analyses shown below.
The procedure of local RMC modelling based on the ABED experiment and local PDF analysis is summarised in Fig. 1. First, ABED patterns are obtained experimentally as two-dimensional diffraction data from sub-nanometer regions of glassy samples. Then, to obtain three-dimensional SRO structural models by local RMC, we set an initial atomic configuration confined to a spherical boundary whose size is comparable to the experimental beam size. The number density of atoms in the sphere is close to the average number density. An atomic displacement for a randomly selected atom is randomly assigned. A two-dimensional diffraction pattern is calculated for the configuration given by the atomic displacement and then compared with the experimental result by evaluating a judging function. This procedure is repeated until the model matches the experiment, as in a conventional RMC simulation [22-24]. Finally, we perform the local PDF analysis for the local structural models obtained by local RMC and compare them with the conventionally used global PDF, which is derived from global "isotropic" diffraction data (halo rings) using a Fourier transform. The three-step procedure allows us to obtain reasonable and smooth PDF profiles from local atomic configurations containing as few as a dozen atoms.
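For illustration, the following schematic shows the shape of such a local RMC loop. It is a simplified, greedy variant of the acceptance rule, uses a kinematic |F(Q)|² calculation instead of the multislice simulation actually employed in the study, and substitutes a synthetic target for the experimental pattern; the sphere radius, maximum displacement and Q-grid follow the values given later in the Methods, while everything else is a placeholder.

```python
# Schematic of the local RMC loop described above. A kinematic pattern stands in
# for the multislice simulation, and `I_exp` is a placeholder for the
# experimental 41 x 41 pixel ABED pattern.
import numpy as np

rng = np.random.default_rng(2)
R, MAX_STEP = 3.0, 2.0                      # sphere radius and max displacement (angstrom)
qx = qy = np.linspace(-6.28, 6.28, 41)      # 1/angstrom grid (= +/-62.8 1/nm), 41 x 41 pixels
QX, QY = np.meshgrid(qx, qy)

def pattern(r):
    """Kinematic 2D diffraction pattern of a cluster at a fixed orientation."""
    phase = QX[..., None] * r[:, 0] + QY[..., None] * r[:, 1]
    F = np.exp(1j * phase).sum(axis=-1)
    return np.abs(F) ** 2

def chi2(r, I_exp):
    """Judging function: sum of squared intensity differences over all pixels."""
    return ((pattern(r) - I_exp) ** 2).sum()

r = np.zeros((13, 3))                        # 13 atoms at the origin (structureless start)
I_exp = pattern(rng.uniform(-R, R, (13, 3))) # stand-in "experimental" target
cost = chi2(r, I_exp)
for _ in range(20000):
    i = rng.integers(len(r))
    trial = r.copy()
    trial[i] += rng.uniform(-MAX_STEP, MAX_STEP, 3)
    if np.linalg.norm(trial[i]) > R:         # reject moves outside the spherical boundary
        continue
    c = chi2(trial, I_exp)
    if c < cost:                             # greedy simplification of the RMC acceptance rule
        r, cost = trial, c
```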
In general, the resultant structural models of atomistic simulations strongly depend on the initial atomic configurations. Accordingly, we constructed local structural models that are consistent with the experimental diffraction pattern starting from three different initial configurations. Three structures, each consisting of 13 atoms, with crystalline face-centred cubic (fcc), perfect icosahedral, and structureless configurations, were prepared for use in the local RMC modelling. A list of the fractions of final structures constructed by local RMC from the three different initial configurations is shown in Fig. 3a. In all cases, distorted icosahedra with a Voronoi index of <0 0 12 0> were formed, although their initial configurations were completely different from each other. An example of the fitting process for a Zr₈₀Pt₂₀ metallic glass is shown in Fig. 3b,c. In this case, 13 atoms were placed at the origin as an initial configuration. A diffraction pattern for the initial structure exhibits no diffraction spots due to the structureless feature. The simulated pattern gradually approaches and eventually overlaps with the experimental pattern. The configuration eventually becomes a distorted icosahedron.
In the above local RMC modelling, the coordination number was fixed at 12. The coordination number, however, is usually not fixed, especially for metallic glass structures with relatively large average coordination numbers, as mentioned above. The wide distribution of coordination numbers for the Zr₈₀Pt₂₀ model obtained by an MD simulation has already been shown in Fig. 2. We further investigated the effect of a change in coordination number on the fitting results of the local RMC modelling. Figure 4 shows a list of the final structures obtained using the three initial configurations consisting of 12, 13, and 14 atoms, corresponding to CN11, CN12, and CN13, respectively (CN denotes coordination number). All atoms in the initial configurations are placed at the origin. For CN11 and CN12, most attempts resulted in the formation of atomic clusters of <0 2 8 1> and <0 0 12 0>, respectively. For CN13, atomic clusters of <0 1 10 2> and <0 2 10 1> were formed with a total probability of 80%. These results imply that the clusters with <0 2 8 1>, <0 0 12 0>, and <0 1 10 2> (or <0 2 10 1>) adequately match the identical experimental diffraction pattern, indicating a close structural relationship among these atomic clusters. It should also be mentioned that these atomic clusters are frequently found in the MD model, as shown in Fig. 2.
The local pair distribution function (PDF) profiles for the structural models obtained by local RMC were calculated using a kernel density estimation method. This method allows us to draw reasonable PDF profiles even for atomic clusters consisting of a small number of atoms. Figure 5a shows the PDFs obtained from the resulting CN12 structural models. The local PDF profiles for the cases of CN11 and CN13 discussed in Fig. 4 are also shown in Fig. 5b,c, respectively. The distributions of bond lengths for CN11 and CN13 are relatively broad, similar to that for CN12 (Fig. 5a). For reference, the local PDF for the perfect icosahedron without any distortion is shown in Fig. 5d. We can immediately see that the width of the peaks in the local PDFs for the metallic glass models is completely different from that of the perfect icosahedron. This implies that the breadth of the global PDF originates from the broad local PDFs of individual SROs, rather than from the broad distribution of coordination numbers, although the distribution for the larger coordination number is slightly broader than that for the smaller coordination number. It can also be interpreted from these facts that the distorted icosahedron with CN12 is completely different from the perfect icosahedron and is therefore only an intermediate between the CN11 and CN13 structures and not a special configuration. Although most of the final structures have a Voronoi index of <0 0 12 0> for CN12, they are highly distorted and show a wide distribution of atomic bond distances. We then measured the standard deviation of the bond distances for 20 final structural models obtained independently. As shown in Fig. 5e, the standard deviations for all the models are found to be 0.28 ± 0.05 Å, which is much larger than that of a perfect icosahedron. This result implies that the modelling process is highly reproducible, although the atomic displacements are randomly generated in each process. In addition, to estimate the atomic displacements between a perfect and a distorted icosahedron, we conducted a local RMC modelling run started from a perfect icosahedron with small atomic displacements of less than 0.1 Å at each step, as shown in Fig. S2 in the supplementary material. The icosahedral topology was therefore maintained during the modelling process, unlike the cases in Fig. 3. The atomic coordinates before and after the local RMC modelling and the atomic displacements are also given in Table S1 in the supplementary material. The relatively large values of the atomic displacements imply that the icosahedron satisfying an experimental ABED pattern is heavily distorted from the perfect one.
Discussion
Local PDFs for all the icosahedron-related SROs (atomic coordination polyhedra) with CN11-13, as shown in Fig. 5, exhibit broad distributions that are comparable to those of global PDFs obtained by X-ray diffraction experiments. In other words, individual SRO structures, which have a glassy disordered nature even at a sub-nanometer level, form local PDFs similar to the global PDF, regardless of coordination number. This implies that the so-called icosahedron in metallic glasses should be far from the perfect icosahedron and should be closely related to the other icosahedral-like polyhedra with different coordination numbers. In fact, the <0 1 10 2> polyhedron can be transformed into <0 0 12 0> by allowing the outermost atom to be removed, as shown in Fig. 6. Similarly, <0 0 12 0> can be transformed into <0 2 8 1>. The energy barriers between the polyhedra could be discussed in the context of the local energy landscape [11]. It should be mentioned that the perfect icosahedron seen in quasicrystals [26,27] never transforms into <0 2 8 1> or <0 1 10 2> polyhedra by removing or adding an atom (Fig. S3 in the supplementary material). The smooth linkage between the polyhedra with different coordination numbers in metallic glasses may underlie their better ductility compared with quasicrystals; moreover, this property plays a significant role in dynamic processes such as relaxation and deformation. Indeed, when we investigated the changes in atomic clusters during structural relaxation at 900 K, which is below the glass transition temperature, we detected the sequence <0 0 12 0> → <0 2 8 1> → <0 1 10 2>, as shown in Fig. 7. This suggests that these atomic clusters easily transform into one another in the glass state. Such imperfection of the polyhedra could also be related to the origin of atomic-level stresses in glasses [28]. However, the structure-property relationship in glasses based on direct modelling remains to be solved.
The medium-range order (MRO) structures of metallic glasses are significant in relation to their mechanical properties and potential heterogeneities [29]. We are currently striving to obtain larger structures, related to possible heterogeneities, using the local RMC approach. However, the current approach often leads to computational challenges, such as difficulties in eliminating overlapping atoms, highlighting the need for further development.
In parallel, we are developing a complementary method utilizing virtual angstrom-beam electron diffraction (ABED) to identify MRO structures within MD simulations [30,31]. By integrating these two approaches, we aim to achieve a more comprehensive understanding of the MRO structures in metallic glasses. This combination of methodologies is expected to significantly enhance our ability to study the structures and properties of glassy materials in detail. Finally, we briefly discuss the limitations of the local RMC modelling used in this study. It must be emphasized that it is currently impossible to quantitatively match the experimental diffraction intensities with those calculated from atomic clusters. In this work, we only discuss the positions in reciprocal space and the relative intensities of the diffraction peaks. This is because the atomic clusters are not solely responsible for generating the diffraction intensities of ABED patterns; instead, the surrounding atoms may contribute to enhancing the diffraction intensities. Additionally, quantitative intensity measurements are not technically feasible. Furthermore, the fixed spherical boundaries used in this study may introduce biases into the results, and this aspect should be further examined. While these are challenges for future work, the significance of this study lies in deriving possible ABED patterns from a limited set of atoms.
In summary, we investigated icosahedron-related SROs in a Zr-Pt metallic glass through local RMC modelling and local PDF analysis based on the ABED experiment. The distorted icosahedral SRO models are almost always reproduced by the local RMC modelling, regardless of the initial configurations. The standard deviations of the bond distances in the SRO models obtained by local RMC are in the range 0.23-0.32 Å. Furthermore, the SRO models with coordination numbers from 11 to 13 are able to reproduce the identical ABED data. These facts imply that the SRO models satisfying the ABED data can be determined within a specific range. The features of the obtained SRO models are well consistent with those frequently found in the classical MD models. Local RMC modelling, where the structure models are directly derived from the ABED data, supports the potential-based MD simulation and vice versa, and also helps us to interpret the diffraction intensity in the ABED data. Additionally, local PDF analyses can provide smooth PDF profiles for the SRO models with 11-13 coordination numbers, which fit the global PDF profile well. This implies that the icosahedral SRO models proposed here have a disordered nature even at the SRO level.
Molecular dynamics simulation
The models of amorphous Zr₈₀Pt₂₀ alloys were constructed using the MD method. Embedded atom method (EAM) potentials developed by Sheng were employed [32]. A cubic cell including 9600 Zr atoms and 2400 Pt atoms was prepared with periodic boundary conditions. The initial configurations were kept at 2500 K for 50 ps and subsequently cooled to 300 K at a cooling rate of 1.7 × 10¹⁰ K/s. The system temperature and pressure were controlled using a Nosé-Hoover thermostat and a Nosé-Hoover barostat, respectively.
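As a quick sanity check on the quench protocol (not part of the original work), the stated cooling rate implies the following amount of simulated time; the 1 fs timestep is an assumed, typical MD value, not stated in the text.

```python
# Sanity-check arithmetic for the quench protocol above.
T_hi, T_lo = 2500.0, 300.0        # K
rate = 1.7e10                     # K/s
dt = 1.0e-15                      # s (assumed timestep)

t_quench = (T_hi - T_lo) / rate   # ~1.29e-7 s, i.e. ~129 ns of simulated time
n_steps = t_quench / dt           # ~1.3e8 MD steps under the assumed timestep
print(t_quench, n_steps)
```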
Angstrom-beam electron diffraction (ABED) experiment
A JEOL JEM-2100F transmission electron microscope with double spherical aberration correctors (operated at 200 kV) was utilised for the ABED measurements. All the ABED patterns were recorded using a CCD camera (Gatan, ES500W). A semi-parallel electron beam was produced with a specially designed small condenser lens aperture of 5 μm diameter. The convergence angle was estimated to be 3.3 mrad with a beam size of 0.4 nm. A large number of ABED patterns (more than 10,000 frames) were acquired from a thin film prepared by the ion-milling method with a cooling stage.
Local reverse Monte Carlo modelling
A local RMC modelling optimised for ABED experiments, which basically follows the steps of a conventional RMC simulation [22-24], was developed by our group [15]. In this case, as shown in Fig. 1, a small number of atoms (12-14) were initially set inside a spherical boundary with a radius R; the R value was set to 3.0 Å in this study. The atoms were randomly moved one by one to reduce the difference between the experimental and calculated ABED intensities. The maximum atomic displacement was set at 2.0 Å. Atomic displacements deviating beyond the boundary were rejected. The fitting ranges were −62.8 nm⁻¹ ≤ Qx ≤ 62.8 nm⁻¹ and −62.8 nm⁻¹ ≤ Qy ≤ 62.8 nm⁻¹ (41 × 41 pixels) in reciprocal space, centred on the origin. Note that Qx and Qy represent the magnitudes of the x and y components of the scattering vector, respectively. The judging function is the sum of the squared differences between the experimental and calculated diffraction intensities at each pixel. Some specific constraints on the atomic movements and distances were also applied. The ABED patterns were simulated via a conventional multislice method [33].
Local pair distribution function analysis
The pair distribution function (PDF) of each local structure was obtained by following the procedure given below. The bond lengths of all pairs of atoms were computed for each local structure; for example, the total number of atomic pairs for CN12 (13 atoms) is 78. Double counting was avoided by treating each pair of atoms as an unordered combination of two atoms. Conventionally, the PDF is obtained by treating the histogram of bond lengths as a radial distribution function (RDF). Here, however, we obtained the RDF based on a kernel density estimation, which is a statistical method for estimating the probability distribution of a random variable. A Gaussian function was employed as the kernel. The bandwidth of the Gaussian function, corresponding to the bin width of the histogram, was optimised using the criterion proposed by Shimazaki and Shinomoto [34]. The 95% confidence interval of each RDF was calculated using the bootstrap method [35]. The above calculations were performed using MATLAB code available online [36]. Finally, the PDF of each local structure was obtained by scaling the RDF with the coordination number over the spherical surface area at the radius of the bond length. The 95% confidence interval of each PDF was also obtained in the same manner.
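A minimal sketch of this bond-length KDE step is given below. It is our illustration rather than the authors' MATLAB code: it uses SciPy's default Scott's-rule bandwidth instead of the Shimazaki-Shinomoto criterion, omits the bootstrap confidence intervals, and uses a random demo cluster.

```python
# Sketch of the kernel-density step for a local PDF (simplified bandwidth choice).
import numpy as np
from scipy.stats import gaussian_kde

def bond_lengths(r):
    """All unordered pair distances; 13 atoms -> 13*12/2 = 78 values."""
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    return d[np.triu_indices(len(r), k=1)]

rng = np.random.default_rng(3)
r = rng.uniform(-3.0, 3.0, (13, 3))          # demo cluster, not a fitted model

L = bond_lengths(r)
kde = gaussian_kde(L)                         # Gaussian kernel, Scott's-rule bandwidth
rr = np.linspace(0.5, 7.0, 300)
rdf = kde(rr) * len(L)                        # smooth radial distribution of bonds
pdf = rdf / (4.0 * np.pi * rr**2)             # scale by the spherical shell area
```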
Figure 1. Procedures of a structural analysis for local atomic structures in metallic glasses. The procedures are composed of three parts: 1. angstrom-beam electron diffraction (ABED) experiment, 2. local reverse Monte Carlo (RMC) modelling, and 3. local pair distribution function (PDF) analysis. These three steps are necessary for making experiment-based local PDFs. The local PDFs are compared with the conventionally used global PDFs obtainable from global diffraction data that contain a three-dimensionally isotropic intensity distribution.
Figure 2. Local atomic environments for molecular dynamics simulation models. Distributions of coordination numbers around (a) Pt atoms and (b) Zr atoms, and lists of Voronoi indices around (c) Pt atoms and (d) Zr atoms. Coordination number and Voronoi polyhedral analyses were performed for the classical MD model including 9600 Zr and 2400 Pt atoms. (e) Distribution of the standard deviation of atomic bond lengths in icosahedral atomic clusters obtained by an MD simulation. A total of 60 icosahedral atomic clusters with Voronoi index <0 0 12 0> were randomly extracted from the Zr₈₀Pt₂₀ glass model.
Figure 3. Initial atomic configuration dependency of the local RMC modelling. (a) Three different initial atomic configurations were prepared: 13 atoms placed at the origin, a face-centred cubic (fcc) cluster with 13 atoms, and a perfect icosahedral cluster with 13 atoms. The local RMC modelling starts with these three initial configurations, and the resultant configurations are listed for the three cases in the graph. The number of trials is 50 for each modelling run. "Not converged" means that the calculation got stuck and failed to fit the simulated pattern to the experimental one. (b) Formation process of the atomic-structure model satisfying the experiment. (c) Simulated ABED patterns calculated from the corresponding structural models in (b).
Figure 4. Local reverse Monte Carlo modelling for atomic clusters with different coordination numbers. The final atomic structures after the local RMC modelling started from models consisting of 12, 13, and 14 atoms, corresponding to CN11, CN12, and CN13 atomic clusters, respectively. The number of trials was 20 for each case.

Figure 7. Sequential structural changes of atomic clusters during isothermal relaxation at 900 K. The relaxation process was simulated using molecular dynamics. The isothermal process was conducted after cooling from the liquid to 900 K, and the holding times at 900 K are shown above each atomic cluster. Changes in the atomic environment of atoms with the same identifiers were tracked over time. The changes in both the upper and lower atomic clusters followed the sequence <0 0 12 0> → <0 2 8 1> → <0 1 10 2>, and such transitions were indeed detected during the relaxation process.
Photobiomodulation as a treatment for dermatitis caused by chemoradiotherapy for squamous cell anal carcinoma: case report and literature review
In-field dermatitis is a severe and common adverse effect of radiation therapy that can cause significant pain and treatment interruptions in patients with squamous cell anal carcinoma (SCAC) being treated with radical chemoradiation protocols. There are no established therapies for the treatment of radiation-induced dermatitis. Photobiomodulation (PBM) is an effective and low-cost treatment for radiation-induced mucositis, but it has only recently been explored as a treatment for in-field dermatitis. We present a case report of the successful use of PBM for the treatment of dermatitis in the anal area in a patient with SCAC treated with concomitant chemoradiation with curative intent, and we follow it with a literature review of recent advances and of the possibilities of PBM as a promising strategy. PBM therapy proved to be efficient in the treatment of radiodermatitis, both in relieving symptoms and in controlling the dermatitis, in addition to improving the patient's quality of life.
Introduction
Squamous cell anal carcinoma (SCAC) is considered a rare tumor type. Human papillomavirus (HPV) infection is the major risk factor, especially HPV subtype 16. Currently, first-line treatment with curative intent consists of combined chemotherapy (CT) and radiation therapy (RT), with perineal surgery reserved as salvage therapy in case of relapse or residual tumor [1-3].
Intensity-modulated radiation therapy (IMRT) is the radiation therapy type of choice for treating cancer of the anal canal because its advanced technology reduces the exposure of healthy tissues to radiation, preserving the patient's bladder, intestinal and sexual function [4-6].
Despite technological advances, there are still significant side effects, such as in-field dermatitis, that can cause morbidity, pain and treatment interruptions. Singh and colleagues reported that dermatitis may affect as much as 95% of patients undergoing RT to the breast. Its intensity varies from mild erythema to dry or wet desquamation, which can affect quality of life and cause treatment delays [8].
Radiation-induced dermatitis arises because the target is close to the skin, which therefore receives a high radiation dose [9].
Total radiation dose, dose fractionation scheme, type of external beam used, radiosensitivity, concomitant chemotherapy, and the volume and area to be treated are possible associated risk factors [7].
Acute radiodermatitis in anal canal cancer is highly prevalent, occurring in 99.1% of cases. Severe acute radiodermatitis in cancer of the anal canal has an incidence rate of approximately 34.5% and may lead to treatment interruption [10,11].
The treatment of radiodermatitis is not well established. Skin care should be advised to prevent injury and infection and to keep lesions from progressing to a higher grade [12,13].
PBM is a treatment already described in studies for preventing and controlling radiodermatitis in patients undergoing RT for breast cancer. It appears to be a safe, non-invasive and low-cost resource and has already been widely used in head and neck tumors to prevent mucositis, a morbidity that impairs functionality and negatively impacts quality of life, in addition to increasing treatment costs [14-16].
Studies on the prevention of oral mucositis (OM) in patients undergoing RT for head and neck tumors suggest that PBM does not interfere with the tumor, the results of treatment or overall survival [17,18].
This study presents a case of PBM applied to radiation-induced dermatitis in a patient undergoing combined chemoradiotherapy for squamous cell carcinoma of the anal canal, and it also reviews previous studies using this therapy for the treatment and/or prevention of skin reactions caused by radiotherapy.
Case report
A 62-year-old man presented with a lesion in the anal and perianal region, accompanied by weight loss, tenesmus, urgency and discrete fecal incontinence, and was diagnosed with cancer of the anus and anal canal, identified as invasive squamous cell carcinoma (SCAC), in October 2019. After magnetic resonance imaging (MRI) of the abdomen and pelvis, in addition to computed tomography of the chest, the disease was staged as stage II.
The patient underwent treatment with capecitabine at a dose of 850 mg/m² twice daily on the days of radiotherapy (tumor bed and drainage chains). Capecitabine was chosen because of his recent HIV diagnosis and the concomitant use of antiretroviral therapy. IMRT was delivered in a total of 30 fractions (photon beam, RapidArc™ technique, 6 MV, 5400 cGy) over 5 weeks of treatment (10/22/2019 to 12/12/2019) to the anal lesion and pelvic nodes.
He was referred to physiotherapy before starting combined therapy. On examination, an anal fistula and a perianal tumor wound (Fig. 1) were found, with no associated pain. On D18 of RT he presented with grade 3 perianal radiodermatitis, according to the Radiation Therapy Oncology Group (RTOG) criteria, related to the radiotherapy. He complained of burning pain in the anal mucosa when defecating, and there was an increase in defecatory frequency with a predominance of pasty to liquid stools, classified as 6/7 on the Bristol scale [9,19]. PBM with a low-level laser (DMC EC POTENCIA brand; 100 mW, spot size = 0.03 mm) was applied at a red wavelength, with a dose of 2 J (joules) of energy to the irradiated perianal area and anal region twice a week, with an interval of 48 h between sessions, for 4 weeks, until D7 after RT completion. This resulted in relief of the pain and burning symptoms when defecating, with the visual analog scale (VAS) pain score falling from 9 to 3, and a decrease in the radiodermatitis grade from 3 to 2. Relief of the radiodermatitis symptoms began with the first application; by the seventh application (T7) the patient reported a VAS of 3, and from the eighth application (T8) he was asymptomatic.
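For context, the reported parameters imply the following simple dosimetry arithmetic; note that the spot size given in the report ("0.03 mm") is ambiguous, so the spot area used for the fluence figure below is purely an assumption.

```python
# Illustrative dosimetry arithmetic for the parameters reported above.
# The spot area (0.03 cm^2) is an assumption made only for the fluence example.
power_W = 0.100            # 100 mW
energy_J = 2.0             # energy delivered per point
spot_cm2 = 0.03            # assumed spot area in cm^2

time_s = energy_J / power_W          # 20 s of irradiation per point
fluence = energy_J / spot_cm2        # ~66.7 J/cm^2 under the area assumption
print(time_s, fluence)
```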
Of note, patients using antiretroviral therapy seem to have greater toxicity from the treatment and may present greater skin and gastrointestinal toxicity when compared to patients not infected with HIV. Occasionally, they need the radiotherapy to be suspended or its dose decreased [20,21].
Discussion
Anal canal cancer is relatively rare. Radiation therapy is often the modality chosen for treating this disease. Radiodermatitis is one of the side effects of radiation therapy that can interfere with treatment adherence, and it is also a cause of severe pain and morbidity during and after therapy. It is a tissue inflammatory response that can progress to ulceration or tissue necrosis [22,23].
In fact, Aragüés and colleagues suggest that acute radiodermatitis appears between 10 and 14 days after radiation therapy. However, radiodermatitis cases have been decreasing due to better treatment planning and the use of IMRT. According to Han and collaborators, radiodermatitis is considered the most common acute adverse effect of RT, with reported frequencies of 10%, 46% and 57% in anal and perianal cancer. In another retrospective study, on breast cancer, the incidence of radiodermatitis was 81.19%, with grade 2 being the most prevalent [24,25].
A prospective study carried out by Kachnic et al. [26] evaluated the dose-painted intensity-modulated radiation therapy (DP-IMRT) method, which allows the allocation of different dose targets for the treatment, creating a different dose distribution for each location. As a result, Radiation Therapy Oncology Group (RTOG) trial 0529 showed shorter treatment interruptions and a significant reduction in grade 3 acute dermatologic toxicity (23%) compared with RTOG 9811 (49%), which did not use DP-IMRT; the study also showed that 45% of patients discontinued treatment because of acute dermatitis, 88% of these due to pain.
Chronic radiodermatitis can appear approximately 90 days after radiotherapy; chronification of the tissue repair process induces fibrosis formation and, consequently, can be followed by pain, evacuation difficulty, pelvic dyssynergia and anal canal stenosis [2,8].
The mechanism of radiodermatitis development is largely linked to the inflammatory response associated with oxidative stress. Cellular damage induced by radiation, especially in the mitotic phase, triggers an inflammatory cascade that, when it becomes chronic in association with oxidative stress, leads to a modification of cytokines and alteration of the cell cycle, and also promotes DNA damage. These changes sustain the inflammatory cascade and consequently lead to disordered tissue repair [27].
Currently, radiodermatitis treatment and prevention are based on polytherapies individualized to each service. The literature has shown such approaches to be effective in mitigating radiodermatitis grade in breast cancer [28]. Robijns et al. [29] demonstrated that PBM can prevent acute radiodermatitis in patients with breast cancer submitted to RT. Additionally, that study demonstrated that 12% of the patients in the treatment group had grade 2 radiodermatitis, while 44.4% of the patients in the control group had grade 2 or higher radiodermatitis, concluding that the severity of skin reactions was significantly lower in the group in which PBM was performed, which means that this is an effective tool for preventing acute radiodermatitis, thus improving patients' quality of life.
In another study, the DERMISHEAD trial, Robijns et al. selected 46 head and neck cancer patients who underwent radiotherapy (RT), with or without concomitant chemotherapy, to receive PBM or placebo treatment, in order to investigate whether PBM could be effective. As a result, 77.8% of the patients in the control group had grade 2-3 radiodermatitis compared to 28.3% in the PBM group, a 49% reduction in severe radiodermatitis. This randomized study demonstrated the effectiveness of PBM for the prevention and management of radiodermatitis.
There are only two reported cases of low-level laser application to treat radiodermatitis in the anal area. The first described a laser approach for chronic radiodermatitis, and the second described the treatment of radiodermatitis in rectal cancer during RT. In both cases there was noticeable relief of pain symptoms in the perianal region and tissue mucosa, allowing the patients to return to their daily activities [30]. A randomized prospective study evaluated patients with head and neck cancer and showed that the use of PBM to treat oral mucositis (OM) was associated with a higher complete response rate to treatment. Patients who were followed for 40.3 months had a statistically greater complete response in the PBM group compared to the placebo group (89.1% vs. 67.4%), in addition to an increase in progression-free survival (61.7% vs. 40.4%) and a tendency toward better overall survival (57.4% vs. 40.4%). Patients who received PBM had a lower incidence of grade 3-4 OM (6.3% vs. 48%) and thus fewer gastrostomies, fewer treatment interruptions and less opioid use. In addition to the positive impact on therapy adverse events and the major impact on quality of life, the positive results in response and survival reinforce the use of this therapy as part of the multidisciplinary approach for patients with head and neck cancer [31].
Radiodermatitis, in addition to severely affecting the patient's quality of life, can also cause radiotherapy treatment interruptions. Therefore, an effective approach is needed to treat and prevent this common effect.
PBM promotes tissue regeneration due to its cellular and anti-inflammatory biomodulation action, relieving pain and promoting healing [32]. This case report reinforces the results of the aforementioned studies. The use of a low-level laser at a red wavelength is effective in the control of radiodermatitis.
Despite the limitations above, our approach was satisfactory, since there was pain relief on the visual analog scale, tissue healing, and a reduction of the radiodermatitis, according to the RTOG classification, from grade 3 to grade 2 in the anal region within four weeks. PBM is an effective, safe and low-cost therapeutic resource and does not interfere with therapy efficacy.
Conclusion
PBM in the anal region during RT treatment enabled symptom relief, radiodermatitis control and improved quality of life for the patient, in addition to being an innovative, safe and low-cost therapeutic option. We believe there is a need for randomized clinical trials to better define the parameters and to introduce this resource as a treatment protocol for radiodermatitis in the anal region.
Apocrine adenocarcinoma of the head and neck district: Our experience with two cases
Introduction Apocrine adenocarcinoma (AA) is a rare cancer of the apocrine glands that appears in the elderly, especially males. Surgery is considered the first option for the management of this tumor. Case presentation We report two cases of AA that occurred at our Unit of Maxillofacial Surgery: a woman with AA at a usual site, the eyelid region, and a man with AA at an unusual site, the neck. Discussion This cancer generally arises in specific areas of the body with high concentrations of apocrine glands (as in Case No. 2), but it can also occur in less typical areas, such as the neck (as in Case No. 1). Conclusion We discuss the surgical management of our cases: based both on our experience and on literature data, we recommend extensive surgical excision.
Introduction
Apocrine adenocarcinoma (AA) is a rare tumor that arises from the apocrine glands; it shows an increased incidence among older males [1,2]. The apocrine glands are a subtype of exocrine glands located at the junction of the lower dermis and the subcutaneous fat [3]. Their duct is formed by a single layer of secretory cells presenting a pathognomonic secretion type: secretion occurs by decapitation of part of the apical cytoplasm of the glandular cells [4]. This process involves 3 distinct phases. First, the apical cap is formed. This is followed by the formation of a dividing membrane at the base of the apical cap. Finally, tubules form parallel to the dividing membrane, which creates both a base for the secreted apical cap and a roof for the remaining secretory cell.
These glandular cells present an eosinophilic cytoplasm and are surrounded by an external layer of myoepithelial cells that assists in the secretory process. Histologically, apocrine glands can be viewed using light microscopy with hematoxylin and eosin staining. Apocrine glands are nonfunctional before puberty, at which time they grow and commence secretion. Some apocrine glands have specific names: for example, those on the eyelids are referred to as Moll's glands, and those in the external auditory meatus are termed ceruminous glands. While they can be found in many locations on the body, they secrete specific products at each distinct location. Although the exact function of apocrine glands varies depending on the gland's location, apocrine glands are believed to be an evolutionary remnant of the odorous organs of animals; for example, the scent glands of the skunk are modified apocrine-type structures. The anatomical regions most affected by AA, as they have a higher concentration of this specific glandular type, are the armpits, the ear canal, the eyelid region, and the perianal and periumbilical regions [5,6]. It is very infrequent to find AA in other body regions, such as the facial skin [7]. In the literature, there are few studies on AA at the level of the neck and on its surgical management. Below, we present two cases of unusually localized AA treated with surgery. This work has been reported in line with the PROCESS criteria [8].
Case no.1
A 77-year-old white man presented to the Maxillofacial Unit for a specialist surgical consultation with an itchy nodule on the anterior surface of the neck, which had increased in size over the previous six months. The patient's medical history was positive for arterial hypertension and chronic renal failure. Moreover, he did not have a personal or family history of cancer. On physical examination, the patient was in poor health. He had a heteroplastic nodule of about 3.0 × 2.2 cm in diameter, with a large central ulcer with jagged, raised margins (Fig. 1).
There was no active bleeding. The tumor was surrounded by pilosebaceous units and had a solid texture without associated pain. The patient reported the appearance of a single nodule, grown over time, which had been treated topically with cortisone and antibiotics. This treatment had, however, led to generalized irritation of the skin with subsequent formation of a non-bleeding ulcer. Due to the patient's clinical condition (on dialysis and bedridden), only an ultrasound of the neck region was performed; it was negative for cervical lymphadenopathy and showed a neoformation with poorly defined margins in the subcutaneous fat. Given the lesion's ambiguous characteristics and the patient's poor physical condition, an excisional biopsy of the neoformation was chosen (Fig. 2).
Excision was performed ensuring about 1 cm of margin around the neoformation (exposing the underlying platysma muscle) to prevent margin infiltration in case of a malignant pattern. Primary wound closure was performed, with no other reconstructive techniques. Due to the patient's poor state of health, more invasive surgical procedures on the neck were not performed. Histopathological examination revealed AA infiltrating the dermis and the hypodermis (EMA+, CAM 5.2+, cytokeratin 5-6+) (Fig. 3).
All resection margins were free of neoplasia. The patient underwent periodic follow-up. After about 50 days he returned for a check-up with a nodular neoformation within the scar from the previous surgery. The nodule had an irregular morphology, poorly defined margins, a diameter of about 2 × 1 cm, a hard consistency, and was painful. The patient underwent surgery again: the nodule was removed together with an abundant portion of the subcutaneous fat, freeing the underlying muscular plane, which appeared macroscopically free of disease. Thanks to the abundant laxity of the neck tissues, first-intention closure of the site was performed. Histopathological examination showed a dermo-hypodermic recurrence of AA with ductal-papillary differentiation. Excision margins were free of neoplasia.
Unfortunately, due to his poor clinical condition and worsening kidney disease, the patient died about one month after the second surgery.
Case no.2
A 75-year-old white woman presented to the Maxillofacial Unit for a specialist surgical consultation with a recurrence of a right orbital tumor. The patient had been treated in 2018, in another hospital, with surgical removal of an AA close to the external canthus of the right eye. The patient underwent regular clinical follow-up. After 6 months she noticed a neoformation at the orbital level, which progressively increased in size over the following months. The volumetric increase was rapid and sufficient to displace the eyeball inferiorly (Fig. 4B).
A computed tomography (CT) scan with contrast medium was performed. It showed a 22 × 18 mm heteroplasia occupying the lateral portion of the right orbit, with oval morphology and regular margins. It was contiguous with the right lateral rectus muscle and displaced the eyeball medially without invading it (Fig. 4A). The scan was also negative for lymph node involvement. The surgical management of this case was oriented only toward removal of the tumor, without lymphadenectomy and with preservation of the orbital structures (respecting the patient's wish not to undergo invasive treatments on the eye). The tumor was approached with a superior-lateral skin incision (at the level of the lateral third of the right eyebrow arch). After soft tissue dissection and surgical exposure of the cortical bone, we attempted to detach the neoplasm from the lateral wall of the right orbit with blunt surgical instruments. The tumor was closely attached to the bone surface, which showed superficial erosion, so we decided to perform an ostectomy of the superior orbital rim. Histopathological examination showed AA (EMA+, CAM 5.2+, and cytokeratin 5-6+) (Fig. 5).
Oncological and radiotherapy consultations were requested; no indication was given for adjuvant treatments. The patient undergoes periodic clinical and instrumental follow-up. At thirty-three months after surgery, the patient is free of disease.
Discussion
AA is a rare malignant neoplasm that affects the apocrine glands. It affects the elderly population, with an average age of 67 years, a mild male predominance, and no racial preference [9]. AA is typical of some anatomical areas (armpits, eyelids, ear canal, perianal and periumbilical regions), while it is rare in the skin of the face and neck [10]. AAs can show different clinical behaviors, ranging from indolent to more aggressive patterns [11]. Their severity depends on their degree of differentiation. This tumor may also be localized or metastatic: in the literature, metastases occur in 30% to 50% of cases and may involve the lymph nodes [12,13]. Very often, especially in the most indolent forms, AA can be misdiagnosed as a benign lesion; the differential diagnosis includes both benign and malignant tumors and metastases [13]. In the literature, as in our Case no. 1, skin irritation with subsequent ulceration after the use of ointments has also been described [14]. To date, there are no specific guidelines for the management of AA. Surgical excision remains the first therapeutic option, and wide local excision (extending 1-2 cm beyond the macroscopic edge of the tumor) is always suggested [15]. Neck dissection should be performed if there is evidence of lymph node involvement. Prophylactic neck dissection is controversial, as there seems to be no evidence of benefit [16]. Other studies suggest the importance of sentinel node evaluation in guiding the most appropriate intraoperative therapeutic choice [17]; it is recommended in cases of unclear lymph node involvement because it may identify clinically occult disease [6,9,18]. Surgery may be combined with adjuvant treatments, such as radiation therapy and/or chemotherapy. These are used when lymph node involvement is not well defined or when lymph nodes are positive, in advanced stages of disease, or for G3/G4 tumors [11,14]. The role of chemotherapy is unclear because AA is considered by some authors to be resistant to it [19]. Radiotherapy is recommended as a valid treatment both for primary tumors and for recurrences; it should be used when the resection margins are positive, for high-grade tumors, and especially in cases of vascular and/or lymphatic invasion [14,20]. The absence of metastases and lymph node involvement predicts a better survival rate, and lymph node involvement is the parameter that best defines the prognosis [9].
Conclusion
Although the data in the literature are scarce and do not allow unambiguous recommendations for the proper surgical management of AA, it seems reasonable to recommend extensive excision of the neoplasm. Other therapeutic strategies can be adopted depending on the location, size, stage, and lymph node involvement, ranging from lymphadenectomy to medical treatment with chemotherapy and/or radiation therapy.
Consent
Informed consent was obtained from the patients for presentation of the details of these cases, along with the images, for the purposes of publication. No personal identification information is displayed in the images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Fig. 2. A) Intraoperative view of AA removal. The wide resection margins at the level of the subcutaneous fat and the underlying platysma muscle are highlighted. B) Anatomical specimen of the apocrine adenocarcinoma of the neck.
Fig. 3. Invasive component of AA, hematoxylin and eosin staining.
Fig. 4. A) Computed tomography: axial image showing a heterogeneous hyperintense mass close to the lateral rectus muscle, with inferomedial dislocation of the right eyeball. B) Preoperative view showing ectropion with inferomedial dislocation of the right eyeball.
Fig. 5. High-power view of the lesion showing the lobular architecture of AA and secretion by decapitation.
Sparse transition matrix estimation for high-dimensional and locally stationary vector autoregressive models
We consider the estimation of the transition matrix in the high-dimensional time-varying vector autoregression (TV-VAR) models. Our model builds on a general class of locally stationary VAR processes that evolve smoothly in time. We propose a hybridized kernel smoothing and $\ell^1$-regularized method to directly estimate the sequence of time-varying transition matrices. Under the sparsity assumption on the transition matrix, we establish the rate of convergence of the proposed estimator and show that the convergence rate depends on the smoothness of the locally stationary VAR processes only through the smoothness of the transition matrix function. In addition, for our estimator followed by thresholding, we prove that the false positive rate (type I error) and false negative rate (type II error) in the pattern recovery can asymptotically vanish in the presence of weak signals without assuming the minimum nonzero signal strength condition. Favorable finite sample performances over the $\ell^2$-penalized least-squares estimator and the unstructured maximum likelihood estimator are shown on simulated data. We also provide two real examples on estimating the dependence structures on financial stock prices and economic exchange rates datasets.
Introduction
Vector autoregression (VAR) is a basic tool in multivariate time series analysis and it has been extensively used to model the cross-sectional and serial dependence in various applications from economics and finance [31,3,4,10,32,14,34,13,1]. There are two fundamental limitations of vector-autoregressive models. First, conventional methods to estimate the transition matrix of a VAR model are based on the least squares (LS) estimator and the maximum likelihood estimator (MLE), whose parameter estimates are consistent when the sample size increases and the model size is fixed [8,25]. Since the number of parameters grows quadratically in the number of time series variables, the VAR model typically includes no more than ten variables in many real applications [18]. However, due to the recent explosive data enrichment, analysis of panel data with a few hundred variables is often encountered, for which the LS estimator and the MLE are not suitable even for a moderate problem size. Second, stationarity plays a major role in the VAR model; the stationary VAR therefore does not capture time-varying underlying data generation structures, which have been observed in a broad regime of applications in economics and finance [2,27]. Motivated by the limitations of the VAR, this paper studies the estimation problem of the time-varying VAR (TV-VAR) model for high-dimensional time series data. Let $X_{d\times n} = (x_1, \cdots, x_n)$ be a sequence of $d$-dimensional observations generated by a mean-zero TV-VAR of order 1 (TV-VAR(1)),

$$x_i = A(i/n)\, x_{i-1} + e_i, \qquad i = 1, \cdots, n, \tag{1.1}$$

where $A(t)$, $t \in [0,1]$, is a $d \times d$ matrix-valued deterministic function consisting of the transition matrices $A_i := A(i/n)$ at evenly spaced time points and the $e_i$ are independent and identically distributed (iid) mean-zero random errors, i.e. innovations. In this paper, our main focus is to estimate the transition matrices $A_i$ for the TV-VAR(1); the extension to higher-order VAR is straightforward. Indeed, for a general TV-VAR of order $k \ge 1$, we can rewrite $z_i = (x_i^\top, \cdots, x_{i-k+1}^\top)^\top$ at time $i = k, k+1, \cdots, n$ as a TV-VAR(1) in the augmented space,

$$z_i = \mathbf{A}_i z_{i-1} + \varepsilon_i, \qquad \mathbf{A}_i = \begin{pmatrix} A_{i,1} & A_{i,2} & \cdots & A_{i,k-1} & A_{i,k} \\ I_{d\times d} & 0_{d\times d} & \cdots & 0_{d\times d} & 0_{d\times d} \\ \vdots & \ddots & \ddots & \vdots & \vdots \\ 0_{d\times d} & \cdots & \cdots & I_{d\times d} & 0_{d\times d} \end{pmatrix}, \qquad \varepsilon_i = (e_i^\top, 0, \cdots, 0)^\top,$$

where $A_{i,1}, \cdots, A_{i,k}$ are the lag matrices at time $i$, $I_{d\times d}$ is the $d \times d$ identity matrix and $0_{d\times d}$ is the $d \times d$ zero matrix. Then, we need only to estimate the first $d$ rows of $\mathbf{A}_i$.
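To make the lag augmentation concrete, here is a minimal Python sketch of the companion-matrix construction; the function name and block layout are ours, not the paper's:

```python
import numpy as np

def companion_matrix(lag_matrices):
    """Stack the d x d lag matrices A_{i,1}, ..., A_{i,k} of a VAR(k)
    into the (dk x dk) transition matrix of the equivalent VAR(1)
    acting on z_i = (x_i', ..., x_{i-k+1}')'."""
    k = len(lag_matrices)
    d = lag_matrices[0].shape[0]
    B = np.zeros((d * k, d * k))
    B[:d, :] = np.hstack(lag_matrices)    # first block row: the lag matrices
    for j in range(1, k):                 # sub-diagonal identity blocks shift
        B[j * d:(j + 1) * d, (j - 1) * d:j * d] = np.eye(d)  # the state by one lag
    return B
```

Only the first block row carries free parameters; the identity blocks merely copy past values down by one lag, which is why only the first $d$ rows of $\mathbf{A}_i$ need to be estimated.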
To make the estimation problem feasible for high-dimensional time series when $d$ is large, it is crucial to carefully regularize the coefficient matrices $A_i$. The main idea is to use certain low-dimensional structures in $A_i$, so that the degrees of freedom of (1.1) can be dramatically reduced. In our problem, we assume two key structures in $A_i$. First, since our goal is to estimate a sequence of transition matrices, which can be viewed as the discrete version of a matrix-valued function, it is natural to impose a smoothness condition on $A(t)$. In this case, (1.1) is closely related to the locally stationary processes, a general class of non-stationary processes proposed by [15]. Examples of other linear and nonlinear locally stationary processes can be found in [11]. In particular, let $i_0 \in \{1, \cdots, n\}$ be any time point and $t_0 = i_0/n$. Then, under suitable regularity conditions [16], there exists a stationary process $\tilde{x}_i(t_0)$ such that for all $j = 1, \cdots, d$,

$$|X_{ij} - \tilde{X}_{ij}(t_0)| = O_{\mathbb{P}}\!\left( \left| \frac{i}{n} - t_0 \right| + \frac{1}{n} \right),$$

where $X_{ij}$ and $\tilde{X}_{ij}(t_0)$ are the $j$-th elements of $x_i$ and $\tilde{x}_i(t_0)$, respectively. Therefore, $x_i$ is approximately the stationary VAR process $\tilde{x}_i(t_0)$ in a small neighborhood of $t_0$. Second, at each time point $i = 1, \cdots, n$, we need to estimate a $d \times d$ matrix. There have been different structural options in the literature, such as the sparse [21,35,33,30], banded [19], and low-rank [22] transition matrix, all of which considered only the stationary VAR model. In this paper, we consider a sequence of sparse transition matrices and we allow the sparsity patterns to change over time. Note that our problem is also different from the emerging literature on estimating the high-dimensional covariance matrix and its related functionals for time series data [11,12,37,5], since our goal is to directly estimate the data generation mechanism specified by $A(\cdot)$.
To simultaneously address the two issues, we propose a hybridized method of the nonparametric smoothing technique and $\ell^1$-regularization to estimate the sparse transition matrix of the locally stationary VAR. The proposed method is equivalent to solving a sequence of a large number of linear programs and therefore the estimates can be efficiently obtained using high-performance (i.e. parallel) computing technology; see Section 2 for details. In Section 3, we establish the rate of convergence under suitable assumptions on the smoothness of the covariance function and the sparsity of the transition matrix function $A(\cdot)$. Specifically, the dimension $d$ is permitted to increase sub-exponentially fast in the sample size $n$ while retaining consistent estimation, i.e. $d = o(\exp(n))$. In addition, we also prove that when our estimator is followed by thresholding, type I and type II errors in the pattern recovery asymptotically vanish in the presence of weak signals. In contrast with the existing literature on consistent model selection, we do not require the minimum nonzero signal strength to be bounded away from zero. Simulation studies in Section 4 and two real data examples in Section 5 demonstrate favorable performance and more interpretable results of the proposed TV-VAR model. Technical proofs are deferred to Section 6.
We fix the notation that will be used in the rest of the paper. Let $M$ be a generic matrix and $I \subset \mathbb{N}$ an index subset; $[M]_{*I}$ and $[M]_{I*}$ denote the submatrices of $M$ with columns and rows indexed by $I$, respectively. $S = \mathrm{supp}(M) = \{(j,k) : M_{jk} \neq 0\}$ is the support of $M$ and $|S|$ is the number of nonzeros in the support set $S$. For $q > 0$ and a random variable (r.v.) $X$, $\|X\|_q = (\mathbb{E}|X|^q)^{1/q}$ and we say that $X \in \mathcal{L}^q$ iff $\|X\|_q < \infty$. Write $\|X\| = \|X\|_2$. Denote $a \wedge b = \min(a,b)$ and $a \vee b = \max(a,b)$. If $a$ and $b$ are vectors, then the maximum and minimum are interpreted element-wise.
Method and algorithm
Let $\Sigma_{i,0} = \mathbb{E}[x_{i-1} x_{i-1}^\top]$ and $\Sigma_{i,1} = \mathbb{E}[x_i x_{i-1}^\top]$ be the marginal and lag-one autocovariance matrices of model (1.1). Multiplying the model equation by $x_{i-1}^\top$ and taking expectations yields the Yule-Walker equations

$$\Sigma_{i,1} = A_i \Sigma_{i,0}. \tag{2.1}$$

Therefore, for any estimator, say $\hat{A}_i$, that is reasonably close to the true coefficient matrices $A_i$, we must have $\hat{A}_i \Sigma_{i,0} \approx \Sigma_{i,1}$, and a naive estimator for $A_i$ would be constructed by inverting the sample versions of (2.1). Estimators of this kind do not have good statistical properties in high dimensions because of ill-conditioning, and the dependence information in $A_i$ is not directly used in estimation. If $A_i$ is known to be sparse a priori, we may consider the following constrained minimization program:

$$\min_{M \in \mathbb{R}^{d \times d}} |M|_1 \quad \text{subject to} \quad |\Sigma_{i,1} - M \Sigma_{i,0}|_\infty \le \tau. \tag{2.2}$$

Because $\Sigma_{i,1}$ and $\Sigma_{i,0}$ are unknown, the solution of (2.2) is an oracle estimator and therefore it is not implementable in practice.
Let $A(\cdot)$ be the continuous version of $(A_i)_{i=1}^n$, and fix a $t \in (0,1)$. To estimate $A(t)$, we first estimate $\Sigma_{i,1}$ and $\Sigma_{i,0}$ in (2.1) by their empirical versions. Let

$$\hat{\Sigma}_\ell(t) = \sum_{m} w(t, m)\, x_m x_{m-\ell}^\top, \qquad \ell = 0, 1, -1, \tag{2.3}$$

be the kernel smoothed estimators of $\Sigma_{i,0}$, $\Sigma_{i,1}$ and $\Sigma_{i,-1}$, where the $w(t,m)$ are nonnegative weights. Here, we consider the Nadaraya-Watson smoothed estimator with weights

$$w(t, m) = \frac{K\!\left( \frac{m/n - t}{b_n} \right)}{\sum_{m'} K\!\left( \frac{m'/n - t}{b_n} \right)}, \tag{2.4}$$

where $K(\cdot)$ is a nonnegative bounded kernel function with support in $[-1, 1]$ such that $\int_{-1}^{1} K(v)\,dv = 1$ and $b_n$ is the bandwidth parameter. We assume throughout the paper that the bandwidth satisfies the natural conditions $b_n = o(1)$ and $n^{-1} = o(b_n)$. Then, our estimator $\hat{A}(t)$ is defined as the transpose of the solution of

$$\min_{M \in \mathbb{R}^{d \times d}} |M|_1 \quad \text{subject to} \quad |\hat{\Sigma}_{-1}(t) - \hat{\Sigma}_0(t) M|_\infty \le \tau. \tag{2.5}$$

Observe that (2.5) is equivalent to the following $d$ optimization sub-problems

$$\hat{\beta}_j(t) = \arg\min_{\beta \in \mathbb{R}^d} |\beta|_1 \quad \text{subject to} \quad \left| [\hat{\Sigma}_{-1}(t)]_{*j} - \hat{\Sigma}_0(t)\, \beta \right|_\infty \le \tau, \qquad j = 1, \cdots, d, \tag{2.6}$$

in that $[\hat{A}(t)]_{j*} = \hat{\beta}_j(t)^\top$. Since the $d$ sub-problems (2.6) can be solved independently, we can efficiently compute the solution of (2.5) by parallelizing the optimizations in (2.6). In addition, each sub-problem in (2.6) can be recast as a linear program (LP): writing $\beta = \beta^+ - \beta^-$ with $\beta^+, \beta^- \ge 0$, (2.6) becomes

$$\min_{\beta^+, \beta^- \ge 0} \mathbf{1}^\top (\beta^+ + \beta^-) \quad \text{subject to} \quad -\tau \mathbf{1} \le [\hat{\Sigma}_{-1}(t)]_{*j} - \hat{\Sigma}_0(t) (\beta^+ - \beta^-) \le \tau \mathbf{1}. \tag{2.7}$$
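Since the displays above were reconstructed from a garbled extraction, the following Python sketch should be read as an illustration of a Dantzig-type estimator of this general form rather than as the authors' exact procedure; the function names, the Epanechnikov kernel choice, and the use of scipy.optimize.linprog are our assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def smoothed_covariances(X, t, bn):
    """Nadaraya-Watson smoothed lag-0 and lag-1 autocovariances at time t,
    in the spirit of (2.3)-(2.4). X is d x n; the Epanechnikov kernel is one
    admissible choice of K (bounded, supported on [-1, 1], integrating to 1)."""
    d, n = X.shape
    u = (np.arange(2, n + 1) / n - t) / bn      # time points m = 2, ..., n
    K = np.where(np.abs(u) <= 1, 0.75 * (1.0 - u ** 2), 0.0)
    w = K / K.sum()                             # assumes some weight is nonzero
    S0 = np.zeros((d, d))
    S1 = np.zeros((d, d))
    for i, wi in enumerate(w):                  # column i+1 is time m = i+2
        xm, xm1 = X[:, i + 1], X[:, i]
        S0 += wi * np.outer(xm, xm)             # lag-0 term x_m x_m'
        S1 += wi * np.outer(xm, xm1)            # lag-1 term x_m x_{m-1}'
    return S0, S1

def dantzig_row(S0, target, tau):
    """Solve min |b|_1 s.t. |S0 b - target|_inf <= tau, cast as the LP (2.7)
    in split variables b = u - v with u, v >= 0."""
    d = S0.shape[0]
    c = np.ones(2 * d)                          # objective: sum(u) + sum(v)
    A_ub = np.block([[S0, -S0], [-S0, S0]])
    b_ub = np.concatenate([tau + target, tau - target])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)      # default bounds enforce x >= 0
    if not res.success:                         # infeasible if tau is too small
        raise ValueError("LP infeasible; increase tau")
    return res.x[:d] - res.x[d:]

def tvvar_estimate(X, t, bn, tau):
    """Row-by-row estimate of A(t): row j solves (2.6) with target vector
    [Sigma_{-1}(t)]_{*j} = [Sigma_1(t)]_{j*}^T (Sigma_0 is symmetric)."""
    S0, S1 = smoothed_covariances(X, t, bn)
    return np.vstack([dantzig_row(S0, S1[j, :], tau) for j in range(S0.shape[0])])
```

Each call to dantzig_row is independent of the others, so the $d$ sub-problems can be dispatched to separate workers, mirroring the parallelization noted above.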
To establish an asymptotic theory for estimating the continuous matrix-valued transition function $A(\cdot)$ of the non-stationary VAR, it is more convenient to model the time series as realizations from a continuous $d$-dimensional process. Following the framework of [17], we adopt the following definition.

Definition 3.1. The process $(x_i)$ in (1.1) is weakly locally stationary if, for each $t \in (0,1)$, there exists a stationary VAR process

$$\tilde{x}_m(t) = A(t)\, \tilde{x}_{m-1}(t) + e_m \tag{3.1}$$

such that for all $m = 1, \cdots, n$ and $j = 1, \cdots, d$,

$$\| X_{mj} - \tilde{X}_{mj}(t) \|_q \le C \left( \left| \frac{m}{n} - t \right| + \frac{1}{n} \right), \tag{3.2}$$

and the constant $C > 0$ does not depend on $d$, $m$, and $n$.
Note that the approximating stationary VAR process in Definition 3.1 generally depends on $t$. As suggested by the Yule-Walker equations (2.1), given a fixed time of interest, estimation of $A(t)$ relies on an estimate of $\Sigma_{t,\ell}$. Consider $\ell = 0$ and let $\Sigma(t) = \Sigma_{t,0}$ be the matrix-valued covariance function at lag zero. With a finite number of observations $x_1, \cdots, x_n$, it is unclear how to define the covariance function $\Sigma(\cdot)$ off the $n$ points $t_i = i/n$. Nevertheless, the weakly locally stationary VAR processes provide a natural framework for extending $\Sigma(t_i)$ to $\Sigma(t)$ for all $t \in (0,1)$.
Since $A(\cdot)$ is continuous on $[0,1]$, the stationary VAR process in (3.1) is defined for all $t \in (0,1)$. Let $\tilde{\Sigma}(t) = \mathbb{E}[\tilde{x}_m(t)\, \tilde{x}_m(t)^\top]$. By (3.2) and the Cauchy-Schwarz inequality, we have for each $j, k = 1, \cdots, d$

$$\left| \tilde{\Sigma}_{jk}(t) - \Sigma_{jk}(t_i) \right| \le C \left( |t - t_i| + \frac{1}{n} \right),$$

where the constant $C$ here is uniform in $j$ and $k$. Letting $n \to \infty$ and using the continuity of $A(\cdot)$, we can extend $\Sigma(t) = \tilde{\Sigma}(t)$ for all $t \in (0,1)$. A similar extension can be done for $\Sigma_{t,\ell}$ for $\ell = \pm 1$. In Section 3.2, it will be shown that the asymptotic theory of estimating $A(t)$, $t \in (0,1)$, depends on the smoothness of the weakly locally stationary VAR processes only through the smoothness of $\Sigma(t)$ and therefore of $A(t)$.
Rate of convergence
In this section, we characterize the rate of convergence of our estimator (2.5) under various matrix norms. We assume $d \ge 2$. To study the asymptotic properties of the proposed estimator, we make the following assumptions on model (1.1).
1. The coefficient matrices are sparse: let $0 \le \alpha < 1$ and assume that, for all $i$,

$$A_i \in \mathcal{G}_\alpha(s, M_d) := \left\{ M \in \mathbb{R}^{d\times d} : \max_j \sum_k |M_{jk}|^\alpha \le s, \ \max_k \sum_j |M_{jk}|^\alpha \le s, \ \max_{j,k} |M_{jk}| \le M_d \right\}. \tag{3.3}$$

2. The marginal and lag-one covariance matrix processes $\{\Sigma_0(t)\}_{t\in[0,1]}$ and $\{\Sigma_1(t)\}_{t\in[0,1]}$ have entries in $\mathcal{C}^2([0,1])$, the class of functions defined on $[0,1]$ that are twice differentiable with bounded derivatives uniformly in $j, k = 1, \cdots, d$.

3. The random innovations $e_i = (e_{i1}, \cdots, e_{id})^\top$ have iid components and sub-Gaussian tails: $\|e_{ij}\|_q \le C_1 q^{1/2}$ for all $q \ge 1$.

Before proceeding, we discuss the above assumptions. (3.3) requires that the transition matrices be sparse in both columns and rows at all time points. A similar matrix class defined by (3.3) was first proposed in [7] for symmetric matrices and has been widely used for estimating high-dimensional covariance and precision matrices; see e.g. [9,11]. If $\alpha = 0$, then the maximum number of nonzeros in the columns and rows of $A_i$ is at most $s$. Assumption 2 requires that the marginal and lag-one covariance matrices evolve smoothly in time. The smoothness is not defined directly on $A(\cdot)$ for ease of theorem statements. In view of (3.2), Assumption 2 is implied by smoothness of the $A_i$ under extra regularity conditions: for a generic matrix $M(t)$ parameterized by $t \in (0,1)$, let $\dot{M}(t)$ and $\ddot{M}(t)$ be the first two element-wise derivatives of $M(t)$ w.r.t. $t$; if $A(\cdot)$ is a $\mathcal{C}^2([0,1])$ function, then so are $\Sigma_0(\cdot)$ and $\Sigma_1(\cdot)$, and therefore Assumption 2 is fulfilled.
Assumption 3 specifies the tail probability of the innovations $e_i$. In [21], the $e_i$ follow iid $N(0, \Psi)$ for some error covariance matrix $\Psi$. A simple transformation by $\Psi^{-1/2}$ reduces this to the case where $e_i$ has iid components with the standard normal distribution, a special case of Assumption 3, which covers sub-Gaussian innovations.
Theorem 3.2. Suppose that Assumptions 1, 2 and 3 are satisfied. Then, with probability at least $1 - 2d^{-1}$, the estimator $\hat{A}_i$ with tuning parameter

$$\tau \ge C (1 + M_d) \left( b_n^2 + \sqrt{\frac{\log d}{n\, b_n}} \right)$$

satisfies $|\hat{A}_i - A_i|_\infty \le u_\tau$ for a deterministic bound $u_\tau$ proportional to $\tau$ (see (3.9) below). From Theorem 3.2, the bandwidth $b_n \asymp ((\log d)/n)^{1/5}$ gives the optimal rate of convergence, and the resulting tuning parameter is $\tau \ge C(1 + M_d)((\log d)/n)^{2/5}$. When $d$ is fixed, it is known that the optimal bandwidth in nonparametric kernel density estimation is $n^{-1/5}$ for twice continuously differentiable functions under the mean integrated squared error. So in the high-dimensional context, the dimension has only a logarithmic impact on the choice of the optimal bandwidth.
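The exponents quoted above can be checked by balancing the bias term against the stochastic term in the tuning condition as reconstructed here (this identification of the two terms is our assumption):

$$b_n^2 \asymp \sqrt{\frac{\log d}{n\, b_n}} \iff b_n^5 \asymp \frac{\log d}{n} \iff b_n \asymp \left( \frac{\log d}{n} \right)^{1/5}, \qquad \text{so that} \quad \tau \asymp (1+M_d)\, b_n^2 \asymp (1+M_d) \left( \frac{\log d}{n} \right)^{2/5}.$$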
Pattern recovery
We also study the recovery of the time-varying patterns using the estimator (2.5). Let $S_i = \mathrm{supp}(A_i)$ be the set of nonzero positions of $A_i$. If the nonzero entries of $A_i$ are small enough, then it is impossible to accurately distinguish the small nonzeros from the zeros. Therefore, the best we can hope for is that the nonzero entries of $A_i$ with large magnitudes can be well separated from the zeros in $A_i$. Let $u_\tau$ denote the bound of Theorem 3.2, which is proportional to $\tau$ with a factor depending on $\Sigma$ (3.9), where $\tau$ is determined in Theorem 3.2. We use the thresholded version of (2.5),

$$\hat{S}_i = \left\{ (j,k) : \big| [\hat{A}_i]_{jk} \big| \ge u_\tau \right\}, \tag{3.10}$$

as an estimator of $S_i$. Theorem 3.3 states that, with high probability, the zeros in $A_i$ can be identified and the nonzero entries in $A_i$ with signal strength above $2u_\tau$ can be recovered by $\hat{S}_i$. Therefore, the false positives (type I error) of the estimator (3.10) are asymptotically controlled; see Theorem 3.5 for a precise statement. However, Theorem 3.3 does not provide much information regarding the false negatives, since there is no characterization of the signal strength in $(0, 2u_\tau)$.
Let $\beta > 0$ and $u_0 \in (0,1)$. We introduce the following $d \times d$ matrix class

$$\mathcal{G}_{\alpha,\beta}(s, M_d, L_d) = \left\{ M \in \mathcal{G}_\alpha(s, M_d) : \#\{(j,k) \in \mathrm{supp}(M) : |M_{jk}| \le u\} \le L_d\, u^\beta \ \ \forall\, u \in (0, u_0) \right\},$$

in which the parameters $\beta$ and $L_d$ control the proportion of small entries in the support of $A$. If $\beta$ is large and $L_d$ grows slowly, then the fraction of weak signals in $A$ is small and therefore the false negatives (type II error) can also be well controlled. Below, we give such an example.
Example 3.1 (A spatial design). Let $0 < r < 1$. Consider a $d \times d$ symmetric matrix $A = (A_{mk})_{d\times d}$ generated by the covariance function of a spatial process $Z_1, Z_2, \ldots, Z_d$, a random vector observed at sites $h_1, \ldots, h_d$, where $A_{mk} = f(|h_m - h_k|)$ and $f$ is a real-valued covariance function. Here, we consider the rational quadratic covariance function [11,36]

$$f(x) = \left( 1 + \frac{x^2}{r} \right)^{-\gamma}, \qquad \gamma > 0. \tag{3.14}$$

For any fixed distance parameter $r \in (0,1)$, the weak signal parameter $L_d$ has a natural dependence on $\gamma$: if $\gamma$ is smaller, then the covariance function $f$ decays to zero more slowly and there is a smaller fraction of weak signals in $A$, which allows $L_d$ to grow slowly in $d$. Note that the class $\mathcal{G}_{\alpha,\beta}(s, M_d, L_d)$ is much less stringent than the widely used condition for support recovery and model selection in the literature, which requires that the minimal nonzero signal strength be uniformly bounded away from zero [28]. To quantify the error in the pattern recovery, we use the following two error rate measures.

Definition 3.2. The false positive rate (FPR) and false negative rate (FNR) of $\hat{S}_i$ are defined as

$$\mathrm{FPR}_i = \frac{|\hat{S}_i \cap S_i^c|}{|S_i^c|}, \qquad \mathrm{FNR}_i = \frac{|\hat{S}_i^c \cap S_i|}{|S_i|}. \tag{3.17}$$

By convention, if $S_i^c = \emptyset$, then $\mathrm{FPR}_i = 0$; if $S_i = \emptyset$, then $\mathrm{FNR}_i = 0$. If $\mathrm{FPR}_i = \mathrm{FNR}_i = 0$ with probability tending to one, which is a very strong requirement, then we have the pattern recovery consistency $\mathbb{P}(\hat{S}_i = S_i) \to 1$. Below, we show that both FPR and FNR are asymptotically controlled in the presence of weak signals.
Theorem 3.5. Assume that Assumptions 1, 2 and 3 are satisfied. Fix an $i \in \{1, \cdots, n\}$ and suppose that $A_i \in \mathcal{G}_{\alpha,\beta}(s, M_d, L_d)$. Then, with probability at least $1 - 2d^{-1}$, $\mathrm{FPR}_i = 0$ and $\mathrm{FNR}_i \le C L_d u_\tau^\beta / |S_i|$. Since $u_\tau = o(1)$, the FNR vanishes with probability tending to one if $L_d u_\tau^\beta = o(|S_i|)$.
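As a small illustration of the thresholding rule (3.10) and the error rates of Definition 3.2, here is a Python sketch; the function names and boolean-mask interface are our own:

```python
import numpy as np

def thresholded_support(A_hat, u):
    """Support estimate S_hat of (3.10): entries of A_hat with |entry| >= u."""
    return np.abs(A_hat) >= u

def fpr_fnr(S_hat, S_true):
    """FPR and FNR of Definition 3.2; inputs are boolean d x d masks."""
    S_hat, S_true = np.asarray(S_hat, bool), np.asarray(S_true, bool)
    n_neg, n_pos = (~S_true).sum(), S_true.sum()
    fpr = (S_hat & ~S_true).sum() / n_neg if n_neg else 0.0  # false positives
    fnr = (~S_hat & S_true).sum() / n_pos if n_pos else 0.0  # false negatives
    return fpr, fnr
```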
Simulation studies
In this section, we present numerical results on simulated datasets. We compare the following five methods: (i) our TV-VAR estimator (2.5); (ii) the stationary sparse VAR estimator of [21]; (iii) the time-varying lasso method [23]; (iv) the time-varying ridge method [20]; and (v) the unstructured (time-varying) MLE.
Data generation
We consider different setups with $n = 100$ and $d = 20, 30, 40$ and $50$. For each setup $(n, d)$, the data are generated by the following procedure. First, the baseline coefficient matrices $A_{01}$ and $A_{02}$ are generated using sugm.generator() in the flare R package [24]. We consider four graph structures defined in flare (hub, cluster, band and random) for $A_{01}$ and $A_{02}$. Examples of these four structures are shown in Appendix B. We then normalize $\rho(A_{01}) = 0.2$ and $\rho(A_{02}) = 1$ and smoothly interpolate the intermediate transition matrices between $A_{01}$ and $A_{02}$. Following [21], we specify the innovation covariance matrix $\Psi = \Sigma - A_{01} \Sigma A_{01}^\top$ with $\Sigma = I_d$; [21] showed that the choice of $\Sigma$ does not significantly affect the numerical performance. In our simulation studies, we use the Epanechnikov kernel $K(v) = 0.75(1 - v^2) I(|v| \le 1)$ and fix the model order $k = 1$. The bandwidth $b_n$ is set to $b_n = 0.8 n^{-1/5}$ for (i), (iii) and (iv).
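A minimal Python sketch of this data-generating step follows; linear interpolation in time between the two baseline matrices is our assumption, the paper's exact interpolation formula having been lost in extraction:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_transitions(A01, A02, n):
    """Smooth interpolation between the two baseline matrices over n time
    points (here: linear in t, which is one simple choice)."""
    return [(1.0 - t) * A01 + t * A02 for t in np.linspace(0.0, 1.0, n)]

def simulate_tvvar(A_seq, Psi):
    """Generate x_i = A_i x_{i-1} + e_i with e_i ~ N(0, Psi); Psi must be
    positive definite (e.g. Psi = I - A01 @ A01.T when ||A01||_2 < 1)."""
    d = Psi.shape[0]
    L = np.linalg.cholesky(Psi)
    x = np.zeros(d)
    cols = []
    for A in A_seq:
        x = A @ x + L @ rng.standard_normal(d)
        cols.append(x)
    return np.column_stack(cols)      # d x n data matrix
```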
Tuning parameter selection
For the tuning parameter selection in (i)-(iv), we propose a data-driven procedure that minimizes the one-step-ahead prediction errors as follows.
1. Choose a grid for the tuning parameter (say $\tau$) and the number $n_1$ of training data points.
2. For each $\tau$, perform the one-step-ahead prediction on the testing set by estimating $A_t$ with $\hat{A}_t(\tau)$ and then predicting $X_t$ by $\hat{X}_t = \hat{A}_t(\tau) X_{t-1}$, where $t = n_1 + 1, \ldots, n$. Then calculate the prediction error at time $t$ as $\mathrm{Err}_t(\tau) = \|\hat{X}_t - X_t\|_2^2$.
3. Select the $\tau$ that minimizes the prediction error averaged over the testing set.

From Table 1 to Table 4, larger $d$ often results in larger errors. In general, the unstructured MLE performs the worst under almost all matrix norms. Although the ridge method shrinks the coefficients of the transition matrix toward zero, those coefficients are not exactly zero; thus the transition matrices estimated by the ridge method are not sparse, which leads to higher estimation errors compared with the TV-VAR, the time-varying lasso and the stationary VAR. The stationary VAR always performs worse than the TV-VAR and the time-varying lasso, since it cannot capture the dynamic structure. The time-varying lasso performs better than all other methods except the TV-VAR. The proposed TV-VAR performs the best under almost all matrix norms.
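Returning to the selection procedure, here is a minimal Python sketch of steps 1-3; the interface and names are ours:

```python
import numpy as np

def select_tau(X, taus, n1, bn, fit):
    """Pick the tau minimizing the averaged one-step-ahead squared
    prediction error on the hold-out points t = n1, ..., n-1 (0-based).
    `fit(X_past, t, bn, tau)` returns the estimated transition matrix at
    rescaled time t, e.g. tvvar_estimate from the earlier sketch."""
    d, n = X.shape
    avg_err = []
    for tau in taus:
        err = 0.0
        for t in range(n1, n):
            A_hat = fit(X[:, :t], t / n, bn, tau)   # use only past data
            err += np.sum((X[:, t] - A_hat @ X[:, t - 1]) ** 2)
        avg_err.append(err / (n - n1))
    return taus[int(np.argmin(avg_err))]
```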
Pattern recovery
For the TV-VAR, we also report the $\mathrm{FPR}_{t,l,nn}$ and $\mathrm{FNR}_{t,l,nn}$ of (3.17) across time points, tuning parameters and replications. We also calculate $u_\tau$ by using the true value of $\Sigma$ in (3.9), and the resulting ROC curves are shown in Appendix C. Those ROC curves are similar to those based on $u = 10^{-3}$.
Finance data: stock prices
In this section, we compare the aforementioned estimators on a real financial dataset from Yahoo! Finance (finance.yahoo.com). The data matrix contains the daily closing prices of 452 stocks that were consistently in the S&P 500 index between January 1st, 2003 and January 1st, 2008. We chose this time range to avoid the effects of the two financial crises of 2001 and 2008, which could make the stock prices non-smooth with sharp drops and recoveries. In total, there are 1,258 time points. We first standardize the data to zero mean and unit variance and detrend the data. We then fit the detrended data for the stocks that are smoothest (without obvious change points) using an AR(1) model and perform Ljung-Box tests using Box.test() in R. We keep the stocks with nonzero coefficients at significance level 0.05; 30 stocks are finally selected and 10 of them are shown in Table 9. The ten selected stocks in Table 9 come from six sectors: Consumer Staples, Consumer Discretionary, Industrials, Financials, Utilities and Energy. The resulting data matrix is denoted as $X \in \mathbb{R}^{1258 \times 30}$. We apply the Priestley-Subba Rao (PSR) stationarity test (stationarity() in the R package fractal) to some of the stocks, such as Kellogg Co. and Target Corp., and find that these time series are not stationary at significance level 0.05. Thus, it is inappropriate to model the data with the stationary VAR model of [21]. Finally, we fit the data $X$ with the sparse TV-VAR model of order $k = 1$, the stationary VAR [21] of order $k = 1$, the time-varying lasso method [23], the time-varying ridge method [20] and the time-varying MLE.
To compare the performance of the five methods, we consider the one-step-ahead prediction of $X_t$, $t = 1159, \ldots, 1258$, using $X_{J_t,*}$ with $J_t = \{j : t - 1158 \le j \le t - 1\}$ as the training set. We estimate the transition matrices $\hat{A}_t(\lambda)$, where $\lambda$ is the regularization parameter, such as the $\tau$ in the TV-VAR or the shrinkage parameter in the ridge method. We use the Epanechnikov kernel $K(v) = 0.75(1 - v^2) I(|v| \le 1)$ with bandwidth $b_n = 0.3$, which means that only the last 30% of the data in the training set are used for prediction by the TV-VAR, the time-varying lasso method, the time-varying ridge method and the time-varying MLE. Since the stationary VAR model [21] uses all training data, in order to make fair comparisons we treat only the last 30% of the time points (347 observations) in $X_{J_t,*}$ as the training set for the stationary VAR. The averaged prediction error for a specific $\lambda$ is measured by

$$\overline{\mathrm{Err}}(\lambda) = \frac{1}{100} \sum_{t=1159}^{1258} \| \hat{X}_t(\lambda) - X_t \|_2^2.$$

The smallest averaged one-step-ahead prediction errors $\min_\lambda \overline{\mathrm{Err}}(\lambda)$ for the five methods are shown in Table 5. In Figure 2, we show an example of the predicted closing prices from the five methods against the detrended true closing prices of Target Corp. Clearly, the methods with time-varying structures perform similarly to one another but outperform the stationary VAR.
One advantage of the TV-VAR over the stationary VAR is that it can capture time-varying data structures. If we treat a transition matrix as the adjacency matrix of an undirected weighted graph, we can visualize the graph structures at different time points; the edges represent the cross-sectional dependence between stocks. Bolder lines indicate stronger dependence, and stocks in the same sector share the same color. To illustrate the time-varying structure clearly, we only consider the 10 stocks in Table 9. From these figures, we observe that stocks in the same or closely related sectors are often connected. For example, CME Group Inc. and Hartford Financial Svc.GP from the Financials sector show a dynamic dependence structure, and Boeing Company in Industrials shows a consistent correlation with Exxon Mobil Corp. in the Energy sector.
Economic data: exchange rates
We also apply the methods to exchange rate data on international currencies and study how the exchange rates evolve over time. We use exchange rate data from the Federal Reserve Bank of St. Louis and choose the date range from January 1st, 2003 to January 1st, 2008 to avoid the effects of the financial crises on exchange rates. In total, there are 1,303 time points. We choose 15 major currencies from Europe, Australia, North America, Asia and South America, and we normalize each currency by the exchange rate of the U.S. dollar, as shown in Table 10. As in Section 5.1, we standardize and detrend the data. The exchange rates of all currencies against the U.S. dollar are smooth, and the PSR test rejects the stationarity hypothesis for the Euro/U.S. dollar exchange rate at significance level 0.05. Thus, the stationary VAR is also not appropriate in this application, which is supported by our empirical findings in Table 6 and Figure 4. We compare the performance of the five methods by the one-step-ahead prediction errors for the last 50 data points and use the same prediction metric as in Section 5.1, with bandwidth $b_n = 0.3$. We do not include the prediction curve of the stationary VAR in Figure 4 since its errors are too large. Table 6 and Figure 4 show that the methods with time-varying structure perform similarly to one another but much better than the stationary VAR.
(3.4) follows from the assumption that $\sup_{t\in[0,1]} |A(t)|_1 < 1$ and the symmetry of $\Sigma(t)$; (3.5) follows by differentiating $\Sigma(t)$ w.r.t. $t$.

Lemma 6.1. Let $A$, $B$, $C$ be matrices of compatible dimensions for the product $ABC$. Then $|ABC|_\infty$ is bounded by $|B|_\infty$ times the appropriate $\ell^1$/$\ell^\infty$ operator norms of $A$ and $C$.

Proof of Lemma 6.1. The first and second inequalities follow by expanding the product element-wise; the third inequality follows from the second one by considering $(ABC)^\top = C^\top B^\top A^\top$.

Recall the TV-VAR(1) model $x_i = A_i x_{i-1} + e_i$. The following key lemma presents a large deviation bound for the marginal and lag-one autocovariance matrices with sub-Gaussian innovations.

Lemma 6.2. Suppose $\rho = \sup_{i\ge 0} \rho(A_i) < 1$ and that the model coefficients are uniformly bounded by some absolute constant $C_0 < \infty$. If $e_i = (e_{i1}, \cdots, e_{id})^\top$ has iid sub-Gaussian components, i.e. $\|e_{ij}\|_q \le C_1 q^{1/2}$ for all $q \ge 1$, then, for any fixed $t \in [b_n, 1-b_n]$, the smoothed covariance estimators concentrate around their means with probability at least $1 - 2d^{-1}$ (6.1).

Proof of Lemma 6.2. The bias part is controlled by the $\mathcal{C}^2$ smoothness of the covariance functions, which contributes the $b_n^2$ term. For the stochastic part I, let $B_{i,m} = A_i A_{i-1} \cdots A_{i-m+1}$; then $\sup_i \rho(B_{i,m}) \le \rho^m$, and $x_i$ admits a moving-average (MA) representation in the innovations. Let $W_t = \mathrm{diag}(w(t,1), \cdots, w(t,n))$ be the $n \times n$ diagonal weight matrix, so that each entry of the smoothed covariance estimator is a quadratic form in the innovations. Observe that $(\mathbf{B}^{(j)})^\top W_t \mathbf{B}^{(k)}$ has the same nonzero eigenvalues as an $n \times n$ matrix; combining this with the definition of the matrix spectral norm, the Cauchy-Schwarz inequality and the bound $\sup_i \rho(B_{i,m}) \le \rho^m$, we obtain that for any $x > 0$ the corresponding deviation probability is exponentially small in $x$. Now, by (6.2) and the union bound applied to (6.3), there exists a constant $C$ depending only on $\rho$, $C_0$, $C_1$ such that (6.4) holds with probability at least $1 - 2d^{-1}$. A similar argument applied to $m = 1$ shows that the lag-one autocovariance matrix obeys the same bound as in (6.1).

Proof of Lemma 3.4. Denote $l = \lfloor d/2 \rfloor$. For $0 < \alpha < 1$, a direct computation of the maximum over the matrix class yields the claimed bound, from which the theorem follows.
For each pattern, we consider tuning parameters indexed by $l = 1, \ldots, 30$, spanning 30 possible values from 0.001 to 0.45. We then calculate the averaged $\mathrm{FPR}_l$ and $\mathrm{FNR}_l$ over time points and replications. Following [9], we set $u$ to $10^{-3}$, which is considered numerically nonzero. The ROC curves for all possible values of the sparsity control parameter are plotted in Figures 6a, 6b, 6c and 6d. Based on the ROC curves, the TV-VAR method has better discrimination power for band or random patterns than for hub or cluster patterns.
Fig 2: Comparison of the predicted closing prices and the true detrended closing prices of Target Corp.
By Theorem 3.2, the maximal fluctuation $|\hat{A}_i - A_i|_\infty$ is controlled by $u_\tau$ with probability at least $1 - 2d^{-1}$. So if we apply a thresholding procedure to $\hat{A}_i$ at the level $u_\tau$, then we expect that the zeros, and the nonzeros with magnitudes larger than $2u_\tau$, in $A_i$ can be identified by the thresholded support of $\hat{A}_i$. Precisely, we have the following instantaneous partial recovery consistency.
Table 1: Comparison of estimation errors under different setups. Standard deviations are shown in parentheses. Here $\rho$, $F$, $\ell^1$ and $\ell^\infty$ denote the spectral, Frobenius, $\ell^1$ and $\ell^\infty$ matrix norms, respectively. The pattern of the transition matrix is 'hub'.
Table 2: Comparison of estimation errors under different setups. Standard deviations are shown in parentheses. Here $\rho$, $F$, $\ell^1$ and $\ell^\infty$ denote the spectral, Frobenius, $\ell^1$ and $\ell^\infty$ matrix norms, respectively. The pattern of the transition matrix is 'cluster'.
Table 3: Comparison of estimation errors under different setups. Standard deviations are shown in parentheses. Here $\rho$, $F$, $\ell^1$ and $\ell^\infty$ denote the spectral, Frobenius, $\ell^1$ and $\ell^\infty$ matrix norms, respectively. The pattern of the transition matrix is 'band'.
Table 4: Comparison of estimation errors under different setups. Standard deviations are shown in parentheses. Here $\rho$, $F$, $\ell^1$ and $\ell^\infty$ denote the spectral, Frobenius, $\ell^1$ and $\ell^\infty$ matrix norms, respectively. The pattern of the transition matrix is 'random'.
Table 5: The prediction errors for the five methods on the stock price data. Standard deviations are shown in parentheses.
Table 6: The prediction errors for the five methods on the exchange rates data. Standard deviations are shown in parentheses.
Essential Oil and Juice from Bergamot and Sweet Orange Improve Acne Vulgaris Caused by Excessive Androgen Secretion
Acne vulgaris is one of the most common chronic inflammatory skin diseases. Bergamot and sweet orange are rich in nutritional and functional components, which exhibit antioxidant, anti-inflammatory, and antiapoptotic effects. The aim of this study was to evaluate the potential effect of bergamot and sweet orange (juice and essential oil) on acne vulgaris caused by excessive secretion of androgen. Eighty male golden hamsters were randomly divided into 10 groups and received a low or high dose of bergamot or sweet orange juice or essential oil, physiological saline, or a positive drug for four weeks. Results showed that all interventions could improve acne vulgaris by reducing the growth rate of sebaceous gland spots, inhibiting TG accumulation, decreasing the release of inflammatory cytokines (notably reducing IL-1α levels), promoting apoptosis in the sebaceous gland, and decreasing the ratio of T/E2. Among them, bergamot and sweet orange essential oil may have better, dose-dependent effects on alleviating acne vulgaris than the corresponding juices. In view of the large population of acne patients and the widespread use of sweet orange and bergamot, this study is likely to exert an extensive and far-reaching influence.
Introduction
Acne vulgaris is a prevalent dermatologic disease, mainly distributed in the pilosebaceous units of the face, neck, chest, back, and shoulders. The clinical manifestations of acne vulgaris are mainly seborrhea, noninflammatory skin lesions, inflammatory lesions, and varying degrees of scarring [1,2]. More than 85% of young individuals worldwide are affected by acne vulgaris and can suffer from the disease into adulthood [3]. Although acne vulgaris is not life-threatening, this disease can have a huge impact on patients' psychosocial and physical health. Acne vulgaris results from androgen-induced sebum production, altered keratinization, inflammation, and colonization of Propionibacterium acnes (P. acnes) on the pilosebaceous follicles [4]. Among these, inflammation is present in all acne vulgaris lesions, including microcomedones, inflammatory lesions, hyperpigmentation, and scarring [5,6]. Acne vulgaris is also associated with diet, occupation, climate, and psychological and lifestyle factors [7,8]. Thus, more and more researchers are becoming interested in preventing acne vulgaris. According to recent dermatologic guidelines, the current treatments for acne vulgaris are divided into conventional pharmacological and nonpharmacological therapies. The former include antibiotics, retinoids, hormonal agents, and benzoyl peroxide, while laser and light-based therapies, chemical peels, microneedling, (micro)dermabrasion, and (mechanical) lesion removal are the most commonly applied of the latter [9,10]. Nevertheless, none of these therapies is free of side effects [11]. Furthermore, more investigations are needed to seek alternative and complementary medicines, including medicinal plants.
Bergamot (Citrus medica L. var. sarcodactylis) has been applied as a medicinal plant because of its stomachic, antifungal, and bacteriostatic properties [12]. Bergamot, also called finger citron, is one of the species of Citrus [13]. The carpels of the finger citron split, giving the fruit a finger-like shape [14]. The flowers, leaves, and fruits of bergamot can be used as medicine, playing roles in soothing the liver and relieving depression [15]. Bergamot peel is mostly used to distill bergamot essential oil (BEO), while bergamot juice (BJ) is obtained by squeezing the endocarp of the fruits [16]. Evidence indicates that BEO has anti-inflammatory, immunomodulatory, and wound-healing properties [17]. The flavonoid-rich fraction of BJ also exhibits anti-inflammatory and antioxidant activities [16]. Sweet orange (Citrus sinensis (L.) Osbeck) is another member of the Citrus genus that has become more and more popular in recent years. According to previous research, sweet orange is widely consumed as fresh fruit and juice, while the peel is also rich in essential oils [18]. Sweet orange juice is a natural source of large amounts of vitamin C, flavonoids, and other bioactive compounds with potential effects on the inflammatory response [19]. Sweet orange essential oil is one of the important natural plant essential oils, with an attractive orange flavor, and has been reported to have stress-relieving, antifungal, anticarcinogenic, and radical-scavenging properties [20-22]. Moreover, formulations based on orange and sweet basil oils have been effective in treating acne vulgaris [23]. Based on the potential physiological activities of bergamot and sweet orange, it is interesting to determine whether they can alleviate acne vulgaris. Is it the juice or the essential oil that plays the main role in improving acne vulgaris? What is the difference between bergamot and sweet orange in ameliorating acne vulgaris? Therefore, this study aimed to investigate the effects of different doses of bergamot essential oil, bergamot juice, sweet orange essential oil, and sweet orange juice on acne vulgaris caused by excessive androgen secretion. Since acne vulgaris is associated with elevated levels of androgen, the golden hamster model, which is frequently used to study acne vulgaris based on the flank sebaceous gland, was used in our research [24-26].
Preparation of Bergamot Juice.
After fresh bergamot was cleaned and peeled, peeled bergamot was cut into small pieces of 1 cm × 1 cm × 1 cm to be crushed with the juice extractor. Finally, the bergamot juice was obtained by filtering after placing at 4°C overnight.
Preparation of Bergamot Essential Oil. The peel fractions were diced into small pieces of 8 mm × 8 mm × 1 mm, which were mixed with distilled water at a ratio of 1 : 4 (m/v). After subjecting the small peel fractions to steam distillation for 2 h, the obtained mixtures were dehydrated with anhydrous sodium sulfate. Then, the remaining mixture was strained to remove the anhydrous sodium sulfate. Subsequently, the bergamot essential oil was collected after filtration.
Preparation of Sweet Orange Juice.
After fresh sweet oranges were cleaned and peeled, peeled oranges were cut into small pieces of 4 cm × 4 cm × 4 cm to be crushed with the juice extractor. Finally, the sweet orange juice was obtained by filtering after placing at 4°C overnight.
Preparation of Sweet Orange Essential Oil. The peel fractions were diced into small pieces of 8 mm × 8 mm × 8 mm, which were mixed with distilled water at a ratio of 1 : 6 (m/v). After subjecting the small peel fractions to steam distillation for 6 h, the obtained mixtures were dehydrated with anhydrous sodium sulfate. Then, the remaining mixture was strained to remove the anhydrous sodium sulfate. Subsequently, the sweet orange essential oil was collected after filtration.
2.6. Preparation of Solution for Gavage. The prepared essential oil was added to an aqueous solution with 0.2% Tween 80 in a ratio of 1 : 7 (m/v). Subsequently, the mixtures were ultrasonicated for 30 min to obtain uniform essential oil emulsion. The compound pearl acne capsules were dissolved with distilled water in a ratio of 1 : 6 (m/v) to prepare an aqueous solution.
2.7. Animals and Treatments. Eighty male golden hamsters (120 ± 10 g) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China) (Certificate SCXK (Beijing) 2012-0001). The hamsters were acclimatized with a daily 12 h light/12 h dark cycle at 23 ± 1°C room temperature and 50 ± 2% relative humidity. After 1 week of adaptation, the eighty golden hamsters were randomly divided into 10 groups: model group (MG), positive control group (PG), low-dose group of sweet orange juice (LOJ), high-dose group of sweet orange juice (HOJ), low-dose group of sweet orange essential oil (LOO), high-dose group of sweet orange essential oil (HOO), low-dose group of bergamot juice (LBJ), high-dose group of bergamot juice (HBJ), low-dose group of bergamot essential oil (LBO), and high-dose group of bergamot essential oil (HBO). The entire experiment lasted four weeks. The MG was given physiological saline by the oral route. The PG was administered 0.375 mg/kg BW of compound pearl acne capsules. The LOJ and HOJ groups were, respectively, orally given 14 and 17.5 mL/kg BW of sweet orange juice. The LOO and HOO groups received 0.21 and 0.33 mL/kg BW of sweet orange essential oil, respectively. The LBJ and HBJ groups were, respectively, treated with 14 and 17.5 mL/kg BW of bergamot juice. The LBO and HBO groups received 0.21 and 0.33 mL/kg BW of bergamot essential oil, respectively. The low and high doses of juice and essential oil were derived from the same weight of fresh fruit. The gavage volume of each intervention was 10 mL/kg BW. At the end of the experiment, the golden hamsters were fasted for 12 h (with free access to water) and then deeply anesthetized with diethyl ether. After blood was taken from the eyelids with a capillary tube, the hamsters were sacrificed. Blood was stored at 4°C for 4 h and then centrifuged at 4000 g for 10 min in a refrigerated centrifuge. Finally, the supernatant was collected and stored at -80°C for further analysis. At the same time, the golden hamsters were dissected and the spleen, testes, and sebaceous glands were taken and weighed. Part of the sebaceous gland was precooled in liquid nitrogen and then stored in an ultra-low-temperature freezer for later detection of the related indicators. All experimental procedures involving animals in our study were approved by the Ethics Committee of the Beijing Key Laboratory of Functional Food from Plant Resources and carried out in accordance with the guidelines for the use and care of laboratory animals of the National Institutes of Health.
2.8. Determination of the Growth Rate of Sebaceous Gland Spots. Before the experiment started, the size of the sebaceous gland spots on the lateral abdomen of the golden hamsters was measured, and the spots were then measured every 7 days. The spot area was calculated from a and b, and the growth rate of the sebaceous gland spots on both sides was calculated as (S_{n+1} - S_n)/S_n × 100%, where a, b, S_n, and S_{n+1} refer to the maximum transverse diameter of the sebaceous gland, the maximum longitudinal diameter of the sebaceous gland, the sebaceous gland area at the n-th week, and the sebaceous gland area at the (n+1)-th week, respectively.
2.9. Determination of the Organ Index. After the golden hamsters were weighed and killed, the spleen and testicle weights were measured. The organ index was calculated as the ratio of organ weight to body weight, expressed as a percentage.

2.10. TG Analysis. About 0.1 g of sebaceous gland was mixed with physiological saline at a ratio of 1 : 9 (m/v) and then ground with a homogenizer. The tissue homogenate was prepared in a glass homogenizer and centrifuged at 3000 g for 10 min in a refrigerated centrifuge. The supernatant was taken to detect TG levels following the manufacturer's protocol of the commercial kit.
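The arithmetic behind Sections 2.8 and 2.9 can be summarized in a few lines of Python; note that the elliptical area formula is our assumption, since the original expression was lost in extraction:

```python
import math

def spot_area(a_mm, b_mm):
    """Sebaceous gland spot area from the maximum transverse (a) and
    longitudinal (b) diameters; the ellipse formula pi*a*b/4 is our
    assumption."""
    return math.pi * a_mm * b_mm / 4.0

def growth_rate(S_n, S_n_plus_1):
    """Weekly growth rate of the spot area, in percent."""
    return (S_n_plus_1 - S_n) / S_n * 100.0

def organ_index(organ_weight_g, body_weight_g):
    """Organ index: organ weight relative to body weight, in percent."""
    return organ_weight_g / body_weight_g * 100.0
```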
2.11. ELISA Analysis. The serum T, E 2 , IL-1α, IL-6, TNF-α, MMP-2, and MMP-9 levels and the activity of caspase-3 in sebaceous gland tissue were detected on a SpectraMax M2e enzyme microplate reader (Thermo Fisher Scientific, USA) using the corresponding ELISA kits following the manufacturer's instructions.
2.12. Statistical Analysis. All diagrams were generated using GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA). Statistical data were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's test using SPSS 25.0 software (IBM Corporation, Armonk, NY, USA). Results were expressed as mean ± standard deviation (SD), with significance accepted at P < 0.05.
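The analysis was performed in SPSS; purely as an illustration of the same one-way ANOVA on a single endpoint, here is a hedged Python sketch with hypothetical placeholder data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical placeholder data standing in for one measured endpoint
# (e.g. serum TNF-alpha), with 8 hamsters in each of the 10 groups.
groups = [rng.normal(loc=mu, scale=1.0, size=8) for mu in np.linspace(5.0, 10.0, 10)]

f_stat, p_value = stats.f_oneway(*groups)        # one-way ANOVA across groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")    # significance threshold: p < 0.05
# A Tukey HSD post hoc comparison (as in the paper) would follow a
# significant ANOVA, e.g. via statsmodels' pairwise_tukeyhsd.
```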
Effect of Different Treatments on the Growth Rate of Sebaceous Gland Spots in the Golden Hamster. As shown in Figure 1(a), all groups exhibited a better inhibitory effect than the MG on the growth rate of sebaceous gland spots (P < 0.05).
The growth rate of the sebaceous gland spots in the bergamot essential oil groups was remarkably decreased compared with the corresponding doses of the juice groups. HBO, LBO, and HOO could significantly inhibit the growth rate of the left-side sebaceous gland spots of hamsters compared with the PG group (P < 0.05). The HBO, LBO, and HBJ groups significantly decreased the growth rate of the right-side sebaceous gland spots in golden hamsters to 2.21%, 1.13%, and 1.09%, respectively (P < 0.05). The growth rate of the sebaceous gland spots in the bergamot essential oil groups was clearly decreased in contrast with the corresponding doses of the sweet orange groups (P < 0.05). In Figure 1(b), each intervention group clearly reduced the growth rate of the left-side sebaceous gland spots compared with the MG (P < 0.05); only the bergamot groups, LOO, HOO, and HOJ could significantly inhibit the growth rate of the right-side sebaceous gland spots in comparison with the MG (P < 0.05). The growth rate of the sebaceous gland spots in the essential oil groups was lower than that in the corresponding doses of the juice groups. According to Figure 1(c), the growth rate of the right-side sebaceous gland of each intervention group was significantly different from that of the MG (P < 0.05), while the growth rate of the left-side sebaceous gland of the LBO, HBO, HBJ, and sweet orange groups was significantly lower than that of the MG (P < 0.05). The growth rate of the sebaceous gland spots in the sweet orange essential oil groups was obviously decreased in contrast with the corresponding doses of the juice groups (P < 0.05). In Figure 1(d), compared with the MG, each intervention group significantly reduced the growth rate of the left-side sebaceous gland spots of the golden hamster (P < 0.05); in contrast, only the LBO, HBO, and HBJ groups could significantly decrease the growth rate of the right-side sebaceous gland spots relative to the MG (P < 0.05). Treatment with sweet orange essential oil could significantly decrease the growth rate of the sebaceous gland spots compared with the corresponding doses of the juice groups (P < 0.05). Both the bergamot and sweet orange groups decreased the growth rate of the sebaceous gland spots in a dose-dependent manner.
Effect of Different Treatments on the Organ Index in the Golden Hamster. As listed in Table 1, there was no significant difference in the testicular index or spleen index between the intervention groups and the MG. These results indicated that the intervention substances did not have adverse effects on the testicular and spleen indexes of golden hamsters.
Effect of Different Treatments on the Level of Serum T and E2 in the Golden Hamster. The changes in the T level of golden hamsters are shown in Figure 2(a). Compared with the MG, the content of T in each intervention group showed a certain degree of reduction; however, only the bergamot essential oil groups significantly reduced the T level in comparison with the MG (P < 0.05). Besides, all intervention groups increased the E2 content compared with the MG (shown in Figure 2(b)), but only HOO and HBJ could remarkably increase the E2 content compared with the MG (P < 0.05). These results did not show a statistical difference between the sweet orange groups. In Figure 2(c), the data showed that each intervention group lowered the ratio of T/E2 in comparison with the MG; among them, the ratio of T/E2 in the high-dose intervention groups was significantly decreased in comparison with that in the MG (P < 0.05). The ratio of T/E2 in the essential oil groups was lower than that in the corresponding doses of the juice groups. Both the bergamot and sweet orange groups showed a dose-effect relationship.
Effect of Different Treatments on the Level of TG in Sebaceous Gland Tissue of the Golden Hamster. The results for the TG level are shown in Figure 3. Compared with the MG, each intervention group significantly reduced the content of TG in the sebaceous gland of golden hamsters (P < 0.05). As can be seen from Figure 3, the TG level in the sweet orange essential oil groups was obviously decreased in contrast with the corresponding doses of the juice groups (P < 0.05). The content of TG in the bergamot essential oil groups was lower than that in the corresponding doses of the juice groups. Both the bergamot and sweet orange groups exhibited effects on attenuating the TG level with a dose-effect relationship.
Effect of Different Treatments on Inflammatory Factors in the Golden Hamster. In this part, we studied the effect of the different treatments on the levels of serum inflammatory factors in the golden hamster (shown in Figure 4). In Figure 4(a), the data showed that each intervention group clearly lowered the serum IL-1α content in comparison with the MG (P < 0.05). The HBO even lowered the IL-1α level to about one-fifth of that in the MG (83 pg/mL and 433 pg/mL, respectively). The IL-1α level in HOO was obviously reduced relative to that in HOJ; the bergamot groups showed similar results. As shown in Figure 4(b), the content of IL-6 was clearly decreased in the HBO, LBO, and HOO groups in contrast with the MG (P < 0.05). Groups with essential oil could significantly decrease the IL-6 level compared with the corresponding doses of the juice groups (P < 0.05). As shown in Figure 4(c), all intervention groups except LOJ significantly reduced the TNF-α level (P < 0.05). Treatment with low-dose sweet orange essential oil significantly reduced the TNF-α level relative to LOJ (P < 0.05). The TNF-α level in the essential oil groups was lower than that in the corresponding doses of the juice groups. Both the bergamot and sweet orange groups had an anti-inflammatory effect with a dose-effect relationship.
Effect of Different Treatments on the Level of Serum MMPs in the Golden Hamster. We found that each intervention group significantly reduced the MMP-2 content in comparison with the MG (P < 0.05). As shown in Figure 5(a), the content of MMP-2 in the HOO group was obviously decreased in contrast with that in HOJ (P < 0.05). Groups with bergamot essential oil could significantly decrease the MMP-2 level compared with the corresponding doses of the juice groups (P < 0.05).
In Figure 5(b), the content of MMP-9 in the essential oil groups was reduced to about half of that in the MG. Groups with essential oil could significantly reduce the MMP-9 level in contrast with the corresponding doses of the juice groups (P < 0.05). Both the bergamot and sweet orange groups had attenuating effects on MMP levels with a dose-effect relationship.
Effect of Different Treatments on the Activity of Caspase-3 in Sebaceous Gland Tissue of the Golden Hamster. Caspase-3 levels were determined to assess the effect of the different treatments on the apoptosis of sebaceous gland cells. As shown in Figure 6, we found that each intervention group significantly increased the relative activity of caspase-3 in comparison with the MG (P < 0.05). Groups with essential oil could significantly increase the relative activity in comparison with the corresponding doses of the juice groups (P < 0.05). The bergamot and sweet orange groups increased the relative activity of caspase-3 with a dose-effect relationship.
Discussion
Golden hamsters can be used as a model of acne vulgaris caused by the excessive secretion of androgen, which manifests in the two sebaceous glands on the lateral abdomen [27]. Therefore, determining the area and thickness of the sebaceous gland can reflect the severity of acne vulgaris in golden hamsters to a certain extent. It was reported that Lactobacillus-fermented C. obtusa significantly decreased sebum secretion and the size of the sebaceous gland compared with baseline (0.24 ± 0.09 mm² vs. 0.38 ± 0.11 mm², respectively) [28]. In our study, the size of the sebaceous gland was not reduced by the treatment with bergamot and sweet orange, but the growth rate of the sebaceous gland spots decreased instead. It was reported that activating the expression of apoptotic proteins through a series of reactions could ultimately lead to a reduction in the area or thickness of the sebaceous gland [29]. Caspase-3 plays an executing role in the process of apoptosis, and it is also the most important downstream protease of the apoptotic effect and execution process [30]. Inhibition of caspase-3 was reported to attenuate sebaceous gland cell proliferation and organ size [31]. In this study, the growth rate of the sebaceous gland spots and the caspase-3 activity in golden hamsters given different doses of the intervention substances were analyzed. It was found that after 4 weeks of intervention, the growth rate of the sebaceous gland spots in each intervention group was significantly decreased compared with that in the MG, and the caspase-3 activity was significantly increased in comparison with that in the MG (Figures 1 and 6). Moreover, the effect of each intervention substance was positively correlated with the dose, and the bergamot essential oil had the best improvement effect. Therefore, it is suggested that bergamot essential oil, bergamot juice, sweet orange essential oil, and sweet orange juice may alleviate acne vulgaris by promoting the apoptosis of sebaceous gland cells in the golden hamster through enhancing the activity of caspase-3. The secretion of androgen plays an important role in the occurrence and development of acne lesions [11]. Excessive androgen secretion or an imbalance of androgen and estrogen levels can lead to sebaceous gland hyperplasia, excessive sebum secretion, and abnormal keratinization of the hair follicle sebaceous gland, resulting in the development and continuous occurrence of acne vulgaris [32]. High-dose estrogen exerts negative feedback on the gonadal axis, which can result in the reduction of sebum formation [33]. Evidence showed that a higher content of free T with a lower level of E2 was significantly associated with severe acne vulgaris [34]. Thus, balancing the ratio between androgens and estrogens could help alleviate or treat acne vulgaris.
(Table 1 note: values are expressed as the mean ± SD (n = 8); data with different letters in the same row indicate significant differences, P < 0.05.)
In our study, the bergamot essential oil significantly reduced the serum T content in golden hamsters, while the HOO and HOJ groups markedly increased the serum E2 level (Figures 2(a) and 2(b)). This indicates that sweet orange essential oil and bergamot essential oil could improve acne lesions by directly reducing the androgen level or decreasing the androgen/estrogen ratio (Figure 2(c)).
Sebum is synthesized and secreted by sebaceous gland cells and consists of TG, nonesterified fatty acid, wax ester, squalene, and cholesterol ester [35]. Since sebum contains approximately 30% TG, the amount of sebum synthesis and secretion was reflected in this study by measuring the TG content in the sebaceous gland tissue of golden hamsters. We found that bergamot essential oil, bergamot juice, sweet orange essential oil, and sweet orange juice significantly reduced the TG content in golden hamsters, indicating that each intervention substance could improve acne vulgaris by restraining the TG level to a certain extent (Figure 3). Moreover, the bergamot essential oil, bergamot juice, sweet orange juice, and sweet orange essential oil exhibited a dose-dependent effect on inhibiting TG accumulation.
Inflammation is an initial host immune reaction mediated by inflammatory factors [36]. The induction of inflammation mediators is associated with P. acnes [37]. P. acnes and associated cellular membrane lipopolysaccharides could induce the expression of IL-8, TNF, and IL-1α in cultured sebocytes [38]. The underlying mechanism is that sebocytes can be recognized and activated by P. acnes, which further triggers the release of inflammatory cytokines [39]. TNF-α is an extensively studied cytokine associated with many inflammatory diseases [40]. Evidence indicates that IL-1α is one of the main inflammatory factors; it can induce the proliferation of keratinocytes and sebaceous gland cells and eventually promote the formation of acne vulgaris [41]. Moreover, increased IL-1 contents could further promote the expression of MMPs and lipid synthesis [42,43], contributing to the development of different types of acne lesions [44-46]. Keratinocytes are an important source of MMPs in acne vulgaris, and P. acnes can induce the expression of several kinds of MMPs, including MMP-2 and MMP-9 [47]. Kim et al. reported that Citrus obovoides and Citrus natsudaidai essential oils could reduce P. acnes-induced secretion of IL-8 and TNF-α [48]. In one study, a Lactobacillus-fermented Chamaecyparis obtusa leaf extract was reported to have a great effect on inflammatory markers, such as IL-8, IL-1, and NF-κB [28]. Also, previous studies reported that essential oil and juice could alleviate acne vulgaris based on their antibacterial effect [49,50]. In our study, results showed that bergamot essential oil, bergamot juice, sweet orange essential oil, and sweet orange juice could reduce the levels of IL-6, TNF-α, MMP-2, MMP-9, and especially IL-1α, indicating that these four intervention substances could improve acne vulgaris through alleviating the inflammatory response and suppressing P. acnes in golden hamsters (Figures 4 and 5).
Conclusions
In summary, bergamot essential oil, bergamot juice, sweet orange essential oil, and sweet orange juice could improve acne vulgaris caused by excessive secretion of androgen via reducing the growth rate of the sebaceous gland, inhibiting TG accumulation and inflammatory cytokine release in the sebaceous gland, promoting apoptosis in the sebaceous gland, and decreasing the T/E2 ratio. In general, bergamot essential oil and sweet orange essential oil may have better effects on alleviating acne vulgaris than the corresponding juices. Notably, IL-1α in the sebaceous gland was the cytokine most strongly reduced by both bergamot and sweet orange. This implies that bergamot and sweet orange may improve acne lesions via alleviating the inflammatory response and suppressing P. acnes, but further studies of the mechanism are needed.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
A Unique Case of Penile Necrotizing Fasciitis Secondary to Spontaneous Corpus Cavernosal Abscess
Corpus cavernosal abscess and necrotizing fasciitis occur rarely, and precipitating factors can usually be elicited with careful history and examination. Whilst both conditions share common risk factors such as diabetes mellitus, this is the first reported case of penile necrotizing fasciitis secondary to spontaneous corpus cavernosal abscess in an otherwise healthy patient. A 32-year-old man presented with a 4-day history of a swollen, painful penis, with ultrasound confirming corpus cavernosal abscess. Biopsies were taken and the cavity aspirated but, despite intravenous antibiotics, he developed penile necrotizing fasciitis necessitating open cavernostomy and debridement. The overlying skin defect healed by secondary intention, but the patient experienced persistent postoperative erectile dysfunction, so he was referred for penile prosthesis insertion.
Introduction
Spontaneous abscesses of the corpus cavernosum are rare, with few previously described idiopathic cases. The most frequently identified risk factors include relative immunosuppression (e.g., diabetes mellitus) or preceding local or distant infection [1-3]. Penile instrumentation, injection, and trauma have also been described as precipitating factors [4-7]. In this paper we report a case of spontaneous primary corpus cavernosal abscess with subsequent development of penile necrotizing fasciitis. Whilst both conditions share common risk factors [8], to the best of our knowledge this clinical course has not been described in an otherwise healthy patient.
Case Presentation
A thirty-two-year-old man presented with a four-day history of a swollen and painful penis and left testicle with associated rigors, lower back pain, nausea, and vomiting. There was no history of lower urinary tract symptoms, haematuria, penile discharge, trauma, or unprotected sexual intercourse. His past medical history included mild asthma and hay fever, but he took no regular medications. Abdominal examination was unremarkable, and his genitalia were very swollen and tender on palpation. The patient was pyrexial (38.4 °C) and tachycardic (rate 120 b.p.m.) on admission, but blood pressure was stable. Urine dipstick was positive for blood and ketones only. White cell count was 16.24 × 10⁹/L and C-reactive protein was 158 mg/L, but urea and electrolytes, liver function tests, glucose, and haemoglobin were all within their normal ranges.
Empirical intravenous amoxicillin and gentamicin treatment was commenced as per local protocol after discussion with microbiology. Ultrasound showed a 3.5 × 2.5 × 2 cm irregular mass lesion within the base of the left corpus cavernosum with no blood flow, suggestive of either thick pus or a solid lesion (Figure 1). A small volume of blood-stained fluid was aspirated, and Tru-Cut core biopsies were taken in theatre the following day.
The patient's clinical condition and inflammatory markers initially improved thereafter, but pyrexia and breakdown of the penile skin with purulent discharge were noted 4 days later despite continued intravenous antibiotics. Antibiotics were changed to piperacillin-tazobactam, and a repeat ultrasound scan identified a persistent abscess and possible thrombosis of the corpus cavernosum.
He was taken back to theatre, where an abnormal area on the left ventral aspect of the base of the penis with a line of overlying skin demarcation was excised. Features were typical of necrotizing fasciitis without involvement of the scrotum or perineum. A large abscess cavity was opened, necrotic tissue was debrided to bleeding edges, and the cavity was packed. Cystoscopy was normal.
The patient recovered after open cavernostomy and debridement (Figure 2) and was discharged 24 days after admission. Cultures of abscess fluid, urine, and blood were negative. Histopathology of biopsy material showed acute inflammation only. He had a 5 × 4 cm defect at the base of his penis overlying the left corpus cavernosum and was assessed by the plastic surgeons with a view to skin grafting. This proved unnecessary as the wound healed well by secondary intention, but he required penile prosthesis insertion due to postoperative erectile dysfunction.
Discussion
Abscesses of the corpus cavernosum occur from the neonatal period onwards, may be unilateral or bilateral, and can be associated with priapism at the time of presentation [4,9]. They are uncommon, and spontaneous cases are of even greater rarity. The most frequently identified risk factors are diabetes mellitus [3], preceding infection [1-3], intracavernosal injection [4,5], and intravenous drug use involving the external genitalia [6]. Occurrence following insertion of a penile prosthesis [3], "penile fracture" (rupture of the tunica albuginea in the erect penis) [7], and intermittent self-catheterisation has also been reported [5].
Initial investigations should include culture of urine, blood, and any discharge or pus prior to antibiotic therapy to maximise the probability of identification of causative organisms, including gonorrhoea, skin, and gastrointestinal commensal organisms. The occasional culture of oral cavity commensals has been associated with distant infection from dental caries [1] and cross-infection from fellatio [2]. Candidal infection has been related to immunocompromise and relative ischaemia due to small vessel disease in a diabetic patient [3]. In our case, there were no positive microbiological cultures, which mirrors around half of those previously reported.
Once the diagnosis has been confirmed, treatment consists of antibiotic therapy, which is usually combined with open cavernotomy to debride necrotic tissue and drain the abscess [1,4]. When present, any foreign body (e.g., prosthesis) is removed [3]. Suprapubic catheterisation at the time of cavernotomy is also occasionally performed [1]. Percutaneous drainage has been described but, as our case highlights, either repeated aspiration or subsequent cavernotomy and more extensive debridement may be required. Rates of erectile dysfunction and of penile curvature secondary to fibrosis may, however, be lower with percutaneous management [5].
Erectile dysfunction and penile curvature are the most frequently reported sequelae of corpus cavernosal abscess [1,10]. Extensive debridement of cavernosal tissue, as with our case, commonly precipitates erectile dysfunction and may be treated with a penile prosthesis. Abscess recurrence may occur several months after primary treatment [3], and scrotal abscess [1] and urethral sinuses have also been reported. To the best of our knowledge there has been only one reported case of corpus cavernosal abscess with subsequent development of penile necrotizing fasciitis. This occurred three weeks after false penile fracture, and early haematoma evacuation is recommended in such cases. The patient was therefore at increased risk of local infection due to his delayed presentation and penile trauma [7]. In contrast, in our case no risk or precipitating factors were present.
In conclusion, our case illustrates that the serious complication of necrotizing fasciitis may occur after corpus cavernosal abscess in an otherwise healthy patient. Early surgical intervention is therefore recommended as definitive treatment in all cases of corpus cavernosal abscess to prevent its development.
Pharmaceutical Impact of Houttuynia Cordata and Metformin Combination on High-Fat-Diet-Induced Metabolic Disorders: Link to Intestinal Microbiota and Metabolic Endotoxemia
Purpose: Metformin and Houttuynia cordata are representative anti-diabetic therapeutic agents in the western and oriental medicinal fields, respectively. The present study examined the therapeutic effects of Houttuynia cordata extract (HCE) and metformin in combination in a dysmetabolic mouse model. Methods: Metabolic disorders were induced in C57BL/6J mice by a high fat diet (HFD) for 14 weeks. Results: The combination of metformin and HCE significantly lowered body weight and abdominal fat, perirenal fat, liver, and kidney weights, but did not change epididymal fat in HFD-fed animals. Metformin + HCE treatment markedly attenuated the elevated serum levels of TG, TC, AST, ALT, and endotoxin and restored the depleted HDL level. Both HCE and metformin + HCE treatment ameliorated glucose tolerance and the high level of fasting blood glucose in association with AMPK activation. Moreover, treatment with HCE + metformin dramatically suppressed inflammation in HFD-fed animals via inhibition of proinflammatory cytokines (MCP-1 and IL-6) and the LPS receptor (TLR4). Histopathological findings showed that exposure of HFD-treated animals to metformin + HCE ameliorated fatty liver, shrinkage of intestinal villi, and adipocyte enlargement. Furthermore, HCE and metformin + HCE treatments markedly modulated the abundance of gut Gram-negative bacteria, including Escherichia coli and Bacteroides fragilis, but not universal Gram-positive bacteria. Conclusions: Overall, HCE and metformin cooperatively exert their therapeutic effects via modulation of gut microbiota, especially reduction of Gram-negative bacteria, resulting in alleviation of endotoxemia.
INTRODUCTION
Metabolism is an essential biochemical event in the body that keeps one alive and healthy. However, morbidity due to metabolic diseases such as obesity and diabetes has been continuously increasing, and epidemics of these conditions are occurring in both developed and developing countries (1). Many factors, mainly genetic and environmental conditions, can disrupt normal physiological homeostasis, resulting in metabolic disorders (2). Excessive consumption of a high-fat diet (HFD) is one of the main factors that leads to metabolic disorders (3); however, energy imbalance and hereditary reasons do not completely account for the current epidemic status.
Recently, increasing numbers of studies have reported that genetic background determines the predisposition to metabolic disorders (4). Metabolic disorders are widely viewed as chronic systemic diseases because they sustain low-grade inflammation due to gut microbial dysbiosis (5). Therefore, the intestinal commensal microbiota has become another vital factor in the development of metabolic disorders, especially obesity and type 2 diabetes. HFD-altered gut microbiota demonstrably promote obesity and inflammation via the toll-like receptor 4 signaling pathway (6). In addition, HFD increases intestinal permeability, which leads to elevated serum lipopolysaccharide (LPS) levels because of gut microbiota dysbiosis (5).
Houttuynia cordata (HC) is a medicinal and edible herb with an aromatic smell that has long been used in Asia to treat pneumonia, hypertension, constipation, and hyperglycemia via detoxification, reduction of heat, and diuretic action. There is accumulating evidence of multiple pharmaceutical effects of HC, such as anti-cancer (7), anaphylactic inhibitory (8), anti-mutagenic (9), anti-inflammatory (10), anti-allergic (11), anti-oxidative (12), anti-viral (13), anti-bacterial (14), anti-obesity (15), and anti-diabetic (16) activities. Moreover, metformin, a well-known biguanide antidiabetic agent that has been used for more than 60 years, exerts multiple properties such as inhibition of hepatic gluconeogenesis, enhancement of insulin sensitivity, and augmentation of peripheral glucose uptake (17,18). Despite its beneficial impacts, metformin produces a large number of side effects, such as diarrhea, nausea, cramps, vomiting, bloating, lactic acidosis, and abdominal pain, which usually occur in clinics (19). The best-known mechanism of action of metformin is regulation of AMP-activated protein kinase (AMPK) and its downstream signaling pathway (20). Metformin has also been found to reduce hepatic gluconeogenesis and hyperglycemia independently of the AMPK pathway (21). Moreover, metformin-induced augmentation of Akkermansia muciniphila was shown to improve glucose homeostasis in a HFD-induced obese model (22). Although both HC and metformin have beneficial impacts on metabolic disorders, their combination has not been evaluated to date. Therefore, we examined an innovative agent formulated by combining HC with metformin to synergistically enhance therapeutic efficacy and/or decrease side effects relative to HC or metformin alone. Specifically, the therapeutic effects of Houttuynia cordata extract (HCE) and metformin in combination were investigated using a high-fat-diet (HFD)-induced metabolic dysfunction mouse model. We also explored the corresponding potential mechanisms, especially regarding alteration of gut microbiota and systemic endotoxemia.
Houttuynia Cordata Extract (HCE) and Metformin
Houttuynia cordata was obtained from the pharmacy of Dongguk University Ilsan International Hospital (Goyang, South Korea). After grinding, the Houttuynia cordata powder was extracted by 5 L ethanol recycling reflux for 4 h. The extract was then filtered and vacuum lyophilized at −70 °C, which gave a 5.82% yield. The HCE contained 3.63% quercitrin, 0.45% quercetin, and 0.99% isoquercitrin (23). Metformin was purchased from Sigma-Aldrich (St. Louis, MO, USA).
Animals and Experimental Schedule
The animal study was approved by the Institutional Animal Care and Use Committee (IACUC-2015-037) of Dongguk University and conducted in accordance with the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, Commission on Life Sciences, National Research Council, USA; National Academy Press: Washington D.C., 1996). Specific-pathogen-free (SPF) C57BL/6j male mice were obtained from Koatech (Gyeonggi-do, South Korea). After 1 week of acclimatization, 40 mice were divided equally into five groups by average body weight. The normal group was fed a control diet (AIN-93G diet; Table S1) for 14 weeks, while the other four groups were continuously fed a 60% calorie high fat diet (HFD; Table S1) for 14 weeks (Figure 1A). From week five to 14, among the HFD-fed mice, eight were treated with metformin (100 mg/kg/day; metformin group), eight with HCE (400 mg/kg/day), eight with a combination of metformin (50 mg/kg/day) and HCE (200 mg/kg/day), and the remaining eight were administered distilled water as a negative control group. The experimental doses of metformin and HCE were determined based on their clinical dosages and the Guidance for Industry (2005). On the last experimental day, fresh stool samples were collected, and after 12 h of fasting all the animals were weighed and anesthetized using Zoletil (tiletamine-zolazepam, Virbac, Carros, France) and Rompun (xylazine-hydrochloride, Bayer, Leverkusen, Germany) in a 1:1 v/v combination. Blood was then collected from the ventral aorta and rapidly transferred into a BD Vacutainer (Franklin Lakes, NJ, USA) for serum separation. Liver, intestine, and fat tissues were removed, weighed, and rapidly stored in liquid nitrogen for future analysis.
Oral Glucose Tolerance Test (OGTT)
In the last week of the animal experiment, the mice were fasted for 12 h and then orally dosed with glucose solution (2 g/kg, Sigma-Aldrich, St. Louis, MO, USA). Blood glucose levels were then measured with an ACCU-CHEK Active meter (Mannheim, Germany) using blood collected from the tail vein at 0, 30, 60, 90, and 120 min post-glucose dosing. The OGTT results were also expressed as areas under the curves (AUC) to evaluate the degree of glucose tolerance impairment.
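The AUC summary mentioned above is a simple trapezoidal integration over the five sampling times. Below is a minimal sketch of that calculation; the glucose readings are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Sampling times used in the OGTT protocol (minutes post-glucose dosing)
times = np.array([0, 30, 60, 90, 120])
# Hypothetical blood glucose readings (mg/dL); illustrative only, not study data
glucose = np.array([90, 210, 180, 150, 120])

# Trapezoidal area under the glucose-time curve
auc = np.trapz(glucose, times)  # units: mg/dL x min
print(f"OGTT AUC: {auc:.0f} mg/dL*min")
```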
Serum Biochemical Analysis
Blood collected from the ventral aorta was centrifuged at 3,000 × g for 15 min to separate the serum. The serum levels of triglyceride (TG), total cholesterol (TC), high density lipoprotein (HDL), aspartate transaminase (AST), and alanine transaminase (ALT) were subsequently determined using commercial enzymatic assay kits (Asan Pharmaceutical Co., Seoul, Korea) according to the manufacturer's instructions.
Serum Endotoxin Analysis
Serum endotoxin levels were measured using a Limulus Amebocyte Lysate (LAL) kit (ENDOSAFE, SC, USA) according to the kit manufacturer's instructions. Briefly, 10× dilutions of mouse serum samples were added to the kit-supplied plate, and wells were spiked with a 5 EU/mL standard. Following the addition of 100 µL of LAL reagent, the kinetic absorbance of the mixture was measured at 405 nm and the reaction onset times of the samples were compared to the standard curve.
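Kinetic LAL assays quantify endotoxin by comparing each sample's reaction onset time against a standard curve, which is approximately log-log linear. The sketch below shows one plausible interpolation scheme under that assumption; all standard concentrations and onset times are hypothetical, and only the 10× dilution factor comes from the protocol above.

```python
import numpy as np

# Hypothetical standard-curve data: log10(onset time) is roughly linear in
# log10(endotoxin concentration) for kinetic LAL assays (values illustrative only)
std_conc = np.array([0.005, 0.05, 0.5, 5.0])           # EU/mL standards
std_onset = np.array([3200.0, 1800.0, 1000.0, 560.0])  # seconds to threshold OD

# Fit the log-log regression of onset time on concentration
slope, intercept = np.polyfit(np.log10(std_conc), np.log10(std_onset), 1)

def endotoxin_eu_per_ml(onset_s: float, dilution: float = 10.0) -> float:
    """Interpolate a sample's concentration from its onset time, correcting
    for the 10x serum dilution described in the protocol."""
    log_conc = (np.log10(onset_s) - intercept) / slope
    return (10 ** log_conc) * dilution

print(endotoxin_eu_per_ml(900.0))  # a sample with a 900 s onset time
```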
Oil Red O and H&E Staining
Liver, jejunum, and adipose tissues were embedded in FSC 22 frozen section compound (Leica Biosystems, Richmond, IL, USA), then frozen and sectioned at 5 µm using a Leica CM1860 cryostat (Leica Microsystems, Nussloch, Germany). Sections were then stained with oil red O solution or hematoxylin and eosin (Cayman Chemical, USA), after which they were mounted on silicone-coated slides (Leica, USA), examined using an Olympus BX61 microscope (Tokyo, Japan), and photographed using an Olympus DP70 digital camera (Tokyo, Japan).
Real-Time PCR for Analyzing Gene Expression in Liver Tissue
Total RNA was isolated from liver tissues using TRIsure (BIOLINE, MA, USA). cDNA was synthesized using an AccuPower RT premix kit (Bioneer, Daejeon, Korea), and real-time PCR amplification reactions were conducted with the corresponding primers (Table S2) using a LightCycler FastStart DNA Master SYBR Green kit and a LightCycler instrument (Roche Applied Science, Indianapolis, ID, USA). The reaction was conducted in a total volume of 20 µl consisting of PCR mix, 1 µl of cDNA, and gene-specific primers (10 pmol each). Relative gene expression was represented by 2^(−ΔΔCt) using β-actin as a housekeeping gene for normalization, where Ct is the crossing threshold value and ΔCt = Ct(target gene) − Ct(β-actin).
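A minimal sketch of the 2^(−ΔΔCt) calculation defined above, normalizing a target gene to β-actin and to a calibrator (e.g., normal-group) sample; the Ct values are illustrative only.

```python
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """2^(-ddCt) relative quantification: normalize the target gene to
    beta-actin within each sample, then to the calibrator sample."""
    d_ct_sample = ct_target - ct_actin       # dCt of the sample of interest
    d_ct_ref = ct_target_ref - ct_actin_ref  # dCt of the calibrator
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Example: target crosses threshold 2 cycles later than in the calibrator,
# so expression is 2^-2 = 0.25-fold (a 4-fold decrease)
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # -> 0.25
```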
Western Blot Analysis
Mouse liver tissues were homogenized in RIPA buffer (Abcam, USA) containing protease and phosphatase inhibitors (Abcam, USA). The supernatant was isolated, and the total protein concentration was measured using a BCA kit (Thermo Scientific, USA). Denatured proteins were separated on a 10% SDS-PAGE gel and transferred to a polyvinylidene fluoride (PVDF) membrane (GE Healthcare Life Science, Germany) using the Mini-PROTEAN Tetra Cell System (Bio-Rad Laboratories Inc., CA, USA). The membranes were blocked with 5% skim milk in Tris-buffered saline with Tween 20 (TBST) for 1 h and then incubated with primary antibody (1:10,000) overnight at 4 °C. Samples were subsequently incubated with horseradish peroxidase-conjugated secondary antibodies (1:2,000; beta-actin antibody manufactured by Santa Cruz, USA; AMPK, phosphorylated AMPK, and GLUT2 antibodies manufactured by Cell Signaling, USA) for 1 h. Detailed information regarding the antibodies is shown in Table S3. Finally, the bands on the membranes were detected using SUPEX ECL solution and photographed using a FUJIFILM LAS3000 Image Analyzer (FUJI, Japan).
Fecal Microbial Analysis Using RFLP (Restriction Fragment Length Polymorphism) and Real-Time PCR
Fecal genomic DNA was isolated using a QIAamp DNA Stool Mini Kit (Qiagen, CA, USA) for RFLP and real-time PCR analyses. The 16S rRNA genes were PCR amplified using the universal bacterial primers 27F (5′-AGAGTTTGATCCTGGCTCAG-3′), 5′ end-labeled with 5-FAM, and 1492R (5′-GGTTACCTTGTTACGACTT-3′). PCR amplification was conducted with an initial denaturation step at 94 °C for 3 min, followed by 30 cycles of 1 min at 94 °C, 45 s at 53 °C, and 2 min at 72 °C. The reaction was completed with a final primer elongation step at 72 °C for 10 min. Following confirmation by agarose gel electrophoresis, PCR products were digested with the MspI restriction enzyme (TaKaRa, Shiga, Japan). The DNA samples containing the extension products were then added to Hi-Di formamide (Applied Biosystems) and GeneScan 1200 LIZ Size Standard (Applied Biosystems, Foster City, CA, USA). The mixture was subsequently incubated at 95 °C for 5 min, placed on ice for 5 min, then analyzed using a 3730XL DNA analyzer (Applied Biosystems, Foster City, CA, USA). Next, T-RFLP electropherograms were examined using GeneMapper v5.0 and the Peak Scanner 2 software (Applied Biosystems). The relative peak area of each terminal restriction fragment (TRF) was determined by dividing the area of the peak of interest by the total area of peaks within the following threshold values: lower threshold = 50 bp; upper threshold = 500 bp. Data were normalized by applying a threshold value for relative abundance of 0.5%, and only TRFs with higher relative abundances were included in the remaining analyses.
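The peak normalization just described (relative areas within the 50-500 bp window, then a 0.5% abundance cutoff) reduces to a few lines of arithmetic. A sketch with invented peak areas:

```python
# Minimal sketch of the T-RFLP normalization above; peak data are invented.
peaks = {72: 1200.0, 145: 300.0, 210: 8000.0, 480: 40.0}  # TRF size (bp) -> area

# Keep fragments within the stated 50-500 bp window
kept = {bp: area for bp, area in peaks.items() if 50 <= bp <= 500}
total = sum(kept.values())

# Relative abundance per TRF, then drop anything under the 0.5% threshold
rel = {bp: area / total for bp, area in kept.items()}
rel = {bp: ab for bp, ab in rel.items() if ab >= 0.005}
print(rel)
```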
Roche LightCycler FastStart DNA Master SYBR Green was used to conduct real-time PCR on the LightCycler 480 system (Roche Applied Science, Indianapolis, IN, USA). The primer sequences targeting the 16S rRNA gene of the bacteria are listed in Table S2. The standard conditions for the PCR amplification reactions were applied as previously described (23). The relative quantification of bacterial abundance is shown by 2^(−ΔΔCt) calculations (Ct, threshold cycle). The final results are expressed as normalized fold values relative to the normal group.
Cells Culture and Viability Assay
All cell lines were cultured in an incubator at 37 °C in a humidified atmosphere of 5% CO2. Mouse myoblasts (C2C12; Korea Cell Line Bank, Seoul, Korea) were cultured in DMEM or RPMI-1640 (GIBCO, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS; GIBCO, CA, USA), 2 mM L-glutamine (GIBCO, Carlsbad, CA, USA), 100 U/ml penicillin (GIBCO, Carlsbad, CA, USA), and 100 µg/ml streptomycin (GIBCO, Carlsbad, CA, USA). Cell viability was determined using an EZ-Cytox enhanced cell viability assay kit (DOGEN, Seoul, Korea). Briefly, after reaching approximately 80% confluency, the cells were treated for 24 h with quercitrin or quercetin (Sigma, USA) at 1, 5, 10, 20, 50, or 100 µM. EZ-Cytox was added to the cells 2 h prior to the end of the treatment schedule. Following completion of the reaction, the culture media were transferred to a fresh 96-well microplate. The absorbance of the wells was then read at 450 nm (650 nm as a reference wavelength) (Spectramax Plus, Molecular Devices, CA, USA). The viability of the control cells, in terms of their absorbance, was set to 100%.
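The viability readout is a ratio of treated to untreated absorbance, with the control set to 100%. A minimal sketch with hypothetical optical density readings:

```python
import numpy as np

# Hypothetical 450 nm readings (650 nm reference already subtracted)
control_od = np.array([1.10, 1.05, 1.12])
treated_od = np.array([0.88, 0.91, 0.85])  # e.g., wells treated at 100 uM

# Viability relative to the untreated control set to 100%
viability_pct = treated_od.mean() / control_od.mean() * 100
print(f"{viability_pct:.1f}% of control")
```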
Statistical Analysis
All experimental data were analyzed by one-way ANOVA followed by the LSD (least significant difference) post-hoc test using SPSS 17.0 (Chicago, IL, USA). The results were expressed as the means ± standard deviations (SD) and a P < 0.05 was considered statistically significant.
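For readers reproducing the statistics outside SPSS, the sketch below pairs a one-way ANOVA with a Fisher's LSD comparison, i.e., a pairwise t test using the pooled within-group variance from the ANOVA; the group values are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative group data (not the study's measurements)
groups = {
    "normal":      np.array([22.1, 23.0, 21.5, 22.8]),
    "HFD":         np.array([30.2, 31.5, 29.8, 30.9]),
    "HFD+met+HCE": np.array([25.0, 24.2, 26.1, 25.5]),
}

# One-way ANOVA across all groups
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# Fisher's LSD: pairwise t test on the pooled within-group variance (MSE)
k = len(groups)
n_total = sum(len(g) for g in groups.values())
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)
a, b = groups["HFD"], groups["HFD+met+HCE"]
t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
p_lsd = 2 * stats.t.sf(abs(t), df=n_total - k)
print(f"LSD HFD vs HFD+met+HCE: t={t:.2f}, p={p_lsd:.4f}")
```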
Reduction of Body, Organ, and Fat Weights
Following termination of the experimental schedule at week 14, the body, fat, liver, and kidney weights of HFD-fed mice were, as expected, significantly higher than those of animals fed the normal diet. Treatment of HFD-fed animals with both HCE and metformin + HCE markedly reduced the body, fat, liver, and kidney weights. Moreover, exposure of HFD-fed mice to metformin reduced the abdominal fat weight, but less significantly than the HCE and metformin + HCE treatments (Figures 1B-D, Table 1). Furthermore, metformin treatment did not produce any significant effect on the body, perirenal, epididymal, or total fat weight of HFD-fed animals. Although not statistically significant, the combination of metformin and HCE showed greater anti-obesity effects than either compound alone (Table 1).
Amelioration of Serum Lipid Parameters and Hepatic Transaminases
As expected, treatment with HFD significantly increased the levels of serum TG, TC, AST, and ALT and markedly decreased the level of serum HDL. Combined metformin and HCE treatment significantly attenuated the levels of TG, TC, AST, and ALT and significantly increased the level of serum HDL in the HFD group. Metformin treatment alone significantly lowered only the level of serum ALT, while HCE treatment alone markedly lowered the levels of serum TC and ALT relative to the HFD group. Overall, the combination of metformin and HCE ameliorated the serum lipid profile and liver transaminases to a greater extent than metformin or HCE alone (Table 2).
Improvement of Hyperglycemia and Glucose Tolerance in vivo and Glucose Uptake in vitro
As anticipated, HFD treatment significantly increased the fasting glucose relative to the normal group. Metformin and HCE, alone and in combination, notably lowered the high fasting blood glucose (FBG) relative to HFD treatment, with the combination reducing hyperglycemia more efficiently than either metformin or HCE alone. In addition, the OGTT (AUC) was markedly increased by HFD treatment relative to the normal group, while HCE and HCE + metformin treatment significantly reduced the OGTT (AUC) relative to HFD treatment. Finally, HCE + metformin treatment ameliorated glucose tolerance more effectively than HCE alone (Figures 2A-C, Table S4).
The in vitro results showed that treatment of C2C12 cells with either metformin alone or metformin + HCE remarkably elevated the glucose uptake. Interestingly, metformin in combination with quercitrin plus quercetin treatment, but not metformin + quercitrin or quercetin, exhibited a similar ability for glucose uptake in HepG2 cells as metformin + HCE treatment ( Figure 2D).
Alleviation of Systemic Endotoxin
The serum endotoxin level was significantly elevated in the HFD group relative to the normal group. Metformin + HCE treatment reduced the serum endotoxin concentration relative to the HFD group more markedly than HCE or metformin alone.
Histopathological Alteration
Staining of hepatic tissue with oil red O revealed that HFD treatment induced lipid droplet deposition in the liver (Figure 3A). Additionally, HFD treatment markedly decreased the length and volume of the intestinal villi and markedly increased the size of adipocytes relative to the normal group (Figures 3B-E). These alterations were reversed in all of the medicine-treated groups. Indeed, the hepatic lipid accumulation, intestinal villi atrophy, and adipocyte enlargement in the HFD-fed animals were more prominently ameliorated by metformin + HCE treatment than by metformin or HCE treatment alone.
Activation of AMPK and GLUT2
Treatment of HFD-fed animals with metformin + HCE, but not metformin or HCE alone, resulted in a significant increase in hepatic gene expression of AMPK. Moreover, treatment of HFD-fed animals with metformin + HCE enhanced the pAMPK/AMPK ratio. However, treatment of HFD-fed animals with metformin or HCE led to less enhancement of the pAMPK/AMPK ratio than their combination. Moreover, exposure of HFD-fed animals to all treatments significantly elevated hepatic gene expression of GLUT2 and markedly increased the hepatic GLUT2 protein level (Figure 4).
Attenuation of Inflammation
As expected, HFD treatment significantly up-regulated gene expression of TLR4 and downstream signaling proteins, such as IL-6 and MCP-1, relative to the normal group. Nevertheless, HCE + metformin treatment inhibited TLR4 and MCP-1 expression more strongly, relative to the HFD group, than did metformin or HCE alone (Figure 5).
Modification of Gut Microbial Distribution
PCoA analysis of the RFLP data revealed distinct gut microbial community characteristics in the normal, HFD, metformin, and HCE groups. More specifically, the distribution pattern of the gut microbial community in the metformin + HCE group was more similar to that of the metformin alone group than to the other groups (Figure 6). Exposure to HFD resulted in a significant increase in the abundance of Gram-negative bacteria in the animals. Additionally, treatment of HFD-fed animals with metformin + HCE, but neither metformin nor HCE alone, significantly decreased the population of universal Gram-negative bacteria. Conversely, exposure of HFD-fed animals to all three medicines significantly reduced the population of Escherichia coli. No significant differences in the abundance of universal Gram-positive bacteria were observed among groups. However, exposure of HFD-fed animals to metformin + HCE, but neither metformin nor HCE alone, significantly decreased the population of Clostridium leptum. In contrast, treatment of HFD-fed animals with metformin or HCE alone resulted in a greater increase in Bacteroides fragilis abundance than in HFD-fed animals treated with metformin + HCE (Figure 7).
DISCUSSION
[FIGURE 3 | Histopathological analysis. On the final experimental day, the liver (A), jejunum (B), and adipose tissue (C) were removed rapidly, after which tissue sections were prepared, stained with oil red O or hematoxylin and eosin, and examined under a light microscope (200× magnification). Calculated length (D) and volume (E) of the intestinal villi are shown. Data were expressed as the means ± SD and evaluated using one-way ANOVA followed by the LSD post-hoc test. ##P < 0.01 compared to the normal group; *P < 0.05 and **P < 0.01 compared to the HFD group (n = 3).]

Although substantial studies have shown that HC and metformin individually could improve metabolic activities (24,25), to the best of our knowledge, this is the first report to evaluate the
impact of combined treatment with metformin and HCE in a dysmetabolic animal model induced by HFD. More specifically, the major goal of this study was to examine whether the edible formulation of the medicinal herb HC can exert synergic effects on the activity of metformin or relieve the side effects of this antidiabetic drug, as well as to elucidate the underlying mechanism of any observed effects. Based on the actual clinical dosage calculated by a conversion formula from FDA guidance (26), we selected 100 mg/kg of metformin, 400 mg/kg of HCE, and half of this dose of metformin (50 mg/kg) together with half the dose of HCE (200 mg/kg) for this investigation. As a representative anti-hyperglycemia agent, metformin significantly ameliorated the FBG in HFD-treated animals. Similarly, HCE treatment significantly reduced the FBG level in HFD-fed animals; however, the combination of metformin and HCE lowered the FBG more effectively than metformin or HCE alone at their higher doses. OGTT, the most widely used procedure for evaluating whole-body glucose tolerance, has often been employed to assess insulin sensitivity (27,28). Indeed, over the last 20 years, various indices of insulin sensitivity/resistance based on OGTT data have been documented (29). In the present study, treatment of HFD-fed animals with metformin and HCE in combination led to a greater improvement in OGTT parameters than higher doses of metformin or HCE alone, suggesting a synergistic beneficial impact of these two therapeutic agents on glucose tolerance as well as insulin sensitivity/resistance. Furthermore, in a previous study using relevant in vitro and in vivo models, we showed that treatment with metformin + HCE was more beneficial than metformin alone in the improvement of glucose uptake, insulin secretion, glucose metabolism, and insulin sensitivity (23). Our results revealed that the levels of quercetin and quercitrin in its glycoside form in HCE were 0.363 and 0.045 mg/g, respectively. These two compounds are active pharmaceutical ingredients of HCE known to have potential antioxidant and anti-inflammatory activities (30). Quercetin shares a common mechanism with metformin in elevating glucose uptake, which is mediated via AMPK activation and upregulation of GLUT expression (31). Our results indicated that HCE assisted metformin in further phosphorylation and gene expression of AMPK. Exposure of HFD-fed animals to all treatments significantly elevated glucose uptake ability via an increase in gene expression of GLUT2 as well as in the hepatic level of the transporter protein. Thus, our in vitro and in vivo findings indicate that the combination of metformin and HCE may ameliorate hyperglycemia and glucose tolerance via cooperative augmentation of glucose uptake. It is worth noting that HCE boosts these effects, likely because of the collaborative action of quercetin and quercitrin rather than other components.
As expected, obesity, fatty liver, and fatty kidney pathophysiological states were induced in animals in response to long-term HFD feeding, as supported by a noteworthy increase in body, fat, liver, and kidney weights. In parallel, histopathological evidence, such as marked hepatic lipid accumulation and an increased adipocyte population in the adipose tissue of HFD-fed animals, also indicated that HFD generates grievous lipid dysmetabolism. As in previous studies (23), treatment with either HCE or metformin ameliorated the symptoms of obesity and fatty liver in the present investigation. Meanwhile, HFD destroyed the morphology of the intestinal villi; however, these effects were obviously ameliorated by HCE and/or metformin treatment. Interestingly, treatment of HFD-fed animals with HCE and metformin in combination at their half doses was found to be more effective at reducing the body weight, liver weight, and fat weight, especially the weight of abdominal and perinephric fats, than treatment with HCE or metformin alone at their original doses. Notably, none of the aforementioned treatments altered the epididymal fat content of HFD-fed animals.
[FIGURE | On the final experimental day, blood was collected from the animals and the serum endotoxin level (A) was determined as described in the Materials and Methods. Stool samples were collected and the abundance of the 16S rRNA gene of the bacterial strains (B-F) was determined as described in the Materials and Methods. The results are expressed as normalized fold values relative to the normal group. Data were expressed as the means ± SD and evaluated using one-way ANOVA followed by the LSD post-hoc test. #P < 0.05 and ##P < 0.01 compared to the normal group; *P < 0.05 and **P < 0.01 compared to the HFD group (n = 7). "ns", not statistically significant.]

As circulating lipid markers, the levels of serum TG, TC, and HDL indicate the status of holistic lipid metabolism.
Chronic consumption of HFD induces dyslipidemia and the development of fatty liver (32). Previous reports demonstrated that treatment of HFD-fed rats with metformin or HC alone depleted the increased serum levels of TG and TC, accompanied by increased serum HDL levels (33,34). Interestingly, in the present study, HCE + metformin treatment restored the dysregulated lipid metabolism more effectively than HCE or metformin alone in HFD-fed animals. Additionally, as expected, the serum levels of both hepatic transaminases AST and ALT, sensitive indicators of various liver injuries including fatty liver, were significantly higher in the HFD group than in the normal group, in keeping with the aberrant histological architecture of the liver in the former group. Overall, our results revealed that treatment of HFD-fed animals with HCE + metformin was more effective than treatment with either compound alone at restoring liver morphology and reducing the serum levels of AST and ALT.
Lipopolysaccharides (LPS), also known as endotoxins, exist in the outer membrane of Gram-negative bacteria, where they trigger endotoxemia (5). Metabolic endotoxemia-induced chronic low-grade inflammation has been deemed a vital hallmark of metabolic diseases such as obesity and type 2 diabetes (35). Previous reports have shown that both metformin and HC possess anti-inflammatory activities (36,37). Furthermore, metformin prevents a number of diseases that are associated with endotoxin insult by Gram-negative bacteria (38-40). In our study, the combination of metformin and HCE attenuated the level of endotoxin in the circulatory system of HFD-fed animals more significantly than either compound alone. This is further supported by our findings of a significant reduction in the abundance of fecal universal Gram-negative bacteria, without any modulation of the population of fecal universal Gram-positive bacteria, in HFD-fed mice treated with metformin + HCE, but not with metformin or HCE alone. The significant suppression of gene expression of both the proinflammatory cytokine IL-6 and the inflammatory chemokine MCP-1, as well as the potent inhibition of TLR4, in HFD-fed mice by metformin + HCE also indicates a feasible mechanism for the cooperative anti-inflammatory action of this combination against endotoxemia.
For the last few years, the relationship between various diseases and gut commensal microbiota has been widely investigated worldwide (41). Gut microbial composition, which can be altered by HFD (42), plays a vital role in the development of metabolic diseases through regulation of host energy homeostasis and redundancy in fat accumulation (43). Therefore, gut microbial modulation is regarded as a feasible strategy for ameliorating metabolic diseases. Indeed, previous studies have revealed that both metformin and medicinal herbs can ameliorate obesity and related endotoxemia, probably via alteration of the distribution of gut microbiota (22,44). According to our RFLP analysis, exposure of HFD-fed animals to metformin + HCE caused a more pronounced modulation of the gut microbial population than other treatments. The more similar profile of gut microbiota between the metformin + HCE group and the metformin alone group indicates that metformin potentially restrained the HCE-induced gut microbiota shift. Interestingly, similar to dietary fiber (45), combination of metformin and HCE notably improved glycemia and reduced Clostridium leptum in HFD-induced obese animals. Therefore, it is conceivable that HCE together with metformin may exert prebiotic effects leading to significant reduction in the population of gut Gram-negative bacteria, including Escherichia coli.
Taken together, our findings suggest that HCE assists metformin in the improvement of obesity, glucose tolerance, hyperglycemia, and hyperlipidemia. This is most likely mediated by reduction of endotoxin and inflammatory stress through regulation of the gut microbial community, particularly Clostridium leptum and Gram-negative bacteria including Escherichia coli. Thus, it is conceivable that combined treatment with Houttuynia cordata and metformin may provide a more efficient strategy for the treatment of patients with metabolic syndrome, particularly T2D and hyperlipidemia. The gut microbiota responsible for contributing to the synergistic effects of Houttuynia cordata on metformin need to be further explored in future studies.
AUTHOR CONTRIBUTIONS
J-HW wrote manuscript. SB edited and rewrote some parts of manuscript. NS analyzed microbiota data. Y-WC and YC involved in study design and data analysis. HK conceived and designed the study.
Brucella suis Infection in Dog Fed Raw Meat, the Netherlands
A Brucella suis biovar 1 infection was diagnosed in a dog without typical exposure risks, but the dog had been fed a raw meat–based diet (hare carcasses imported from Argentina). Track and trace investigations revealed that the most likely source of infection was the dog’s raw meat diet.
After diagnosis confirmation, serum and urine samples were collected from the dog. Serologic testing for B. suis yielded a positive result by microscopic agglutination test (MAT; >120 IU/mL) and rose bengal test (4,5). Serologic test results for B. canis (serum agglutination test <50 IU/mL) (1) and bacteriologic culture of a urine sample were negative. Despite treatment with doxycycline (10 mg/kg once daily for 14 days, starting 3 days after neutering), the dog did not recuperate and, because of the poor prognosis, was euthanized. Postmortem examination of the dog was performed, and samples from kidney, spleen, prostate, liver, and abdominal lymph nodes were tested by PCR (4). Only the prostate yielded a positive result for Brucella spp.
Because brucellosis is notifiable in the Netherlands, the Incidence Crisis Centre of the Netherlands Food and Consumer Product Safety Authority was notified. The Centre started investigations to track potential transmission and trace the source of infection. The owners of the index dog were asked to list all dogs that had had frequent contact with their dog during the previous 2-3 months. From the 5 contact dogs identified, blood samples were collected (twice, 4 weeks apart) for serologic testing (MAT and rose bengal) and urine samples were collected for bacteriologic culture. Blood from 1 contact dog yielded a weakly positive result for B. suis antibodies (MAT 30 IU/mL; rose bengal negative) at both collection times. An acute infection in this dog was considered unlikely because no seroconversion was detected. All other dogs yielded negative serologic results. All urine samples were bacteriologically negative.
The owners of the index dog reported no relevant exposure risks except that the dog was fed a raw meat-based diet (usually commercial mixed raw feed and, in June-July 2016, unprocessed heads of hares, all from the same supplier). Because raw meat consumption has been associated with B. suis infections in dogs (6,7), the feed was considered a potential source of infection and was sampled for testing. Of these samples, 2 yielded a positive PCR result for Brucella spp. and were subsequently cultured. Colonies from 1 sample were confirmed by MALDI-TOF mass spectrometry (with an in-house extended database) to be B. suis biovar 1. One isolate was sequenced and molecularly characterized in silico by MLVA and MLST (ST14) (2-4). The isolates from the index dog and from the batch of hare carcasses showed high similarity (only 1 locus difference in MLVA Ms07: 4 repeats in the dog isolate; 6 repeats in the hare isolate). Similarity with 24 closely related reference isolates from a public database (http://microbesgenotyping.i2bc.parissaclay.fr/) was much lower (Figure).
Conclusions
This B. suis biovar 1 infection in a dog in the Netherlands was linked to its commercial raw meat-based diet. Canine infections with this biovar have been documented in B. suis biovar 1-endemic areas (e.g., Australia and Latin America), mostly associated with exposure to feral pigs or consumption of raw feral pig meat (6,7). In the case we report, the B. suis biovar 1 infection most likely originated from hare carcasses imported from Argentina into the Netherlands. B. suis biovar 1 is endemic to Latin America and has been isolated from hares (7-9). The dog showed clinical signs ≈4 months after it had been fed raw hare heads from a supplier of commercial raw feed. The presence of B. suis biovar 1 in another batch of hare carcasses from the same supplier makes foodborne transmission highly likely. The genotypic similarity between the isolates from the dog and the feed, and the fact that the supplier imported multiple batches from the same slaughter plant in Argentina during the preceding months, confirms the feed as the most probable source of infection. This report illustrates possible implications of the global trade of raw meat. Importation of hare carcasses, whether or not approved for human consumption, from countries outside the European Union into the European Union is legal. Because the aforementioned batches of hare carcasses from Argentina were approved for human and animal consumption, humans and other animals were potentially at risk when handling or consuming meat products from these batches.
Medical microbiologists of the Municipal Health Service assessed the zoonotic risks for all persons who had come in contact with the dog or with samples from the dog or hare carcasses. Five laboratory technicians who had been exposed to pure cultures (before bacterial identification) were given postexposure prophylaxis and tested for seroconversion to B. suis (postexposure weeks 2, 4, 6, and 24) according to national guidelines (10). To our knowledge, no human infections were linked to this case.
B. suis biovar 1 is a potential threat to the pig farming industry because introduction of B. suis into pig herds can have substantial economic consequences (11). A striking detail is that the last B. suis infection in pigs in the Netherlands (1969) was associated with swill feeding of hares imported from Argentina (12).
In response to our findings, preventive measures were implemented (e.g., sampling of imported raw meat and communication of risk to international authorities and raw-feed suppliers). This case stresses the microbiological risks for humans and animals of feeding raw meat-based diets, which has become increasingly popular among pet owners (13). This case also highlights the need for a One Health approach because B. suis biovar 1 is a zoonotic agent and can cause severe infections in humans (14,15).
A rare variant at 11p13 is associated with tuberculosis susceptibility in the Han Chinese population
Genome-wide association studies (GWASs) have yet to be conducted for tuberculosis (TB) susceptibility in China. Two previously identified single nucleotide polymorphisms (SNPs) from tuberculosis GWASs, rs2057178 and rs4331426, were evaluated for TB predisposition. The associations between SNPs and gene expression levels were analyzed using the genomic data and corresponding whole-genome expression of the Han Chinese in Beijing, China. Genotyping was successfully completed for 763 pulmonary TB patients and 763 healthy controls. The T allele of the rare variant rs2057178 was significantly associated with TB predisposition (χ2 = 14.07, P = 0.0002). Meanwhile, the CT genotype of rs2057178 was associated with a decreased risk of TB (adjusted OR = 0.52, 95% CI, 0.34–0.78). The CT genotype of rs2057178 was also associated with decreased expression levels of infection-related gene, suppressor of cytokine signaling 2 (SOCS2), and increased expression levels of v-maf avian musculoaponeurotic fibrosarcoma oncogene homolog B (MAFB). No gene expression levels were found to be associated with the genotype of rs4331426. We found that the rare variant rs2057178 was significantly associated with TB in the Han Chinese population. Moreover, the expression levels of MAFB and SOCS2 correlated with rs2057178 and might be potential candidates for assessing TB susceptibility.
Two SNPs, rs2057178 and rs4331426, which were identified in previous GWASs, were selected to verify their association with TB predisposition 10,11. More importantly, the suggestion of Wilkinson 13 was adopted to test the latent TB infection status of the control group to control for the exposure factor of M. tb infection in this study. Meanwhile, the Gene Expression Omnibus Database was used to explore the relationship between the selected SNPs and whole-genome mRNA expression levels 14,15. Lymphoblastoid cell lines, which carry the complete set of germline genetic material, have been instrumental, in general, as a source of biomolecules and as a system to carry out various immunological and epidemiological studies 16. Dissemination of M. tb in infected persons may be connected with the initiation of adaptive immune responses, which are under strict host genetic control 17. The transcriptome level of lymphoblastoid cell lines may be closely associated with disease-associated genetic variants. Thus, we consider that specific gene expression levels in lymphoblastoid cell lines may be closely regulated by host genomic variants. RNAs extracted from lymphoblastoid cell lines of the 45 unrelated CHB of the HapMap Project were quantified to explore the relationship between the selected SNPs and the expression levels of potential TB susceptibility genes.
Methods
Study subjects. This case-control study was carried out in two designated hospitals with TB control programs in Jiangsu Province, eastern China. One was the TB control center of Danyang County, and the other was the Nanjing Chest Hospital in the province's capital city. Incident TB cases of Han ethnicity registered in these two hospitals from July 1st, 2013 to December 31st, 2014 were recruited for inclusion as cases in this study. All enrolled cases were bacteriologically confirmed by Lowenstein-Jensen (LJ) culture, and M. tb was identified using the p-nitrobenzoic acid (PNB) method. Meanwhile, the healthy controls were recruited from two communities from Danyang County during the same study period. All control candidates underwent X-ray examination. Sputum culture was provided if the potential controls reported having TB-like clinical symptoms. Only subjects with normal X-ray manifestation, negative LJ culture, if tested, and no comorbidity with other infectious diseases (such as HIV/AIDS and hepatitis B virus) were eligible as healthy controls. All of the controls were of Han ethnicity, and they were 1:1 matched to the cases by age (± 5 years) and gender. In total, 764 TB cases and 764 healthy controls were recruited for the study. All experimental protocols in this study were approved by the Institutional Review Board of the Center for Disease Control and Prevention of Jiangsu Province, and written informed consent was obtained from each participant before the study. Additionally, all of the methods in this study were carried out in accordance with the approved guidelines.
Interferon-Gamma Release Assay. The interferon-gamma release assay (QuantiFERON-TB Gold In-Tube [QFT; Qiagen, Valencia, CA, USA]) was used to test the latent TB infection (LTBI) status of the controls. QFT was performed according to the instructions provided by Qiagen 18.
Genotyping of rs2057178 and rs4331426. The restriction fragment length polymorphism (RFLP) method was used for the genotyping. The sequences of the primers used to amplify the PCR fragment of rs2057178 were as follows: 5′-TCC ATT GGC CTG AAC TGG AT-3′ (forward); 5′-TGG CCT CCA GTT CTT TAG CA-3′ (reverse). A 186 base pair PCR fragment was amplified by the primers. The restriction endonuclease enzyme StuI (New England BioLabs, inc., Ipswich, MA, USA) was used to digest the PCR fragment. The presence of the C allele results in two fragments: one fragment of 125 base pairs in length and one fragment of 61 base pairs in length. The presence of the T allele results in a single fragment of 186 base pairs in length. The PCR amplification fragment for rs4331426 was 250 base pairs in length, and the sequences of the amplification primers were as follows: 5′-AAG GGT GTT GTT CTG TTT CTA GA-3′ (forward), 5′-TGT TGC ACC ACC TCT TGT AGA-3′ (reverse). The restriction endonuclease enzyme HhaI (New England BioLabs, inc., Ipswich, MA, USA) was used to digest the PCR fragment. The presence of the G allele results in two fragments: one fragment of 202 base pairs in length and one fragment of 48 base pairs in length. The presence of the A allele results in a single fragment of 250 base pairs in length.
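Genotype assignment from the digestion patterns described above is mechanical: each allele maps to a distinct fragment set. A sketch for rs2057178 (the function name and input format are hypothetical):

```python
def call_rs2057178(fragments_bp):
    """Assign a genotype from the observed StuI digestion fragment sizes (bp).
    Per the protocol above: the C allele is cut into 125 + 61 bp fragments,
    while the T allele remains uncut at 186 bp. Sketch only."""
    has_c = 125 in fragments_bp and 61 in fragments_bp
    has_t = 186 in fragments_bp
    if has_c and has_t:
        return "CT"
    if has_c:
        return "CC"
    if has_t:
        return "TT"
    return "undetermined"

print(call_rs2057178({186}))           # -> TT
print(call_rs2057178({125, 61, 186}))  # -> CT
```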
Genotypic data of rs2057178 and rs4331426 of the 45 Han Chinese in Beijing (CHB) from the HapMap Project and whole-genome expression levels from the Gene Expression Omnibus of
PubMed. The genotypic data of rs2057178 and rs4331426 were extracted from the HapMap Genome Browser Release #28 (phases 1, 2 & 3-merged genotypes and frequencies), and the genotypes of each SNP for the 139 CHB individuals were derived from this database. The DNA samples were prepared from blood samples collected from individuals living in the residential community at Beijing Normal University. All of the samples are from unrelated individuals who identified themselves as having at least three out of four Han Chinese grandparents.
Finally, 45 CHB provided validated genotypic data of the two SNPs. Using mRNAs extracted from lymphoblastoid cell lines of the corresponding 45 CHB, the Gene Expression Omnibus (GEO) of PubMed (accession number GSE6536) was used to analyze the relationship between the two SNPs and the whole-genome mRNA expression levels of the 47,293 genes 14,15 .
Statistics. An unpaired Student t test was applied to numerical variables, whereas differences in categorical variables were tested using the χ 2 test. The Cochran-Armitage trend test was used to compare the genotype dosage among the TB cases and controls. Hardy-Weinberg equilibrium (HWE) was assessed by the Pearson χ 2 test. The strength of associations between genotypes and TB was estimated by odds ratios (OR) with 95% confidence intervals (95% CI) through univariate and multivariate logistic regression analyses adjusted for age and gender. A P value of less than 0.05 was considered statistically significant. The relationships between the genotypes of the SNPs and the gene expression levels of the 45 CHB were analyzed with the online software GEO2R, which is based on the moderated t test; the ordinary t test was also applied to analyze the relationship between genotypes and gene expression. The Benjamini-Hochberg (false discovery rate) procedure was adopted for multiple comparison correction 19 . The significance level for gene expression among different genotype groups was set at P < 0.20 20 .
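As an illustration of the Benjamini-Hochberg step-up procedure named above, the following Python sketch computes BH-adjusted p-values; the example p-values are invented for demonstration and are not the study's data.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.clip(scaled, 0.0, 1.0)
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
```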
Results
A total of 763 TB cases and 763 controls were included in this analysis, as one case and one control failed the genotyping. The mean ages of TB cases and controls were 49.17 ± 17.48 years and 52.03 ± 17.33 years, respectively. The QFT results showed that the positive rate of LTBI among the controls was 23.1% (176/763). As shown in Table 1, the minor allele (T allele) frequency of rs2057178 was 0.048 in the TB cases and 0.027 in the healthy controls (χ 2 = 14.07, P = 0.0002). As the minor allele frequency of rs2057178 was less than 0.05 and the TT genotype was found in only six subjects, the TT genotype showed a non-significant trend toward a decreased risk of TB (adjusted OR = 0.56, 95% CI, 0.10-3.10). However, the CT genotype was significantly associated with a decreased risk of TB (adjusted OR = 0.52, 95% CI, 0.34-0.78). The dominant model (CT + TT vs. CC) demonstrated a protective effect on TB (adjusted OR = 0.52, 95% CI, 0.35-0.78). Based on the T allele frequency (0.048) of rs2057178 and the estimated TB prevalence (51/100000) in this region 21 , the corresponding power for the dominant OR of rs2057178 was 87.77%. For SNP rs4331426, the minor allele (G allele) frequencies in TB cases and controls showed no statistically significant difference (χ 2 = 0.04, P = 0.8390). The Hardy-Weinberg equilibrium test demonstrated that the genotypes of each locus in the controls were in Hardy-Weinberg equilibrium (χ 2 = 2.79, P = 0.095 for rs2057178 and χ 2 = 0.142, P = 0.704 for rs4331426).
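For readers wanting to reproduce this kind of estimate, the sketch below computes a crude odds ratio with a Woolf-type 95% CI from a 2x2 table. The counts shown are purely illustrative, not the actual case-control counts, and the published ORs were additionally adjusted for age and gender by logistic regression, which this sketch does not do.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts only (not the study data): CT+TT carriers vs CC
print(odds_ratio_ci(45, 718, 80, 683))  # ~ (0.54, 0.37, 0.78)
```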
Then, the controls were classified as QFT-positive and QFT-negative to compare the genotype distributions of the two variants among the TB cases, non-infected controls and LTBI controls. The data in Table 2 show that the CT genotype of rs2057178 was significantly associated with a 60% decreased risk of TB (adjusted OR = 0.40, 95% CI, 0.22-0.70) in the comparison with QFT-positive controls. The protective effect of the CT genotype was also observed in the comparison with QFT-negative controls (adjusted OR = 0.57, 95% CI, 0.36-0.90). The Cochran-Armitage trend test showed that the proportion of the CT genotype increased from the QFT-negative controls (8.0%) to the QFT-positive controls (11.4%, P trend = 0.0006). No associations of rs4331426 with TB were found among the TB cases and the subgroups of the controls.
Table 2. The genotype distributions of rs2057178 and rs4331426 between tuberculosis cases and IGRA-positive and IGRA-negative controls. *Adjusted by age and gender.
A linear regression analysis was conducted between the genotypes of the two SNPs and the whole-genome mRNA expression levels in the lymphoblastoid cell lines. The 45 CHB were classified into two groups based on the genotypic data of rs2057178 and rs4331426. For rs2057178, 42 CHB had the CC genotype, two CHB had the CT genotype and one CHB failed the genotyping (no TT genotype was found). For rs4331426, 39 CHB had the AA genotype, four CHB had the AG genotype and two CHB failed the genotyping (no GG genotype was found). GEO2R analyzed the expression levels of 47,293 genes in each CHB subject, and 28 genes showed significantly different expression levels between the two genotype groups of rs2057178 by the moderated t test after multiple comparison correction (Table 3). However, the ordinary t test confirmed the association with SNP rs2057178 for only the first 20 of these genes. Thus, we retained the first 20 genes, which reached significance in both tests. No gene expression levels were found to be associated with the genotypes of rs4331426 after multiple comparison adjustment (data not shown).
Discussion
In this case-control study, the T allele of SNP rs2057178 was significantly associated with a decreased risk of TB in the Han Chinese population, and the expression levels of 28 mRNAs were found to be associated with the genotypes of rs2057178 in the 42 CHB, indicating a potential functional role of rs2057178 in modulating those genes' expression levels.
However, no genotype of rs4331426 was found to be associated with TB susceptibility, and no gene expression levels were associated with any genotype of rs4331426. Thye et al. first reported that rs4331426 was associated with TB susceptibility in a GWAS of an African population in 2010 11 . The HapMap data showed that the G allele frequency of rs4331426 in the Han Chinese population was 0.044, whereas the G allele frequency in the African population was 0.51. Because of the vast difference in G allele frequencies between the African and Asian populations, replication studies may yield different results. Another replication study conducted in the Chinese population by Wang et al. found that the G allele of rs4331426 had an opposite effect on TB susceptibility 22 compared with the results of Thye et al. The G allele frequency was 0.0338 in the control group of Wang's study, while the G allele frequency in our control group was 0.0301, both close to the G allele frequency of 0.044 among the CHB of the HapMap data. Based on our data, we did not find any association between the genotypes of rs4331426 and TB predisposition. Two other association studies conducted in the Chinese population also did not find a relationship between rs4331426 and TB risk 23,24 . As SNP rs4331426 is located in a gene desert region, it is difficult to determine the function of the locus. Generally, it is postulated that other functional loci in linkage with rs4331426 would be the target loci involved in the mechanism of predisposition to TB. In this study, the analysis of the relationship between the genotypes of rs4331426 and the whole-genome expression levels indicated that no gene expression levels were correlated with the genotypes of rs4331426 after multiple comparison adjustment.
A subsequent GWAS by Thye et al. revealed that rs2057178 was associated with TB susceptibility 10 . In Thye's study, the effect of the T allele on TB susceptibility was further verified in the Gambian and Russian populations. However, the association between rs2057178 and the predisposition to TB failed to replicate in the Indonesian population. Another study conducted in an Asian population also failed to replicate the protective effect of the T allele of rs2057178 on TB susceptibility 23 . It is interesting that a recent GWAS conducted in the African population also revealed a relationship between rs2057178 and TB susceptibility 9 . For the populations discussed above, the HapMap data showed that the T allele frequency of rs2057178 varied over a broad range; it was highest in the African population (0.33) and lowest in the Asian population (0.02). Inter-population heterogeneity cannot be ignored in the genetic susceptibility to TB. However, in this study, SNP rs2057178 was found to be significantly associated with TB susceptibility in the Han Chinese population. When the control group was stratified into QFT-positive and QFT-negative groups, the proportion of the CT genotype in the QFT-positive group (11.4%) was higher than that of the QFT-negative group (8%). The Cochran-Armitage trend test revealed that the QFT-positive group with the CT genotype would be more resistant to TB, which suggested that the T allele of rs2057178 might protect people latently infected with TB from developing TB disease. Although the locus was associated with TB susceptibility, the functional role of the locus could not be clearly determined, as the SNP is located in an intergenic region. SNP rs2057178 lies about 45 kb downstream of the Wilms' tumor 1 (WT1) gene, which has been shown to be associated with the occurrence of Wilms' tumor 25 . It was reported that WT1 variants might play a role in altering the effects of interferon-beta on vitamin D 26 , which has been shown to be beneficial in the treatment of TB 27 . Meanwhile, the WT1 gene is involved in the activation of the vitamin D receptor 28 , which is critically important for binding 1,25-dihydroxyvitamin D3 to modulate the immune system in fighting M. tb infection 29 .
Although the HapMap genotypic data of rs2057178 and the whole-genome expression levels of the 42 CHB did not reveal a significant association between the genotypes of rs2057178 and WT1 gene expression, another 20 gene expression levels were found to be significantly associated with rs2057178. More importantly, v-maf avian musculoaponeurotic fibrosarcoma oncogene homolog B (MAFB, Fig. 1) and suppressor of cytokine signaling 2 (SOCS2, Fig. 2) were found to be associated with SNP rs2057178 and have known roles in infectious diseases. According to the fold change (FC) of the gene expression, MAFB was upregulated by 31% while SOCS2 was downregulated by 57%. MAFB was first reported to be a candidate gene for TB susceptibility in a GWAS by Mahasirimongkol et al. 30 , and the expression level of MAFB was found to be higher in patients with active TB compared with healthy controls and previous TB cases 31 . Our study provided evidence that rs2057178 modulates the TB susceptibility gene MAFB in trans, and the mechanism of this modulation needs to be further explored. Simultaneously, the SOCS2 expression level was significantly decreased in CHB carrying the CT genotype of rs2057178 compared with the CC genotype. A previous study showed that SOCS2 was required to mediate the effects of lipoxin 32 , which was thought to negatively regulate protective Th1 responses against mycobacterial infection in vivo 33 .
Even though interferon regulatory factor 5 (IRF5, Fig. 3) was not found to be associated with SNP rs2057178 by the ordinary t test, and the FC showed that IRF5 was only slightly decreased (14%) in carriers of the CT genotype of rs2057178, IRF5 has an important role in the type 1 interferon response to M. tb 34 . There may therefore be a link between the protective effect of the CT genotype of rs2057178 and the decreased expression level of IRF5, and the level of IRF5 needs to be further validated in TB cases and controls. Several limitations need to be noted in this study. First, the QFT method is an indirect method for detecting latent TB infection, and it may not accurately represent the existence of M. tb in vivo because it is unknown how long the immunological reaction to M. tb will last. However, compared with the tuberculin skin test, QFT is more convincing in detecting infection induced by M. tb rather than by other mycobacteria. Second, the transcriptome varies considerably across different cell populations and developmental stages. A previous study revealed different cell-type-associated gene expression profiles of tuberculosis 35 . Some researchers even found that the interferon-inducible genes were predominantly expressed in neutrophils and, to some extent, in monocytes, but not in T cells 36 . Gene expression levels in other cell types should be evaluated to comprehensively reveal the potentially distinct gene expression profiles of different cell populations. Third, given the limited sample size, the associations between SNPs and gene expression levels may be confounded by other factors, and it is worthwhile to compare the actual mRNA expression levels among TB cases and healthy controls with larger samples in future studies.
In conclusion, we replicated the loci of TB GWASs in the Han Chinese population. We found that rs2057178 was significantly associated with TB predisposition and that the expression levels of MAFB and SOCS2 were significantly associated with the genotypes of rs2057178. We assume that MAFB and SOCS2 could be potential candidate genes for TB susceptibility in the Han Chinese population. Further functional studies are required to reveal the mechanisms by which host genetics influence TB susceptibility. Additionally, linear regression analysis of the association between SNP genotypes and gene expression levels offers one approach for exploring the potential functional role of disease predisposition loci.
Determination of Endogenous Bufalin in Serum of Patients With Hepatocellular Carcinoma Based on HPLC-MS/MS
Bufalin is a cardiotonic steroid and a key active ingredient of the Chinese medicine ChanSu. It has significant anti-tumor activity against many malignancies, including hepatocellular carcinoma (HCC). Previous studies have shown that human bodies contain an endogenous bufalin-like substance. This study aimed to confirm whether the endogenous bufalin-like substance is bufalin and to detect differences in endogenous bufalin concentration between HCC patients and controls by high-performance liquid chromatography coupled with tandem mass spectrometry (HPLC-MS/MS). The results confirmed that the endogenous bufalin-like substance is bufalin. In total, 227 serum samples were collected: 54 from HCC patients and 173 from healthy volunteers constituting a control group. Both the test group and the control group contained bufalin in serum, revealing that bufalin is indeed an endogenous substance. The median bufalin concentration was 1.3 nM in HCC patients and 5.7 nM in healthy people (P < 0.0001). These results indicate that human bodies contain endogenous bufalin, and it may be negatively correlated with the incidence of HCC.
INTRODUCTION
Hepatocellular carcinoma (HCC) is the most common type of liver cancer, which is the fourth most common malignancy and the third leading cause of cancer-related deaths in China (1). Surgical resection is regarded as the only radical treatment of HCC. However, the prognosis of patients with HCC remains unsatisfactory due to carcinoma recurrence and a limited response to targeted therapy, chemotherapy, and radiotherapy (2). The small-molecule multikinase inhibitor sorafenib has been the only systemic therapy proven to extend overall survival as a first-line treatment for over 10 years, showing a median improvement of 2.8 months compared with placebo, despite a low response rate of 2% (3). Recently, other small-molecule multikinase inhibitors (e.g., regorafenib and lenvatinib) have been approved for HCC treatment, but the median survival time in patients was <13.6 months (4). Therefore, it is of great significance to explore new effective treatments of HCC.
Traditional Chinese medicines (TCM), including plants, animal parts, and minerals, have drawn a great deal of attention in recent years for their potential in the treatment of HCC and their ability to prevent recurrence after resection of small HCC (5)(6)(7). In TCM practice, Chansu (venom of toad skin) and Chanpi (the skin of toad) have been used in the treatment of tumors, including HCC (8)(9)(10). Bufalin has been recognized as a prominent digoxin-like component and a potential Na+-K+-ATPase inhibitor from Chansu and Chanpi (11,12). Recent studies proved that bufalin has marked anti-tumor activities through its ability to inhibit proliferation, induce apoptosis and autophagy, reverse drug resistance, and inhibit invasion and metastasis of HCC (13)(14)(15)(16)(17)(18)(19)(20)(21). Some scholars have used high-performance liquid chromatography to determine the content of bufalin in Huachansu preparations, including injections, tablets, and capsules. The bufalin in these drugs that enters the human body by intravenous injection or oral administration is called exogenous bufalin.
Previous studies have demonstrated that there may be a new type of steroid hormone in healthy people. Such hormones include bufalin-like substances that were thought to exist only in amphibian toads. Ferrandi et al. ruled out the possibility that these substances were derived from food, confirming that they are endogenous (22)(23)(24). Weidemann et al. measured the content of a digoxin-like substance in the serum of 84 women with breast cancer and found that 73.6% of the patients had significantly lower levels of this substance than healthy people (25,26). In 1995, Numazawa et al. extracted an endogenous bufalin-like substance from normal human plasma by separation, purification, and immunological methods. This endogenous bufalin-like substance was similar in function to exogenous bufalin and could inhibit the growth of a variety of human leukemia cells, which suggested that it could act as an important player in inducing cell differentiation in vivo (27,28). In 2001, Oda et al. (27) determined by means of monoclonal antibodies that the concentration of the bufalin-like component was mostly maintained at 5 nM in serum from 19 healthy volunteers.
This study aimed to confirm whether the endogenous bufalin-like substance is bufalin, to detect endogenous bufalin in the serum of HCC patients and healthy volunteers, and to investigate the potential relationship between endogenous bufalin and the incidence of HCC.
High-performance liquid chromatography with tandem mass spectrometry (HPLC-MS/MS) is considered a powerful analytical tool increasingly applied to endogenous detection of hormones.
In earlier work by our research group, an HPLC-MS/MS method was established to determine the concentration of bufalin in rats after intravenous administration. The methodological results for each biological sample showed that the linearity, precision, and accuracy of the method were satisfactory (29). In this study, HPLC-MS/MS was used for the first time to qualitatively and quantitatively analyze endogenous bufalin in serum and to examine the differences between HCC patients and healthy volunteers.
Chemicals and Reagents
Bufalin (>98% purity) was purchased from Sigma-Aldrich Company (St. Louis, MO, USA). Cinobufagin [>97% purity, internal standard (IS)] was purchased from the National Institute for the Control of Pharmaceuticals and Biological Products of China (Beijing, China). Both were corrected for purity and salt form when weighed or diluted for standard stocks; their chemical structures are shown in Supplementary Figure 1. HPLC-grade methanol and acetonitrile were obtained from Fisher Scientific Company (Pittsburgh, USA). Formic acid was purchased from MREDA Company (Beijing, China). Ultrapure water was produced by A. S. Watson (Guangzhou, China). All other reagents were of analytical grade.
Preparation of Bufalin Stock Solution and Quality Control Samples
The stock solution of bufalin was prepared in methanol-water (5:95, V/V) at a concentration of 1.0 µg·mL−1. All working solutions were freshly prepared by serially diluting the stock solution with mobile phase.
Sample Collection and Preparation
All serum samples were separated from the clotted whole blood by centrifugation at 3,000 rpm for 15 min. A 1 mL aliquot of each serum sample was collected and stored at −80 °C until analysis. Before analysis, all serum samples were simultaneously thawed at room temperature. A 100 µL aliquot was mixed with methanol (300 µL each) by vortexing, and the mixture was left on ice for 5 min. Samples were then centrifuged at 14,000 rpm for 10 min at 4 °C. The supernatant was transferred to an autosampler vial, and 10 µL of the solution was injected into the analytical column for HPLC-MS/MS identification.
Participants
This clinical trial was reviewed and approved by the Changhai Hospital Ethics Committee (CHEC2015-113). Approvals for the study protocol (and any modifications thereof) were obtained from independent ethics committees. Healthy volunteers and HCC patients were recruited for the study carried out in Changhai Hospital between May 2015 and December 2015. All subjects provided signed informed consent.
Men or women older than 18 years old were eligible to participate in the study as healthy volunteers. Healthy volunteers were excluded if they got pregnant or had infectious diseases including hepatitis, malignant tumors, or other major diseases such as liver and kidney failure.
Eligible patients were men or women older than 18 years old with hepatocellular carcinoma diagnosed according to the diagnostic criteria detailed in the Expert Consensus of the Standard Diagnosis of Primary HCC issued in 2011 (30). Patients who had previously used toad-related preparations (including interventional, intravenous, oral, and topical routes), had taken anti-tumor Chinese medicines or hormonal drugs, were critically ill, had other tumors, did not return after HCC surgery, had notifiable infectious diseases (except viral hepatitis), or had mental disorders were excluded from the study.
Method Validation
Method validation including determination of specificity, precision, and accuracy, extraction recovery, matrix effect, and stability was performed according to the Chinese pharmacopeia (Version 2010).
For specificity, responses in spiked and blank samples from at least 6 lots were compared; a false positive rate of <20% was considered acceptable. Extraction recovery and matrix effect were assessed in three replicates at three concentration levels (low, mid, and high) of bufalin (2.0, 10.0, and 50.0 ng/mL). The matrix effect was calculated as the ratio of the peak area in the spiked post-extraction samples to that of the corresponding solvent-substituted samples, and the recovery as the ratio of the peak area in the spiked samples to that of the corresponding spiked post-extraction samples.
Inter-day and intra-day precision and accuracy were assessed in five replicates at three concentration levels (low, mid, and high). Samples were analyzed in three analytical lots on separate days (at least 2 days); an RSD% (relative standard deviation) of no more than 15% for inter-day and intra-day precision was considered satisfactory. For intra-day and inter-day accuracy, an RE% (relative error) within ±15% was considered acceptable.
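The ratios defined in the two preceding paragraphs reduce to simple arithmetic. The Python sketch below (hypothetical helper names; replicate values invented for illustration, not the study's measurements) shows how recovery, matrix effect, RSD%, and RE% would be computed.

```python
import statistics

def recovery_pct(spiked_area, post_extraction_area):
    """Extraction recovery: spiked-before-extraction vs spiked-after."""
    return 100 * spiked_area / post_extraction_area

def matrix_effect_pct(post_extraction_area, neat_solvent_area):
    """Matrix effect: spiked post-extraction vs neat solvent standard."""
    return 100 * post_extraction_area / neat_solvent_area

def rsd_pct(values):                      # precision
    return 100 * statistics.stdev(values) / statistics.mean(values)

def re_pct(values, nominal):              # accuracy
    return 100 * (statistics.mean(values) - nominal) / nominal

# illustrative back-calculated replicates at the 10.0 ng/mL QC level
qc = [9.6, 10.3, 9.9, 10.4, 9.8]
print(f"RSD = {rsd_pct(qc):.1f}%, RE = {re_pct(qc, 10.0):+.1f}%")
```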
Stability, including long term stability (12 h at room temperature), short term stability (6 h in an autosampler), and three freeze-thaw cycle stability, was evaluated using quality control (QC) samples at the same three concentration levels.
Statistical Analysis
All bufalin concentrations were calculated using Mass Hunter software. Data are expressed as mean ± standard deviation (SD), and SPSS Version 18.0 statistical software (SPSS, Inc., Chicago, IL, USA) was used to process all the data. For comparisons, the chi-squared test, Dunnett's test, Wilcoxon signed-rank test, and Mann-Whitney U-test were performed, as appropriate. P < 0.05 was taken to indicate statistical significance.
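As a sketch of the nonparametric group comparison described above, the snippet below applies a Mann-Whitney U-test to two simulated log-normal concentration samples sized like the study groups; the simulated data are assumptions for illustration, not the measured values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated serum bufalin concentrations (ng/mL), medians near the
# reported values of 2.2 (healthy) and 0.5 (HCC); not the study data
healthy = rng.lognormal(mean=np.log(2.2), sigma=0.5, size=173)
hcc = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=54)

u, p = stats.mannwhitneyu(healthy, hcc, alternative="two-sided")
print(f"U = {u:.0f}, P = {p:.2e}")
```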
Specificity
Comparison of the chromatograms of the blank matrix and of water spiked with bufalin and IS showed no significant interference at the relevant retention times, and the bufalin peak observed in the blank serum matrix indicated that human serum contains bufalin. A good baseline separation was achieved for each component (shown in Figure 1).
Assay Precision and Accuracy
Intra-day and inter-day precision and accuracy data for bufalin in human serum samples are shown in Supplementary Table 2. All intra-day and inter-day precision and accuracy values were acceptable, with RSD% <15% and RE% within ±15%.
Extraction Recovery and Matrix Effect
The extraction recoveries of bufalin under the protein precipitation conditions are summarized in Supplementary Table 3. The extraction recoveries and matrix effects of bufalin at the three concentration levels in serum were 91.4-96.8% and 84.6-98.9%, respectively.
Stability
The stability results (Supplementary Table 4) showed that bufalin, at the three concentrations studied, had acceptable stability after three freeze-thaw cycles, at room temperature (20 °C) for 12 h, and in an autosampler (4 °C) for 6 h after protein precipitation.
Subjects' Characteristics
After screening, 173 healthy volunteers and 54 HCC patients were enrolled. The ages of the healthy volunteers ranged from 20 to 92 years (mean ± SD, 46.9 ± 11.8) and those of the HCC patients from 26 to 77 years (mean ± SD, 55.4 ± 11.2). The characteristics of the HCC patients in terms of age, sex, alpha-fetoprotein (AFP) levels, total bilirubin (TB), alanine transaminase (ALT) levels, and Barcelona Clinic Liver Cancer (BCLC) stage are summarized in Table 1.
According to the American Association for the Study of Liver Diseases (AASLD) practice guidelines for the management of HCC, the patients with BCLC stage A (11 cases) received resection and had recurrence after surgery. Patients with BCLC stages B-D (43 cases) received transcatheter arterial chemoembolization.
Healthy Volunteers' Serum Contains Endogenous Bufalin
The median concentration of bufalin in the serum of healthy volunteers was 2.2 ng·mL−1 (5.7 nM), which was consistent with the study by Oda et al. (5 nM) (27). There was a statistically significant difference between males and females (Supplementary Table 5; P = 0.016), and there were also significant differences in serum bufalin concentration among the <40, ≥40-<60, and ≥60 years age groups in healthy volunteers (Supplementary Table 5; P = 0.007).
Serum of Patients With HCC Contained Endogenous Bufalin
The median bufalin concentration in patients with HCC was 0.5 ng·mL−1 (1.3 nM). There was a significant difference in serum bufalin concentrations between healthy volunteers and HCC patients (Table 2; P < 0.0001). In contrast to the healthy group, there was no significant difference in bufalin levels between males and females (Supplementary Table 5; P = 0.45) or among the <40, ≥40-<60, and ≥60 years age groups in patients with HCC (Supplementary Table 5; P = 0.11).
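The ng·mL−1 to nM conversions reported above follow from the molar mass of bufalin (C24H34O4, roughly 386.5 g/mol). A minimal sketch of the arithmetic:

```python
MW_BUFALIN = 386.5  # g/mol for C24H34O4 (approximate)

def ng_per_ml_to_nM(c_ng_ml, mw):
    # ng/mL equals ug/L; dividing by g/mol gives umol/L, x1000 -> nmol/L
    return c_ng_ml / mw * 1000

print(ng_per_ml_to_nM(2.2, MW_BUFALIN))  # ~5.7 nM (healthy volunteers)
print(ng_per_ml_to_nM(0.5, MW_BUFALIN))  # ~1.3 nM (HCC patients)
```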
The Relationship Between Endogenous Bufalin and AFP Levels in HCC Patients
We also investigated the relationship between endogenous bufalin and AFP levels in HCC patients. In the low-AFP (<10 ng/mL) group, the median bufalin concentration was 0.3 ng·mL−1 (0.8 nM). In the elevated-AFP (≥10 ng/mL) group, the median bufalin concentration was 0.5 ng·mL−1 (1.3 nM). There was no significant difference in bufalin levels between the low-AFP and elevated-AFP groups of HCC patients (Table 3; P = 0.56).
DISCUSSION
The endogenous digitalis-like compounds (EDLC), a group of steroids, have been demonstrated to exist in mammals as potential Na+/K+-ATPase inhibitors (31)(32)(33)(34)(35). These compounds are postulated to play an essential role in the pathophysiology of hypertension, preeclampsia, end-stage renal disease, congestive heart failure, and diabetes mellitus (36). Bufalin is a cardiac glycoside steroid, which has anti-carcinoma, anti-inflammatory, and immune-regulating effects, similar to steroid hormones (37). Consistent with previous studies (27), healthy volunteers' serum contains an endogenous bufalin-like substance. The major finding of this study is the confirmation that the endogenous bufalin-like substance is bufalin. The concentration (5.7 nM) is consistent with the previous data (5 nM) in healthy volunteers. We further found that the endogenous bufalin concentration in HCC patients is significantly reduced.
In the healthy volunteer group, the concentration of bufalin in males was significantly higher than in females. Although the endocrine system in both men and women is regulated by the hypothalamic-pituitary-adrenal (HPA) axis, there are still differences in hormone types and secretion levels between males and females. In addition to the currently known glucocorticoids, sex hormones, and mineralocorticoids, there are likely other structurally similar steroidal hormones in the body (22); bufalin may be one of them. Our results suggest that the secretion of bufalin may follow a pattern similar to that of other hormones in the human body. In the <40 age group, the secretion of bufalin gradually increases, reaching a peak in the 40-59 years old group. After 60 years of age, production of bufalin drops rapidly, and its concentration is even lower than in the 20-39 years old cohort.
Low EDLC plasma concentrations may significantly increase an individual's risk of developing cancer (26). Weidemann et al. (38) compared EDLC plasma and cortisol serum concentrations in breast cancer patients (n = 22) and patients with a benign breast disease (n = 10) and found that there was a significant positive correlation between EDLC and cortisol in the controls as well as in the patients (rs = 0.7, P = 0.05). They hypothesized that a lowered EDLC response threshold of tumor cells as compared with normal cells increases the risk of tumorigenesis, especially in individuals with reduced EDLC plasma concentrations after long stress exposure (38). Previous studies have demonstrated that plasma concentrations of EDLC, including bufalin-like substances, in patients with breast cancer or leukemia were reduced compared to healthy people (28). In our study, endogenous bufalin did show significant downregulation in HCC patients as compared with healthy volunteers (P < 0.0001). Therefore, we speculate that the decrease in bufalin concentration may be related to the occurrence and development of certain tumors, although whether endogenous bufalin itself has anti-tumor effects was not covered in this experiment. By detecting endogenous bufalin in healthy volunteers and HCC patients, we are able to propose the following points: (1) The human body produces hormones which promote cell differentiation, induce apoptosis and prevent the occurrence of tumors, and bufalin may be one of them; (2) In the event of viral or aflatoxin infection, emotional depression, fatigue, or another condition, the levels of such hormones may be altered; this alteration could have an important correlation with the occurrence and development of HCC.
AFP, as a tumor marker, has been widely used clinically for the diagnosis and screening of HCC for many years (39). However, it has been recognized that AFP is less sensitive in detecting HCC and that AFP levels usually increase in other liver diseases (chronic hepatitis or cirrhosis) without HCC (40,41). Our study demonstrated that there was no direct relationship between endogenous bufalin concentration and AFP levels in HCC patients. Endogenous bufalin may be used as a supplement to AFP for the clinical diagnosis of HCC in the future, so that more patients can benefit from optimal therapy.
Finally, there are also some limitations in the present study. Our results need to be corroborated by more evidence that bufalin concentration is negatively associated with the incidence of HCC. As this is a single-center, small-size study, it is unclear whether the conclusion can be extended to different stages and histological types of HCC. Future studies of the association of endogenous bufalin with the development of HCC, considering more histological types of HCC with multi-center data collection, should be carried out to confirm the hypothesis. Moreover, patients with HCC should be dynamically tested for changes in endogenous bufalin based on changes in their condition, and animal studies on supplementation with exogenous bufalin to inhibit the occurrence of HCC could be conducted.
CONCLUSION
In this study, we determined by HPLC-MS/MS that healthy volunteers and HCC patients both produce endogenous bufalin, and for the first time confirmed that the bufalin concentration in patients is generally lower than that of healthy people. Bufalin may have a negative correlation with the incidence of HCC.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Changhai Hospital Ethics Committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
YS and HZ contributed to the conception and design of the study. MH, GY, and QL collected patient samples and tested serum samples. QL and YY performed the statistical analysis. All authors participated in the writing of the manuscript and confirmed the final review of the manuscript.
Characterization of Clay Materials from Côte d’Ivoire: Possible Application for the Electrochemical Analysis
The utilization of clay minerals as electrode modifiers is based on their unique structure and properties. In this study, clays from various regions of Côte d'Ivoire were characterized for their potential use in developing electrochemical sensors. The clay samples underwent analysis via X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS) mapping analysis, and Fourier transform infrared spectroscopy (FTIR).
Introduction
For centuries, humans have utilized clay to produce tools in a variety of fields. Clay minerals are commonly employed in pharmacy, cosmetics, ceramics, and the manufacturing of bricks for construction purposes. In the field of chemistry, clay's applications are vast and include the development of functional solid catalysts, adsorbents, ion exchangers, and electrochemical sensors, as reported in several studies (Moraes et al., 2017; Supelano-García et al., 2020; Crapina et al., 2021; Maghe et al., 2012).
Clay minerals are natural materials primarily composed of hydrated aluminum silicates, commonly known as phyllosilicates, with a particle size smaller than 2 μm. Phyllosilicates belong to the silicate family (aluminosilicates) and possess a sheet-like structure. Each sheet consists of a stacking of siliceous (T-layer) and aluminous (O-layer) flat layers that are interconnected (Murray, 2006). According to their structure, clays are classified into various classes or groups, such as smectites, montmorillonite, mica (illite), kaolinite, vermiculite, and more (Shichi & Takagi, 2000). The applications of clays depend on their mineral structure, chemical composition, and physicochemical properties.
However, a carbon paste electrode (CPE) modified with clay is a complex heterogeneous system. The use of such electrodes requires thorough characterization to understand the phenomena of charge and mass transfer in a mixture of solids, liquids, conductors, and insulators. In addition, the origin, composition, and structure of the clay included in the paste can affect the electrode response (Mousty et al., 2004; Gómez et al., 2011). Therefore, thorough characterization of the clay is essential in the elaboration of a clay-modified carbon paste electrode.
In Côte d'Ivoire, there are numerous clay sites that have not been thoroughly studied (Konan et al., 2006). Since 1994, several papers have been published focusing on various aspects and characteristics of different clay sites (Sei et al., 2002;Andji et al., 2009;Yoboue et al., 2014;Coulibaly et al., 2014). Recently, Ivorian natural clays have been utilized for various applications based on their physicochemical properties (Coulibaly et al., 2018;Kpangni et al., 2008;Konan et al., 2007). However, to the best of our knowledge, the utilization of Ivorian clay in modified carbon paste electrodes has not been studied.
In this work, a new carbon paste electrode modified with natural clay from Côte d'Ivoire was developed for the potential detection of organic pollutants. The clay and the clay-modified electrode were thoroughly characterized using microscopic and electrochemical techniques. The analytical performance of the newly developed sensor was evaluated by studying the electrochemical behavior of the ferri/ferrocyanide couple as a redox probe on both the bare and modified electrodes.
Reagents and Materials
All chemicals used in the experiments were of analytical grade. Graphite powder with a particle diameter (ø) of 0.1 mm was purchased from Sigma-Aldrich. Potassium hexacyanoferrate(II) trihydrate (K4[Fe(CN)6]·3H2O) was obtained from Scharlau Chemie S.A. Solutions were prepared using distilled water. Potassium perchlorate (KClO4) with a purity of 99.5% from Merck was used as the supporting electrolyte. Paraffin oil from the firm Dp-pharma was employed. The experiments were conducted at a room temperature of 25 °C.
Two types of electrodes were used: a carbon paste electrode and a modified carbon paste electrode, both serving as working electrodes. A saturated silver/silver-chloride electrode (Ag/AgCl, KClsat) was employed as the reference electrode (RE), and a platinum wire was used as the counter electrode (CE).
Clay samples were collected from three different regions in Côte d'Ivoire, which are located in the south and midwest of the country. The Agban sample (AG) was collected at Bingerville (5.3504° N, 3.8757° W), the Adiaho sample (AD) was collected at Bonoua (5.2712° N, 3.5939° W), and the Zuenoula sample (ZU) was collected at Zuenoula (7.4240° N, 6.0520° W). The samples were collected at a depth of twenty meters for the Adiaho and Zuenoula samples, and fifteen meters for the Agban sample.
Carbon Paste Electrode
The carbon paste electrode (CPE) was prepared by mixing 1 g of graphite powder and 300 μL of paraffin oil with a mortar and pestle until a homogeneous paste was obtained. The paste was then incorporated into the electrode cavity and polished on smooth paper. A platinum wire provided the electrical contact. The electrode surface could be renewed by simple extrusion of a small amount of paste from the tip of the electrode. Before each use, the CPE was rubbed with a piece of paper until a smooth surface was observed.
Clay preparation
Each collected clay sample was carefully dried in the shade for several days to remove any moisture. Once dried, the samples were ground and sieved using a 100 μm sieve to obtain a consistent particle size. In order to perform structural characterization of the clay, the samples underwent a pretreatment process. This involved washing and decantation to obtain a more uniform product for subsequent analytical experiments.
Separation of the different clay granulometric fractions was performed by sedimentation, decantation, centrifugation, and ultracentrifugation according to Stokes' law. This law expresses the limiting sedimentation velocity as a function of the diameter (D) of a solid particle of specific mass γs in a liquid of given specific mass and viscosity (Nasri et al., 2016). In principle, sedimentometry is a test that complements particle size analysis by sieving for fractions below 80 μm. Its purpose is the determination of sand, silt, and clay content.
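A minimal sketch of the Stokes' law calculation underlying the sedimentation step, assuming a kaolinite-like particle density of about 2600 kg/m3 and water at room temperature (values assumed for illustration, not taken from the study):

```python
def stokes_velocity(d_m, rho_s, rho_l=1000.0, eta=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere per Stokes' law:
    v = (rho_s - rho_l) * g * D^2 / (18 * eta)."""
    return (rho_s - rho_l) * g * d_m**2 / (18 * eta)

# a 2 um clay particle settling in water
v = stokes_velocity(2e-6, 2600.0)
print(f"v = {v:.2e} m/s; time to settle 10 cm = {0.10 / v / 3600:.1f} h")
```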
Modified Electrode Preparation
The modified carbon paste electrode (MCPE) was prepared using a similar procedure. First, weighed amounts of clay and graphite powder were thoroughly mixed with paraffin oil. Various proportions of clay to graphite powder (w(clay)/w(G)) were used until a uniformly mixed paste was obtained. The resulting paste was then packed tightly into the electrode's cavity through vigorous packing. To maintain the electrode's performance, the surface could be renewed easily by extruding a small amount of paste from the electrode's tip. Additionally, prior to each use, the electrode surface was carefully rubbed with a piece of paper until a smooth surface was achieved.
Characterization of Clay
The density of the clay samples was determined using a pycnometer. Various techniques were employed to evaluate the properties of the clays, including physico-chemical composition studies, X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). These techniques provide comprehensive information about the physico-chemical properties, mineral composition, morphological characteristics, and chemical composition of the clay samples.
X-ray diffraction (XRD) was utilized to analyze the structure and composition of the clay minerals. The analysis was performed using Cu Kα radiation (λ = 1.5406 Å) on a Brüker D8 Advance diffractometer, with a generator current of 40 mA and a voltage of 40 kV. Data were collected over a 2θ angular range of 5-60° with a step size of 0.02° s−1.
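Peak positions from such a scan convert to interplanar spacings via Bragg's law with the Cu Kα wavelength given above. A short sketch (the example 2θ value is illustrative, chosen near the basal reflection typical of kaolinite):

```python
import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha as stated above

def d_spacing(two_theta_deg, lam=WAVELENGTH):
    """Interplanar spacing from Bragg's law, n*lam = 2*d*sin(theta),
    taking n = 1."""
    return lam / (2 * math.sin(math.radians(two_theta_deg / 2)))

# a reflection near 2-theta = 12.4 deg corresponds to d ~ 7.1 A,
# close to the (001) basal spacing typical of kaolinite
print(f"d = {d_spacing(12.36):.2f} A")
```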
Scanning electron microscopy (SEM) was employed for morphological characterization of the clay samples. The measurements were conducted using a Hirox SH 4000 model SEM in Europe.
To identify the functional groups present in the clay samples, Fourier-transform infrared (FTIR) spectroscopy was performed using a Thermo Fisher Scientific Nicolet 380 spectrometer. Transmission mode was used: pellets were prepared from a mixture of potassium bromide (KBr) and a small quantity (a few mg) of the ground sample. Acquisitions were made between 4000 and 400 cm−1, using 64 scans at a resolution of 2 cm−1.
Chemical analysis of the clay samples was conducted using X-ray fluorescence (XRF). A Thermo Fisher Scientific Energy Dispersive (EDXRF) apparatus was utilized, with a maximum voltage of 40 kV and a maximum energy of 40 keV.
Characterization of the Electrode
Cyclic voltammetric measurements were performed using a computer-controlled potentiostat (PalmSens, Ecochemie, Netherlands) and PSTrace software. A conventional three-electrode cell (10 mL) consisting of a carbon paste electrode (CPE) as the working electrode, Ag/AgCl,KClsat as the reference electrode, and a Pt wire as the counter electrode was used. The solution pH was measured using a digital pH meter (Hanna Instruments, USA). Each individual experiment was performed at least three times and the results were averaged.
Characterization of Clays
It is widely recognized that natural clay often contains impurities and exhibits heterogeneity. Table 1 presents the particle size distribution of the three clay samples under investigation. These findings indicate that the collected samples consist of a mixture of particles of varying sizes. However, it is important to note that clay minerals, as natural materials, typically have particle sizes smaller than 2 μm (Uddin, 2017). Accordingly, the clay content, defined as the fraction below 2 μm, is approximately 29.66% for sample AD, 57.67% for sample AG, and approximately 18% for sample ZU. Table 1, which displays the different fractions of the materials, shows that the studied samples comprise a combination of sand, silt, and clay components. However, these results alone do not allow us to conclusively determine the nature of the samples. The observed density values suggest that the samples likely consist of a mixture of different types of clay minerals. To gain a more accurate understanding of the composition, further analysis and characterization techniques are required.
Thus, while the obtained density values are consistent with certain clay mineral structures, additional investigations are needed to precisely identify the nature and composition of the clay minerals in the samples.
The X-ray diffraction (XRD) patterns of the samples are presented in Figure 1 for samples AD, AG, and ZU. Several authors, including Suzuki et al. and Caillère et al. (Suzuki et al., 2013; Caillère et al., 1982), have described kaolinite particles at the nanoscale as having a laminar structure of pseudo-hexagonal platelets. The morphology depicted in Figure 2 aligns with the typical appearance of kaolin, characterized by heterogeneous layered sheets of varying sizes. Additionally, a significant quantity of quartz, which occurs as an impurity in kaolin, is observed.
These findings are consistent with previous studies conducted on clay samples from Côte d'Ivoire by various authors (Konan et al., 2007;Meite et al., 2020;Kouadio et al., 2022;Konan et al., 2010). These studies consistently reported that the clay samples predominantly consist of kaolinite. Therefore, the SEM results support the XRD findings and confirm the prevalence of kaolinite as the primary mineral phase present in all the studied clays. The elemental analysis conducted with Energy Dispersive X-ray Spectroscopy (EDS) provides valuable information about the chemical composition of the clay samples and highlights the variations between the three samples. Figure 3 presents the percentage composition of various components, including Si, Al, Ti, Fe, K, Mg, Na, O, Ca, and C. These results indicate the relative abundance of these elements in the clay samples. The presence of silicon (Si) and aluminum (Al) is expected, as they are major constituents of clay minerals. The percentage of Si and Al can provide insights into the type and composition of clay minerals present in the samples.
Other elements such as titanium (Ti), iron (Fe), potassium (K), magnesium (Mg), sodium (Na), oxygen (O), calcium (Ca), and carbon (C) may also be detected in varying amounts, depending on the specific characteristics of the clay samples and their sources. The presence and quantities of these elements contribute to understanding the overall chemical composition and potential impurities in the clay samples.
The differences observed in the percentage composition of the elements among the three clay samples suggest variations in their chemical makeup. These differences may be attributed to variations in the mineralogical composition, geological origin, and processing of the clay samples.
This analysis of the clay minerals reveals some noteworthy differences between the samples. In the case of ZU, the presence of calcium (Ca) is detected, while it is not detected in the AD and AG clay minerals. This indicates a variation in the elemental composition of the three samples.
Furthermore, the analysis shows that the Al/Si atomic concentration ratio is close to 1.0 for AG, which aligns with the expected chemical composition of kaolinite (Al2Si2O5(OH)4). This suggests that the AG sample predominantly consists of kaolinite. On the other hand, AD and ZU exhibit a higher silicon (Si) content compared to aluminum (Al). This higher Si content may be attributed to the presence of a significant concentration of quartz in these clay minerals. Quartz is known for its high Si content and is commonly found as an impurity in clay samples.
Additionally, the analysis reveals the presence of several components in small quantities and varying proportions, including titanium (Ti), potassium (K), magnesium (Mg), sodium (Na), and calcium (Ca). These elements may be present as minor constituents or impurities in the clay samples. The elemental composition of the three clay samples was analyzed, and the oxide content is presented in Table 2.
The results confirm that the samples are predominantly composed of aluminum oxide (Al2O3) and silicon oxide (SiO2), indicating their classification as aluminosilicates (Nirmala & Viruthagiri, 2015). The SiO2/Al2O3 ratios for AG, ZU, and AD are 1.72, 3.34, and 3.87, respectively. These ratios are higher than the typical ratio of 1.18 found in kaolinites (Lecomte-Nana et al., 2013). The elevated SiO2/Al2O3 ratios suggest the possible presence of free quartz in a significant proportion within the clay fraction (Gourouza et al., 2013). The excess silica observed can be attributed to the presence of quartz and/or 2:1 clay minerals such as illite and muscovite (Coulibaly et al., 2020).
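The reference ratio of 1.18 cited above can be checked directly from the ideal kaolinite formula Al2Si2O5(OH)4, which contains two SiO2 units per Al2O3 unit:

```python
M_SiO2, M_Al2O3 = 60.08, 101.96  # molar masses, g/mol

# ideal kaolinite: 2 SiO2 per Al2O3, so the mass ratio is
print(2 * M_SiO2 / M_Al2O3)  # ~1.18, the value quoted for kaolinites
```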
Additionally, the clay samples exhibit a relatively large quantity of iron oxide, indicating the presence of ferric phases. The presence of other oxides such as CaO, MgO, Na2O, K2O, and TiO2 is also observed, albeit in low percentages.
These findings further support the aluminosilicate nature of the clay samples, highlighting the elevated SiO 2 /Al 2 O 3 ratios, the potential presence of free quartz, and the occurrence of ferric phases. The detailed oxide composition presented in Table 2 provides a comprehensive overview of the elemental composition of the clay samples. The semi-quantitative mineralogical composition of the three clay samples was determined by summing the values obtained from the qualitative mineralogical analysis. This allows for the calculation of the total chemical composition, as shown in Table 3. The calculation is based on the relationship developed by Njopwouo (). T(a) = Σ Mi x Pi (a), as referenced in Kouadio et al. (2022), where T(a) represents the content (mass %) of oxide a in the clay, Mi represents the content of mineral i (%) in the clay, and Pi(a) represents the proportion of oxide a in mineral.
By applying this relationship, the semi-quantitative mineralogical composition of the three samples was determined, providing valuable insights into their overall chemical composition. These results are found to be consistent with the findings published in the literature by Meite et al. (2020) and Kouakou et al. (2022), further validating the accuracy and reliability of the analysis conducted in this study. Table 3. Semi-quantitative mineralogical compositions of clays
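A minimal sketch of the T(a) = Σi Mi × Pi(a) calculation; the mineral contents and the illite oxide proportions below are illustrative placeholders rather than the study's values, while the kaolinite proportions follow from its ideal formula.

```python
# M_i: mineral contents of a hypothetical clay, in mass %
minerals = {"kaolinite": 60.0, "quartz": 25.0, "illite": 10.0}
# P_i(a): proportion of oxide a within each pure mineral
p_oxide = {
    "SiO2":  {"kaolinite": 0.465, "quartz": 1.000, "illite": 0.453},
    "Al2O3": {"kaolinite": 0.395, "quartz": 0.000, "illite": 0.384},
}

def total_oxide(oxide):
    """T(a), the oxide content (mass %) implied by the mineral mixture."""
    return sum(m_i * p_oxide[oxide][name] for name, m_i in minerals.items())

for ox in p_oxide:
    print(ox, round(total_oxide(ox), 1), "wt %")  # SiO2 ~57.4, Al2O3 ~27.5
```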
Electrochemical Characterization of Clay Modified Electrode
The ferri/ferrocyanide couple is an ideal electrochemical probe that is widely used on different electrodes for the study of surfaces (Promph et al., 2015; Vogt et al., 2016). In order to compare the electrochemical properties of the bare carbon paste electrode, the clay paste electrode, and the clay-modified carbon paste electrode, the ferricyanide/ferrocyanide redox couple was chosen.
Preliminary studies were performed on clay paste electrodes (ClPE) in potassium hexacyanoferrate solution at pH 7. Figure 5 shows a typical cyclic voltammogram (CV) recorded at a ClPE between −0.4 V and 0.6 V.
The absence of redox peaks on the clay paste electrodes (ClPE) in the cyclic voltammetry (CV) analysis indicates that the ferri/ferrocyanide redox couple is inactive on the ClPE. This suggests either a lack of reduction of [Fe(CN)6]3− or the absence of subsequent re-oxidation of [Fe(CN)6]4− at the ClPE surface. It is possible that the ClPE electrode itself is inactive in this electrochemical system.
The conductivity of clays has been reported to depend on various factors, such as heating temperature, clay pore structure, and soil mineralogy. These factors can influence the electrochemical behavior and conductivity of the clay paste electrodes. Previous studies have highlighted the relationship between clay conductivity and these factors. Therefore, the lack of activity observed on the ClPE electrodes could be attributed to their specific conductivity properties influenced by these factors.
The electrochemical behavior of the clay-modified electrodes was tested by cyclic voltammetry (CV) in potassium hexacyanoferrate solution. Figure 6 shows the responses obtained between 0.0 V and +0.7 V (vs. Ag/AgCl) on the CPE and on CPEs modified with 5% of clay AD (ADCPE), 5% of clay AG (AGCPE), and 5% of clay ZU (ZUCPE), respectively, in a solution containing 1 mM [Fe(CN)6]3−/4− (1:1) at 20 mV/s. The ferri/ferrocyanide system behaved differently on the CPE and the MCPEs. The oxidation and reduction peak currents increased for the modified CPEs versus the unmodified electrode (see Table 4). A slight decrease in the anodic peak potential (Epa) was observed for ADCPE, while it increased for AGCPE and ZUCPE. The ratio of the anodic to cathodic peak currents (Ipa/Ipc) is greater than 1, indicating a quasi-reversible system, since an ideal reversible process is characterized by an Ipa/Ipc ratio approaching unity. Peak intensities on the CPE were markedly lower than those on the MCPEs, indicative of hindered diffusion at the CPE and an improved electron transfer rate at the MCPEs. Another characteristic parameter is the peak potential separation, ΔEp.
The theoretical value of ΔEp for a reversible process is 60 mV, independent of scan rate. Here, with a separation of 100 mV, ΔEp indicates slow electron transfer kinetics due to several factors, such as uncompensated solution resistance and non-linear diffusion (Xiao et al., 2014; Aristov & Habekost, 2015). The electrochemical behavior of the clay-modified electrodes, namely ADCPE, AGCPE, and ZUCPE, was investigated by altering the clay content in the carbon paste. It has been reported in previous studies (Lubna et al., 2022; Salih et al., 2017; Eslami et al., 2014) that the clay content in carbon paste can significantly influence the voltammetric responses and sensor properties. In this study, the proportion of clay in the carbon paste was varied from 3% to 20% to assess its impact on the electrochemical performance. Figure 7 illustrates the responses of the redox probe as a function of the clay percentage in the carbon paste. By systematically altering the clay content, the aim was to understand the relationship between clay concentration and the resulting electrochemical behavior. The behavior of the ferri/ferrocyanide redox couple varied with the concentration of clay in the carbon paste. The highest anodic current peaks were obtained when the clay content was 5% for AG and ZU, and 10% for AD. Beyond these concentrations, however, a significant decrease in peak currents was observed. This decrease can be attributed to a reduction in the carbon content of the electrode material. As shown in Figure 5, the clays used in this study appear to be non-conductive materials. It has been reported that a high clay content can lead to saturation of the electrode surface and consequently reduce the oxidation current of the reactant (Salih et al., 2017).
Nevertheless, the properties of these clays can be modified to enhance the sensitivity of the sensors. To avoid potential saturation of the electrode, it was determined that a clay concentration of 5% for AG and ZU, and 10% for AD, would be suitable for future studies. These concentrations strike a balance between maximizing the anodic current peaks and maintaining the electrode's performance.
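The ΔEp and Ipa/Ipc diagnostics used above can be extracted from raw cyclic voltammetry data with a few lines of code. The sketch below is a hypothetical helper operating on synthetic data, not the measured voltammograms, and it assumes the convention that anodic (oxidation) current is positive:

```python
import numpy as np

def cv_diagnostics(E, I):
    """Peak separation (mV) and peak-current ratio from one CV cycle.
    E, I: potential (V) and current arrays; anodic peak taken as the
    maximum current, cathodic peak as the minimum."""
    ia, ic = np.argmax(I), np.argmin(I)
    dEp_mV = abs(E[ia] - E[ic]) * 1000
    ratio = abs(I[ia] / I[ic])
    return dEp_mV, ratio

# synthetic example: peaks at 0.30 V / 0.20 V, currents +12 / -10 uA
E = np.array([0.00, 0.20, 0.30, 0.70])
I = np.array([0.0, -10.0, 12.0, 0.0])
print(cv_diagnostics(E, I))  # (100.0 mV, 1.2) -> quasi-reversible
```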
Conclusion
This study investigated the potential of composite materials based on clay and carbon paste for the development of electrochemical sensors. The three clays obtained from Côte d'Ivoire (Adiaho, Agban, & Zuenoula) were thoroughly characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), EDS-mapping analysis, and Fourier transform infrared spectroscopy (FTIR). The characterization results consistently identified kaolinite as the dominant component in all three clay samples.
The electrochemical behavior of the ferri/ferrocyanide redox couple was then evaluated on clay-modified carbon paste electrodes using cyclic voltammetry. The results demonstrated that the modification of the electrodes with varying amounts of clay in the carbon paste had a significant impact on the response of the analyte.
Based on these findings, the modified electrodes utilizing Ivorian clays hold potential for future electroanalytical investigations. These clay-based composite materials offer new opportunities for the development of sensitive and reliable electrochemical sensors. Further research and optimization of the clay content in the carbon paste are warranted to fully harness the capabilities of these modified electrodes and to explore their application in various electroanalytical techniques.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
Next-generation sequencing study reveals the broader variant spectrum of hereditary spastic paraplegia and related phenotypes
Hereditary spastic paraplegias (HSPs) are clinically and genetically heterogeneous neurodegenerative disorders. Numerous genes linked to HSPs, overlapping phenotypes between HSP subtypes and other neurodegenerative disorders and the HSPs’ dual mode of inheritance (both dominant and recessive) make the genetic diagnosis of HSPs complex and difficult. Out of the original HSP cohort comprising 306 index cases (familial and isolated) who had been tested according to “traditional workflow/guidelines” by Multiplex Ligation-dependent Probe Amplification (MLPA) and Sanger sequencing, 30 unrelated patients (all familial cases) with unsolved genetic diagnoses were tested using next-generation sequencing (NGS). One hundred thirty-two genes associated with spastic paraplegias, hereditary ataxias and related movement disorders were analysed using the Illumina TruSight™ One Sequencing Panel. The targeted NGS data showed pathogenic variants, likely pathogenic variants and those of uncertain significance (VUS) in the following genes: SPAST (spastin, SPG4), ATL1 (atlastin 1, SPG3), WASHC5 (SPG8), KIF5A (SPG10), KIF1A (SPG30), SPG11 (spatacsin), CYP27A1, SETX and ITPR1. Out of the nine genes mentioned above, three have not been directly associated with the HSP phenotype to date. Considering the phenotypic overlap and joint cellular pathways of the HSP, spinocerebellar ataxia (SCA) and amyotrophic lateral sclerosis (ALS) genes, our findings provide further evidence that common genetic testing may improve the diagnostics of movement disorders with a spectrum of ataxia-spasticity signs. Electronic supplementary material The online version of this article (10.1007/s10048-019-00565-6) contains supplementary material, which is available to authorized users.
Introduction
Hereditary spastic paraplegias (HSPs) comprise a group of genetic disorders resulting from neurodegeneration of the corticospinal tracts. The HSPs' main clinical feature is progressive spasticity and weakness of the lower limbs. HSP is classified as a pure form when symptoms are limited to progressive spasticity and weakness of the lower limbs, bladder dysfunction and mild somatosensory deficits. In the case of any additional neurological symptoms, a complicated HSP form is recognised. To date, over 70 different SPG loci have been identified, and over 60 corresponding genes have been investigated [1][2][3]. All modes of HSP inheritance have been described: autosomal dominant (ADHSP), autosomal recessive (ARHSP), X-linked (XLHSP) and, less frequently, mitochondrial. Among 20 different ADHSP subtypes, SPG4 is the most common one, accounting for approximately 40% of the cases. The frequency of other ADHSP subtypes ranges from 1% to 10%. The main ARHSPs identified to date are SPG5, SPG7, SPG11 and SPG15 [4].
According to population studies, the proportion of families without genetic diagnosis ranged from 45% to 67% in the ADHSP and from 71% to 82% in the ARHSP groups [5]. Recently reported dual-transmission of some HSP subtypes makes their molecular characterisation even more complicated. Due to the HSP heterogeneity, next-generation sequencing (NGS) became a highly useful screening tool in HSP investigations and differential diagnosis. Broad NGS studies have revealed a clinical and genetic overlap between different HSP subtypes, as well as between other neurodegenerative disorders, such as hereditary spinocerebellar ataxias (SCAs), amyotrophic lateral sclerosis (ALS) and neuropathies [6].
In the present study, we analysed familial HSP patients through spastic-ataxia spectrum disease genes according to the approach suggested by Synofzik et al. [6].
Materials and methods
The study was approved by the Bioethics Committee of the Institute of Psychiatry and Neurology in Warsaw. All of the participants provided informed consent.
In the presented study, we aimed to test a group of 30 unrelated hereditary spastic paraplegia patients using the targeted Illumina TruSight™ One Sequencing Panel (Illumina). The original HSP cohort comprised 306 probands in which Multiplex Ligation-dependent Probe Amplification (MLPA) and Sanger sequencing had been performed to diagnose five HSP subtypes (SPG3, SPG4, SPG6, SPG11 and SPG31) in 62 families [7][8][9][10]. Out of the remaining 244 probands, 30 familial HSP index cases were selected for NGS testing. The major inclusion criteria comprised: (i) spastic paraplegia as the main clinical feature, (ii) positive family history and (iii) availability of a DNA sample for more than one affected family member and/or potential carriers. The family histories suggested AD inheritance in 18 and AR inheritance in 12 families. In three probands, SPG11 deletions and a duplication had been identified in one allele, and NGS sequencing focused on searching for the second causative variant to confirm AR SPG11. One identified carrier of a SPAST pathogenic variant was used as a positive control in the NGS screening (Fig. 1).
All studied patients were evaluated according to the Fink criteria for HSP [11]. The HSP pure form was observed in 16 probands, and the complicated form was observed in 14 probands.
The bioinformatically analysed 132 ataxia-spasticity panel genes involved the following: (1) 37 genes directly linked with HSP: 12 ADHSP, 22 ARHSP and 3 XLHSP; (2) 25 genes linked with hereditary ataxias: 12 AD spinocerebellar ataxia (SCA), 11 ARSCA (SCAR) and four spastic-ataxia (SPAX) genes; (3) three leucodystrophy genes; (4) 14 amyotrophic lateral sclerosis (ALS) genes; (5) 16 genes linked with different neuropathies, including five hereditary motor neuropathies (HMN) and six Charcot-Marie-Tooth neuropathies; and (6) 42 genes for other complex movement or multisystem disorders with prominent gait disturbances (Supplementary Table 1). Because certain genes are linked with more than one phenotype, the numbers of genes and conditions are not equal. The classification and interpretation of the identified variants were performed according to the recommendations of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) (Table 1) [12]. Variants selected through filtering were confirmed by Sanger sequencing in the probands and their family members.
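As an illustration of the kind of filtering step described above, the sketch below shows how variants might be screened against a gene panel and a population-frequency cutoff before manual ACMG/AMP classification. The 0.005 ExAC frequency threshold used later in the paper is retained, but the data structure, field names and function names are hypothetical, not the authors' actual pipeline.

```python
# Minimal variant-prefiltering sketch (hypothetical field names).
PANEL_GENES = {"SPAST", "ATL1", "WASHC5", "KIF5A", "KIF1A",
               "SPG11", "CYP27A1", "SETX", "ITPR1"}  # subset of the 132-gene panel
MAX_POP_FREQ = 0.005  # ExAC allele-frequency cutoff used in the study

def passes_prefilter(variant: dict) -> bool:
    """Keep rare variants in panel genes for downstream ACMG/AMP review."""
    in_panel = variant["gene"] in PANEL_GENES
    # A missing ExAC entry (None) is treated as an unobserved, i.e. rare, allele.
    is_rare = (variant.get("exac_af") or 0.0) < MAX_POP_FREQ
    # Exclude synonymous calls; splice-site and coding changes are retained.
    is_relevant = variant["consequence"] != "synonymous"
    return in_panel and is_rare and is_relevant

variants = [
    {"gene": "SPAST", "exac_af": None,   "consequence": "stop_gained"},
    {"gene": "SPG11", "exac_af": 0.0001, "consequence": "frameshift"},
    {"gene": "TTN",   "exac_af": 0.2,    "consequence": "missense"},
]
candidates = [v for v in variants if passes_prefilter(v)]  # SPAST and SPG11 survive
```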
Results
Approximately 97% of the NGS TruSight™ One output reads were aligned. A mean of 16,752,119 reads per sample was obtained, with a mean fragment length of 259 base pairs. An average of 91.2% of targeted reads passed the Q-score threshold, and 88% of targets were covered at least 30 times.
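The coverage metrics quoted here can be reproduced from per-base depth data in a few lines; the sketch below assumes depths have already been extracted (e.g. from a BAM file) into an array, which is an assumption about upstream tooling rather than part of the original analysis.

```python
import numpy as np

def coverage_summary(per_base_depth, min_depth=30):
    """Mean depth and fraction of targeted bases covered >= min_depth."""
    d = np.asarray(per_base_depth)
    return {
        "mean_depth": float(d.mean()),
        "fraction_at_least_30x": float((d >= min_depth).mean()),
    }

# Toy example: 22 of these 25 bases reach 30x, i.e. 0.88, the style of metric reported.
depths = [35, 40, 31, 30, 33, 50, 29, 12, 60, 45, 38, 30, 34,
          41, 30, 36, 30, 31, 5, 33, 37, 44, 30, 32, 39]
print(coverage_summary(depths))
```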
In this study, we identified 18 pathogenic and likely pathogenic variants in 16 spastic paraplegia probands, as well as six variants of uncertain significance (Table 2; Table 3). The most frequent HSP genetic types, SPG4 and SPG3, were identified in five probands: SPAST (SPG4) pathogenic variants in three probands and ATL1 (SPG3) in two probands. In four of the mentioned probands, the previous study had involved only MLPA screening, and one of the SPG4 patients was known to carry a pathogenic variant. In 11 out of 22 individuals in whom SPAST, ATL1 and REEP1 single nucleotide variants (SNV) had previously been excluded by Sanger sequencing, we identified three HSP subtypes with AD transmission, WASHC5 (SPG8), KIF5A (SPG10) and KIF1A (SPG30), and SPG11 as the only ARHSP. Moreover, in one case, a homozygous variant in the CYP27A1 gene, known to be pathogenic in cerebrotendinous xanthomatosis (CTX), was identified. Among the six variants of uncertain significance, we detected WASHC5, KIF5A, SETX and ITPR1 variants in families with an AD mode of inheritance. We were not able to detect any variant corresponding to the phenotype in 27% of the examined cohort (four cases with AD and four with AR mode of inheritance).
Autosomal dominant HSPs
ATL1 (SPG3)

One known pathogenic ATL1 variant, c.715C>T (p.Arg239Cys), and one novel variant, c.1064A>C (p.Asn355Thr), were identified in two HSP probands. Both patients presented pure HSP, with onset in the first and second years of life, respectively.
SPAST (SPG4)
In the SPAST gene, variants were identified in three probands: a missense (c.1378C>T, p.Arg460Cys), a nonsense (c.1597G>T, p.Glu533*) and a splice-site (c.1617-2A>G) mutation. SPAST c.1378C>T is a known pathogenic variant at a moderately conserved nucleotide and highly conserved amino acid position. The two other SPAST variants (c.1597G>T and c.1617-2A>G) have not been previously described, either in patient cohorts or in population studies. The ages at onset in the three SPG4 patients were 35, 42 and 28 years, respectively. Two probands had pure HSP, while in the one with the nonsense variant, a complicated HSP phenotype with neuropathy as an additional symptom was observed.
WASHC5 (SPG8)
The WASHC5 missense variants were found in two HSP probands and in at least one affected individual within each of their families. Patient SPG0302 was found to carry WASHC5 c.1859T>C (p.Val620Ala). The female proband and her affected sibling (aged 39 and 37 years at onset, respectively) had frontal cortex atrophy. Moreover, in patient SPG0302, white matter and thoracic spinal cord lesions were present. The male proband SPG0403, with WASHC5 c.647C>T (p.Pro216Leu), presented a complex HSP with dysarthria. His brother, with the same variant, had intellectual disability in addition to HSP (although he had verified birth asphyxia, a possible cause of the brain damage).
KIF5A (SPG10)
Two KIF5A variants were identified in two probands. One of them, KIF5A c.484C>T (p.Arg162Trp), localised in the motor domain of the kinesin protein, was present in a proband with pure HSP and onset of symptoms at age 41. The second KIF5A variant, c.1402C>T (p.Arg468Trp), which alters the stalk part of the protein, was identified in a female proband with pyramidal signs, ataxia, dysdiadochokinesia, bradykinesia, titubation, ophthalmoparesis and dementia, in whom the first symptoms appeared after age 40. MRI showed marked atrophy of the cerebellum and cerebral cortex (predominantly temporal and parietal).
KIF1A (SPG30)
A heterozygous KIF1A c.962G>A (p.Gly321Asp) variant, localised in the motor domain of the protein, was found in an AD pedigree. The female proband and her mother had childhood-onset complex hereditary spastic paraplegia and cognitive decline.
Autosomal recessive HSPs
The NGS analysis enabled us to identify ten different SPG11 variants (with an ExAC frequency below 0.005) in seven probands. In all of them, variants were present in both alleles. In the SPG1002 proband, three different variants were detected. In three other patients with single variants found in this study (SPG0103, SPG0301 and SPG0702), microrearrangements (a duplication of exons 28-29 and deletions of exons 9-11 and of exon 29, respectively) were localised in trans. Five of the variants were frameshift deletions or insertions, two were in-frame deletions, one was a splice-site change, one was nonsense and one was a missense change. In SPG1002, the missense variant was identified in cis with the frameshift one. All seven SPG11 probands had a complicated form of HSP and showed cognitive impairment: dysarthria 5/7; dysphagia 2/7; nystagmus 3/7; ophthalmoparesis (horizontal gaze) 2/7; cervical dystonia 1/7 and mild ataxia 3/7. In neuroimaging, performed in six probands, a thin corpus callosum was found in 5/6, periventricular white matter lesions in 4/6, and mild cortical and subcortical atrophy in 2/6. EMG provided evidence of polyneuropathy in three out of five examined probands.
CYP27A1 (CTX)
In one proband, NGS revealed a homozygous variant, c.379C>T (p.Arg127Trp), in the CYP27A1 gene, known to be pathogenic in cerebrotendinous xanthomatosis (CTX). Carrier status (heterozygosity) was confirmed in the proband's father. The patient, with pyramidal and cerebellar signs, petit mal seizures, bilateral cataract and retinal degeneration in the right eye, was classified as a case of complicated HSP. Mild cortical and subcortical atrophy was present on brain MRI. Furthermore, vitamin B12 deficiency and nephrolithiasis were documented in the patient's medical history. To date, neither xanthomas nor other signs characteristic of CTX have been observed in the patient.
Genes with uncertain significance in HSPs
Three different variants of uncertain significance were identified in the ADHSP patients.

ITPR1

ITPR1 c.2687C>T (p.Ala896Val) was identified in seven individuals from two unrelated families with pure HSP. In the SPG1203 proband, two different ITPR1 variants (c.3412A>G, p.Met1138Val and c.6304G>T, p.Ala2102Ser) were found. This female patient, with weakness and spasticity of the lower limbs, balance disturbances and polyneuropathy, had onset of symptoms at age 50. Genetic testing of her relatives was impossible; however, her family history may indicate AD inheritance. All the pedigrees and the localisation of the identified ITPR1 variants are shown in Fig. 2.
SETX (ALS4/SCAR1)
One SETX missense variant of uncertain significance, c.7417C>G (p.Leu2473Val), was detected in a 2-year-old proband and his father, who has been affected since childhood. The father's neurological examination showed upper and lower limb weakness and spasticity with increased tendon reflexes and clonus.
Discussion
Due to their heterogeneity, the increasing number of involved genes and the variety of phenotypes (disorders) linked to single genes, the classification and diagnostics of HSPs are challenging. To overcome these difficulties, different NGS approaches have been applied in a number of studies, mostly targeted sequencing but also whole exome sequencing [13][14][15][16][17]. In the present study, we analysed 30 HSP index cases using the Illumina TruSight™ One NGS sequencing panel. Bioinformatic analysis was performed for 132 out of the 4813 genes included in the panel. This methodology allowed us to identify 24 variants in nine genes. Pathogenic and likely pathogenic variants were identified in 16 probands. In five of them, in whom only the MLPA technique had been used to search for microrearrangements, we identified three SPAST and two ATL1 variants by NGS. This is evidence that MLPA alone is not sufficient for SPG4 testing; nonetheless, together with NGS it is now standard in the diagnostic approach. Less frequent HSP subtypes were identified in the group of patients in whom SPAST, ATL1 and REEP1 pathogenic variants had been previously excluded. Two different variants each were identified in the WASHC5 (SPG8, OMIM #603563, previously known as KIAA0196) and KIF5A (SPG10, OMIM #604187) genes, both regarded as rare HSP subtypes (approximate frequency 1-2%) that may be associated with pure or complicated HSP phenotypes [4]. The WASHC5 c.1859T>C (p.Val620Ala) variant has previously been detected in pure HSP patients but has not been reported in either ExAC or the 1000 Genomes projects [18]. The KIF5A c.484C>T (p.Arg162Trp) variant has been reported in a three-generation pedigree with spastic paraplegia as a primary symptom [19].
KIF1A is a neuron-specific motor protein involved in intracellular transport along microtubules. Variants in the KIF1A gene have been described in patients with AR hereditary sensory and autonomic neuropathy type 2 (HSAN2, OMIM #614213) and subtype 30 of hereditary spastic paraplegia (SPG30, OMIM #610357) [20][21][22][23]. De novo KIF1A variants with AD transmission have been identified in multiple cases with childhood onset of intellectual disability and a number of neurological signs, such as progressive spastic paraplegia, optic nerve atrophy, peripheral neuropathy and cerebral and/or cerebellar atrophy, and have been variously classified as autosomal dominant mental retardation type 9 (MRD9, OMIM #614255) [24][25][26][27][28] or complicated hereditary spastic paraplegia [25,29,30]. Finally, KIF1A mutations have been found in pure HSP subjects [30][31][32]. In the present study, a dominant KIF1A variant localised in the motor domain of the protein was found in a female proband and her mother with childhood-onset complex HSP and cognitive decline. Twenty-three out of 25 heterozygous KIF1A variants (including the present study) alter the highly conserved motor domain of the protein. However, only two out of four variants responsible for recessive HSP, and none of the variants identified in HSAN2, are localised in the motor domain. This suggests that the localisation of a KIF1A variant within the gene is not adequate evidence for the mode of phenotype transmission. Moreover, the latest data indicate that dominant conditions, including ADHSP, linked with KIF1A variants are more frequent than recessive ones. SPG11 (OMIM #604360) is the only recessive HSP subtype identified in this study. Contrary to other studies, we have not detected any affected patient with CYP7B1 (SPG5, OMIM #270800) or SPG7 (SPG7, OMIM #607259) mutations, or any mutation carriers [13][14][15][16][17][32]. Moreover, variants in ZFYVE26 (SPG15, OMIM #270700) occurring with a frequency below 0.005 in the ExAC database were not detected in our cohort.
In addition to the recessive variants, in one case we detected a homozygous variant in the CYP27A1 gene. Pathogenic variants in the cytochrome P450 CYP27A1 gene result in the production of a defective sterol 27-hydroxylase enzyme and have been linked with cerebrotendinous xanthomatosis (CTX, OMIM #213700). Clinical manifestations of CTX include neurological dysfunction (e.g. cerebellar ataxia, pyramidal signs and seizures), cataracts, tendon xanthomas and chronic diarrhoea [33,34]. However, some atypical presentations may occur. For example, Verrips et al. described seven patients with CYP27A1 variants and a slowly progressive spinal cord syndrome classified as spinal xanthomatosis. Moreover, similar to our case, all of the patients presented pyramidal signs, and in five of them, spinal cord white matter lesions were demonstrated. Six out of seven cases studied by Verrips et al. did not have tendon xanthomas [35]. Patients with CYP27A1 variants affected with pure and complicated HSP but without xanthomas were also described by Burguez et al. and Nicholls et al. [15,36]. These findings suggest that patients with CYP27A1 variants may present with an HSP phenotype without the typical signs of CTX.

Variants of uncertain significance within the ITPR1 and SETX genes were detected in four cases. ITPR1 variants have already been described as possibly corresponding to four different phenotypes: multi-exon deletions in the ITPR1 gene to spinocerebellar ataxia type 15 (SCA15, OMIM #606658), single nucleotide variants to spinocerebellar ataxia type 29 (SCA29, OMIM #117360) or ataxic cerebral palsy (ataxic CP), and truncating and splice-site variants to Gillespie syndrome (GLSP, OMIM #206700), which also presents ataxia and balance disturbances [37][38][39][40][41][42]. ITPR1 encodes a homotetrameric calcium channel protein that modulates intracellular calcium signalling. Its primary structure consists of three major domains [43]. In this study, the ITPR1 c.2687C>T (p.Ala896Val) variant was detected in two unrelated families and segregates with a pure HSP phenotype in seven cases. We also identified two different ITPR1 variants in a patient with pyramidal signs and polyneuropathy. Although the three described variants are reported in the ExAC database, their frequency is lower than 0.005 (Table 2b). The relatively mild HSP symptoms in our patients were first observed in adulthood, i.e. the age of onset was not optimal for control studies. The segregation data in the families with c.2687C>T (p.Ala896Val) support its pathogenicity; however, according to the ACMG/AMP guidelines, this is not adequate evidence to classify it as a pathogenic/likely pathogenic variant. The variants identified in the present study are localised in the coupling domain, and this comprises the first report assigning ITPR1 variants to HSP.
A variant classified as of uncertain significance was also found in the senataxin gene. SETX variants are responsible for AR spinocerebellar ataxia (SCAR1) and AD amyotrophic lateral sclerosis (ALS4) [44][45][46][47][48]. A heterozygous variant of the SETX gene has also been described as a cause of hereditary motor neuropathy (dHMN) [49,50]. Taniguchi et al. reported a family with a SETX variant misdiagnosed as hereditary spastic paraplegia [51]. That variant (SETX c.8C>T) is localised at the N-terminal end of the protein, unlike SETX c.7417C>G (p.Leu2473Val), identified in our study in a father and son with pure HSP, which alters the C-terminal part of the protein. It is localised in the region of the helicase domain, where known pathogenic variants correlated with the ALS4 and SCAR1 phenotypes have been reported as well [52].
Although the molecular investigation of rare heterogeneous disorders, such as hereditary spastic paraplegias, will soon be based on massive NGS technology, the assessment of their molecular aetiology still remains challenging. Two major difficulties to face at present are: (1) interpretation of the detected variants (pathogenic vs benign) and (2) classification of the identified variant and its association with a specific disease. Unified and reliable sequence-variant interpretation guidelines were developed by the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Each rare or novel variant should be evaluated in the context of the patient's and family's history, and physical examination and previous differential diagnosis should be performed. Such clinical evaluation is supportive during the process of classifying variants as disease-causing, incidental or benign findings [12]. Variants classified as pathogenic, but also as likely pathogenic, have sufficient evidence to be used in genetic counselling and clinical decision-making. In contrast, variants of uncertain significance need further investigation that may result in their reclassification [12].
Implementing NGS technologies in clinical practice also brings problems with genotype-phenotype correlation and variant classification. The classification systems were designed according to a predominant disease phenotype and/or a mode of inheritance. Currently, various genes correspond to numerous complex phenotypes, such as spinocerebellar ataxias, spastic paraplegias and amyotrophic lateral sclerosis (SPG7, SPG11, PNPLA6, KIF1C and SETX), and some may be inherited as both autosomal dominant and recessive traits (KIF1A, REEP2, AFG3L2, SETX). In clinical practice, it becomes problematic whether an identified gene variant should be classified as corresponding to a new phenotype or whether it "fits" the patient's genotype consistent with the previous clinical diagnosis. Synofzik et al. proposed introducing an unbiased modular phenotyping approach to replace the ataxia and hereditary spastic paraplegia classifications [6]. In parallel, we also recommend simultaneously testing and analysing the HSP, SCA and ALS genes due to their overlapping phenotypes and the common cellular pathways involved.
In this paper, we report 24 different variants in nine genes in HSP patients. Seven of the variants are novel. They were classified according to the ACMG/AMP guidelines: nine as pathogenic, nine as likely pathogenic and six as of uncertain significance. Among the nine analysed genes, five have already been known to be directly associated with HSP. NGS testing revealed genetic variants in 22 out of 30 tested families. Together with the previous study [8], seven different HSP subtypes have been diagnosed in the Polish group of patients to date. Our data also support the evidence that KIF1A (SPG30) variants are more frequent in patients with ADHSP, although they were primarily identified in ARHSP. Moreover, we believe that CYP27A1 variants should be considered in complicated HSP phenotype cases as well.
The overlapping phenotypes of HSP, SCA and ALS are associated with multiple genes; therefore, NGS-based screening provides the most comprehensive genetic diagnostic approach. The most challenging task, the interpretation of novel variants, requires the entire body of clinical and molecular evidence available in the whole studied group of patients sharing a defined spectrum of clinical signs.
Effects of ginsenoside Rb1 on spinal cord ischemia-reperfusion injury in rats
Background The aim of this study was to evaluate the effects of different doses of ginsenoside Rb1 (GRb1) pretreatment on spinal cord ischemia-reperfusion injury (SCII) in rats and to explore the potential mechanisms involving the expression of survivin protein after the intervention. Methods A total of 90 healthy adult Sprague-Dawley (SD) rats were randomly divided into six groups: sham-operated (n = 15), SCII model (n = 15), and GRb1-treated groups (n = 60). The GRb1-treated group was divided into four subgroups: 10 mg/kg, 20 mg/kg, 40 mg/kg, and 80 mg/kg (n = 15 each). The corresponding dose of GRb1 was injected intraperitoneally 30 min before the operation and every day after the operation. Forty-eight hours after model establishment, the neurological function of the hind limbs was measured with the Basso, Beattie, and Bresnahan (BBB) scale. The superoxide dismutase (SOD) and malondialdehyde (MDA) levels in serum and spinal cord tissue were detected. The expression of survivin protein was observed by immunofluorescence staining. HE and TUNEL staining were used to observe neural cell injury and apoptosis, respectively, in the spinal cord of rats with SCII. Results The intervention with different doses of GRb1 increased SOD activity and decreased MDA content in serum and spinal cord tissue, increased survivin protein expression, and decreased neuronal apoptosis. The effect was dose-dependent, but there was no significant change between 40 mg/kg and 80 mg/kg. Conclusions GRb1 could reduce the cell apoptosis induced by SCII by inhibiting oxidative stress. It can also inhibit apoptosis by promoting the expression of survivin protein. Ginsenoside Rb1 had a dose-dependent protective effect on SCII in the dose range of 10 mg/kg to 40 mg/kg.
Introduction
Spinal cord ischemia-reperfusion injury (SCII) refers to the phenomenon in which neural function is not recovered after a period of ischemia followed by reperfusion but is instead further damaged and aggravated [1,2]. SCII can occur in a variety of conditions, such as spinal trauma and vascular surgery. Spinal cord tissue is nerve tissue, and to date most studies still consider ischemia-reperfusion injury of nerve tissue a condition without effective treatment. At present, the general approach is that once SCII is found, immediate treatment with large doses of methylprednisolone and neurotrophic drugs can promote the recovery of damaged spinal cord function and prevent further aggravation of the injury [3,4].
GRb1 belongs to the panaxadiol group (Fig. 1a) and has anti-oxidative, anti-apoptotic and anti-tumor effects [5][6][7][8]. GRb1 has protective effects on ischemic brain injury, similar to those of neurotrophic factors [9]. It acts as a free radical scavenger when free radicals are produced in large quantities after cerebral ischemia. Previous studies have shown that GRb1 can significantly alleviate ischemia-reperfusion injury of the kidney and brain, but there are few reports on SCII [10]. This study was designed to explore the relationship between survivin protein and spinal cord ischemia-reperfusion injury, and the possible mechanism of different doses of GRb1 in its treatment.

Fig. 1 a Chemical structure of GRb1. b Neurological functional assessment measured by BBB. In general, during the whole experimental period, the GRb1-treated groups showed significantly better function in comparison with the SCII group; in the sham group, neurological function was essentially unaffected. c Timeline of GRb1 injection (i.p.) during the ischemia-reperfusion procedure. d-e Oxidative stress marker results for MDA in serum and spinal cord tissue: although the MDA values of each subgroup in the drug group were higher than in the sham group, the values at each time node were lower than those in the SCII group. f-g Oxidative stress marker results for SOD in serum and spinal cord tissue: the SOD values of each subgroup in the drug group were lower than in the sham group, but the values at all time nodes were higher than those in the SCII group. (n = 15 in each group; *P < 0.05 compared with the sham group; #P < 0.05, GRb1 vs. SCII group)
Materials and methods

Animals
Ninety adult male Sprague-Dawley rats (weighing 200 to 230 g) were purchased from the Experimental Animal Center of Xi'an Jiaotong University. All animals were housed in polypropylene cages (room temperature 22 ± 3 °C, relative humidity 50 ± 15%, 12-h light/dark cycle) and allowed free access to standard rodent chow and water. None of the animals had any neurological abnormality before anesthesia and surgery. All animal experiment procedures were conducted in accordance with the policies of our university and the NIH Guidelines for the Care and Use of Laboratory Animals (Xi'an Jiaotong University approval for research involving animals No. XJTULAC2018-454).
Chemicals and reagents
GRb1 powder of high purity (> 98% of the total weight) was purchased from Shanghai Winherb Medical Science Co. Ltd. (Shanghai, China). The GRb1 solution was prepared and injected at a concentration of 10 mg/ml.
Mouse monoclonal survivin antibody was obtained from Santa Cruz (CA, 95060, USA). Terminal dUTP nick-end labeling (TUNEL) assay was purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). Superoxide dismutase activity test kit and Malondialdehyde test kit were obtained from Nanjing Jiancheng Bioengineering Institute (Nanjing, China).
Experiment protocols
Ninety male SD rats were randomly divided into six groups of 15 rats each: a sham group, an SCII model group, and four treatment groups (n = 15 per group). In the SCII model group, rats underwent spinal cord ischemia-reperfusion and were injected with an equal volume of saline. In the GRb1-treated groups, rats received GRb1 30 min before ischemia-reperfusion and the same dose every day thereafter until being sacrificed. The GRb1-treated group was divided into four subgroups: 10 mg/kg, 20 mg/kg, 40 mg/kg, and 80 mg/kg.
SCII model
SCII was induced as described previously by Hwang with slight modifications [11]. In brief, after overnight fasting with unrestricted access to water, the animals were anesthetized intraperitoneally with chloral hydrate (intraperitoneal injection (i.p.), 400 mg/kg) and placed in the supine position. The left carotid artery was exposed and cannulated with a catheter connecting to an external blood reservoir to monitor the proximal arterial pressure and maintain it at 80 mmHg during the aortic occlusion. The tail artery was cannulated with a polyethylene catheter for intraarterial infusion of heparin and the monitoring of distal arterial pressure. To induce spinal cord ischemia, the left femoral artery was exposed and a balloon-tipped 2F catheter (Edwards Life Science, Shanghai, China) was inserted into the descending thoracic aorta (10-12 cm from the site of insertion). The catheter balloon was inflated with 0.05 ml saline and maintained for 10 min. After ischemia, the balloon was deflated and the drained blood was reinfused slowly through the carotid artery catheter. Then, all catheters were removed and the wounds were closed (Fig. 1c). The rats were returned to their cages and allowed to recover. Protamine sulfate (4 mg, i.p.) was given to neutralize excessive heparin, and the bladder content was expelled via manual compression as required.
Rats behavioral test
Forty-eight hours after SCII, motor function was examined using the Basso, Beattie, and Bresnahan (BBB) motor rating scale [12]. The judges were blinded to the experimental conditions and were familiar with the BBB score.
Specimen collection
Rats in each group were sacrificed after the behavioral test. After chloral hydrate (400 mg/kg, i.p.) anesthesia took effect, 3 ml of blood was collected from the heart. After standing at room temperature for 2 h, the samples were centrifuged at 1000 rpm for 10 min; the resulting serum was stored in a refrigerator at − 20 °C until testing. In some rats, about 1 cm of lumbar spinal cord tissue was immediately removed for SOD and MDA assays. Tissue specimens were washed with frozen saline, immediately prepared as homogenates (1:10), and centrifuged (14,000 r/min, 4 °C, 15 min); the supernatant was collected, immediately frozen in liquid nitrogen, and stored at − 70 °C until further processing. In the remaining rats, the ascending aorta was opened and the right atrium exposed. After rapid lavage with 250 ml of ice-cold saline, 4% paraformaldehyde was slowly perfused for about 30 min, until clear paraformaldehyde flowed from the right atrium and the lungs turned white. The lumbar spinal cord tissue was immediately removed into 4 °C paraformaldehyde and fixed for 24 h, then rinsed in clean water for 24 h with three water changes, and finally embedded in paraffin. Serial sections 10 μm thick were cut within 2 mm of the lumbar-3 spinal cord segment.
Tissue and serum MDA assay
In the spinal cord and serum, lipid peroxidation was determined as the malondialdehyde (MDA) concentration. After SCII, MDA levels in the damaged spinal cord were measured based on the reaction with thiobarbituric acid using MDA assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) per the manufacturer's instructions.
Using N-methyl-2-phenylindole as substrate, intracellular MDA concentration was calculated by measuring maximal absorbance at 532 nm on a spectrophotometer. MDA concentrations were expressed as nanomoles per milligram of spinal cord protein (nmol/mg prot) in spinal cord homogenate and nanomoles per milliliter (nmol/ml) in serum.
Tissue and serum SOD analysis

Superoxide dismutase (SOD) activity in serum and spinal cord homogenate was measured with xanthine oxidase assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) per the manufacturer's instructions. The principle of this method is the inhibition of nitro-blue tetrazolium (NBT) reduction by a xanthine-xanthine oxidase system acting as a superoxide generator. SOD was calculated by measuring the maximal absorbance at 550 nm on a spectrophotometer. One unit of SOD was defined as the enzyme amount causing 50% inhibition of NBT reduction. SOD activity was expressed as units per milligram of spinal cord protein (U/mg prot) in spinal cord homogenate and units per milliliter (U/ml) in serum.
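As a worked illustration of the unit definitions above, the following sketch computes SOD units from the 50%-inhibition definition and converts an MDA absorbance reading via a linear standard curve. Exact kit formulas differ between manufacturers, so the equations here are a generic assumption, not the Nanjing Jiancheng kits' proprietary calculations.

```python
def sod_units(a_control: float, a_sample: float) -> float:
    """SOD units from inhibition of NBT reduction.

    One unit is defined as the enzyme amount causing 50% inhibition,
    so the fractional inhibition divided by 0.5 gives the unit count.
    """
    inhibition = (a_control - a_sample) / a_control
    return inhibition / 0.5

def mda_nmol_per_ml(a_532: float, slope: float, intercept: float) -> float:
    """MDA concentration from a TBA standard curve A = slope*C + intercept."""
    return (a_532 - intercept) / slope

# Example: 40% inhibition corresponds to 0.8 U in the reaction volume.
print(sod_units(a_control=0.50, a_sample=0.30))                 # -> 0.8
print(mda_nmol_per_ml(a_532=0.42, slope=0.05, intercept=0.02))  # -> 8.0 nmol/ml
```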
H&E staining
Spinal cord tissues embedded in paraffin from the specimen collection step were used. Representative 5 μm sections of paraffin-embedded tissue were cut, deparaffinized with xylene and graded ethanol, then mounted on slides for hematoxylin and eosin (H&E) staining. Damaged neurons were identified by the loss of Nissl substance, cavitation around the nucleus, and the presence of pyknotic homogeneous nuclei.
Immunohistochemistry staining
The tissue sections were deparaffinized and blocked in 2% normal horse serum for 2 h and were then incubated with a primary survivin antibody at 4°C overnight. After three washes in phosphate-buffered saline, the corresponding secondary antibody was added, followed by incubation for 2 h at room temperature. The sections were then rinsed and placed in the avidin-peroxidase conjugate solution for 2 h. Horseradish peroxidase was detected with 0.05% diaminobenzidine. After that, the sections were counterstained with hematoxylin, dehydrated, and mounted. Appropriate sections were used as positive and negative controls. Five microscopic fields of positive cells were chosen for imaging.
Survivin western blot analysis
To confirm survivin expression, we performed western blotting. Procedures were performed per the manufacturer's instructions (BCA Protein Assay Kit). Briefly, frozen homogenate tissues from the specimen collection step were thawed. Protein lysate buffer was added to 20 mg of frozen tissue (2.0 μL of protein lysate per 1.0 mg of tissue). Each sample, containing 20 μg of total protein, was loaded onto a 12% separating gel and transferred to membranes. After washing once with Tris-buffered saline with Tween 20 (TBST) for 5 min and blocking in 5% skim milk in TBST for 4 h at room temperature, the membranes were incubated overnight at 4 °C with primary antibodies: anti-survivin (1:100, Santa Cruz, CA) and anti-β-actin (1:2000). After washing, membranes were incubated with horseradish peroxidase (HRP)-labeled goat anti-mouse secondary antibody (1:10000; Jackson) for 4 h at room temperature. The color was developed using enhanced chemiluminescence, and images were analyzed with LabWorks gradation image analysis software. Each experiment was repeated three times.
Terminal deoxynucleotidyl transferase-mediated nick end labeling assay

The TUNEL assay was performed to detect apoptosis. In brief, sections fixed in paraformaldehyde were quenched in 3% hydrogen peroxide for 10 min, washed with phosphate-buffered saline, and incubated with terminal deoxynucleotidyl transferase enzyme for 1 h at 37 °C. The reaction was stopped by washing in phosphate-buffered saline, and anti-digoxin-peroxidase was then added to the slides. After another wash, the sections were incubated with diaminobenzidine for 10 min at room temperature, followed by counter-staining with hematoxylin and dehydration. Distilled water, as a substitute for terminal deoxynucleotidyl transferase, was used as a negative control. The number of TUNEL-positive cells in five microscopic fields was counted.
Statistical evaluation
All data were reported as mean ± standard deviation. All statistical analyses were conducted using statistical analysis software (SPSS, version 18.0). In experiments involving histology or immunohistochemistry, all figures shown are representative of at least three experiments performed on different experimental days. For statistical evaluation, one-way analysis of variance (ANOVA) was employed. An independent-samples t test was used to compare BBB score, SOD and MDA values and the numbers of survivin- and TUNEL-positive cells between the reperfusion groups and the GRb1 treatment groups. Pearson correlation analysis was also performed for selected indices. A P value < 0.05 was considered statistically significant.
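For readers who prefer an open-source route, the same three tests (one-way ANOVA, independent-samples t test, Pearson correlation) can be run in Python with SciPy; the group arrays below are invented toy data standing in for the measured values, and this is a sketch of the analysis, not the authors' SPSS workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy stand-ins for measured values (e.g., BBB scores per group, n = 15).
sham = rng.normal(21, 0.5, 15)
scii = rng.normal(8, 1.5, 15)
grb1_40 = rng.normal(14, 1.5, 15)

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(sham, scii, grb1_40)

# Independent-samples t test: SCII vs a GRb1 dose group.
t_stat, p_t = stats.ttest_ind(scii, grb1_40)

# Pearson correlation, e.g., survivin-positive vs TUNEL-positive cell counts.
survivin = rng.normal(50, 10, 15)
tunel = 80 - 0.6 * survivin + rng.normal(0, 5, 15)  # built-in negative trend
r, p_r = stats.pearsonr(survivin, tunel)

print(f"ANOVA p={p_anova:.3g}, t-test p={p_t:.3g}, Pearson r={r:.2f}")
```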
Results

Behavioral test results

The results (Fig. 1b) showed that the BBB score of rats was significantly decreased after ischemia-reperfusion injury. Although the values of each subgroup in the drug group were lower than those in the sham group, they were higher than those in the ischemia-reperfusion group (p < 0.05). The score increased with the dose of drug intervention, but the score at 80 mg/kg changed little compared with 40 mg/kg.
Effects of GRb1 on oxidative stress after SCII

Determination of MDA content in serum and spinal cord tissues

As can be seen from Fig. 1d, e, the content of MDA in serum and spinal cord increased significantly after ischemia-reperfusion injury. Although the MDA content of each subgroup in the drug group was higher than in the sham group, it was decreased in each subgroup compared with the ischemia-reperfusion group (p < 0.05). The MDA content decreased correspondingly with increasing intervention dose, but changed little at 80 mg/kg compared with 40 mg/kg.
Determination of SOD activity in serum and spinal cord tissues
As can be seen from Fig. 1f, g, the SOD activity of serum and spinal cord decreased significantly after ischemia-reperfusion injury. The SOD activity of each subgroup in the drug group was lower than that in the sham group, but was increased in each subgroup compared with the ischemia-reperfusion group (p < 0.05). SOD activity increased with increasing intervention dose; however, it changed little at 80 mg/kg compared with 40 mg/kg.
Effect of GRb1 on morphological changes after SCII
HE staining showed that the Nissl bodies in the cytoplasm of normal spinal cord neurons were clearly visible (Fig. 2a). After ischemia-reperfusion, the neurons shrank, the Nissl bodies were blurred, and their number decreased (Fig. 2b). The spinal cord tissue showed vacuolar degeneration of different sizes. This type of injury was found in all subgroups of the drug group, but the degree of damage was markedly milder than in the ischemia-reperfusion group (Fig. 2c-f).
Effects of GRb1 on expression of survivin after SCII
As can be seen from Fig. 3, there was a significant increase in survivin protein-positive cells in the anterior horn of the spinal cord in rats after ischemia-reperfusion injury (Fig. 3b). Compared with the ischemia-reperfusion group, the number of survivin-positive cells increased in each subgroup of the drug group (Fig. 3c-f). Survivin protein-positive cells increased correspondingly with the intervention dose; however, their number did not change much at 80 mg/kg compared with 40 mg/kg. From Fig. 3g, it can be seen that survivin protein was not expressed in spinal cord neurons before ischemia and was expressed immediately after ischemia-reperfusion. The expression level of survivin protein increased with the intervention dose.
Effect of GRb1 on apoptosis after SCII
As can be seen from Fig. 4, there was also a significant increase in TUNEL-positive cells in the anterior horn of the spinal cord in rats after ischemia-reperfusion injury (Fig. 4b). The number of TUNEL-positive cells in each subgroup of the drug group was significantly lower than that in the ischemia-reperfusion group (p < 0.05). The number of TUNEL-positive cells decreased significantly with increasing intervention dose, but did not change significantly between 40 mg/kg and 80 mg/kg. The correlation coefficient between survivin-positive and TUNEL-positive cells was − 0.601.
Discussion
In this study, we established a model of spinal cord ischemia-reperfusion injury in rats. Through the intervention of GRb1, the detection of SOD, MDA, survivin protein expression, and apoptosis was used to reveal part of the protective mechanism of GRb1 against spinal cord ischemia-reperfusion injury. Oxidative stress is an important mechanism in spinal cord ischemia-reperfusion [13]. Nitric oxide, superoxide anions, hydrogen peroxide, and hydroxyl radicals are produced during oxidative stress; they can damage membranes and basic organelles through peroxidation of unsaturated fatty acids in membrane phospholipids and can also cause cell death through necrosis or apoptosis [14]. Many reactive oxygen species (ROS) are produced at a low level under physiological conditions, but under oxidative stress, especially when the generation of ROS exceeds the scavenging capacity of antioxidant enzymes such as SOD, they can lead to cell damage and neural tissue damage [15,16]. SOD is an important antioxidant enzyme widely distributed in various organisms [17]. It is the primary scavenger of free radicals in organisms and is the preferred indicator of an organism's antioxidant capacity. Therefore, many scholars have used SOD activity as an intuitive indicator of the degree of oxidative stress damage in ischemia-reperfusion injury. MDA can be formed by condensation of acetaldehyde and ethyl formate. In vivo, the end product of free-radical lipid peroxidation is MDA, which can cross-link many biomacromolecules and is cytotoxic [18]. MDA is also an important monitoring index of oxidative stress damage during ischemia-reperfusion injury and is widely used in such experiments. As can be seen from the results of this experiment, spinal cord ischemia-reperfusion injury significantly reduced SOD activity in the serum and spinal cord tissue of rats, while the content of MDA increased significantly. These results suggest that ischemia-reperfusion injury of the spinal cord in rats induces obvious oxidative stress. Although the subgroup values of the drug group were lower than those of the sham group, SOD activity was increased and MDA content was decreased compared with the ischemia-reperfusion group. The change became more obvious with increasing intervention dose, and the trend leveled off once the intervention dose exceeded 40 mg/kg. These results confirm that GRb1 can increase SOD activity and reduce MDA production in rats, and that this trend is no longer obvious once the intervention dose exceeds 40 mg/kg.
Survivin is made up of 142 amino acids and can function only by forming homodimers [19]. Survivin has only one baculovirus IAP repeat (BIR) domain, which is very important for dimer formation and for the inhibition of apoptosis, such as through binding to caspases. The BIR domain at the N-terminus of the survivin protein can inhibit the activity of caspase-3 and caspase-7 to inhibit apoptosis. The C-terminus of survivin lacks the RING structure that other members of the IAP family have. Survivin monomers can aggregate and bind to each other through the BIR domain to form a symmetrical dimer, which is necessary for survivin to resist apoptosis. Survivin is highly cell-selective and is highly expressed in the tissues and organs of embryos and fetuses [20]. At present, most studies focus on tumors such as hepatocellular carcinoma and lung cancer [21][22][23]. Only a small number of studies on cerebral ischemia-reperfusion injury suggest that survivin protein is expressed after brain ischemia-reperfusion injury [24,25]. The results of this experiment show that SCII can increase the number of survivin protein-positive cells in the anterior horn of the spinal cord and increase the expression of survivin protein in spinal cord tissue, indicating that SCII promotes the expression of survivin protein in rat spinal cord neurons. Compared with the ischemia-reperfusion group, the number of survivin-positive cells and the expression level of survivin protein increased in each subgroup of the drug group. The change became more obvious with increasing intervention dose, but the trend leveled off once the dose exceeded 40 mg/kg. These results suggest that GRb1 intervention promotes the expression of survivin protein in rat spinal cord neurons, with no further increase above 40 mg/kg. This change showed an inverse trend with the changes in apoptotic neurons in the anterior horn of the rat spinal cord, with a correlation coefficient of − 0.601.
GRb1 has been shown to be a ligand of glucocorticoid and androgen receptors, acting as an agonist to promote rapid ion influx and NO production [26,27]. A preclinical systematic review investigated the efficacy of GRb1 in animal models of myocardial ischemia/reperfusion injury and suggested that GRb1 is a potential cardio-protective candidate for further clinical trials in myocardial infarction. A clinical study showed that GRb1 has therapeutic effects on cardiac function and remodeling in patients with heart failure [28]. Analyses of GRb1 metabolites have detected 14 metabolites in rat urine, feces, stomach, and large intestine [29,30]. After intravenous injection, urine mainly contains the prototype drug and some metabolites, and the time to peak blood concentration was about 1.02 h [31]. Oral bioavailability is low, with little prototype drug entering the blood; on this basis, metabolic reactions such as hydrolysis, conjugation, oxidation, and isomerization are also minimal. Most of the metabolites detected in urine after oral administration were products of gastrointestinal flora metabolism, mainly hydrolysis products, and in urine, the amount of hydrolyzed metabolites was higher than that of the prototype drug [32]. When GRb1 was given intravenously to healthy people, the peak plasma concentration of GRb1 was 10.572 ± 8.925 mg/L and the time to peak was 1.655 ± 0.144 h. The terminal plasma elimination half-life was 47.983 ± 7.256 h, so we speculated that the pretreatment effect of GRb1 lasts about 2 days. After 150 h, the plasma drug concentration was still 0.889 ± 0.033 mg/L [33]. In this experiment, intraperitoneal injection was used to ensure the blood concentration on the one hand and sufficient hepatic drug metabolism on the other. One study using an oral dose of 1200 mg/kg showed a good linear relationship for blood concentrations in the range of 1-20.0 mg/L, after which the rising trend slowed. Considering that the bioavailability of drugs absorbed via the abdominal cavity is higher than that of oral administration, but some loss still occurs, the 20 mg/kg group in this experiment did not achieve the best effect. After deducting the fixed loss, the upper limit of blood concentration may have been reached in the 40 mg/kg group; similarly, the 80 mg/kg group also reached the upper limit. Hence there was little difference between the 40 mg/kg and 80 mg/kg groups in all aspects of the evaluated effects. This experiment shows that GRb1 can play a protective role in SCII through antioxidation, promotion of survivin protein expression, and inhibition of apoptosis. Moreover, the protective effect increases with dose within the range of 10-40 mg/kg but no longer increases once the dose exceeds 40 mg/kg, suggesting that the effective dose lies within 10-40 mg/kg. This provides an animal experimental basis for the clinical application of GRb1.
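The half-life argument above can be checked with a simple single-compartment decay calculation; the sketch below uses the published mean values and assumes first-order elimination, which is an idealization of the real multi-phase kinetics.

```python
def plasma_conc(c0_mg_per_l: float, t_half_h: float, t_h: float) -> float:
    """First-order (single-compartment) decay: C(t) = C0 * 2^(-t / t_half)."""
    return c0_mg_per_l * 2.0 ** (-t_h / t_half_h)

c0, t_half = 10.572, 47.983          # mean peak concentration and half-life [33]
print(plasma_conc(c0, t_half, 48))   # ~5.3 mg/L after roughly one half-life (~2 days)
print(plasma_conc(c0, t_half, 150))  # ~1.2 mg/L, same order as the measured 0.889 mg/L
```

The agreement at 150 h is rough, as expected for a one-compartment idealization, but it supports the authors' estimate that a pretreatment dose remains pharmacologically relevant for about two days.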
Conclusion
Preconditioning with GRb1 could protect the rat spinal cord from ischemia-reperfusion injury through anti-oxidation, promotion of survivin protein expression, and inhibition of apoptosis. The protective effect increased with dose within the range of 10-40 mg/kg, while it was no longer enhanced once the dose exceeded 40 mg/kg.
Restricted cement augmentation in unstable geriatric midthoracic fractures treated by long-segmental posterior stabilization leads to a comparable construct stability
The goal of this study is to compare the construct stability of long segmental dorsal stabilization in unstable midthoracic osteoporotic fractures with complete pedicle screw cement augmentation (ComPSCA) versus restricted pedicle screw cement augmentation (ResPSCA) of the most cranial and caudal pedicle screws under cyclic loading. Twelve fresh frozen human cadaveric specimens (Th4–Th10) from individuals aged 65 years and older were tested in a biomechanical cadaver study. All specimens received a DEXA scan and computer tomography (CT) scan prior to testing. All specimens were matched into pairs. These pairs were randomized into the ComPSCA group and ResPSCA group. An unstable Th7 fracture was simulated. Periodic bending in flexion direction with a torque of 2.5 Nm and 25,000 cycles was applied. Markers were applied to the vertebral bodies to measure segmental movement. After testing, a CT scan of all specimens was performed. The mean age of the specimens was 87.8 years (range 74–101). The mean T-score was − 3.6 (range − 1.2 to − 5.3). Implant failure was visible in three specimens, two of the ComPSCA group and one of the ResPSCA group, affecting only one pedicle screw in each case. Slightly higher segmental movement could be evaluated in these three specimens. No further statistically significant differences were observed between the study groups. The construct stability under cyclic loading in flexion direction of long segmental posterior stabilization of an unstable osteoporotic midthoracic fracture using ResPSCA seems to be comparable to ComPSCA.
Table: Comparison groups with donor characteristics. ResPSCA: cement augmentation of pedicle screws at cranial (Th5) and caudal (Th9) level only; ComPSCA: cement augmentation of all pedicle screws; Mb Bech: Morbus Bechterew; Th6 Fract: consolidated fracture of Th6; SD: standard deviation. *For a pairwise comparison between the groups, specimen pairs were assigned the same specimen number. #Statistical evaluation of mean value differences between the groups; p values < 0.05 indicate a significant difference.

The specimens were compressed axially and eccentrically by 20 mm. In a subsequent CT evaluation, no screw loosening and no damage to the spinal column structure were found [5]. All specimens were wrapped in plastic foil, cooled and shock frozen at − 80 °C to minimize ice crystal growth [6].
For the current study, the specimens were then gently thawed. The temperature was gradually increased, for at least 2 days, to − 20 °C, then for one day to − 2 °C, then transitioned to room temperature within 16 h prior to testing. This is intended to reduce the temperature gradient within the specimen during thawing and to protect the tissues.
Experimental procedure. The non-instrumented vertebrae Th4 and Th10 were embedded with a polyurethane casting resin (RenCast; Huntsman Advanced Materials, Basel, Switzerland). Additional screws were inserted into the vertebral bodies to improve the bond between the bone and embedding. The vertebral endplate of Th7 was positioned horizontally to ensure an upright alignment of the spine.
The specimens were clamped in a test stand developed in-house (Fig. 1a). The major component is a swivel arm driven by a motor, generating a defined torque. The specimens were fixed with the lower embedding on a slide, while the upper embedding was connected to the swivel arm. The rotation axis of the swivel arm was set to the center of the fracture gap in Th7 (Fig. 1b). In order to introduce the torque into the spinal column as purely as possible, the specimen was not fully constrained (Fig. 1a). The slide allowed lateral movements, while forward and backward movements were suppressed. The upper embedding was connected via a bearing rod to a linear bearing in the swivel arm. This enabled rotation and axial compensatory movements of the spinal column. The linear bearing was, in turn, pivotally mounted in the swivel arm. Thereby, mainly torque in the flexion/extension direction was introduced, whereas pure transverse forces were minimized.
Markers with speckle patterns were pinned to the instrumented vertebral bodies (Fig. 1). The pins were anchored in the vertebral body; by carefully evaluating the CT scan acquired after the compressive testing, care was taken to ensure that they were far away from the screw tips and the surrounding cement. Markers were also attached to the swivel arm as well as to an independent reference point.
The specimens were periodically bent in the direction of flexion. A torque of 2.5 Nm was applied, as recommended in osteoporotic thoracic spines 7 . A total of 25,000 load cycles were applied, which corresponds to the expected motion within the first 3-4 weeks after surgery in a geriatric patient population 8 . Tests were carried out at a frequency of 1.2 Hz. During a load cycle, the load was applied in the first half and released in the second half. The specimens were kept moist throughout the testing period, being wrapped in moist gauzes that were regularly moistened 7 . The rotation of the swivel arm was measured with an angle sensor (Incremental encoder 5821, Fritz Kübler GmbH, Germany). Furthermore, the positions of the markers were recorded with a digital image correlation system with a three-camera setup (Q400, LIMESS Messtechnik und Software GmbH, Krefeld, Germany) 9 . These measurements were taken at the beginning (10 cycles), every 500 cycles and at the end (24,990 cycles) for continuous monitoring. Two cycles were recorded with a frame rate of 15 Hz for each individual measurement time-point.
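The loading and recording schedule can be laid out programmatically; the following Python sketch does so using the constants from the text (the function and variable names are our own, since the original rig software is not described):

```python
# Sketch of the loading/recording schedule (constants from the text; names ours).
FREQ_HZ = 1.2            # loading frequency
TOTAL_CYCLES = 25_000    # ~3-4 weeks of everyday motion in a geriatric population

def recording_cycles(first=10, step=500, last=24_990, total=TOTAL_CYCLES):
    """Cycles at which two load cycles are filmed at 15 Hz for DIC evaluation."""
    return sorted({first, last, *range(step, total, step)})

schedule = recording_cycles()
print(len(schedule), "recordings over about",
      round(TOTAL_CYCLES / FREQ_HZ / 3600, 1), "h of testing")
```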
Evaluation.
After cyclic loading, CT was performed in order to detect any signs of implant failure, screw loosening or subsequent vertebral fractures. These were evaluated independently by two of the authors, one spine surgeon (U.J.S.) and one radiologist (M.R.).
As the markers are anchored in the vertebral bodies, it is assumed that they represent the movement of the vertebral bodies during loading 9 . The marker positions measured with the digital image correlation system were correlated and exported into a coordinate system corresponding to a person standing upright. An evaluation routine was developed to calculate the relative movement between two markers. Since torque was introduced, the evaluation was limited to the relative rotation of the vertebral bodies. In order to calculate these rotational components about a respective axis of the coordinate system, one vector defined by two speckle pattern points on each marker was considered. When selecting these points, it was ensured that the resulting vector was preferably perpendicular to the respective rotation axis prior to loading. For all three axes of the coordinate system, the projection of the respective vector into the plane perpendicular to the respective axis was regarded.
The angles between two vector projections of different markers were calculated for each time step. Thus, the relative rotation about the regarded axis could be calculated for any pair of markers. The calculation was done using MATLAB (MathWorks and Simulink, USA). The relative rotations between the swivel arm and the reference marker were compared with the data from the angle sensor to check the continuity of the measurements (Supplement). The relative rotations between the adjacent vertebrae Th5/Th6 and Th8/Th9 and between the vertebra pairs Th5/Th9 and Th6/Th8 were evaluated. For each time interval of a series of measurements, the peak-to-peak amplitude and the zero offset to the rest position were determined. In the course of the measurement, the part of the movement characterized by the peak-to-peak amplitude was regarded as reversible. A non-reversible part was indicated by the difference between the zero offset and the rest position of the first time interval. This was subsequently defined as permanent deflection (Supplement). The determined permanent deflections and peak-to-peak amplitudes were considered separately and examined over the course of the 25,000 cycles.
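The original routine was implemented in MATLAB and is not reproduced in the paper; the Python sketch below illustrates the same idea under our own naming, reducing each marker to a single two-point vector:

```python
import numpy as np

def marker_vector(p1, p2):
    """Vector defined by two speckle-pattern points on one marker."""
    return np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)

def relative_rotation_deg(v_a, v_b, axis):
    """Angle between two marker vectors projected into the plane
    perpendicular to the chosen coordinate axis (0=x, 1=y, 2=z)."""
    a, b = v_a.copy(), v_b.copy()
    a[axis] = 0.0          # drop the component along the rotation axis
    b[axis] = 0.0
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def interval_metrics(angles, rest_position):
    """Peak-to-peak amplitude (reversible part) and permanent deflection
    (zero offset of the interval minus the first interval's rest position)."""
    angles = np.asarray(angles, dtype=float)
    peak_to_peak = angles.max() - angles.min()
    permanent = angles.min() - rest_position
    return peak_to_peak, permanent
```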
The statistical analysis was performed with SPSS 24.0 (IBM, USA). The Shapiro-Wilk test was used to verify normal distribution. Mean differences were checked with the Student t-test for normally distributed data pairs, otherwise the Mann-Whitney test was used. A value of p < 0.05 was considered significant.
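The same decision rule can be mirrored with SciPy (the study itself used SPSS 24.0; whether the paired or unpaired t-test variant was applied to the matched pairs is not fully specified, so the unpaired form is shown here as an assumption):

```python
from scipy import stats

def compare_means(group_a, group_b, alpha=0.05):
    """Shapiro-Wilk normality check, then Student t-test or Mann-Whitney U."""
    both_normal = (stats.shapiro(group_a).pvalue > alpha
                   and stats.shapiro(group_b).pvalue > alpha)
    if both_normal:
        result = stats.ttest_ind(group_a, group_b)
    else:
        result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    return result.pvalue, result.pvalue < alpha  # (p value, significant?)
```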
Results
Evaluation of the CT images showed loosening of pedicle screws in three specimens, including one specimen of the study group ( Fig. 2a-c). Thereby, a cut-out of the right pedicle screw in Th8 and some signs of loosening of the right augmented pedicle screw in Th9 of specimen ResPSCA 1 were visible (Fig. 2b). In the control group, screw loosening was observed in two specimens: cut-outs of the right pedicle screws in Th9 were seen in ComPSCA 2 and ComPSCA 5 (Fig. 2c).
In all specimens, mainly relative rotations around the transverse axis were observed, on which the assessment focuses. The marker in Th5 of specimen ComPSCA 2 protruded slightly into the disc. Because of a potential effect on the measurements, this marker was not taken into account in the assessment. During loading, no pronounced periodic movement between the adjacent vertebral bodies was observed. Over the course of the measurements across all specimens, no abrupt changes were detected that would have indicated premature failure. For this reason, only cycles 10, 5,000, 10,000, 15,000, 20,000 and 24,990 were evaluated.
In Fig. 3, box plots of the peak-to-peak amplitudes between Th6 and Th8, at the beginning and end of the measurements, of both study groups are shown. Statistically, there were no differences in the mean values of the peak-to-peak amplitudes between the beginning and the end of testing within the groups (p = 0.67 for ResPSCA, p = 0.83 for ComPSCA), or between the groups (p = 0.73 for cycle 10 between ResPSCA and ComPSCA, p = 0.53 for cycle 24,990 between ResPSCA and ComPSCA).

Figure 2. CT scans after cyclic loading, illustrating one of five ResPSCA cases without any signs of screw loosening or implant failure (a); one specimen with cut-out of the right pedicle screw in Th8 (big arrow) and signs of loosening of the right cement-augmented pedicle screw in Th9 (b, small arrows); and, in (c), one of two specimens with a cut-out of the right cement-augmented pedicle screw of Th9 after ComPSCA (arrow).

Figure 4 compares the mean values of the calculated permanent deflections and peak-to-peak amplitudes for both test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA). For each of the vertebral body pairs, the course of the measured values between the two comparison groups appeared to be qualitatively and quantitatively similar. This finding is supported by the fact that, for each data pair between the ResPSCA and ComPSCA groups, the mean values were examined and no statistically significant differences were found (Table 2). Figure 5 compares the two test groups. In each case, the permanent deflections and peak-to-peak amplitudes of the comparison pairs Th5/Th6, Th6/Th8 and Th8/Th9 are considered. In most cases, the permanent deflections of Th5/Th6 and Th8/Th9 were small and comparatively smaller than those between Th6/Th8, with the exception of ResPSCA 1, ComPSCA 2 and ComPSCA 5, respectively. In all three of those specimens, implant failure was visible. The peak-to-peak amplitudes of Th5/Th6 and Th8/Th9 were significantly smaller than those of Th6/Th8, but the differences were less obvious in the specimens ResPSCA 1, ResPSCA 6, ComPSCA 4, and ComPSCA 5.
Discussion
The most important finding of this article is the comparable construct stability between ComPSCA and ResPSCA, with two cases of cut-outs in the ComPSCA group and only one in the ResPSCA group under cyclic testing, despite the fact that biomechanical testing under axial loading had been done previously on all specimens. The dynamic testing results confirm these three cases of implant failure: here, the orientation of two vertebrae changed permanently during the course of cyclic loading, which can be interpreted as a sign of screw loosening. However, given that only a one-sided screw cut-out was seen in all three cases, with signs of implant failure and macroscopically uneventful contralateral screw positioning, no higher grades of instability can be expected. This is in accordance with our results, with consistent but only subtle differences in segmental movement between the three specimens with implant failure in comparison to the others.
Otherwise, the peak-to-peak amplitudes of movement were in accordance with the expected results. Minimal to low peak-to-peak amplitudes were recorded in the stabilized healthy segments Th5/Th6 and Th8/Th9. In contrast, moderate to high peak-to-peak amplitudes were seen between Th6/Th8, which represents the stabilized unstable fracture region. Generally, the ranges of peak-to-peak amplitudes between the specimens were large, without any significant differences between the study groups. This is not very surprising considering the rather small study group, the large age range of the donors and the morphological differences between the spines. However, both study groups were matched regarding patient age, bone density and gender in order to minimize the differences between the groups. Interestingly, two of the specimens with implant failure were highly osteoporotic, with T-scores of less than −4. The third specimen had spondylitis ankylosans. Several authors recommend long segmental stabilization with pedicle screw implantation three levels above and below the fracture in patients with spondylitis ankylosans 10,11 . This can partially explain the implant failure. However, all implant failures happened to be below the fracture. This is somewhat surprising, as in daily practice screw cut-outs seem to occur more frequently in the instrumented vertebral bodies above the fracture, in correspondence to the data reported by Banno et al. 12 . In contrast, other studies reported higher rates of screw loosening at the lowest level of instrumentation 13 . Generally, the vast majority of implant failures occur at the lowest or highest level of instrumentation 12,13 . Additionally, all cut-outs were one-sided. This can be explained by the fact that the cascade of implant failure had just begun; it might end in screw cut-outs of both pedicle screws, leading to higher instabilities in the further course.
Generally, specimens tended to adopt a progressive kyphotic malposition during the course of testing due to the cyclic loading of 25,000 cycles predominantly in the flexion direction. The permanent deflection was expected, given cyclic loading without the protective interactions of the muscles and rib cage, due to permanent strain on the connective tissues. Generally, 25,000 cycles represent the average load during all-day activities over a period of 3-4 weeks for elderly people 8 . This number of cycles was chosen to simulate this very important period of bony healing. In correspondence to that, increased in vivo stiffness has been observed to begin 3 weeks after osteotomy in an osteoporotic sheep model 14 . In addition, fatigue tests should be conducted in follow-up studies to evaluate the long-term behavior of the stabilization. Thus, the load acting on the material can be supposed to be higher as compared to clinical practice. The selected bending moment of 2.5 Nm is based on a literature recommendation for range of motion tests on osteoporotic thoracic spines as a maximal load in order not to destroy tissues 7 . In vivo tests of the more heavily loaded lumbar spine measured 3.5 (± 1.5) Nm when bending the upper body and 4.2 (± 1.7) Nm when lifting a weight from the floor 15 . On the one hand, significantly lower loads are assumed in the area of the middle thoracic spine. On the other hand, upper body flexion and weight lifting are extreme loads that should be avoided postoperatively. By performing cyclic testing over an estimated period of 3 to 4 weeks and applying high cyclic loads, a model was chosen that simulates an extreme situation without any of the stabilizing effects that would be expected in living patients as part of the fracture healing process. When evaluating the relative movement between the individual vertebral bodies, indications of screw loosening were found, but there were no clear patterns. An indirect measuring method was chosen, which allows for continuous observation. In order to measure screw movement in the vertebral body directly, markers would have to be attached to the screw tip or shaft. This would require the removal of bone material, which would have a lasting effect on screw retention. This was not the intention of the study, but should be investigated in subsequent studies.

Figure 4. Comparison of the mean values of the test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA) with regard to permanent deflection (above) and peak-to-peak amplitude (below). No statistically significant differences were found when comparing any pair of data for the ResPSCA and ComPSCA groups. To make the error bars more visible, the dots have been slightly shifted; the measured values, however, refer to the cycle indicated on the abscissa.
However, the study has several limitations. First of all, all specimens had previously been tested in a load-to-failure manner by axial compression. Thereby, implant failure, particularly screw cut-out or screw loosening, could be excluded by CT examination after testing 5 .

Table 2. Comparison of the mean values of the test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA) with regard to permanent deflection and peak-to-peak amplitude. ResPSCA: pedicle screws at the most cranial (Th5) and most caudal (Th9) levels are cement augmented; ComPSCA: all pedicle screws are cement augmented. *Measured values given as mean value ± standard deviation (in degrees). # Statistical analysis performed, stating a significant difference between mean values for compared groups at p < 0.05.

A large part of the deformation was elastically stored in the rod system through the fracture gap. Although it is not possible to definitely exclude minor lesions, only a minority of specimens showed signs of implant failure. Generally, all specimens had a similar load history and were appropriate for a comparative study. Secondly, another freezing and thawing cycle can negatively influence the mechanical properties of soft tissue 16 . However, the influence on the mechanical properties of bone tissue seems not to be relevant 17,18 . Furthermore, only minor effects on the range of motion of functional spine units have been observed 19 . In a further study, several freezing and thawing cycles were examined; no significant alterations in the range of motion could be seen after the initial freezing during further freeze-thaw cycles 20 . In addition, the samples were frozen in a tissue-friendly manner 6 . As the samples have the same storage history, comparative studies are permissible. In addition, the study focuses on the screw anchorage in the bone. The relevant vertebral bodies are rigidly instrumented. The freely movable segments, on which alterations of the intervertebral discs and ligaments would have a greater impact, were not the focus of this study. For the reasons mentioned above, a comparative study with the specimens is permissible, even though they have already undergone initial testing. Since all specimens were always treated in the same way, comparability is ensured. In addition, the usual recommendations were followed for storage, test duration, moisture retention, load rates, etc. 7,21,22 . Additionally, the cyclic loading was performed in flexion only. In contrast, human spines are subjected to multiple different loadings in different directions, all of which contribute to the development of implant failure. Thereby, the midthoracic spine is particularly susceptible under flexion, with a lower flexion strength than compressive strength 23 . Furthermore, it was not possible to generate pure torque only. However, the test set-up applies a uniform torque in the direction of flexion in a reproducible manner, which ensures comparability. Additionally, our sample size was small (six spines in each group) and underpowered. A post-hoc analysis has shown that at least 80 specimens per group would be necessary to reach a power of 80%. However, compared with related publications, our study had a similar number of specimens per group [24][25][26][27] and complies with the recommendations for in vitro testing with human donor material 20 . Thereby, the analysis of group differences can be misleading based on the low power.
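The post-hoc sample-size statement can be reproduced approximately with statsmodels; note that the effect size d ≈ 0.45 below is our back-calculated assumption, chosen so that roughly 80 specimens per group are required, not a value reported in the paper:

```python
from statsmodels.stats.power import TTestIndPower

# Per-group sample size for a two-sided, two-sample t-test at 80% power.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.45,   # assumed standardized effect (Cohen's d)
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(round(n_per_group))  # ~79, close to the ~80 specimens per group quoted above
```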
Nevertheless, there were two implant failures visible in the ComPSCA group and only one in the ResPSCA group. Additionally, matching of the groups was performed in accordance with the T-score, age, and gender of the specimens. Next, the anatomic model represents a simplified model that does not consider the rib cage (leading to a decrease in stiffness), the muscles, or the physiological body weight acting on the midthoracic cage 28,29 . Last but not least, we did not include a non-cemented group in order to prove that cement-augmented pedicle screw fixation is superior in our testing scenario. This was based on the moderate to good biomechanical evidence of the superiority of cement-augmented screw hold in osteoporotic bone 30,31 . Based on this evidence and the clinical experience of the last decade, the authors hardly ever perform posterior stabilization without cement-augmented pedicle screws in osteoporotic vertebral body fractures. Generally, only clinical studies are conclusive for the evaluation of screw loosening in everyday life. Therefore, clinical studies are warranted to compare implant failure and reduction loss between restricted and complete pedicle screw augmentation in long segmental posterior stabilization.
Conclusion
No statistically significant differences in both implant failure rate and peak-to-peak amplitudes of movement between the instrumented vertebral bodies could be seen between the ResPSCA and ComPSCA groups under cyclic loading. Thus, the construct stability of long segmental posterior stabilization of an unstable osteoporotic midthoracic fracture using ResPSCA seems to be comparable to ComPSCA.
|
v3-fos-license
|
2018-08-06T13:28:57.221Z
|
2018-07-01T00:00:00.000
|
51716904
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/s18072396",
"pdf_hash": "b88cf0e40ec3bd5dd895d28d6ffbc7b0e980e892",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45590",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "b88cf0e40ec3bd5dd895d28d6ffbc7b0e980e892",
"year": 2018
}
|
pes2o/s2orc
|
Intensity Demodulated Refractive Index Sensor Based on Front-Tapered Single-Mode-Multimode-Single-Mode Fiber Structure
A novel intensity demodulated refractive index (RI) sensor is theoretically and experimentally demonstrated based on the front-tapered single-mode-multimode-single-mode (FT-SMS) fiber structure. The front taper is fabricated in a section of multimode fiber by the flame-heated drawing technique. The intensity feature in the taper area is analyzed through the beam propagation method, and comprehensive tests are then conducted in terms of RI and temperature. The experimental results show that, in FT-SMS, the relative sensitivity is −342.815 dB/RIU in the range of 1.33~1.37. The corresponding resolution reaches 2.92 × 10−5 RIU, which is more than four times higher than that in wavelength demodulation. The temperature sensitivity is 0.307 dB/°C and the measurement error from cross-sensitivity is less than 2 × 10−4. In addition, the fabricated RI sensor presents high stability in terms of wavelength (±0.045 nm) and intensity (±0.386 dB) within 2 h of continuous operation.
Introduction
Fiber refractive index (RI) sensors play a significant role in the fields of biology, chemistry, and medicine [1], and schemes based on fiber Bragg gratings [2], long-period fiber gratings [3], photonic crystal fibers [4,5], multimode interference (MMI) [6] and surface plasmon resonance (SPR) [7,8] are frequently reported. Comparatively, MMI-based sensors have received great attention due to their advantages of high sensitivity, low cost, and ease of fabrication. A typical MMI sensor can be formed by splicing a section of multimode fiber (MMF) with two pieces of single mode fiber (SMF), namely the single-mode-multimode-single-mode (SMS) fiber structure [9-11]. Multipath interference then occurs among high-order core modes and brings high sensitivity to the ambient parameters.
To be applied in RI sensing, Shao cascaded a thin-core fiber with the SMS fiber structure and formed a composite modal interference [12]. Wang and Yang comprehensively analyzed and compared the RI sensitivity of tapered SMS fiber structures [9,13-15]. Moreover, tapered multi-core and multi-taper-based schemes have been proposed, and over-200-nm/RIU sensitivities were obtained in References [16,17]. Recently, the flame-heated drawing technique was adopted to further enhance the sensitivity of MMI-based sensors. Fu reported a U-shaped fiber humidity sensor with a waist diameter of 4.75 µm [18]. Zhang used a tapered polarization-maintaining fiber to measure the concentration of ammonia [19]. In addition, fiber structures with higher strain and curvature sensitivities are presented in [20,21].
In this paper, an intensity demodulated RI sensor is proposed to gain higher measurement precision based on the front-tapered SMS (FT-SMS) fiber structure, in which a taper is fabricated in the front of a section of MMF by flame-heated drawing. A composite modal interference is formed, and the intensity feature in the taper area is then analyzed through the beam propagation method. The experimental results show that, in FT-SMS, the intensity of the fringe is dramatically decreased with increased external RI, and the relative sensitivity reaches −342.815 dB/RIU in the range of 1.33-1.37. Compared to wavelength demodulation, a four-fold enhancement in detecting resolution is obtained. In addition, the measurement error from cross-sensitivity is limited within 2 × 10−4, owing to the low temperature sensitivity. The fabricated RI sensor also presents high stability in terms of wavelength (±0.045 nm) and intensity (±0.386 dB).
Principles
The FT-SMS fiber structure is illustrated in Figure 1. A taper is located at the front end of the MMF, which includes two transition areas and a taper-waist area. Based on the theory of the evanescent wave field, in the first transition area the inputted light will partly leak out and excite high-order cladding modes. When transmitting in the second transition area, the cladding modes can re-couple into the fiber core [19]. Then a Mach-Zehnder interferometer (MZI) is formed in the taper area, and both the intensity and wavelength will be sensitive to the change of external RI.
According to Reference [22], the light intensity of the MZI can be expressed as:

I = I_1 + I_2 + 2·sqrt(I_1·I_2)·cos(Δφ)    (1)

where I_1 and I_2 represent the intensity of the core and cladding modes, respectively. Δφ is the phase difference between the core and cladding modes and can be written as:

Δφ = 2π·Δn_eff·L_t/λ    (2)

where λ is the wavelength of incident light, Δn_eff is the difference of effective RI between the core and cladding modes and L_t is the length of the taper area. Therefore, the corresponding interfered (dip) wavelength will be:

λ_j = 2·Δn_eff·L_t/(2j + 1)    (3)

where j is the order of cladding mode. Moreover, an MMI will occur in the residual MMF due to the multipath difference among high-order core modes [21]. Here, the diameter and length of the taper-waist are denoted by D_w and L_w, respectively. The lengths of the transitional areas are Z and the initial length of the MMF is L. From Reference [23], the interfered wavelength between the m-th and n-th high-order core modes will be:
where R 0 is the radius of MMF. The changes in normalized intensity are shown in Figure 2c with the varied Z (from 2 to 12 mm). We observe that the intensities are decreased in both tapered SMS structures with the rise of Z, but the difference of them is continuously increased (the maximum 0.081 occurs at Z = 12 mm). We further set Z = 6.5 mm and the similar results are presented in Figure 2d with the varied external RI (from 1.33 to 1.41). The intensity deduction in FT structure is clearly larger than that in middle taper and the maximum difference reaches 0.198 when RI = 1.41, which means that a higher RI sensitivity may be gained in FT-SMS. where is an integer, and r are the effective RI and radius of fiber core, respectively. Then the wavelength spacing of MMI can be written as: Further, assume that the length of MMF is 50 mm with the diameter of 105/125 μm, = 1.4662 and = 1.4450. The of SMF is 1.4502 with the diameter of 8.3/125 μm. Then by using the beam propagation method (the incident wavelength is 1550 nm, and the computational rectangular area is 0.105 × 61.798 mm 2 with the mesh area of 0.2 μm 2 ), the intensity features of SMS with front and middle tapers are compared under the varied (= + ) and external RI. Figure 2a,b shows the interference patterns in the front and middle tapers, and it is clear that there is more energy leaked in the front-tapered (FT) structure. We then set = 8 mm, the radius of taper waist can be calculated by Equation (6), which is decreased with the increase of [13].
where R0 is the radius of MMF. The changes in normalized intensity are shown in Figure 2c with the varied (from 2 to 12 mm). We observe that the intensities are decreased in both tapered SMS structures with the rise of , but the difference of them is continuously increased (the maximum 0.081 occurs at Z = 12 mm). We further set = 6.5 mm and the similar results are presented in Figure 2d with the varied external RI (from 1.33 to 1.41). The intensity deduction in FT structure is clearly larger than that in middle taper and the maximum difference reaches 0.198 when RI = 1.41, which means that a higher RI sensitivity may be gained in FT-SMS.
Fabrication
A 50-mm un-coated MMF (MM-S105/125-12A, Nufern, Hartford, CT, USA) was first spliced with two pieces of SMF (SMF-28, Corning, New York, NY, USA) by using a commercial fusion splicer (KL-280, Geelong, Nanjing, China). This SMS fiber structure was then placed into a melt-drawing machine (KF-FBT). As shown in Figure 3a, the front end of the MMF is positioned under the center of the flame-head. We set the speeds of drawing and hydrogen flow to 300 µm/s and 150 mL/min, respectively. The SMS fiber was then evenly stretched, and the controller showed that the stretching length was 7.8 mm and L_t = 17.8 mm; accordingly, L − L_t = 3.22 cm. Figure 3b is the CCD (charge-coupled device) image of the fabricated taper and its waist diameter is 29.2 µm. After 5-h annealing, the transmission spectrum of the FT-SMS (in air) was tested and is shown in Figure 3c. It is obvious that there is a main interference fringe located at 1550.51 nm with a contrast ratio of ~9 dB.
Experiments and Results
As shown in Figure 4, the sensing head was flatly placed and fixed onto a glass slide by epoxy resin adhesive. The broadband source (BBS, homemade, operated in 1520-1565 nm) was fixed at 50 mA, and the room temperature was kept at 23 ± 0.2 °C. The RI test was then performed through varying the concentration of sucrose solution from 0 to 20% (the corresponding RI is 1.33-1.37). The shifts of interference fringes were recorded by an optical spectrum analyzer (OSA, Agilent 86142B, with the resolution of 0.06 nm/0.01 dB). In Figure 5a, the dip of the interference fringe moves toward long wavelength with the increase of solution concentration, but the intensity of the fringe is quickly decreased. The total deduction reaches 11.15 dB (from −49.72 to −60.87 dB). According to Figure 5b, the sensitivity is −342.815 dB/RIU, and the linearity is 0.985. Because of the intensity resolution of the OSA (0.01 dB), the detection limit reaches 2.92 × 10−5 RIU. Comparatively, the dip shifts ~2.97 nm in the range of 1.33-1.37. By calculation, the wavelength sensitivity is 82.58 nm/RIU with the linearity of 0.981, and the corresponding resolution is 7.27 × 10−4 RIU. These results mean that the detection limit is enhanced about four times when the intensity demodulation is adopted.

Further, the FT-SMS based sensor was placed onto a heater (DFD-7000, LICHEN, Shanghai, China) and its temperature feature was characterized. From Figure 6a, as the temperature increases, the wavelength shifts toward long wavelength and the intensity also increases ~5.97 dB due to the expansion of the fiber core. Figure 6b shows the corresponding sensitivity is 0.307 dB/°C with the linearity of 0.992. The wavelength shift of the dip is 0.315 nm in the range of 30-50 °C and the sensitivity is 15.3 pm/°C with the linearity of 0.986.
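The quoted detection limits follow directly from the OSA resolution divided by the fitted sensitivity; a short sketch using the values reported above:

```python
# Detection limit = instrument resolution / |sensitivity| (values from the text).
osa_res_dB, osa_res_nm = 0.01, 0.06   # Agilent 86142B intensity / wavelength steps

k_I  = -342.815                       # intensity sensitivity, dB/RIU
k_wl = 82.58                          # wavelength sensitivity, nm/RIU

res_intensity  = osa_res_dB / abs(k_I)   # ~2.92e-5 RIU (intensity demodulation)
res_wavelength = osa_res_nm / k_wl       # ~7.27e-4 RIU (wavelength demodulation)
print(f"{res_intensity:.2e} RIU vs {res_wavelength:.2e} RIU")
```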
Considering the ambient drift of ±0.2 °C, the intensity fluctuation will be 0.061 dB from cross-sensitivity, and the measurement error is limited within 0.175‰ in our sensor. Further, from Reference [24], the variations of temperature (ΔT) and external RI (Δn) can be simultaneously measured by the inversion matrix:

[ΔT, Δn]^T = (1/D)·[k_In, −k_λn; −k_IT, k_λT]·[Δλ, ΔI]^T    (7)

where Δλ is the wavelength shift and ΔI is the intensity change. D = k_λT·k_In − k_IT·k_λn, where k_λT = 0.015 and k_λn = 82.58 are the wavelength sensitivities in temperature and RI, and k_IT = 0.307 and k_In = −342.82 are the intensity sensitivities in temperature and RI. Therefore, Equation (7) is changed as:

[ΔT, Δn]^T = −(1/30.49)·[−342.82, −82.58; −0.307, 0.015]·[Δλ, ΔI]^T    (8)

Table 1 compares the numerical results of several tapered fiber structures, and it is obvious that our sensor based on intensity demodulation presents a higher mean sensitivity and detecting resolution in the range of 1.33-1.37.

Finally, considering the influence of light source power fluctuation and wavelength drift under a long working time [25], a 120-min stability test was performed at room temperature, and the numerical results are shown in Figure 7. By calculation, the fluctuations of wavelength and intensity are ±0.045 nm and ±0.386 dB, respectively. It is worth noting that the sensor head can be packaged in a capillary to enhance its durability [26].
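A sketch of the dual-parameter demodulation of Equations (7)-(8) with the reported sensitivity coefficients (the example Δλ and ΔI readings are invented for illustration):

```python
import numpy as np

# Sensitivity matrix K maps (dT, dn) to (d_lambda [nm], d_intensity [dB]).
K = np.array([[0.015,   82.58],    # k_lambdaT (nm/degC), k_lambdan (nm/RIU)
              [0.307, -342.82]])   # k_IT (dB/degC),      k_In (dB/RIU)

def demodulate(d_lambda_nm, d_intensity_dB):
    """Recover (temperature change, RI change) from a joint reading, Eq. (8)."""
    return np.linalg.solve(K, [d_lambda_nm, d_intensity_dB])

# Example: a 0.1 nm red-shift together with a 1.5 dB intensity drop.
dT, dn = demodulate(0.1, -1.5)
print(f"dT = {dT:.2f} degC, dn = {dn:.2e} RIU")
```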
Conclusions
In this paper, a novel RI sensor is fabricated based on the FT-SMS fiber structure through the flame-heated drawing technique. Because of the effect of the evanescent wave field, this RI sensor presents ultra-high sensitivity and linearity in the range of 1.33-1.37, and the detecting resolution reaches 2.92 × 10−5 RIU, which is more than four times higher than that in wavelength demodulation. Moreover, small temperature sensitivity (0.307 dB/°C) and high stability (±0.045 nm/±0.386 dB) are simultaneously demonstrated in the FT-SMS fiber structure. Such low cross-sensitivity and high stability indicate that the fabricated sensor is a promising and practical device for the applications of biochemical sensing and environmental monitoring.
|
v3-fos-license
|
2018-09-05T14:43:44.927Z
|
2004-03-02T00:00:00.000
|
83784920
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3998/ark.5550190.0005.710",
"pdf_hash": "740abdf532071539f7c11a66afc1ea5c7e40e7f9",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45592",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"sha1": "c0d1ef8a8a8435a75d99347cfe9ac1831f3244c4",
"year": 2004
}
|
pes2o/s2orc
|
Metalloporphyrin mediated biomimetic oxidations. A useful tool for the investigation of cytochrome P450 catalyzed oxidative metabolism
This review summarizes recent developments in metalloporphyrin-catalyzed biomimetic oxidations as applied to modeling cytochrome P450 mediated oxidative metabolic processes. Successful applications of metalloporphyrin-based models to known drugs, drug candidates and agrochemicals are reviewed.
Introduction
Cytochrome P450 is a member of the monooxygenase family of heme enzymes that plays an important role in metabolizing biomolecules and xenobiotics. The mechanism of its catalytic activity and its structural functions have been the subject of extensive investigation in the field of biomimetic chemistry. The high-valent iron(IV)-oxo intermediate, formed by the reductive activation of molecular oxygen via peroxo-iron(III) and hydroperoxy-iron(III) intermediates by cytochrome P450, 1 is responsible for the in vivo oxidation of drugs and xenobiotics. This high-valent iron(IV)-oxo intermediate, and probably other intermediates of the P450 catalytic cycle, can be formed by the reaction of iron(III) porphyrins with different monooxygen donors, and these species are responsible for the hydroxylation of hydrocarbons, epoxidation of olefins, oxidation of heteroatoms and the cleavage of C-C bonds in organic substrates. Although there is no difference in the activation state of the mainly used first-row transition metalloporphyrins (Fe2+ or Mn2+ centered), differences in their substrate specificity and regioselectivity have been verified. Moreover, the mainly used second-row transition metal porphyrin complexes, those of ruthenium(II), react with oxygen donors to give trans-dioxo species, which are mainly used for specific epoxidations of olefins. Besides mechanistic studies, these metalloporphyrin-based systems (Figure 1) can be used for the investigation of metabolic reactions of different substrates. 2 The advantages of using model systems to understand drug metabolism are as follows:
• chemical oxidations performed by biomimetic systems can lead to the formation of various metabolic products, are easy to operate, and can yield products in sufficient amounts for isolation and further study,
• candidate metabolites are available in relatively large amounts and can be used to identify the real in vivo metabolites and provide samples for pharmacological testing,
• the mode of metabolism can be clarified, for example, unstable metabolites can be isolated under selected and controlled reaction conditions,
• the usage of experimental animals can be reduced.
In this review we focus on the application of metalloporphyrin-based systems to known drugs, drug candidates and agrochemicals that mimic P450 catalyzed processes. The selection of small molecule substrates was based on two criteria:
• despite being a small molecule, the substrate has an aromatic ring, an aliphatic ring, an N-containing heterocyclic ring moiety etc., which can be converted by a metabolic process,
• data are available on its in vitro and/or in vivo metabolism.

IBPP was oxidized by the TPPMnCl-NaOCl-benzalkonium chloride-4'-imidazolylacetophenone (in CH2Cl2 at 0 °C) system to preferentially yield the 6,7-epoxide, which appeared to be an intermediate of the main metabolite, the 6,7-dihydro-6,7-diol, in vivo (human) 3 and in vitro, 4 and which had not yet been isolated. Because of the instability of the 6,7-epoxide in buffer solution, it was difficult to obtain appreciable amounts by dehydration of the 6,7-dihydro-6,7-diol with triphenylphosphine. Novel epoxidation of the pyrazolo[1,5-a]pyridine ring was carried out by the chemical model system, metalloporphyrin-NaOCl (Figure 2). The monoepoxide and diepoxide were selectively prepared using TPPMnCl and TF8PPFeCl 5, respectively. In a further study, the reaction profiles of IBPP in microsomes and various chemical models (TPPFeCl, TPPMnCl, TF8PPFeCl, TOMe12PPFeCl, TPh12PPFeCl / iodosylbenzene (PhIO), NaOCl, m-chloroperbenzoic acid (mCPBA), Pt-colloid/H2, O2) were compared. A common reaction product (3-COOH) was obtained in the catalysts/Pt-colloid system and this product was also detected in the rat microsomal system. The α oxidation (2 α-OH, 3 α-OH and 2,3 α-diOH) of the side chains of IBPP and the ring hydroxylation (6,7-diOH) were the main pathways in both chemical model systems and microsomes. The reaction profile of IBPP in the metalloporphyrin model system was most similar to that in the rat or human microsomal system (Figure 3).

Phencyclidine (PCP, an anesthetic agent)
The major in vitro metabolites were identified from human liver microsomes as the piperidine-4-hydroxyl and cyclohexane-4-hydroxyl derivatives. 7 TPPFeCl-catalyzed oxidation of PCP leads to the production of more of the piperidine ring 3-oxo compound than any other product if either iodosylxylene or cumene hydroperoxide is used as the oxidant (Figure 4). This product was quantified as the hydroxyl derivative after reduction by sodium borohydride because the piperidine-3-oxo compound is unstable.
Similarly to the in vitro metabolism, the TPPFeCl-Zn-acetic acid-O2 system was found to preferentially hydroxylate the cyclohexane and piperidine rings. Hydroxylation of the aromatic ring was restricted to the meta position.
Nicotine
The major in vivo metabolites of nicotine in human are cotinine and 3'-hydroxycotinine (Figure 5). 9,10 In the case of the TPPFeCl/PhIO system, only one product was obtained; this was shown to be cotinine. When using TPPMnCl as catalyst with PhIO, the yield of cotinine was substantially increased. Cotinine was also oxidized by the model systems. The TPPFeCl model only oxidized cotinine to 3-OH cotinine. When using the TPPMnCl model, one further product was obtained. MS and NMR data indicated that it is isomeric with the keto-amide, which has been isolated as a urinary metabolite from the Rhesus monkey.
Androgens
The biotransformation of androgens to estrogens is catalyzed in vivo by the microsomal cytochrome P450 aromatase. 12 CP450 is responsible for the in vivo oxidation of androst-4-en-3,17-dione to 19-hydroxyandrost-4-en-3,17-dione, androst-4-en-3,17,19-trione and estrone (Figure 6). 13

Acetaminophen (an analgesic and antipyretic drug)
N-acetyl-p-quinone-imine (NAPQI), the electrophilic metabolite of acetaminophen responsible for hepatic necrosis and renal damage, is believed to be an enzymatic oxidation product involving cytochrome P450 or/and systems like peroxidase or prostaglandin synthase. 15,16 As metabolic reactions occur in biologically neutral aqueous medium, it could be more interesting to develop model systems active under physiological conditions. Therefore, water-soluble metalloporphyrins and a water-soluble oxygen donor (KHSO5) were chosen for the model reaction. Four water-soluble metalloporphyrins were tested: they were cationic (TMPyP) or anionic (TSPP) and metallated either by Mn or Fe. The nature of the oxidation products of the model reaction depends on the time and on the pH of the reactions. TPPFeCl appeared as the most efficient catalytic system between pH 7 and pH 5; the decreased activity at pH 7 might be attributed to the generation of inactive µ-oxo iron porphyrin dimers. The TPPMnCl was far less active, and the pH effect was opposite. TMPyPMn was more efficient at pH 7, and the iron derivative more active at pH 5. In the early step of the oxidation, the conversion of acetaminophen to NAPQI is apparently quantitative; then the polymerization process occurs, giving nonstoichiometric amounts of NAPQI, 1,4-benzoquinone-monoimine (PQI) and 1,4-benzoquinone (BQ) (Fig. 7). 17

The major in vivo metabolite of tiagabine in human is 5-oxo-tiagabine, which is formed by oxidation of one of the thiophene rings of tiagabine (Figure 8). 18 The central double bond in tiagabine is hindered and relatively inert to epoxidation under a wide variety of reaction conditions. Treatment with H2O2, NaOCl or mCPBA did not yield significant amounts of epoxide. The TCl8βBr8PPFeCl and perfluoro-TPPFeCl / NaOCl, H2O2 systems were very effective in achieving oxidation of the thiophene ring. This method is amenable to large scale synthesis of the major human metabolite of tiagabine. 19

Oxidation of the aromatic ring of lidocaine, mainly catalyzed by P450s of the 1A subfamily, leads to phenol 3 (Figure 9). 21 The oxidation of lidocaine was carried out with various metalloporphyrin model systems (TCl8PPMnCl, TCl8PPFeCl, TCl8SPPMnCl / H2O2, PhIO, magnesium monoperoxyphthalate (MMP)). Most model systems oxidize lidocaine at its tertiary amine function, which is very reactive towards the electrophilic metal-oxo active species. Model systems also give products from: (i) further oxidation of 1; and (ii) combination of 1 with CH3CHO and other electrophilic products derived from the oxidation of 1. Oxidation of lidocaine at sites other than its amine function was obtained by performing the reaction in water at acidic pH with an oxidant and a stable Mn porphyrin soluble in water. These conditions lead to the formation of benzylic alcohol 2. Metabolite 3 was never detected in biomimetic reactions. Finally, using metalloporphyrin model systems, metabolites 4, 1 and 2 were prepared in relatively large amounts, which were sufficient to establish their structure. These systems are versatile and may be used in organic solvent and in water. A proper choice of their components led to conditions of selective formation of 1, 4, 5 or 2. 23
Odapipam (a dopamine D-1 receptor antagonist)
Four in vitro metabolites of odapipam were isolated from rat liver microsomes: N-desmethyl-odapipam, 1-hydroxy-odapipam and two isomers of 3'-hydroxy-odapipam. 24 Oxidation of odapipam was carried out with TF20PPFeCl. Cumene hydroperoxide was used as the source of exogenous oxygen. The products of the model reaction revealed complete identity with authentic reference samples of the major metabolites of odapipam previously isolated from the urine of rats or characterized from rat liver microsomal incubations. 25 The model reaction has been used to achieve N-demethylation, aliphatic hydroxylation and N-oxidation on odapipam (Figure 10).
Aminopyrine
The major oxidative metabolites of aminopyrine in human are N-formyl-aminopyrine, aminopyrine and N-methylaminopyrine. 27 Oxidation of aminopyrine was carried out with TCl8βCl8PPFe(SO3H)4. PhIO was used as the source of exogenous oxygen. The products of the model reaction revealed complete identity with authentic reference samples of the major metabolites of aminopyrine previously isolated from the urine of rats or characterized from rat liver microsomal incubations. The model reaction has been used to achieve N-demethylation, aliphatic and aromatic hydroxylation and N-oxidation on aminopyrine (Figure 11).

The N,N-dimethylalkylamine N-oxide 6 was efficiently demethylated using two kinds of metalloporphyrins, TPPFeCl and TPPMnCl, with additives (imidazole, 1,2,4-triazole, tetrazole) to afford the corresponding secondary amine 7, which had been proposed as one of the metabolites of OPC-31260 in the rat, dog and human. 29 The study demonstrated a simple preparation method for the secondary amine in high yield from the corresponding N,N-dimethylalkylamine N-oxide (Figure 12).

Denaverine hydrochloride (a spasmolytic drug)
Eleven metabolites of denaverine hydrochloride have been observed in rat studies. The identified metabolites are cleavage products of the ester and ether bond (11), of the oxidative N-demethylation (9), of cleavage of the ether bond and a further ring closure giving 12, of reductive cleavage of the ether bond resulting in 13 and 14, and of transesterifications generating ethyl benzilate, methyl and ethyl O-(2-ethylbutyl)benzilate. The main metabolites are 11 and 12 (Figure 13). 31 Different metalloporphyrins were used in non-aqueous (TF20PPMnCl, TF20PPFeCl) and aqueous (TF8SPPMnCl, TF8SPPFeCl) medium in combination with imidazole or pyridine as co-catalysts. Iodosylbenzene was used to compare the reaction profile with that of hydrogen peroxide. In the biomimetic systems, 11 and its methyl ester were only found in small quantities. This proves the possibility of O-dealkylations with the biomimetic method, but the cleavage of the ether bond is clearly not favoured. The absence of 13 and 14 in the biomimetic reactions is not surprising, because they are products of reductive transformations. Another metabolite, 9, discovered in rat and human, was also found in moderate yields. Furthermore, 8 and its methyl ester 10 were obtained in the biomimetic studies. From metabolism in rat only 10 and the ethyl ester of 8 are known.
Less than 1% of the applied dose of denaverine hydrochloride could be detected in metabolism studies in human. Besides unchanged denaverine hydrochloride, compounds 9 and 11 were detected. They are generated in the chemical model systems, too. 32

Diclofenac (an anti-inflammatory drug)
Metabolism of diclofenac in man leads to two hydroxylated products. The major metabolite results from 4′-hydroxylation of diclofenac, which is catalyzed by cytochrome P450 2C9. 33 The minor metabolite results from 5-hydroxylation of the most electron-rich aromatic ring of the drug, which is catalyzed by several cytochromes P450, including 3A4, 2C8 and 2C19 (Figure 14). 34 Oxidation of diclofenac was carried out with two kinds of metalloporphyrins (TCl8PPMnCl and TCl8PPFeCl) in combination with CH3COONH4 as co-catalyst. H2O2 or t-BuOOH were used as the source of exogenous oxygen.
Results showed that the electrophilic species derived from the reaction of the iron porphyrin with oxygen atom donors regioselectively oxidize diclofenac at position 5. This is not surprising, as position 5 is para to an NH substituent on the more electron-rich aromatic ring of diclofenac and should be the most reactive one towards oxidants. The mechanism of formation of the quinone-imine could either involve an N-oxidation of diclofenac with appearance of a cationic or radical species at position 5 within a quinone-imine type species and transfer of an OH group at this position, or direct 5-hydroxylation by an iron-oxo intermediate. Treatment of the quinone-imine with a reducing agent such as sodium borohydride quantitatively led to 5-OH-diclofenac. The exclusive formation of the quinone-imine in the TCl8PP(Mn or Fe)Cl-tBuOOH systems and the appearance of small amounts of 4′-OH-diclofenac in the TCl8PPMnCl-H2O2 system would indicate that different active species and mechanisms are involved in the two systems.

Etodolac (an anti-inflammatory agent)
The major primary oxidative metabolites of etodolac in man are 6-hydroxyetodolac, 7-hydroxyetodolac and 8-(1′-hydroxy)etodolac, whereas the major metabolite in rat is 4-oxoetodolac (Figure 15). 36 The biomimetic oxidation of etodolac was studied with halogenated and perhalogenated iron(III) porphyrins in combination with N-methylimidazole as co-catalyst. Iodosylbenzene was used as the source of exogenous oxygen. The reaction of etodolac catalyzed by TCl8PPFeCl and TPPFeCl gave 4-hydroxyetodolac and 4-oxoetodolac. In the presence of perhalogenated metalloporphyrins like TF20PPFeCl, TCl8βCl8PPFeCl and TCl8βBr8PPFeCl, the oxidation gave increased amounts of 4-hydroxy- and 4-oxoetodolac. Furthermore, the presence of strongly coordinating axial ligands like N-methylimidazole increased the yield of these metabolites. Although the aromatic ring hydroxylated and 8-ethyl hydroxylated metabolites are known, the pyrano ring hydroxylated metabolite, 4-hydroxyetodolac, is not detected in the metabolism of etodolac in human or rat. The formation of 4-hydroxyetodolac may be explained by abstraction of a hydrogen radical from the allylic 4-position of etodolac by the high-valent oxo-iron(IV) porphyrins and subsequent recombination of the etodolac radical with the hydroxyl radical or hydroxyl-iron(III) porphyrin present in the reaction medium ("oxygen rebound"). Further, the formation of 4-oxoetodolac can also be explained as over-oxidation of 4-hydroxyetodolac.
Carbofuran (an insecticide)
On the basis of radiolabeling studies, the major metabolite of carbofuran was identified as 3-hydroxycarbofuran, along with the N-hydroxymethyl and 7-hydroxy analogues as minor components. These compounds can be further transformed to the corresponding 3-keto, 3,7-dihydroxy, and 3-keto-7-hydroxy metabolites before conjugation and excretion (Figure 16). 38 Biomimetic oxidation of carbofuran was first carried out using mCPBA, NaOCl, and H2O2 in the presence of TCl8PPFeCl. Comparing the results with the metabolite profile measured in houseflies (Musca domestica), we found that, in contrast to the in vivo experiments, hydrolysis of the carbamate side chain took place in all systems. In the case of NaOCl, due to the alkaline medium, this hydrolysis became dominant. In addition to the main product (3-keto-7-hydroxycarbofuran), oxidation at the C3 center yielded the 3-keto metabolite as well. Oxidations associated with the simultaneous hydrolysis of the carbamate group led to the formation of products (3,7-dihydroxy and 3-keto-N-hydroxymethyl) derived by multistep transformations.
Oxidations catalyzed by the widely used TF 20 PPFeCl were carried out to improve the performance of our model. Use of mCPBA as the oxidant resulted in almost selective formation of the 3-keto-N-hydroxymethyl metabolite. Oxidation with H 2 O 2 , however, reproduced the in vivo profile rather well. The increased resistance of TF 20 PPFeCl against oxidative degradation may be responsible for the differences between the product distributions observed in the TCl 8 PPFeCl- and TF 20 PPFeCl-catalyzed reactions.
|
v3-fos-license
|
2021-08-31T13:16:06.271Z
|
2021-08-20T00:00:00.000
|
237358039
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/13/16/3303/pdf",
"pdf_hash": "348de26b6973d5d423e11fd8cc9d3daef00d7afe",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45596",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"sha1": "5622b5273a9b557836aaefecd7a6bc95a0a5b83f",
"year": 2021
}
|
pes2o/s2orc
|
Mapping Invasive Phragmites australis Using Unoccupied Aircraft System Imagery, Canopy Height Models, and Synthetic Aperture Radar
Invasive plant species are an increasing worldwide threat both ecologically and financially. Knowing the location of these invasive plant infestations is the first step in their control. Surveying for invasive Phragmites australis is particularly challenging due to limited accessibility in wetland environments. Unoccupied aircraft systems (UAS) are a popular choice for invasive species management due to their ability to survey challenging environments and their high spatial and temporal resolution. This study tested the utility of three-band (i.e., red, green, and blue; RGB) UAS imagery for mapping Phragmites in the St. Louis River Estuary in Minnesota, U.S.A. and Saginaw Bay in Michigan, U.S.A. Iterative object-based image analysis techniques were used to identify two classes, Phragmites and Not Phragmites. Additionally, the effectiveness of canopy height models (CHMs) created from two data types, UAS imagery and commercial satellite stereo retrievals, and the RADARSAT-2 horizontal-horizontal (HH) polarization were tested for Phragmites identification. The highest overall classification accuracy of 90% was achieved when pairing the UAS imagery with a UAS-derived CHM. Producer's accuracy for the Phragmites class ranged from 3 to 76%, and the user's accuracies were above 90%. The Not Phragmites class had user's and producer's accuracies above 88%. Inclusion of the RADARSAT-2 HH polarization caused a slight reduction in classification accuracy. Commercial satellite stereo retrievals increased commission errors due to decreased spatial resolution and vertical accuracy. The lowest classification accuracy was seen when using only the RGB UAS imagery. UAS are promising for Phragmites identification, but the imagery should be used in conjunction with a CHM.
Introduction
Invasive species are a global problem with many negative impacts. An estimated USD 120 billion in damage is done annually by these species in the United States alone [1]. Phragmites australis (Cav.) Trin. Ex Steud. ssp. australis (hereafter Phragmites) is an invasive plant that has impaired wetlands, shorelines, and roadsides in the Great Lakes region and across much of the United States. It is a perennial grass that grows in a range of habitats from aquatic to terrestrial. Phragmites was originally introduced to the United States in the form of contaminated ballast material in the 18th or 19th centuries [2]. The regulatory status of Phragmites varies from state to state. In the state of Michigan, Phragmites is listed as a restricted plant [3], while in the state of Minnesota, its status was recently changed from a restricted noxious weed to a prohibited control species [4]. In Minnesota, restricted noxious weeds are plants whose importation, sale, and transportation are illegal. Prohibited control species share the same legalities with restricted noxious weeds for preventing spread; however, these species are legally required to be controlled by removing established populations.
Phragmites grows rapidly with the potential to reach up to 5 m tall. After becoming established, Phragmites quickly forms dense monotypic stands by using aerial seed dispersal and extensive networks of rhizomes. Phragmites can quickly become the dominant species in invaded areas due to fast growth rates [5], salinity tolerance [6], and by taking advantage of anthropogenic impacts to wetlands [7]. Examples of anthropogenic impacts include: removal of vegetation, soil removal or deposition, fragmentation, and pollutant runoff. Once established, this plant has the ability to alter hydrology [8], change nutrient cycles [9][10][11], and ultimately lead to a loss of biodiversity [12,13]. Minnesota also has populations of a native Phragmites genotype (Phragmites australis Trin. Ex. Steud. ssp. americanus Saltonst., P.M. Peterson and Soreng). In comparison, this native genotype does not exhibit the same growth and aggressiveness as the invasive Phragmites genotype. Native Phragmites has a smaller stature and generally does not form dense monotypic stands. Invasive Phragmites is spreading quickly in Minnesota and will likely see an expansion of its distribution [14].
Successful management of invasive species, such as Phragmites, depends heavily on knowing the location and extent of infestations. Physical mapping of invasive species, in the form of in situ monitoring, may be costly and requires large amounts of time and specialized equipment to access the locations of infestations. This, coupled with the inter- and intra-annual changes in distributions, leads to challenges for land managers attempting to control these pests. Current knowledge of Phragmites locations in Minnesota has been dependent on in situ monitoring. Individuals have reported Phragmites throughout the state through the Early Detection and Distribution Mapping System (EDDMapS) [15]. Although a remarkable resource, large-scale in situ mapping efforts cannot be completed quickly enough to track the changing distribution of Phragmites. Moreover, these survey methods may be hindered by a lack of access to private lands, physical inaccessibility of remote locations, and individuals unwilling to report locations.
Remote sensing has the potential to identify Phragmites over large areas without the requirement of extensive fieldwork. Satellites are collecting swaths of imagery daily, which can be acquired at minimal cost due to increased platform accessibility (Landsat or Sentinel) or through affiliations (university, federal, etc.). Occupied aircraft and unoccupied aircraft systems (UAS) are other often-used remote sensing technologies for conservation purposes [16][17][18]. Although occupied aircraft and UAS require upfront costs, such as equipment purchases or contractual services, and often lack the spectral resolution found in satellite-based platforms, they allow for spatial resolutions of less than 10 cm and flexible data acquisition. Each of these data sources has trade-offs between spatial, spectral, and temporal resolutions [19][20][21]. Research is needed to assess the different data and classification methods for remote sensing to be a viable tool for Phragmites detection.
Recent research has tested the use of remote sensing data for species identification. Hyperspectral sensors have been used to identify different vegetation types [22][23][24], including Phragmites [25]. Pengra et al. (2007) [25] used the EO-1 Hyperion hyperspectral sensor to identify Phragmites. It was noted that the moderate spatial resolution of the Hyperion sensor (30 m) hindered the identification of linear patches of Phragmites due to mixing of vegetation types within a pixel. Bourgeau-Chavez et al. (2013) [26] used a similar spatial resolution (20-30 m PALSAR and Landsat). They utilized backscatter intensities from multiple acquisitions of synthetic aperture radar (SAR) imagery to identify Phragmites with high producer's accuracies. SAR has been used extensively for vegetation identification and monitoring [27][28][29][30][31][32] because the backscattered energy is indicative of the geometric and dielectric properties of surface features [33]. Spaceborne sensors with higher spatial resolutions have also been studied for wetland and species mapping [34][35][36][37][38][39][40]. Laba et al. (2008) [39] reported consistently high accuracies for identifying Phragmites using QuickBird imagery and a maximum-likelihood classifier. They attributed the monotypic nature of Phragmites and the resulting consistent spectral values for accuracies of above 70%. Another study used an object-based classification approach with WorldView-2 imagery to achieve classification accuracies of Phragmites above 90% [40]. The high accuracies seen when using high-resolution satellite imagery for mapping invasive species have been validated by others [41][42][43].
To date, there are multiple satellite platforms that offer spatial resolutions higher than two meters. However, some researchers have expressed the need for even higher spatial resolutions for tracking plant invasions [44,45]. In that regard, UAS have gained popularity as a tool for invasive species detection due to high spatial and temporal resolution, relatively low cost, and their ability to survey locations quickly. Many have explored the use of UAS for identifying and monitoring invasive Phragmites [46][47][48]. Samiappan et al. (2017) [46] indicated that a maximum likelihood classifier with the use of textural algorithms could identify Phragmites from UAS imagery with average accuracies above 85%. The use of texture to identify Phragmites from UAS imagery has been studied by others. Abeysinghe et al. (2019) [47] used image texture, normalized difference vegetation index (NDVI), and a canopy height model (CHM) to identify Phragmites within an estuary in Lake Erie. They noted the importance of using a CHM to identify Phragmites. Phragmites is taller than most wetland vegetation, and others have demonstrated the importance of using height for vegetation classification [49]. After patches had been identified, Tóth (2018) [48] demonstrated the potential for tracking and monitoring changes in distribution through the use of NDVI.
This study aimed to explore the capability of UAS for identifying Phragmites in Minnesota and Michigan using object-based image analysis. The impact of incorporating a CHM and SAR imagery within the classification was also explored. In this text, a CHM is used interchangeably with a normalized digital surface model (nDSM), and it is created by subtracting a digital elevation model (DEM) from a digital surface model (DSM). CHMs can be derived from lidar, SAR, and optical imagery. This study examined the use of both UAS-derived CHMs and stereo satellite-derived CHMs. We aimed to determine the effect of CHMs and SAR backscatter when identifying Phragmites from UAS imagery. The objectives of this study were: (i) determine whether Phragmites can be identified using only the spectral and textural information from three-band (i.e., red, green, and blue; RGB) UAS imagery; (ii) explore the effects of the addition of a canopy height model within the classification, both UAS-derived and stereo satellite-derived; (iii) evaluate whether the addition of SAR backscatter information improves the identification accuracy of Phragmites.
Study Area
Four Great Lakes coastal wetlands were used as study sites. Three study sites are located in Minnesota, USA (Figure 1), and one study site is located in Michigan, USA (Figure 2). Tallas Island (Figure 1A) is located roughly six miles southwest of Lake Superior in the St. Louis River Estuary. This 90-acre area consists of Tallas Island, a smaller island, and a portion of the Minnesota shoreline of the St. Louis River. Both islands are dominated by forest and shrub canopy. Shallow marsh plant communities are present on the borders of the islands and along the Minnesota shoreline. A road traverses the east-northeast edge of the study area, but the area otherwise has minimal development. Grassy Point Park (Figure 1B) is a 71-acre wetland complex located in Duluth, Minnesota along the St. Louis River. Grassy Point Park, located roughly four miles from Lake Superior, is the point where Keene Creek meets the St. Louis River. Grassy Point is heavily influenced by industrial land use. The park was the site of sawmilling operations that dumped wood waste into the estuary during the 19th century [50]. Currently, the complex is bordered by a railroad line, shipping channel, and a coal dock. The final study area is located in Saginaw Bay, Michigan (Figure 2). The study area consists of the Lake Huron shoreline and emergent vegetation extending out into Saginaw Bay. Saginaw Bay has been an area of active Phragmites management in the Great Lakes Basin. A majority of this study area is dominated by standing, dead Phragmites stems. Vegetation on the shoreline is dominated by tree and shrub species. Non-Phragmites herbaceous vegetation has limited distribution within the study area and is mainly contained to the forest-water boundary. All vegetation not on the shore was flooded during data acquisition. The Saginaw Bay site was included due to minimal UAS acquisitions over Phragmites in the St. Louis River Estuary. It provided another Great Lakes coastal wetland testing location with environmental characteristics similar to those found near the three Minnesota study sites.
Living patches of Phragmites are present in two of the four study areas (Figure 3). Three patches of Phragmites exist within Grassy Point Park (Figure 3A). One patch is located in the bottom center of the park, one patch is interspersed with alder trees at the end of the canal, and the third is located along the shoreline of a basin in the northwest corner of the park. The distribution of Phragmites within Grassy Point Park is contained to these three patches. Saginaw Bay, although dominated by dead Phragmites, contains five patches of living Phragmites (Figure 3B). There are no populations of Phragmites in either the Tallas Island or Hallett Dock study areas. Both Tallas Island and Hallett Dock were selected to test for commission errors.

UAS Imagery
UAS imagery was acquired over the study areas in a series of collections (Table 1). The first acquisition was completed by the Natural Resources Research Institute (NRRI) GIS Lab of the University of Minnesota Duluth in August of 2017. A Canon PowerShot S110 was used on a senseFly eBee UAS at 117 m above ground level (AGL) to achieve a spatial resolution of 4.2 cm. Images were collected using 70% endlap and sidelap. The University of Minnesota Remote Sensing and Geospatial Analysis Lab (UMN RSGAL) collected imagery over all three Minnesota study areas in August of 2018. A Microdrones MD4-1000 UAS equipped with a Sony A6300 camera was used to acquire images at roughly 121 m AGL. An endlap of 85% and a sidelap of 65% was used for these image acquisitions. The Grassy Point Park collection had a spatial resolution of 2.6 cm, the Hallett Dock collection had a spatial resolution of 2.5 cm, and the Tallas Island collection had a spatial resolution of 2.6 cm. The Saginaw Bay study area was collected by the Michigan Technological Research Institute (MTRI) in August of 2018 using a DJI Mavic Pro with the stock camera. Images were acquired at about 100 m AGL with 70% endlap and sidelap, resulting in a spatial resolution of 3.3 cm. The spectral resolution of each acquisition was limited to the red, green, and blue spectral bands. Physical ground control points were not used during any acquisition. All UAS images were georeferenced, mosaicked, and a 3D point cloud was created in Pix4Dmapper (v. 4.2.27) [51]. The derived UAS point clouds were cleaned using rapidlasso LAStools (v. 170313) [52]. Noise points were removed using the lasnoise tool, and all points below mean water level were dropped.

Lidar
Leaves were still present on the stems at the time of the lidar acquisitions, but the density of vegetation was lower than it is during peak growing season. Minnesota lidar elevations used the North American Vertical Datum of 1988 (NAVD88) referencing Geoid09, while the Michigan lidar elevations used NAVD88 referencing Geoid12b. After the removal of noise points, a DEM was created for each study site using rapidlasso LAStools [52]. Classified ground points were used exclusively for DEM creation except for Grassy Point Park. A vendor error misclassified the ground points of the southeast island in Grassy Point Park as water. Points classified as water were used in the creation of the Grassy Point Park DEM to correct for this error. The resulting DEMs had a spatial resolution of 1 m.
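The below-water filtering step is compact enough to illustrate. The sketch below (assuming laspy 2.x; the file name and mean water level are placeholders, not values from this study) drops all points beneath the water surface, mirroring what the lasnoise cleaning plus the elevation cut accomplished here.

```python
# Minimal sketch, assuming laspy 2.x; file name and water level are placeholders.
import laspy

MEAN_WATER_LEVEL = 183.6  # metres; illustrative value only

las = laspy.read("uas_point_cloud.las")
keep = las.z >= MEAN_WATER_LEVEL   # boolean mask over all points
las.points = las.points[keep]      # retain only points at or above the water surface
las.write("uas_point_cloud_cleaned.las")
```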
Synthetic Aperture Radar
Images from RADARSAT-2, a Canadian Space Agency sensor operating at a frequency of 5.4 GHz, were used in this study. Selected images were collected over the Minnesota study areas on 3 August 2017 and 8 August 2018 with incidence angles of 29.97 degrees. A single image was acquired over the Saginaw Bay study area on 11 August 2018 with an incidence angle of 25.42 degrees. Acquisitions were collected in the wide fine quad mode. The images were radiometrically calibrated, speckle filtered using a 3 × 3 Lee filter [53], converted to sigma naught (σ⁰), and geometrically corrected. Final image resolution was 9.45 m. The horizontal send-horizontal receive (HH) polarization was selected for identifying Phragmites due to the stronger response of double-bounce scattering in flooded vegetation [54,55]. All processing of the RADARSAT-2 imagery was completed using the Sentinel-1 Toolbox (v. 6.0.6) [56] in the ESA Sentinel Application Platform (SNAP) (v. 6.0.7) software [57].
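Operationally, the speckle filtering was performed in SNAP, but the idea behind a Lee filter is simple enough to sketch. The NumPy version below is a minimal additive-noise form of the filter, with the scene-wide mean of the local variances standing in for the noise variance; it illustrates the principle and is not SNAP's implementation.

```python
# Minimal additive-noise Lee filter sketch; not the SNAP implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=3):
    """Local linear MMSE estimate over a size x size window."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img * img, size)
    local_var = local_sq_mean - local_mean ** 2
    noise_var = local_var.mean()  # crude scene-wide noise estimate
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```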
Commercial Satellite Stereo Retrievals
Digital surface models (DSMs) were created by the University of Minnesota Polar Geospatial Center from DigitalGlobe, Inc. imagery. The panchromatic bands of stereo image pairs were processed using the surface extraction from TIN-based search-space minimization (SETSM) algorithm [58]. The resulting DSMs had a spatial resolution of 50 cm. The selection of DSMs for this study was based on the acquisition date of the underlying stereo images. Priority was given to DSMs created during the growing season of the same year as the UAS imagery. Hallett Dock and Tallas Island used a single DSM created from a stereo pair of WorldView-2 images collected on 27 August 2018. The Saginaw Bay study area used a DSM created from a stereo pair of WorldView-2 images collected on 19 August 2018. No suitable stereo imagery was available for the 2017 UAS acquisition over Grassy Point due to persistent cloud cover.
The SETSM algorithm outputs products in ellipsoidal height using the WGS84 ellipsoid. A transformation from ellipsoidal height to orthometric height was completed using GDAL (v. 3.0.2) [59] to match the lidar. The DSM for Hallett Dock and Tallas Island was then adjusted using published vertical geodetic monuments. A linear regression was created between the geodetic monuments and the estimated DSM values. The regression equation was applied to the transformed DSM through the Raster Calculator tool in ESRI's ArcMap 10.7 software [60]. Few published vertical geodetic monuments were present around the Saginaw Bay study area. Instead, road points were selected from a lidar DEM to be used in the linear regression.
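The monument-based adjustment amounts to a simple linear regression between sampled DSM heights and published elevations, applied back to the full raster. A minimal NumPy sketch follows; the array names and values are hypothetical, not the study's data.

```python
# Sketch of the regression adjustment; all numbers are illustrative.
import numpy as np

dsm_at_monuments = np.array([182.9, 185.3, 190.1, 177.8])  # DSM values at monuments
monument_heights = np.array([183.4, 185.9, 190.5, 178.4])  # published elevations

slope, intercept = np.polyfit(dsm_at_monuments, monument_heights, deg=1)

# Apply to the whole (already geoid-transformed) DSM, as the Raster Calculator did:
dsm = np.array([[182.9, 183.1], [184.0, 185.2]])  # stand-in raster
adjusted_dsm = slope * dsm + intercept
```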
UAS Point Cloud Adjustment
For the creation of CHMs, the UAS point clouds needed to be adjusted to match the vertical reference system of the lidar. This adjustment was done in Pix4Dmapper [51] through the use of manually digitized vertical control points. Easily discernible locations selected in the UAS imagery had their elevation values gathered from the lidar-derived DEMs. All selected vertical control points were over pavement, gravel, or bare earth. However, floating debris was used in the Saginaw Bay study area due to minimal bare earth coverage. Points identified through floating debris used an elevation of mean water level. Five vertical control points were used for the Grassy Point acquisitions, seven points were used for the Hallett Dock acquisition, six points were used for the Tallas Island acquisition, and six points were used for the Saginaw Bay acquisition. All points were distributed around each wetland complex and manually identified in at least three separate images. Pix4Dmapper [51] has three options when defining a vertical coordinate system: (1) use one of three reference geoids (EGM1984, EGM1996, EGM2008); (2) specify a constant geoid height above the WGS84 ellipsoid; (3) use an arbitrary vertical coordinate system. Selecting the arbitrary vertical coordinate system results in Pix4Dmapper [51] adjusting the vertical values of the point cloud to match the vertical values of the vertical control points. This approach allows for both the co-registration of the point cloud to the lidar DEM and the adjustment of the point cloud to match the vertical coordinate system of the vertical control points. In this case, the UAS point cloud was co-registered to the lidar DEM and adjusted to NAVD88. Geoid09 was used for the Minnesota study sites, and Geoid12b was used for the Michigan study site. DSMs were then created using rapidlasso LAStools [52]. The spatial resolution of each DSM matched the spatial resolution of the corresponding optical imagery.
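Pix4Dmapper's arbitrary vertical coordinate system solves this adjustment internally, but the core idea can be sketched as matching control-point elevations with a vertical shift. The snippet below (laspy 2.x assumed; all numbers are placeholders) illustrates only that idea, not the software's actual adjustment.

```python
# Sketch only: a constant vertical shift derived from control points.
import numpy as np
import laspy

uas_z = np.array([184.1, 183.9, 185.2])    # control-point heights in the UAS cloud (placeholder)
lidar_z = np.array([183.6, 183.4, 184.8])  # same points from the lidar DEM (placeholder)
offset = np.mean(lidar_z - uas_z)          # mean vertical discrepancy

las = laspy.read("uas_point_cloud.las")
las.z = las.z + offset                     # shift the whole cloud toward the lidar datum
las.write("uas_point_cloud_navd88.las")
```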
Classification
An object-based image analysis (OBIA) classifier was selected for this study. OBIA has been used frequently for wetland mapping and invasive species identification [61][62][63][64][65][66]. This classification approach has the advantage of incorporating shape, size, and contextual information in addition to spectral and textural information, which produces better approximations of real-world features [67,68]. Additionally, data inputs are not confined to use of the same sensor or resolution. The ability to include multiple data types into object-based classifications frequently results in higher mapping accuracy compared to pixel-based classifications [67,68].
Some researchers who have used OBIA for identifying vegetation performed one segmentation, then allowed a machine-learning algorithm to classify the objects thereafter [18,47,63]. Doing so does not allow for the incorporation of contextual information within the OBIA classification. Our process differs in that we used an iterative approach without a machine-learning classifier to identify Phragmites. This means that objects were merged and further segmented to delineate patch boundaries. Iterative approaches allow for the inclusion of expert knowledge into the classifier as well as reducing over-segmentation and under-segmentation of objects [69].
Classifications were completed using the Trimble eCognition Developer (v. 9.4) software [70]. Three object-based classifiers, or rule sets, were built to answer the study questions. The first classifier used only the optical imagery to identify Phragmites, the second classifier utilized a CHM with the optical imagery, and the third classifier included the RADARSAT-2 HH polarization with the CHM and optical imagery. Each of the three classifiers followed the same pattern. First, new temporary raster layers were created, followed by the removal of No Data areas. The first temporary raster layer was the visible-band difference vegetation index (VDVI). VDVI uses the red, green, and blue bands to calculate a vegetative health index similar to NDVI and ranges from −1 to 1 [71]. VDVI was calculated as VDVI = (2 × Green − Red − Blue) / (2 × Green + Red + Blue). Additionally, a CHM was created by subtracting a lidar-derived DEM from a DSM. The CHM creation step was only present in the two classifiers that use the height of vegetation to identify Phragmites. No Data values were removed using a series of quadtree segmentations.
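In raster form, both temporary layers reduce to a few array operations. A minimal NumPy sketch (band and surface-model arrays assumed already loaded and co-registered):

```python
# Minimal sketch of the two temporary layers; arrays assumed co-registered.
import numpy as np

def vdvi(red, green, blue):
    """Visible-band difference vegetation index, ranging from -1 to 1."""
    r, g, b = (band.astype(float) for band in (red, green, blue))
    num = 2.0 * g - r - b
    den = 2.0 * g + r + b
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

def canopy_height_model(dsm, dem):
    """CHM: surface heights minus bare-earth heights."""
    return dsm - dem
```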
Data layers were initially segmented using the multi-resolution segmentation algorithm at a scale parameter of 30, along with a shape parameter of 0.3 and compactness parameter of 0.5. The scale parameter determines the size of the resulting objects, e.g., larger values for scale parameter produce larger image objects and smaller values produce smaller image objects. The shape parameter determines the influence of shape and color for image object creation. Larger values for shape will result in shape having more influence than color for object creation. Smaller values for shape will result in color having a higher weight for image object creation. The compactness parameter determines the smoothness or compactness of the image objects. Higher values of compactness result in more compact objects, while lower values will result in smoother objects. Parameter values used in this study were determined through trial and error.
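eCognition's multi-resolution segmentation is proprietary, so its exact algorithm cannot be reproduced here. As a loose open-source analogy for how a scale-like parameter governs object size, the sketch below uses scikit-image's SLIC superpixels; it is a stand-in to build intuition about the scale parameter, not the algorithm used in this study.

```python
# Analogy only: SLIC superpixels, not eCognition's multi-resolution segmentation.
import numpy as np
from skimage.segmentation import slic

rgb = np.random.rand(200, 200, 3)  # stand-in for the orthomosaic, scaled to [0, 1]

coarse_objects = slic(rgb, n_segments=300, compactness=10.0)   # fewer, larger objects
fine_objects = slic(rgb, n_segments=3000, compactness=10.0)    # more, smaller objects
```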
Initial image objects were assigned to either vegetation or not vegetation classes based on the mean VDVI of the entire scene. Objects below the mean VDVI were placed into the not vegetation class, while objects above the scene mean were placed into the vegetation class. The objects in the vegetation class were merged and re-segmented using the multi-resolution segmentation algorithm at a scale parameter of 10. A shape parameter of 0.3 and compactness parameter of 0.5 were used for the segmentation. Phragmites was initially identified from the vegetation class using texture, specifically the grey-level co-occurrence matrix (GLCM) homogeneity algorithm. Texture has been used by others to identify Phragmites [46,47] and other invasive plants [72][73][74][75]. The GLCM homogeneity measure weights each co-occurrence of grey levels by how similar the two levels are: pixel pairs sharing the same grey level receive a weight of one, while the weight decreases as the difference between grey levels grows [76]. All layers were used in the texture calculation. Objects with a rough texture, or a low GLCM homogeneity value, were assigned to the Phragmites class.
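The same texture measure is available in scikit-image (version 0.19 or later assumed for the graycomatrix spelling), which makes the thresholding logic easy to illustrate. The sketch below quantises an 8-bit object to 32 grey levels to keep the matrix small; the level count and the threshold are illustrative choices, not values from the rule set.

```python
# Sketch: per-object GLCM homogeneity; levels and threshold are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_homogeneity(patch_8bit, levels=32):
    """Mean GLCM homogeneity of an image object (uint8 input)."""
    q = (patch_8bit // (256 // levels)).astype(np.uint8)  # quantise to `levels`
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").mean()

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in image object
is_rough = glcm_homogeneity(patch) < 0.4                  # rough texture -> candidate Phragmites
```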
Removal of canopy gaps and refinement of object shapes were completed after the initial identification of Phragmites. It is unlikely that vegetation within a patch of Phragmites is another species due to the density at which Phragmites grows. Therefore, all vegetation objects completely surrounded by the Phragmites class were reassigned as Phragmites. The objects identified as Phragmites were further refined through the use of the GLCM contrast algorithm. This algorithm is similar to GLCM homogeneity, but it highlights objects with textures different from their neighbors [76]. Finally, objects with small areas were removed. Phragmites patches are likely to be multiple square meters in size. Anything below that size was deemed improbable to detect. The resulting classifications had two final classes, Phragmites and Not Phragmites.
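In raster terms, the gap-filling and minimum-area rules map onto two standard morphological operations. A minimal sketch follows; the 4 m² floor and the 2.6 cm pixel size are placeholders for whatever thresholds a given scene warrants.

```python
# Sketch of gap filling and small-object removal; thresholds are placeholders.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects

phrag = np.zeros((200, 200), dtype=bool)  # stand-in candidate-Phragmites mask
phrag[50:120, 60:140] = True
phrag[80:90, 90:100] = False              # a canopy gap inside the patch

phrag = binary_fill_holes(phrag)          # areas enclosed by Phragmites -> Phragmites
pixel_area = 0.026 ** 2                   # m^2 per pixel at 2.6 cm GSD
min_pixels = int(4.0 / pixel_area)        # drop objects smaller than ~4 m^2
phrag = remove_small_objects(phrag, min_size=min_pixels)
```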
All OBIA classifiers were trained on the 2018 UAS collection of Grassy Point Park because it was the first available acquisition with known Phragmites patches. Creation of each rule set was done through trial and error. Four classifications were completed for the Hallett Dock, Tallas Island, and Saginaw Bay study areas (Tables 2 and 3). This included: (1) strictly RGB UAS imagery; (2) RGB UAS imagery and a CHM created using a UAS-derived DSM; (3) RGB UAS imagery and a CHM created using a stereo satellite-derived DSM; (4) RGB UAS imagery, RADARSAT-2 HH polarization, and a UAS-derived CHM. Three classifications were completed using the 2017 Grassy Point Park collection (Table 3): (1) strictly RGB UAS imagery; (2) RGB UAS imagery and a CHM created using a UAS-derived DSM; (3) RGB UAS imagery, RADARSAT-2 HH polarization, and a UAS-derived CHM.

Table 2. Layers and classification parameters used for each image classification scheme within eCognition. The first column identifies the classifier. The second column represents whether the VDVI layer was used to identify vegetation. Column three represents the use of a CHM and the data source used for its creation. The remaining columns identify the parameters used to classify Phragmites, organized based on the order of operations within the classifier. Scale parameters for the multi-resolution segmentations (MRS) are provided. All multi-resolution segmentations used a shape parameter of 0.3 and a compactness parameter of 0.5. The maximum threshold for the multi-threshold segmentation (MTS) was 4 m and the minimum threshold was 2 m. Each scheme used the RGB bands of the UAS imagery. Schemes 2-4 included a CHM, while only Scheme 4 included the RADARSAT-2 HH polarization. The CHM and RADARSAT-2 HH polarization are used in the calculation of GLCM homogeneity and contrast if the layer is present in the classifier.
Differences between Rule Sets
Although each rule set follows the same scheme, there were slight differences between them. The biggest difference was the inclusion of a CHM in the second and third classifiers. This allowed for partitioning of vegetation into different height classes. A multi-threshold segmentation was used to assign objects with a mean height from two to four meters as potential Phragmites. Remaining vegetation objects with a mean height below two meters were merged then segmented using a multi-resolution segmentation at a scale parameter of 100. The multi-resolution segmentation used a shape parameter of 0.3 and compactness parameter of 0.5. Vegetation objects touching the potential Phragmites class were reassigned as potential Phragmites. The potential Phragmites class was further refined following the procedure described above. This allowed for the edges of Phragmites patches to be included. Two additional algorithms were used following the GLCM homogeneity algorithm to shape Phragmites objects: max difference and brightness. The max difference algorithm looks at the differences in all layers between neighboring objects. Brightness corresponds to how bright an object appears, e.g., white objects will be brighter than black objects. The max difference and brightness algorithms were used to fill gaps in the canopy. These two algorithms were not included in the first classifier due to overestimating Phragmites without a height threshold. Lastly, for the third classifier, the mean RADARSAT-2 HH backscatter intensity was used to remove incorrectly identified Phragmites objects. This was the only difference between the second and third classifiers.
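The height partition at the heart of the CHM-based classifiers is a one-line raster operation once a CHM exists. A minimal sketch using the 2-4 m band described above (the CHM array here is a random stand-in):

```python
# Sketch of the 2-4 m height partition; the CHM is a random stand-in.
import numpy as np

chm = np.random.rand(200, 200) * 6.0                  # heights in metres
potential_phragmites = (chm >= 2.0) & (chm <= 4.0)    # 2-4 m vegetation band
short_vegetation = chm < 2.0                          # merged and re-segmented later
```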
The second and third classifiers included the surface models and the RADARSAT-2 HH polarization layers when calculating texture. Surface models were included because height is representative of surface texture due to shadowing from taller vegetation. The RADARSAT-2 HH polarization was included because backscatter intensity is directly related to structural properties of surface scatterers [33]. Leaf orientation, stem density, and other physical characteristics of plants impact the backscatter intensity, and these physical characteristics are also related to surface texture.
Accuracy Assessment
Phragmites patch locations in the St. Louis River Estuary were determined through survey data provided by the Great Lakes Indian Fish and Wildlife Commission (GLIFWC), verified populations through EDDMapS, and populations reported to the Minnesota Aquatic Invasive Species Research Center (MAISRC) (https://www.maisrc.umn.edu/phragmitesmap (accessed on 1 September 2019)). The boundaries of the three identified patches in Grassy Point Park were manually digitized using visual interpretation and the UAS imagery. No populations were reported in Tallas Island or Hallett Dock. Visual interpretation of both the Hallett Dock and Tallas Island imagery corroborated the absence of Phragmites within these two study areas. Saginaw Bay did not have point data corresponding to Phragmites locations. Personal communication with the provider of the Saginaw Bay UAS imagery confirmed the presence of Phragmites within the study area. The Saginaw Bay UAS imagery was visually interpreted, and the living patches of Phragmites were manually digitized. Visual interpretation of the UAS imagery was used to confirm no regrowth of Phragmites within the standing, dead Phragmites stalks.
For the Grassy Point Park and Saginaw Bay study sites, 50 points were randomly generated within the digitized Phragmites patch boundaries, and 80 points were randomly generated outside of the digitized Phragmites patch boundaries (Figure 4). The number of assessment points in the Phragmites class was selected due to the minimal area of Phragmites within the study areas, following the recommendation of Congalton (1991) [77] and Congalton and Green (2019) [78] for the minimum number of points per class. For study areas with no known Phragmites, a total of 130 points were randomly generated within the extent of the study area (Figure 4). Each randomly generated point was then examined by a Phragmites expert who visually interpreted the UAS imagery to determine whether the point fell on Phragmites. Two Not Phragmites validation points in Hallett Dock were incorrectly identified as Phragmites by the expert conducting the validation. Physical surveying of the area by the GLIFWC showed no Phragmites in Hallett Dock, so those two points were discarded. A number of points generated in known Phragmites patches fell within canopy gaps. This resulted in fewer than 50 assessment points in the Phragmites class for both Grassy Point Park and Saginaw Bay (Table 4). However, this was deemed acceptable due to the small total area of Phragmites in each of these study sites. Increasing the number of assessment points for the Phragmites class in this situation would have resulted in points being located too closely to other assessment points, as well as potentially inflating accuracy. A confusion matrix was created for each classification, and the individual producer's and user's accuracies for each class were calculated [78]. Only the accuracy of the Not Phragmites class in both the Tallas Island and Hallett Dock study areas was calculated. Accuracies for the Phragmites class were not calculated for Hallett Dock and Tallas Island due to the absence of Phragmites within those two sites. To account for this, the total area of misclassified vegetation was calculated for Hallett Dock and Tallas Island to provide another accuracy metric. Total area of misclassified vegetation was calculated by dividing the area of the Phragmites class by the total area of the study site. Combined producer's and user's accuracy of the Phragmites class was calculated for three of the four classifications. A confusion matrix was used to calculate the combined accuracy for the three classifications, which was created by combining the accuracy assessment points from each of the four study sites. The combined accuracy for the classification utilizing a satellite-derived CHM (Scheme 3, Tables 2 and 3) was not calculated because no classification was completed over Grassy Point Park.
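User's and producer's accuracies follow directly from the confusion matrix: user's accuracy divides each diagonal count by its row (mapped-class) total, and producer's accuracy divides it by its column (reference-class) total. A minimal sketch with illustrative counts (not the study's numbers):

```python
# Sketch of the accuracy metrics; the counts below are illustrative only.
import numpy as np

def users_producers(cm):
    """Per-class user's/producer's accuracy; rows = mapped, cols = reference."""
    cm = np.asarray(cm, dtype=float)
    users = np.diag(cm) / cm.sum(axis=1)      # 1 - commission error
    producers = np.diag(cm) / cm.sum(axis=0)  # 1 - omission error
    return users, producers

cm = np.array([[34, 3],     # mapped Phragmites: 34 correct, 3 commission
               [11, 82]])   # mapped Not Phragmites: 11 omitted, 82 correct
users, producers = users_producers(cm)
```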
RGB Spectral Bands and Textural Algorithms for the Identification of Phragmites
The identification of Phragmites using the spectral and textural information from the RGB UAS imagery was tested in the four study areas (Scheme 1, Tables 2 and 3). User's and producer's accuracies were calculated for each class (Phragmites, Not Phragmites) using the independently validated points (Table 5). The resulting classification using only the RGB UAS imagery is shown in Figure 5. Classifications using only the spectral and textural information from the RGB imagery (Scheme 1, Tables 2 and 3) exhibited low user's accuracies for the Phragmites class (Table 6). The extent of Phragmites in each study area was significantly overestimated. All of the Phragmites validation points were correctly identified in Grassy Point Park, but 32 of the 85 Not Phragmites validation points were incorrectly identified as Phragmites. This resulted in a user's accuracy of 58% for the Phragmites class. Saginaw Bay had a user's accuracy of 48%. Thirty-three of the 37 Phragmites validation points were correctly identified at Saginaw Bay, but 36 Not Phragmites validation points were incorrectly classified as Phragmites. A similar response of widespread misidentification was seen in the Hallett Dock and Tallas Island study areas. Roughly 28% of the Hallett Dock study site and 33% of the Tallas Island study site were misclassified as Phragmites. Forty-three of the 128 Not Phragmites validation points at Hallett Dock were incorrectly classified as Phragmites. Sixty-three of the 130 Not Phragmites validation points at Tallas Island were incorrectly classified as Phragmites. All study areas had user's accuracies above 90% for the Not Phragmites class. The combined user's accuracy across all four sites for the Phragmites class was low. Consistent misidentification led to a user's accuracy of 31% for the Phragmites class. However, since most vegetation was classified as Phragmites, the Phragmites class had a producer's accuracy of 95%.
Incorporating CHMs for Phragmites Identification
The four study areas were classified with the addition of a UAS-derived CHM (Figure 6) using Scheme 2 (Tables 2 and 3). Each classification was evaluated using the independently validated points (Table 7). Classifications using a UAS-derived CHM had higher accuracies than the classifications without a CHM (Table 8). Splitting the vegetation into different height thresholds allowed for trees, shrubs, and shorter vegetation to be classified correctly more often. The Phragmites class for Grassy Point Park had a user's accuracy of 92%, with three Not Phragmites validation points being incorrectly identified as Phragmites. Eleven Phragmites validation points were incorrectly identified as Not Phragmites, resulting in a producer's accuracy of 76% for Grassy Point Park's Phragmites class. Inaccuracy of patch extent caused most omission errors. All but one of the known Phragmites patches were identified in Grassy Point Park; the unidentified patch is interspersed with Alnus spp. and was unlikely to be correctly identified due to the tree canopy cover. Commission errors were greatly reduced even without the correct classification of all Phragmites validation points. The user's accuracy for the Not Phragmites class was 88%, and the producer's accuracy was 97%. The Not Phragmites class of Tallas Island had a user's and producer's accuracy of 100%. It is important to note that vegetation was still misidentified as Phragmites, potentially leading the accuracies at the Hallett Dock and Tallas Island locations to be falsely high. Including a UAS-derived CHM reduced the total misclassified area of Hallett Dock to 0.7% of the study site. Of the vegetation in Tallas Island, 1.1% was misidentified as Phragmites. A different result was seen at Saginaw Bay. Including a UAS-derived CHM reduced the classification accuracy. The Phragmites class had a user's accuracy of 100% and a producer's accuracy of 3%. A single Phragmites validation point fell within the estimated Phragmites extent. All Not Phragmites validation points were correctly identified in Saginaw Bay.
Combined user's accuracy for the Phragmites class significantly improved (Table 8). Four Not Phragmites validation points were incorrectly classified as Phragmites, resulting in a combined user's accuracy of 90%. However, the omission of Phragmites at Saginaw Bay resulted in a combined producer's accuracy of 43% for the Phragmites class.
Three of the four study areas were classified with the addition of a satellite-derived CHM (Figure 7) using Scheme 3 (Tables 2 and 3). Classifications with a satellite-derived CHM resulted in slightly lower accuracies than the UAS-derived CHM (Tables 9 and 10). Hallett Dock had a user's accuracy of 100% and a producer's accuracy of 91% for the Not Phragmites class. Twelve of the 128 Not Phragmites validation points were incorrectly classified as Phragmites. An increase in misidentified vegetation was seen using a satellite-derived CHM. The total misclassified area of Hallett Dock was 9.1% when using a satellite-derived CHM compared to the 0.7% when using a UAS-derived CHM. Commission errors were higher at forest boundaries, which is a result of the lower spatial resolution of the CHM. Tallas Island had comparable results to Hallett Dock when using a satellite-derived CHM. The user's accuracy for the Not Phragmites class was 100%, and the producer's accuracy was 83%. The total area of vegetation misclassified as Phragmites was 12.9% of the study site. Twenty-two of the 130 Not Phragmites validation points were incorrectly classified as Phragmites. Similar to Hallett Dock, commission errors were frequent around forest boundaries. Saginaw Bay had a slight increase in classification accuracy when using a satellite-derived CHM compared to using a UAS-derived CHM. Phragmites was correctly identified in four of the five patches, but the shapes of the objects did not match the Phragmites extent. Six Not Phragmites validation points were incorrectly classified as Phragmites, resulting in a user's accuracy of 57% for the Phragmites class. Twenty-nine of the 37 Phragmites validation points were incorrectly classified as Not Phragmites, resulting in a producer's accuracy of 22%. The Not Phragmites class had a user's accuracy of 75% and a producer's accuracy of 94%.

Table 9. Validated assessment points from each study area for the classification using the RGB UAS imagery and a satellite-derived CHM. Points are split by class (Phragmites, Not Phragmites) and whether they were correctly identified (Correct, Incorrect).
Including SAR for Phragmites Identification
Each study area was classified with the addition of a UAS-derived CHM and the RADARSAT-2 HH polarization (Figure 8) using the rule set described above (Scheme 4, Tables 2 and 3). The satellite-derived CHM was not tested with the RADARSAT-2 HH polarization. Classifications incorporating SAR performed comparably to classifications using only the UAS-derived CHM (Scheme 2, Tables 2 and 3) at the Minnesota study sites (Tables 11 and 12). The differences were the removal of misidentified objects and changes to the object shapes. Four Not Phragmites validation points were misclassified as Phragmites, resulting in a user's accuracy of 90% for the Grassy Point Park Phragmites class. Eleven of the 45 Phragmites validation points were omitted, resulting in a producer's accuracy of 76% for the Grassy Point Park Phragmites class. The Not Phragmites class had a user's accuracy of 88% and a producer's accuracy of 95%. Accuracies for Hallett Dock and Tallas Island did not improve with the inclusion of the RADARSAT-2 HH polarization. Hallett Dock had a user's accuracy of 100% and a producer's accuracy of 99% for the Not Phragmites class. Only one Not Phragmites validation point was incorrectly classified as Phragmites. The total area of misclassified vegetation at Hallett Dock was 0.7% of the study area. Tallas Island followed the same trend of minimal change. The user's and producer's accuracy of the Not Phragmites class was 100%. The total area of misclassified vegetation was 1.1% of the study site. Including the RADARSAT-2 HH polarization resulted in the lowest classification accuracy for Saginaw Bay. No Phragmites in the study area was identified correctly, resulting in a producer's accuracy of 0%. The user's accuracy of the Not Phragmites class was 72%, and the producer's accuracy was 100%. Overall, results from this study suggest that the inclusion of the RADARSAT-2 HH polarization does not necessarily increase the classification accuracy of Phragmites (Table 12). Five Not Phragmites validation points were incorrectly classified as Phragmites, resulting in a combined user's accuracy of 87%. However, the omission of Phragmites at Saginaw Bay resulted in a combined producer's accuracy of 41% for the Phragmites class.
Discussion
This study used OBIA to identify Phragmites from three-band UAS imagery and tested the functionality of CHMs and SAR within those classifications. Based on the results in four study locations, accurate identification of Phragmites from UAS imagery is unlikely without the use of a CHM. This is clearly demonstrated by the low producer's and user's accuracies of the Phragmites class when not using a CHM. Visual analysis of the resulting classifications showed that trees, shrubs, and Typha spp. were highly subject to commission errors in each of the four study sites. Phragmites in the training location exhibited rough textures due to shadowing, which corresponds to low GLCM homogeneity values. Misclassification of woody vegetation and Typha spp. may be due to their textural values being similar to those of Phragmites.
Use of CHMs for Phragmites Identification
Inclusion of a CHM allowed for a significant increase (>20%) in classification accuracy for three of the four study sites. The increased accuracy is attributed to the exclusion of wetland trees, shrubs, and shorter wetland vegetation from the final Phragmites class. It is possible that the CHM had further impacts. The CHM was used with the RGB imagery when calculating GLCM homogeneity and contrast. Texture values of objects changed, which resulted in objects being included or excluded from the final Phragmites class. Although the separation of vegetation by height resulted in the most significant change to classifications using a CHM, the texture values calculated with the addition of a CHM were important for the classification and refinement of Phragmites objects. In comparison, using texture without a CHM resulted in broad misclassification. Further research is needed to determine the role of textural algorithms for Phragmites classification.
Although the inclusion of a CHM provided a large increase in classification accuracy in three study sites, results from the Saginaw Bay study site demonstrate that classification accuracy is tied to the quality of the CHM. Each study site was flown without ground control points while using single-frequency GNSS equipment. Errors in positional estimates of the UAS will propagate through the workflow, lowering the accuracy of the subsequent UAS data. Phragmites frequently invades locations where ground control points are difficult to establish. A UAS with post-processing kinematic (PPK) or real-time kinematic (RTK) capabilities would increase positional accuracy [79][80][81] and decrease CHM errors. CHM accuracy is also dependent on the quality of the lidar used to create the DEM. Density of Phragmites patches may inhibit the lidar laser pulse from striking the ground or water surface below, resulting in returns being incorrectly classified (i.e., vegetation returns classified as ground returns). This issue may be circumvented where Phragmites is growing as emergent vegetation. A single parameter, mean water level, for example, could be used to create the CHM. However, this method would not be appropriate elsewhere. More testing is needed to determine how to achieve accurate DEMs in locations that Phragmites invades, as well as the impact of a PPK or RTK enabled UAS on classification accuracy.
The results seen at Saginaw Bay were attributed to the potential CHM errors described above. Portions of live and dead Phragmites stands can be seen in the lidar DEM. Additionally, the GNSS instruments on the DJI Mavic Pro may not be accurate enough for estimating emergent vegetation height. Visual assessment of the UAS-derived DSM showed significant underestimation of vegetation height. This was present even when accounting for potential errors in the lidar-derived DEM. For example, most vegetation was estimated to be under one meter tall. More research is needed to confirm whether a DJI Mavic Pro can accurately estimate the height of Phragmites and non-Phragmites herbaceous vegetation.
Field conditions and Phragmites patch characteristics during data acquisition are another critical aspect to CHM quality. The density of a Phragmites patch directly determines what can be captured in the CHM. The potential to capture individual stalks of Phragmites in a CHM is unlikely. This is due to the increased difficulty of point matching at the top of a Phragmites stalk because of its small size and frequent movement in wind. Higher density Phragmites patches provide more options for point matching, which will lead to a more accurate CHM. The potential issues with proper point matching can be compounded during flooded conditions due to the movement of water. Future research should prioritize methods, data, and field conditions that produce high-quality CHMs.
Satellite-derived CHMs had classification accuracies that were slightly lower than the UAS-derived CHMs. Saginaw Bay was the only study area that had a higher classification accuracy with the satellite-derived CHM than the UAS-derived CHM, but this is attributed to the accuracy of the GNSS equipment on the DJI Mavic Pro. Commission errors were more abundant at the boundaries of tree lines and low vegetation than when using a UAS-derived CHM. This is attributed to the lower spatial resolution of the satellite-derived CHMs. Larger pixel size inhibits a distinct boundary between the tree canopy and low vegetation canopy. Additionally, commission errors may be attributed to the vertical accuracy and adjustment method of the satellite-derived CHMs. Others have noted errors of multiple meters when using commercial satellite stereo retrievals to estimate canopy height [82][83][84]. Errors of this magnitude will directly impact the detection of Phragmites.
Examples of this error can be seen in both the Hallett Dock and Tallas Island study sites. Large sections of herbaceous vegetation were correctly classified in these two sites when using a UAS-derived CHM, but the same areas were incorrectly classified when using a satellite-derived CHM. This issue highlights the need for the evaluation of DSMs created using the SETSM algorithm.
Large-area Phragmites identification remains promising despite the commission errors seen when using a satellite-derived CHM. Knowing that a CHM is necessary for accurate Phragmites identification, a major issue with large-area identification is having a CHM from the same growing season as the optical imagery. Phragmites patches can expand rapidly during a growing season [85], new patches can establish between years, and active management can reduce patch size and change patch shape. Outdated CHMs would result in inaccurate classifications, potentially causing unaccounted-for or incorrectly identified Phragmites patches. These errors could lead to mismanagement of resources apportioned to control Phragmites. Large-area lidar acquisitions are infrequent, and vegetation structure and abundance will change after the acquisition. Satellite-derived CHMs allow for classifications to use CHMs from the same day as the optical imagery. In addition to up-to-date CHMs, optical satellites frequently collect more than just the RGB spectral bands, which will assist in the discrimination of vegetation [86]. Large-area identification of Phragmites using satellite-derived CHMs is the subject of our ongoing studies.
SAR for Phragmites Identification
Use of the RADARSAT-2 HH polarization did not improve classification accuracy at any of the four study sites. These results are similar to Millard and Richardson (2013) [87], who found no improvement when pairing SAR with lidar derivatives for wetland classification. Tallas Island and Hallett Dock had no increase in accuracy or decrease in commission errors when incorporating the HH polarization. Commission errors at Grassy Point were reduced to only three objects being misclassified as Phragmites. By contrast, omission errors were significantly higher in Saginaw Bay, where no Phragmites was correctly identified. This is attributed to different environmental conditions in the St. Louis River Estuary and Lake Huron. However, these environmental conditions, which are not exclusive to Saginaw Bay, could be present in wetlands in any location. Higher σ⁰ values were detected in the Saginaw Bay study site compared to the other study sites. A majority of the vegetation in Saginaw Bay was flooded, while the vegetation in the Minnesota study sites was not. The HH polarization is more sensitive to double-bounce scattering, and flooded, tall vegetation facilitates double-bounce scattering with perpendicular water-stem surfaces [32]. Although the CHM caused most omission errors in Saginaw Bay, the failure of the classifier in Saginaw Bay when incorporating the HH polarization was caused by the omission of Phragmites patches due to higher HH backscatter. The differences between Grassy Point and Saginaw Bay show that environmental conditions have great influence over SAR backscatter. The dielectric properties of all scattering surfaces and the geometry of those surfaces determine the intensity and polarization of the returned signal [33]. Future work should normalize SAR variables to account for environmental differences between locations.
Results from including the HH polarization demonstrate that using a single polarization from a single date is not effective for differentiating Phragmites from other wetland vegetation. Different SAR methods than those used in this study may prove beneficial for Phragmites mapping. For example, others have employed multiple dates of SAR imagery when identifying Phragmites [26,88]. A multi-temporal approach may provide the ability to track changes in wetland vegetation structure across the growing season. Phragmites has more aboveground biomass than other wetland species [89,90] and fast rates of growth [5]. A multi-temporal approach could differentiate Phragmites from other species due to seasonal growth characteristics. Others have used polarimetric SAR (PolSAR) to identify different wetland types [91][92][93]. PolSAR decompositions are suitable for differentiating surface features based on their scattering behaviors [94]. Phragmites may have unique scattering characteristics relative to other wetland vegetation due to its density of stems, large amount of aboveground biomass, and the orientation of large, overlapping leaves. Further testing is needed to determine whether other SAR methods can improve the classification accuracy of Phragmites.
Validation
Validation of this study was done entirely through visual interpretation of the UAS imagery. Physical mapping of the St. Louis River Estuary had been completed by the GLIFWC, and other individuals provided point data on the location of known Phragmites populations, but the randomized assessment points were not physically validated. Physical validation of each assessment point would allow for higher confidence in the accuracy of each point. Furthermore, the number of Phragmites validation points in Grassy Point Park and Saginaw Bay was low, which likely reduced the precision at which class accuracies could be calculated. The size and shape of the Phragmites patches in the two study areas were not conducive to the creation of more validation points. A higher point density could have been generated, but the generally narrow patch shape would result in validation points being located too closely together. This brings into question how validation should be completed for classifications where the aim is to identify a cover class that covers a very small percentage of an already small study area. For invasive species mapping, this problem is especially relevant because a land manager's goal is to identify invasive species' extent before it becomes too costly to manage, i.e., when populations are small. Further research is needed to determine best practices for this unique problem.
Conclusions
Species identification using remote sensing is a difficult task, but the growth characteristics of Phragmites offer an avenue for its identification. CHMs can be used to highlight the unique growth characteristics of Phragmites and differentiate it from other vegetation types. This was demonstrated when using a UAS-derived CHM with the RGB UAS imagery to achieve reasonable identification accuracy. However, improvement is still needed for remote sensing to play a major role in the detection and control of this species. Additional classification techniques and data sources should be explored.
In addition, it is important to remember that remote sensing is a field based on creating data for others to use. Maps provided for invasive species management need to be accurate. Land managers will not tolerate widespread omission of Phragmites patches or misidentification of desirable native wetland species as Phragmites. This study showed that Phragmites can be successfully identified in one wetland while simultaneously being subject to numerous commission errors in another wetland. Future work should test classifiers in multiple study areas to determine their true capability for Phragmites identification. This need will become increasingly important if remote sensing is to be a viable tool for resource specialists coordinating Phragmites management.
Dramatically Enhanced Spin Dynamo with Plasmonic Diabolo Cavity
The applications of spin dynamos, which could potentially power complex nanoscopic devices, have so far been limited owing to their extremely low energy conversion efficiencies. Here, we present a unique plasmonic diabolo cavity (PDC) that dramatically improves the spin rectification signal (an enhancement of more than three orders of magnitude) under microwave excitation; further, it enables an energy conversion efficiency of up to ~0.69 mV/mW, compared with ~0.27 μV/mW without a PDC. This remarkable improvement arises from the simultaneous enhancement of the microwave electric field (~13-fold) and the magnetic field (~195-fold), which cooperate in the spin precession process to generate photovoltage (PV) efficiently under ferromagnetic resonance (FMR) conditions. The interplay of the microwave electromagnetic resonance and the ferromagnetic resonance originates from a hybridized mode based on the plasmonic resonance of the diabolo structure and Fabry-Perot-like modes in the PDC. Our work sheds light on how more efficient spin dynamo devices for practical applications could be realized and paves the way for future studies utilizing both artificial and natural magnetism in many disciplines, such as the design of efficient wireless energy conversion devices, high-frequency resonant spintronic devices, and magnonic metamaterials.
In 2007, Y. S. Gui et al. 1 first proposed and demonstrated the spin dynamo, which provides a new and interesting way to generate direct current via spin precession to locally power nanoscopic devices and to enable future applications such as wireless energy conversion. Compared with spin-driven currents in semiconductors 2 , spin dynamos are based on ferromagnetic materials 1 or spin-torque diodes 3,4 , which feature a much higher current/power ratio coupled with a much smaller internal resistance. However, the reported works are limited to sophisticated waveguide couplings (and therefore to wires), such as coplanar waveguides (CPWs) 5,6 , microstrip lines 7,8 , and bias tees 3,[9][10][11][12] , to in-couple radio-frequency or microwave electromagnetic waves. Free-space direct illumination has rarely been studied, despite its excellent suitability for wireless energy conversion. One main reason may be that the wireless conversion efficiency achieved so far is too low to allow the spin dynamo to generate any discernible power.
In past decades, metamaterials or artificial resonant structures have emerged as an agile and promising way to manipulate electromagnetic fields at a deep subwavelength scale, leading to enhanced light-matter and light-spin interactions. For instance, a variety of intriguing new phenomena have been observed in plasmon-spin hybrid systems, such as the large enhancement of Faraday rotation via plasmonics 13 , plasmonics enhanced magneto-optical effects 14, 15 , and magneto-plasmonics [16][17][18][19] . Furthermore, T. Grosjean et al. 20 have theoretically predicted a diabolo resonant antenna that should exhibit a large magnetic field enhancement reaching as high as 2700-fold. Metamaterials therefore offer an appealing solution to boost the coupling between electromagnetic waves and spins and hence an enhanced spin dynamo can be expected when exploiting them for this application 21 . For a spin dynamo based on spin rectification under ferromagnetic resonance (FMR) conditions 22 , simultaneous enhancements of both the electric and magnetic fields as well as the tunability of their mutual phase are anticipated. This is, however, nontrivial since electric and magnetic field enhancements from a pure plasmonic resonance typically occur at spatially different locations with a stubborn 90° phase deviation, as suggested based on the viewpoint of the equivalent LC resonance.
In this work, we combined a modified diabolo antenna (MDA) with a photonic structure and utilized the hybrid resonance to improve the spin dynamo performance. We demonstrate that the spin dynamo rectification signal can be improved by more than three orders of magnitude and that an energy conversion efficiency of up to ~0.69 mV/mW can be achieved thanks to the simultaneous enhancement of the microwave electric field (~13-fold) and the magnetic field (~195-fold) with a relative phase distinct from 90°. Our work provides an innovative way to optimize spin dynamo performance and holds potential for general applications in the form of wireless high-frequency spintronic devices such as magnetic tunnel junctions 15,23,24 , spin-torque diodes 3,25 , spin pumping 26,27 , and spintronic microwave sensors 28,29 . Figure 1(a) shows the real sample and 1(b) shows a schematic drawing of the PDC's designed metal/insulator/metal (MIM) sandwich structure; its top consists of a MDA (with dimensions of L × L mm 2 in the x-z plane), with a copper strip of width w and two pairs of copper strips (each pair separated by a gap of width g) evaporated onto a polyethylene terephthalate (PET) substrate. The layer sandwiched in the middle was glass (with dielectric constant ε = 6.8), while the bottom mirror layer was a flat piece of Al foil (see Fig. 1(b)). The spin dynamo device (inset of Fig. 1(b)) was located 60 μm below the MDA. The MDA structure provides plasmon resonance with both localized e- and h-fields around the centre, while the MIM tri-layer structure offers Fabry-Perot-like photonic resonance. The resulting hybridized mode serves to enhance both the electric and the magnetic field.
Results
As the spin dynamo requires both electric and magnetic field enhancements, Fig. 1(c) shows the enhancement of the product of the electric field (z direction) and the magnetic field (x direction) at the monitor in the x-z plane, calculated using the finite-difference time-domain (FDTD) simulation method. We can see that e z ·h x is maximal in the centre region below the centre of the MDA. The ferromagnetic microstrip sample of Permalloy Ni 80 Fe 20 (Py) for the spin dynamo is placed within this centre region (as shown in Fig. 1(b)), with the spin-rectifying photovoltage (PV) being measured via two parallel electrodes (Fig. 1(a) and (b)).
Dramatically enhanced PV of spin dynamo. We began our experiment by applying a DC magnetic field at an angle of θ = 135° (the angle at which the largest PV is typically obtained); Fig. 2(a-c) shows the results for configurations with a PDC with a 3-mm-thick cavity, a MDA without the flat Al foil at the bottom, and a bare structure (without an MDA and without Al foil) with only a spin dynamo device, respectively. From the typical PV spectra, we can see that the normal FMR in these three conditions consistently follows Kittel's formula, shown in the two-dimensional spectrum (red dashed lines in Fig. 2(a)-(c)), which can be attributed to the intrinsic properties of the magnetic material (Py). The spin dynamo PV induced by the spin rectification effect in the three configurations (as shown in Fig. 2(d)) near the FMR condition shows a relatively small and non-resonant amplitude of ~0.17 μV (red solid circles) with the bare structure and ~17 μV (blue solid squares) with the MDA, resonant at a frequency of ~9.4 GHz. More remarkably, the PV with the PDC was as large as ~432 μV (at ~9.4 GHz, green solid triangles), constituting an enhancement factor of ~2541, which is much larger than in the pure plasmonic case with only a MDA (~100-fold enhancement). The conversion ratio (defined as the PV, A PV , divided by the microwave excitation power, P MW ) reached 0.69 mV/mW with the PDC, a record efficiency for wireless power conversion in a spin dynamo.
To evaluate the respective contributions of the electric field e z and the magnetic field h x , the bolometric effect 30 was examined in the different configurations, but without an external DC magnetic field. As pointed out in ref. 30, the resistance change (ΔR) of the Py strip caused by the bolometric effect under microwave irradiation satisfies ΔR ∝ P 0 τ/C, where P 0 is the absorbed microwave power, τ is the thermal energy relaxation time, and C is the absolute heat capacity of the spin dynamo (i.e. of the Py stripe). Meanwhile, the absorbed power, and hence the resistance change, scales with the square of the microwave electric field (ΔR ∝ e z 2 ). Therefore, we can calculate the enhancement of e z by measuring ΔR. In our experiment, a lock-in amplifier with an applied sine current (3.13 kHz, 0.17 μA) was used to measure the resistance of the Py strip, which was pulsed with microwaves (9.4 GHz) for a period of 42 s. As shown in Fig. 2(e), the resistance change (ΔR) jumps from 0.18 Ω (red solid circles) for the bare structure up to 30.02 Ω (green solid triangles) for the PDC and up to 4.44 Ω (blue solid squares) for the MDA. These resistance changes correspond to a ~13-fold electric field enhancement (ξ e ) for j z or e z , which is too small to explain the observed PV enhancement (~2541×). Consequently, the additional enhancement can be ascribed to the enhancement of the microwave magnetic field (ξ h ), which is approximately evaluated to be ~195 (= 2541/13) at the resonant frequency (~9.4 GHz). Compared with the case of the pure MDA structure, where ξ e ≈ 5 and ξ h ≈ 20 (= 100/5) at the resonant frequency, the PDC structure shows a larger enhancement of both the electric and the magnetic field.
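The bookkeeping connecting ΔR to the field-enhancement factors is compact enough to check directly: since ΔR scales as the square of the electric field, ξ e is the square root of the ΔR ratio, and ξ h follows by dividing the total PV enhancement by ξ e. A minimal numeric sketch using the values reported above:

```python
# Minimal check of the enhancement bookkeeping reported in the text.
import math

dR_bare, dR_pdc = 0.18, 30.02      # ohms, bolometric resistance change
pv_bare, pv_pdc = 0.17e-6, 432e-6  # volts, PV near FMR at ~9.4 GHz

xi_total = pv_pdc / pv_bare                 # ~2541x overall PV enhancement
xi_e = math.sqrt(dR_pdc / dR_bare)          # ~13x  (dR scales as e_z**2)
xi_h = xi_total / xi_e                      # ~195x inferred magnetic field
print(f"xi_total={xi_total:.0f}, xi_e={xi_e:.1f}, xi_h={xi_h:.0f}")
```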
Line shape of FMR caused by relative phase in PDC. To take account of the spectral line shape near the FMR, we then analysed the spin rectification effect more quantitatively 22 . Taking the time average ⟨⟩ of the electric field integrated along the z direction, we obtain a photovoltage PV ∝ ⟨j(t)·ΔR(t)⟩, where ΔR is the resistance change caused by the anisotropic magnetoresistance (AMR) effect, j is the microwave current in the Py strip induced by the microwave e-field, and the time dependence of ΔR(t) follows from m, the non-equilibrium magnetization driven by the microwave h-field. Figure 3(a-c) shows the PV spectra near the FMR, fitted with a combination of Lorentzian and dispersive components, PV(H) = A L ΔH² / [(H − H 0 )² + ΔH²] + A D ΔH (H − H 0 ) / [(H − H 0 )² + ΔH²], where A L and A D are the amplitudes of the Lorentzian and dispersive components, respectively, ΔH is the line width, and H 0 is the resonant magnetic field. We define the amplitude of the PV at the FMR to be A PV = √(A L ² + A D ²), as shown in Fig. 2(d). From Fig. 3, we find that the line shape of the FMR is quite different at the angle of θ = 135° for the three structures: for the bare structure the FMR line shape is closer to the dispersive line shape, for the MDA structure it is closer to the Lorentzian line shape, while for the PDC structure it lies between the dispersive and Lorentzian line shapes. To understand the origin of the different line shapes, it should be noted that the spin rectification effect leads to different amplitudes of the Lorentzian (A L ) and dispersive (A D ) components depending on the relative phases Φ x , Φ y , and Φ z between the microwave current j (or electric field e z ) and the x, y, and z components of the microwave magnetic field h. Keeping only the dominant h x term (in our configuration h x ≫ h y , h z ), A L and A D take the form 31 A L ∝ ΔR j z A xx h x sin Φ x sin(2θ) cos θ and A D ∝ ΔR j z A xx h x cos Φ x sin(2θ) cos θ, where ΔR and θ are the resistance change caused by the AMR effect and the angle between H and the Py stripe, respectively. As already mentioned, j z is the microwave current along the Py strip, and the pre-factors A xx , A xy , and A yy are real numbers related to the Py properties. From these expressions it can be seen that for the case Φ x = Φ y = Φ z = 0 the dispersive component A D dominates the line shape, leading to an antisymmetric shape, while for Φ x = Φ y = Φ z = π/2 the Lorentzian component A L dominates, leading to a symmetric shape 31 . The θ-dependent experiments (conducted by changing the orientation of H relative to the Py strip in the x-z plane) show the variation of A L and A D (hollow/solid circles/squares/triangles in Fig. 3(d)-(f)); both A L and A D are found to follow the sin(2θ)·cos(θ) dependence on the external DC magnetic field angle. In Fig. 3(d)-(f) it can be observed that the PV signal undergoes a transition from A D -dominance to A L -dominance after introducing the MDA, but the proportion of A D increased again with the PDC configuration. That is, the line shape transformed from a dispersive to a Lorentzian shape (as shown in Fig. 3(a,b)), while the line shape in the PDC configuration was a mix of both (shown in Fig. 3(c)). Through curve fitting we can calculate the relative phase when using a MDA (Φ x = −71.5°) or a PDC structure (Φ x = −59.9°); these values differ greatly from that for the bare structure (−6.37°, Fig. 3(d)). They agree reasonably well with the theoretical predictions that the relative phase for a pure plasmonic MDA structure should be close to −π/2, while it should be 0 for the bare structure (plane-wave or photonic resonance case).
The distinctive value of Φ x = −59.9° for the PDC configuration, which diverges from both −π/2 and 0, suggests that the dramatically enhanced PV arises from both plasmonic and photonic resonances. Meanwhile, A PV reaches its maximum at θ = 45°, 135°, 225°, and 315°, as shown in Fig. 3(d)-(f) (gray lines).
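To make the fitting procedure concrete, the sketch below fits the two-component line shape to a PV(H) trace and extracts A PV and a relative phase. It is a minimal sketch on synthetic data, and the phase extraction via arctan2(A L , A D ) simply mirrors the sin/cos roles of Φ x in the expressions reconstructed above.

```python
# Minimal sketch: fit PV(H) near FMR with Lorentzian + dispersive components.
import numpy as np
from scipy.optimize import curve_fit

def pv_lineshape(H, A_L, A_D, H0, dH):
    lor = dH**2 / ((H - H0)**2 + dH**2)           # symmetric component
    dis = dH * (H - H0) / ((H - H0)**2 + dH**2)   # antisymmetric component
    return A_L * lor + A_D * dis

# H (field sweep) and pv (measured photovoltage) are assumed inputs;
# here we generate synthetic "data" for a self-contained example.
H = np.linspace(0.05, 0.25, 400)                  # tesla, hypothetical sweep
pv = pv_lineshape(H, 300e-6, -200e-6, 0.15, 0.01)

popt, _ = curve_fit(pv_lineshape, H, pv, p0=[1e-4, 1e-4, 0.15, 0.01])
A_L, A_D, H0, dH = popt
A_PV = np.hypot(A_L, A_D)                 # amplitude at FMR
phi_x = np.degrees(np.arctan2(A_L, A_D))  # relative phase, since A_L ~ sin(phi)
```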
Fabry-Perot-like photonic resonance of PDC. To verify the contribution from the photonic-like resonant mode, we systematically varied the thickness S of the cavity and examined the enhancement of the PV signal. Figure 4(a) shows the two-dimensional plot of the PV spectrum as a function of the microwave frequency (8-12 GHz) and of the thickness of the cavity (2-18 mm). It is obvious that the enhanced PV band displays a systematic evolution as S increases. To determine the physical origins of these resonances, the dotted curves in Fig. 4(a) trace the expected Fabry-Perot-like modes, which follow S = (N + 1/2)·λ/(2√ε) = (2N + 1)·c/(4√ε·f), where N is the order of the cavity mode; c and ε are the velocity of light and the dielectric constant of the glass, respectively; and S is the thickness of the PDC. Note that the electromagnetic field near the sample surface or the MDA should be close to its maxima for the spin-rectifying PV we detected (the hot spots in Fig. 4(a) show the maximum PV); a straightforward physical model is demonstrated in Fig. 4(b). The difference between this photonic-like resonant mode and the traditional Fabry-Perot mode 32 is the additional 1/2 term, which accommodates the hybrid mode; the cavity thicknesses for the first orders are λ/(4√ε), 3λ/(4√ε), 5λ/(4√ε), and 7λ/(4√ε), as shown in Fig. 4(b).
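A quick numeric check of this mode condition at the stated ε = 6.8 (a sketch, using the mode formula as reconstructed above):

```python
# Cavity thicknesses S_N at which the hybrid Fabry-Perot-like mode sits at f.
import math

c, eps, f = 3.0e8, 6.8, 9.4e9          # m/s, glass permittivity, Hz
lam = c / f                             # free-space wavelength, ~32 mm
S = [(2*N + 1) * lam / (4 * math.sqrt(eps)) * 1e3 for N in range(4)]
print([f"{s:.1f} mm" for s in S])       # ~['3.1 mm', '9.2 mm', '15.3 mm', '21.4 mm']
```

The N = 0 value (~3 mm) is consistent with the 3-mm-thick cavity that produced the ~9.4 GHz resonance in Fig. 2.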
Discussion
In summary, we proposed a novel PDC structure composed of a MDA and a flat metal layer, which significantly enhances the spin dynamo rectification signal (by more than three orders of magnitude) and achieves a high energy conversion efficiency of ~0.69 mV/mW. We experimentally obtained an enhancement factor of ~2541× for the PV, ~195× for the microwave magnetic field, and ~13× for the microwave electric field at the resonant frequency (9.4 GHz). In addition, the PDC structure can tune the relative phase of the e- and h-fields over a wide range by design, thanks to its hybrid mode, which originates from two resonant effects: the plasmon resonance provided by the MDA structure, for which the relative phase is close to π/2, corresponding to a Lorentzian FMR line shape; and the Fabry-Perot-like photonic resonance offered by the MIM tri-layer structure, which is quite different from the conventional Fabry-Perot cavity mode and which our model explains well. Our work opens a door for future studies utilizing both artificial and natural magnetism, and further improvements can be considered in two respects. Firstly, an MIM structure can achieve perfect absorption 33 of light, which offers the possibility of dramatically enhancing spin-relevant effects by increasing the energy conversion efficiency of the above-mentioned devices; the plasmonic diabolo cavity structure could thus be developed into a perfect metamaterial absorber. Secondly, because the anisotropic magnetoresistance (AMR) effect of a single permalloy strip is not efficient (<1%), much stronger spintronic rectification effects such as giant magnetoresistance (GMR, ~70% 34 ), tunneling magnetoresistance (TMR, ~600% 35 ), or colossal magnetoresistance (CMR, ~127000% 36 ) can be adopted for future applications. The broad range of prospects for research in artificial and natural magnetism promises many exciting possibilities for the realization of efficient wireless energy conversion and wireless control devices in the future.
Methods
Sample fabrication. In the experiment, standard optical lithography and magnetron sputtering methods were used; a MDA copper structure with a thickness t copper of 500 nm was fabricated on a 60-μm-thick PET substrate. It was then integrated with a ferromagnetic permalloy (Py or Ni 80 Fe 20 ) microstrip sample (typically 600 μm × 20 μm × 40 nm) with gold electrodes (thickness t gold of 200 nm) supported by a glass substrate. The bottom consisted of a flat metal (Al) layer, completing the PDC device.
Spin rectification measurement setup. To measure the spin rectification photovoltage of Py, an external DC magnetic field was applied in the x-z plane at an angle θ with respect to the Py strip (z direction). A microwave generator (Agilent E8257D), whose amplitude was modulated by a square wave with a period of 0.12 ms, emitted an 8-12 GHz band electromagnetic wave through a horn antenna with its polarization along the z direction to normally illuminate the sample (i.e., it propagated along the y direction). We detected the microwave spin-rectification PV generated in the Py strip using a lock-in amplifier (Stanford SR830) triggered by the square wave. All the measurements were performed at room temperature.
Speed Discrimination in the Apparent Haptic Motion Illusion
When talking about the Apparent Haptic Motion (AHM) illusion, temporal parameters are the most discussed for providing the smoothest illusion. Nonetheless, studies rarely address the impact of changing these parameters for conveying information about the velocity of the elicited motion sensation. In our study, we investigate the discrimination of velocity changes in AHM and the robustness of this perception, considering two stimulating sensations and two directions of motion. Results show that participants were better at discriminating the velocity of the illusory motion when comparing stimulations with larger differences in the actuators' activation delay. Results also show limitations for the integration of this approach in everyday-life applications.
Introduction
Haptic illusions are a major tool to enhance tactile stimulation in a large variety of domains [12,11]. They are an interesting topic of research as they make it possible to convey rich sensations with rather simple stimulation techniques. One major illusion is the Apparent Haptic Motion (AHM) illusion, which aims at conveying a sensation of continuous movement along the skin when only discrete points are stimulated. In his original work, Burtt [1] found that two distinct vibrotactile stimuli elicited in close proximity on the skin with overlapping actuation were not perceived as localized sensations but rather as a single moving vibration.
Studies regarding AHM were conducted on different body parts [13,8] to test its robustness and understand the essential parameters driving this sensory illusion. The illusion was demonstrated to be effective in conveying directional cues and proved to be robust in both 1D and 2D patterns [14], which suggests a potential for providing directional information during navigation tasks. Besides spatial parameters, i.e., the position of the activation points and their distance to each other, temporal aspects have also been studied, so as to deepen the understanding of the illusion mechanisms [9,15]. In this respect, some studies showed that the temporal parameters, i.e., activation delays between motors, are actually not strongly constrained. Indeed, Stimuli Onset Asynchrony (SOA) and Duration of Signals (DoS) values different from those proposed by Sherrick and Rogers [16] can also efficiently elicit this illusion [8].
Speed perception and impacting parameters
Perception and discrimination of tactile speed has been studied in a large variety of conditions, such as textures and vibrations [2]. These works mainly involved experiments with a surface sliding under the fingertip, creating contact and skin stretch. Hence, the literature provides information on the influence of textures and vibrations on speed discrimination for different velocity ranges, going, e.g., from 33 to 120 mm.s −1 [3,6]. The results from [3,2] show that smooth surfaces are systematically felt as sliding slower than textured surfaces, even when presented with an identical sliding velocity. It was found that the Pacinian corpuscles have a crucial role in the discrimination of tactile speed [3], which explains the impact of material-induced vibrations on speed perception.
Speed perception of the apparent motion illusion
As previously mentioned, various studies confirmed the presence of the AHM illusion at different distances between the stimulation points (the positions of the actuators) and with different SOA and DoS, deviating from the parameters indicated by Sherrick and Rogers [16]. Interest has been devoted to investigating various parameters regarding the spatial and temporal dimensions of the AHM illusion [9,13], enabling the creation of more complex and informative stimulations. Although other works have focused on determining the optimal actuation timing for conveying the most natural apparent motion, to the best of our knowledge, no study has focused on the perception of speed and duration as a source of information in AHM. Understanding the parameters that make two stimulations easily distinguishable could indeed be relevant for tactile communication or navigation. For example, the speed perception of the apparent motion could help represent a moving obstacle or a safe direction to follow.
Contribution
The goal of this paper is to investigate the perception of the velocity conveyed during the AHM illusion. To go further, we also tested the robustness of this perception based on how the stimulation is provided. Indeed, while historically the AHM is conveyed with vibrations, our previous study [10] suggested that the illusion can also be conveyed by intervals of mechanical pressure. To explore that possibility, we conducted a study with two main objectives. First, we determine and compare the threshold of velocity discrimination for the apparent motion using both vibrations and pressure intervals ("taps") on the skin. Secondly, we study the impact of these modes on the participants' confidence when answering.
User study
This study aims to investigate the ability to discriminate a velocity change in the AHM illusion. The study has been approved by Inria's ethics committee (COERLE Dornell - Saisine 513).
Experimental set-up and stimulation modes
The experimental setup is shown in Fig. 1. It is composed of three custom actuators inspired by the work of Duvernoy et al. [5], with a coil as a stator and two magnets glued together in their repulsive position as a mover to increase the magnetic field. The actuators are mounted onto a curved 3D-printed hand-rest, positioned in a comfortable bend for the participants. The signals for the three actuators are first created with Matlab and then processed through a National Instruments USB-6343 series controller, which sends them to three amplifiers that deliver a 6.5 V signal to the motors, corresponding to a force of approximately 0.4 N exerted on the hand. This last measure was recorded during a previous study, in which we characterized the force exerted by these actuators with a Nano17 force sensor (ATI, USA). The two magnets of the electromagnetic actuators move upward and downward along the center of the coil, depending on the electrical tension passing through it. This design makes it possible to implement two stimulation modes: (i) a vibratory mode, where the actuators vibrate at 120 Hz, and (ii) a "tap" mode, where the magnets elicit a single impact to the user's skin. The vibrating frequency for (i) was set based on previous studies investigating the apparent haptic motion illusion with vibrotactile stimuli, such as [17]. Fig. 2 shows the signals imparted to the three motors in the two stimulation modes. In both modes, asynchronous overlapping stimulations are sent to the same three locations on the hand (see Fig. 1). While the duration of activation of the actuators is fixed, we seek to change the time delay between the actuators' activations, also called Stimuli Onset Asynchrony (SOA). Based on pilot tests and [7], we set DoS = 220 ms and the reference SOA = 110 ms in both stimulation conditions. In the following experiment, we tested SOA values of 90%, 80%, 70%, 60% and 50% of the reference SOA, making the comparison SOA values [99, 88, 77, 66, 55] ms.
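A minimal sketch of how the two stimulation modes described above could be synthesized is given below. The DoS (220 ms), reference SOA (110 ms), and 120 Hz vibration frequency are taken from the text; the sampling rate and the exact ramp shape are assumptions.

```python
# Minimal sketch: overlapping activation signals for three actuators.
import numpy as np

FS = 10_000                 # Hz, assumed sampling rate
DOS, SOA = 0.220, 0.110     # s, duration of signal and stimulus onset asynchrony

def actuator_signals(mode="vibration", soa=SOA, n_actuators=3):
    total = soa * (n_actuators - 1) + DOS
    t = np.arange(int(total * FS)) / FS
    signals = np.zeros((n_actuators, t.size))
    for i in range(n_actuators):
        on = (t >= i * soa) & (t < i * soa + DOS)
        env = np.zeros_like(t)
        # assumed: triangular ramp up/down envelope over the activation window
        env[on] = 1 - np.abs((t[on] - (i * soa + DOS / 2)) / (DOS / 2))
        if mode == "vibration":
            signals[i] = env * np.sin(2 * np.pi * 120 * t)  # 120 Hz carrier
        else:                                # "tap": single ramp impact
            signals[i] = env
    return t, signals

t, s = actuator_signals("tap", soa=0.077)   # e.g. one of the comparison SOAs
```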
Experimental design
Stimulations are conveyed between the middle finger and the proximal part of the palm, as shown in Fig. 1. We consider the two stimulation modes presented in Sec. 2.1, vibratory and tap, as well as two directions of motion, proximal-to-distal (orange-to-green in Fig. 1) and distal-to-proximal (green-to-orange).
The two modes (vibrations or taps) are tested in two blocks, carried out one after the other. A block is thus made of only vibratory or only tap trials. Each block is composed of 80 trials, in which the changing parameters are the SOA and the direction of the motion. A trial is a sequence of the reference signal with a SOA = 110 ms and then a comparison signal with a different SOA, both having the same orientation (see also Sec. 2.3). The sequence of two signals is repeated a second time before the participants answer the questions. The order of the signal presentation is pseudo-randomized. Thus, blocks are only differentiated by the type of signal that is provided (vibratory or tap) and the order of the comparisons, pseudo-randomized differently for each block and each participant.
Experimental procedure
Ten people participated in the experiment. They were all between twenty-one and thirty years old; two were women, and one was left-handed. Stimulations were delivered on the dominant hand. Participants were naive about the hypotheses and process of the experiment. Participants carried out the experiment while wearing headphones playing white noise, so as to mask the sound coming from the motors. Indications about the total number of stimulations and questions were given before the experiment. During a trial, participants would receive, in a random order, (i) the reference stimulation (SOA = 110 ms), delivered with the stimulation mode of the block at hand, and (ii) one of the comparison signals having a different SOA, delivered with the same stimulation mode and same orientation. The identical sequence was played a second time to end a trial. After each trial, the participants answered two questions about what they perceived: (i) "Which one of the two motions was faster?" and (ii) "How certain are you of your answer?" The data collected from the participants were the index of the signal that seemed faster (1 or 2) and their confidence from 0 (no confidence at all) to 100 (total certainty). The mode of the starting block was counterbalanced between participants. At the end of the experiment, participants were also able to give open comments and feedback about their sensations and the experiment in general.
Results
Results are reported in Figs. 3 and 4. Fig. 3 shows the rate of correct responses (score), while Fig. 4 shows the reported confidence. These two parameters are the dependent variables of our statistical analysis. As independent variables, we report the results for five SOA levels and four experimental conditions: two directions of motion (proximal-to-distal or distal-to-proximal) and two types of stimulation (vibrations or taps).
Results showed a significant decrease in correct answers as the time delay between actuators, the SOA, increased and thus got closer to the reference SOA. This performance trend is observable in all conditions, both in tap and in vibration mode, as well as with proximal-to-distal and distal-to-proximal direction patterns. To confirm this visual observation, we performed a Friedman statistical test on the four experimental conditions. The test highlighted the effect of the changing value of the SOA on the participants' ability to discriminate the fastest stimulation they received for the conditions of proximal-to-distal taps, distal-to-proximal taps, proximal-to-distal vibrations and distal-to-proximal vibrations (p < 0.01). An identical Friedman test was performed on the effect the compared SOAs have on the confidence rates. A significant effect was also noted for the four experimental conditions (p < 0.01).
To interpret the effect of the direction (distal-to-proximal or proximal-to-distal) and the stimulation mode (tap or vibration), we performed matched-pairs Wilcoxon tests. The test showed no significant effect of the stimulation mode (p > 0.05), but it showed significant differences in the score between the proximal-to-distal and distal-to-proximal directions in tap stimulations (p < 0.01). However, a post-hoc test operated separately for each SOA did not show a significant difference for any of the comparisons. Finally, the matching between performance and confidence was tested with a Spearman correlation test (Fig. 5) and was found significant for all conditions, but with a rather low r coefficient of around 0.4.
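A sketch of this statistical pipeline using scipy follows; the data layout (participants × SOA levels) and the synthetic values are assumptions for illustration only.

```python
# Minimal sketch: Friedman, Wilcoxon, and Spearman tests as described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores: 10 participants x 5 SOA levels for one condition.
scores = rng.uniform(0.4, 1.0, size=(10, 5))

# Friedman: effect of SOA level on score (repeated measures, one condition).
chi2, p_friedman = stats.friedmanchisquare(*scores.T)

# Wilcoxon matched pairs: e.g. taps vs vibrations, per-participant means.
taps, vibs = scores.mean(axis=1), rng.uniform(0.4, 1.0, size=10)
w, p_wilcoxon = stats.wilcoxon(taps, vibs)

# Spearman: confidence vs score correlation.
confidence = rng.uniform(0, 100, size=10)
r, p_spearman = stats.spearmanr(taps, confidence)
```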
Discussion and Conclusions
This paper investigated the perception of the velocity of the apparent movement as well as the impact of two experimental conditions: the direction of the AHM and the stimulation mode. We investigated two stimulation modes, standard vibrations and taps to the palm of the hand. We also considered two directions of motion, from the fingertip to the palm and vice-versa. We studied the role of the delay between the activation of the actuators in the perception of velocity. As expected, the smaller the delay compared to the reference, the better the participants' speed discrimination. However, it was surprising to observe performance around 85% even for the easiest comparison stimuli, for which the SOA was divided by a factor of 2 compared to the reference. Another important objective was to determine the matching between participants' performance and their confidence. As expected, confidence and score correlated, but the r coefficient was surprisingly low, suggesting that participants struggled to assess their own performance. There was no significant influence of the mode of stimulation on the score or confidence, which indicates a similar perception of both modes. Overall, the task was very challenging for participants, and a few of them highlighted its difficulty in their free comments. Thus, AHM illusions with different speeds might not be intuitive enough for people to use during everyday navigation tasks. Nevertheless, the outcomes of the experiment were quite interesting in terms of haptic perception and confirm that human perception of tactile speed is inaccurate and prone to artefacts. The apparent haptic motion could still become a useful directional cue to integrate in navigation devices for impaired people, e.g., power wheelchairs, walkers, prewalkers [4], but modulating the speed might not be very informative. We wish to conduct further experiments in which we let participants set what they perceive to be the best parameters for the AHM, e.g., duration, delay, intensity.
Fig. 1. Experimental set-up. A) The signals are generated via a controller and then amplified before being played by the custom-built actuators. B) Three electromagnetic actuators are placed on a curved hand-rest. The colored dots show the contact points of the actuators on the hand.
Fig. 2. Signals sent to the three actuators in the two actuation modes. A) Vibratory mode, made of sinusoidal oscillations at 120 Hz within ramp envelopes. B) "Tap" mode, made of single ramp signals. In this figure, we used DoS = 220 ms and SOA = 110 ms.
Fig. 3. Score when comparing five different SOAs vs. the reference one of 110 ms, across the two directions of motion (proximal-to-distal or distal-to-proximal) and types of stimulation (vibrations or taps). The boxplots give the median and the 25th and 75th percentiles with extreme values.
Fig. 4. Reported confidence when comparing the SOAs, across the two directions of motion (proximal-to-distal or distal-to-proximal) and types of stimulation (vibrations or taps). The boxplots give the median and the 25th and 75th percentiles with extreme values.
Fig. 5. Spearman correlation tests with the corresponding p-values and statistical dependence factors "r". We tested the correlation between the confidence and the score for the different conditions of mode and direction.
Biomechanical analysis of rollator walking
Background The rollator is a very popular walking aid. However, knowledge about how a rollator affects the walking patterns is limited. Thus, the purpose of the study was to investigate the biomechanical effects of walking with and without a rollator on the walking pattern in healthy subjects. Methods The walking pattern during walking with and without rollator was analyzed using a three-dimensional inverse dynamics method. Sagittal joint dynamics and kinematics of the ankle, knee and hip were calculated. In addition, hip joint dynamics and kinematics in the frontal plane were calculated. Seven healthy women participated in the study. Results The hip was more flexed while the knee and ankle joints were less flexed/dorsiflexed during rollator walking. The ROM of the ankle and knee joints was reduced during rollator-walking. Rollator-walking caused a reduction in the knee extensor moment by 50% when compared to normal walking. The ankle plantarflexor and hip abductor moments were smaller when walking with a rollator. In contrast, the angular impulse of the hip extensors was significantly increased during rollator-walking. Conclusion Walking with a rollator unloaded the ankle and especially the knee extensors, increased the hip flexion and thus the contribution of hip extensors to produce movement. Thus, rollator walking did not result in an overall unloading of the muscles and joints of the lower extremities. However, the long-term effect of rollator walking is unknown and further investigation in this field is needed.
Background
The rollator is a popular assistive walking device in most European and especially the Nordic countries [1]. The exact number of rollator users is unknown, but about 6.4% of Danish 56-84 year-olds use a rollator, and in Sweden about 4% of the total population use a rollator [1]. The terms "wheeled walker", "rolling walker", "three-wheeled walker" and "four-wheeled walker" [2][3][4] are frequently used synonymously with rollator, which can be defined as a frame with three or four wheels; the rollator has handles with brakes, and in some cases it has a seat, a basket or a tray (Fig. 1) [1].
The main purpose of using a rollator is to improve the walking performance and minimize the risk of falling. Studies have shown that the walking performance in elderly subjects measured in terms of distance, cadence and velocity is improved when they walk with a rollator [2]. Furthermore, a recent study has shown that rollator users are generally satisfied with their rollator and consider it an important prerequisite for living a socially active and independent life [1].
However, knowledge about how the rollator affects the walking pattern is limited. To our knowledge, no studies have investigated the biomechanical differences between walking with and without a rollator, except for one study that observed a reduction in the vertical ground reaction force during rollator walking [5]. Such information may be clinically relevant in the decision-making process of whether a rollator would be beneficial to a subject or not, or whether the use of a rollator should be supplemented with e.g. balance and/or strength training. Studies of walking with canes or walking poles have shown that these walking aids reduce the load on the lower extremities [6][7][8]. Presumably, the rollator reduces the loads on the leg muscles and the joints to some extent as well. However, the specific changes in kinematic and kinetic walking pattern parameters when walking with a rollator have not yet been quantified.
It is unclear whether an unloading of certain muscle groups and joints during walking would impair the functional ability during other types of daily physical activities and movements like sit-to-stand, short walking distances, stair climbing, balance control during standing/squatting etc. One study concluded that the use of walking-aids combined with a high activity level may protect against falls in elderly subjects [9]. Thus, information about how muscle groups and joints in the lower extremities are affected by walking with a rollator may be used in the development of specific rehabilitation strategies in elderly and disabled rollator users.
Accordingly, the purpose of the present study was to investigate the biomechanical effects of walking with a rollator on the walking pattern of healthy subjects. The reason for studying a group of healthy subjects was that it was both unethical and difficult to ask actual rollator users to walk without their rollator.
Subjects
Seven healthy women (age: 34.7 (range: 25-57) years, height: 1.70 (range: 1.64-1.78) m, weight: 64.7 (range: 55-75) kg) participated in the study. None of the subjects had any history of injuries or musculo-skeletal dysfunctions in their lower extremities. All subjects gave their informed consent to participate in the experiments which were approved by the local ethics committee.
Gait analysis
The subjects were fitted with fifteen small reflecting spherical markers (12-mm diameter) according to the marker set-up described by Vaughan et al. [10]. The markers were placed on the head of the fifth metatarsal, the heel, the lateral malleolus, the tibial tubercle, the lateral femoral epicondyle, the greater trochanter, the anterior superior iliac spine and the sacrum. All subjects wore lightweight flexible shoes with a thin, flat sole. The subjects were asked to walk across two force platforms (AMTI, OR6-5-1) both with and without a rollator (Fig. 1, Dolmite Maxi 650, Dolomite AB, Anderstorp, Sweden) at a speed of 4.5 km/h. The rollator was adjusted to each subject in an upright standing position with the arms hanging down along the body so that the handles were level with the processus styloideus ulnae. The Dolmite Maxi 650 rollator model was used because it was wide enough to pass next to the force platforms without touching them. However, pilot studies showed that the wheels of the rollator sometimes hit the first platform anyway. To solve this problem, a metal rail was fixed to the ground along the first platform to ensure that the rollator wheels did not touch it.
Figure 1. This rollator, a Dolmite Maxi 650 (Dolomite AB, Anderstorp, Sweden), was used in the study. It resembles a typical rollator with four wheels, handles with brakes and a seat.
The subjects were allowed to practice walking both with and without the rollator to become familiar with the movements and the pre-determined walking speed. The speed was controlled by photocells, which made it possible to teach the subjects to approach 4.5 km/h. Five video cameras (Panasonic WV-GL350) operating at 50 Hz were used to record the movements. The video signals and the force plate signals were synchronized electronically with a custom-built device. The device put a visual marker on one video field from all cameras and at the same time triggered the analogue-to-digital converter which sampled the force plate signals at 1000 Hz. The subjects triggered the data sampling and synchronization when they passed the first photocell.
The video sequences were digitized and stored on a PC. Sixteen non-coplanar points on a standard calibration frame (Peak Performance 5) were digitized to calibrate each of the video sequences. The calibration frame was placed in the middle of the walkway and covered both force plates. Three-dimensional co-ordinates were then reconstructed by direct linear transformation using the Ariel Performance Analysis System (APAS).
Prior to the calculations, the position data were digitally low-pass filtered by a fourth order Butterworth filter with a cut-off frequency of 6 Hz, and the 1000 Hz force plate signals were downsampled to 50 Hz to fit the video signals.
Calculations
Internal flexor and extensor joint moments about the ankle, knee and hip were calculated using a three-dimensional inverse dynamics approach described by Vaughan et al. [10]. Furthermore, the internal adductor/abductor moment was calculated for the hip joint. The joint moments were expressed in an anatomically based reference system. The anatomical axes for the flexor and extensor moments of the ankle, knee and hip joint were the mediolateral axes of the segment reference frames of the shank, the thigh and the pelvis, respectively [10]. The anatomical axis for the hip adductor/abductor moment was the so-called floating axis, which was perpendicular to the mediolateral axis of the pelvis segment frame and the longitudinal axis of the thigh segment frame [10]. Ankle dorsiflexor, knee extensor, hip flexor and hip abductor joint moments were considered positive, while ankle plantarflexor, knee flexor, hip extensor and hip adductor joint moments were considered negative. The angular impulse (i.e. the area under the joint moment curve) quantifies the total contribution of a joint moment towards producing movement. It has been shown that in some cases the angular impulse values may be relevant to the evaluation of walking patterns of different groups [11]. Accordingly, the angular impulse (Nm·s) was calculated by integration of the area under the joint moment curves for the plantarflexors (i.e. the negative part of the ankle moment), the knee extensors (i.e. the first positive part of the knee moment), the hip extensors (i.e. the negative part of the sagittal hip moment curve), the hip flexors (i.e. the positive part of the sagittal hip moment curve) and the hip abductors (i.e. the positive part of the frontal hip moment curve).
The peak values as well as the angular impulses of the ankle, knee and hip moment were calculated and used as input parameters for the statistical analyses.
The angular position of the ankle, knee and hip joints was calculated to describe the movements in the sagittal plane. In addition, the angular position of the hip movement in the frontal plane was calculated. Zero degrees defined the anatomical position (foot at 90° to leg) and positive values reflected ankle dorsiflexion, knee hyperextension, hip flexion and hip abduction.
The average angle as well as the range of motion (ROM) [12], i.e. the difference between the maximum and minimum joint angles, were calculated for the stance phase and used as input parameters for the statistical analyses.
MATLAB was used for all calculations.
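As an illustration of the two quantities defined above, the sketch below computes the angular impulses of a moment curve's positive and negative parts and the stance-phase ROM of a joint angle. It is a minimal sketch in Python rather than the authors' MATLAB, and the sampled curves are hypothetical placeholders.

```python
# Minimal sketch: angular impulse (Nm*s) and ROM from stance-phase curves.
import numpy as np

def angular_impulses(moment, t):
    """moment: joint moment (Nm) sampled at times t (s) over the stance phase."""
    pos = np.trapz(np.clip(moment, 0, None), t)   # e.g. extensor part
    neg = np.trapz(np.clip(moment, None, 0), t)   # e.g. flexor part
    return pos, neg

def range_of_motion(angle):
    """angle: joint angle (degrees) over the stance phase."""
    return angle.max() - angle.min()

# Hypothetical stance phase of 0.6 s sampled at 500 points, as in the text.
t = np.linspace(0, 0.6, 500)
moment = np.sin(2 * np.pi * t / 0.6)              # placeholder moment curve
print(angular_impulses(moment, t), range_of_motion(np.degrees(moment)))
```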
Data reduction
Data obtained from the left leg during the stance phase were analyzed. Six gait cycles were normalized and averaged for each subject and situation (with a rollator (rollator-walking) and without (normal walking), respectively). Normalization was performed by interpolating data points to form 500 samples for each gait cycle. The joint moments were normalized to body mass. Ensemble averages were then calculated for rollator-walking (n = 7) and normal walking (n = 7) using the mean value for each individual subject.
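The time-normalization step just described can be implemented with simple linear interpolation; below is a minimal sketch under the stated 500-sample convention, with hypothetical variable names.

```python
# Minimal sketch: normalize gait cycles to 500 samples and ensemble-average.
import numpy as np

def normalize_cycle(signal, n_samples=500):
    """Resample one gait-cycle signal to a fixed number of samples."""
    x_old = np.linspace(0, 1, len(signal))
    x_new = np.linspace(0, 1, n_samples)
    return np.interp(x_new, x_old, signal)

def subject_mean(cycles, body_mass):
    """Average six cycles for one subject; moments normalized to body mass."""
    resampled = np.vstack([normalize_cycle(c) for c in cycles])
    return resampled.mean(axis=0) / body_mass

# The ensemble average across subjects is then the mean of the
# per-subject means (n = 7 in each walking situation).
```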
Statistics
A Student's t-test for paired data was used to identify statistically significant differences between rollator- and normal walking in selected kinematic and kinetic variables of the walking patterns. All results are presented as means (SD). The level of significance was set at 5%.
Results
The joint angular kinematics were significantly different between rollator- and normal walking (Fig. 2, Table 1).
During rollator-walking, the ankle and knee joints were less dorsiflexed/flexed and had a smaller ROM than during normal walking (Fig. 2, Table 1). In contrast, the hip joint was more flexed during rollator-walking than during normal walking (Fig. 2, Table 1). There was no difference in the hip ROM in the sagittal plane, while the hip ROM in the frontal plane was significantly smaller during rollator-walking than during normal walking (Table 1).
The joint moments were significantly different at each joint between the two situations (Fig. 3, Table 2). The peak plantarflexor moment and the plantarflexor angular impulse of the ankle joint were significantly smaller during rollator-walking than during normal walking (Fig. 3, Table 2). The knee joint moment was significantly reduced during rollator-walking (Fig. 3, Table 2). During rollator-walking, both the peak knee joint moments and the angular impulse of the knee extensors were reduced by approximately 50% when compared to normal walking (Fig. 3, Table 2). The angular impulse of the hip flexors was significantly smaller during rollator-walking (Fig. 3, Table 2). In contrast, the angular impulse of the hip extensors was significantly larger during rollator-walking than during normal walking (Fig. 3, Table 2). Thus, the shift from hip extensor dominance to flexor dominance in the stance phase occurred significantly later during rollator-walking (54.5 (9.5)% of the stance phase) than during normal walking (40.0 (7.7)% of the stance phase) (p < 0.001) (Fig. 3).
The peak hip abductor moment in the first half of the stance phase was significantly smaller during rollator-walking than during normal walking (Fig. 3, Table 2). Although the angular impulse of the hip abductors tended to be smaller during rollator-walking, no statistical significance was observed in this parameter between the two situations (Table 2).
Figure 2. Average joint angular curves (degrees) of the ankle, knee and hip in the sagittal plane and of the hip in the frontal plane. Dotted lines reflect walking with a rollator (n = 7) and solid lines reflect normal walking (n = 7). Positive values indicate hip abduction/extension, knee extension and ankle plantarflexion. 0% on the x-axis is heel strike and 100% is toe-off.
Discussion
The present study demonstrated significant differences between normal and rollator-walking patterns. The study included seven healthy subjects between the ages of 25 and 57 years who were able to walk with and without a rollator at identical walking speeds (4.5 km/h), which is important when comparing joint moment curves [13][14][15].
The main findings of the present study showed that walking with a rollator resulted in a remarkable reduction in the knee extensor moment and thus an unloading of the quadriceps muscle during the stance phase. There were also small but significant reductions of the ankle plantarflexor and hip abductor moments. In contrast, the angular impulse of the hip extensors and the duration of the hip extensor moment were increased during rollator-walking. The hip joint was generally more flexed throughout the whole stance phase during walking with the rollator, while the ankle and knee joint were less dorsiflexed/ flexed. In addition, the ankle and knee ROM in the sagittal plane along with the hip ROM in the frontal plane were decreased during rollator-walking.
These results confirm that although the weight of the trunk was supported by the rollator, this did not result in an overall reduction of the joint moments around all three joints in the lower extremities. The unloading of the ankle and knee joints during rollator-walking seemed to be partly compensated by an increase in the hip extensor moment, which probably was needed to push the rollator in a forward direction and keep up its horizontal velocity.
The increased hip flexion throughout the whole stance phase was due to the increased forward flexion of the trunk during rollator-walking. The increased hip flexion could possibly explain the increased hip extensor moment during rollator-walking. This concurs with other studies that have observed increased hip flexion along with an increase in the hip extensor moment during walking [12].
The sagittal ankle, knee and frontal hip joint ROM's were reduced and the knee and ankle joints were less flexed during rollator-walking. During normal walking the time period between heel strike and peak knee flexion in the first half of the stance phase reflects the weight acceptance and energy absorption controlled by the knee extensors [16]. During rollator-walking the demand for knee extensor energy absorption is reduced because part of the body weight is supported by the rollator which possibly may explain the reduced knee moment and knee flexion observed in the present study. The reduced knee flexion during rollator-walking could possibly explain the reduced dorsiflexion of the ankle joint observed in this situation.
The rollator is a common and popular walking aid among elderly and disabled subjects [1,2,17]. Rollator users are typically older than the subjects that participated in the present study, or disabled, and they would probably not be able to walk safely at the same walking speed without their rollator. Therefore, the observed changes in the walking pattern during walking with and without a rollator may not necessarily apply to elderly and/or disabled rollator users. However, it may be very difficult, if not impossible, to investigate the differences between walking with and without a rollator in actual rollator users, as they are unlikely to be able to walk without any walking aid. Thus, in the present study a biomechanical method was established to investigate the differences between walking with and without a rollator, and the results may be used as a model for general changes in the joint moment pattern and the kinematics during rollator-walking in healthy subjects.
Figure 3. Average joint moment curves (Nm/kg·100) of the ankle, knee and hip in the sagittal plane and of the hip in the frontal plane. Dotted lines reflect walking with a rollator (n = 7) and solid lines reflect normal walking (n = 7). Positive values indicate hip abductor/flexor dominance, knee extensor and ankle dorsiflexor dominance. 0% on the x-axis is heel strike and 100% is toe-off. Asterisks indicate statistically significant differences between peak values of the joint moments during rollator- and normal walking.
The rollator is definitely a very effective walking aid that supports the body, improves the walking performance in terms of distance, cadence and velocity [2] and in many cases serves as a prerequisite for living a normal life [1]. From a clinical viewpoint, there is no doubt that if the alternative is complete immobilization of a person, the rollator is a sensible solution that ensures at least a minimum of physical activity, which is ultimately beneficial for the cardiovascular [18,19] and musculo-skeletal systems [20,21]. However, the rollator may also be used as part of a rehabilitation program in order to help a person learn to walk without a walking aid. In such situations, it may be important to be aware of the results, which revealed that rollator-walking led to a remarkable reduction of the knee extensor moment and thus an unloading of the quadriceps muscle, a very important muscle in movements like sit-to-stand, postural control and stair climbing in healthy subjects [21,22]. The hip abductor moment, which plays a significant role in balancing the trunk during walking [23], was also reduced during rollator-walking. It is unclear whether this unloading has negative consequences for balance control and functionality in other types of movement and daily activities. One study concluded that the use of walking aids combined with a high activity level may protect against falls in elderly subjects [9]. Another study concluded that functional ability was not negatively influenced in long-term rollator users [24].
Conclusion
The rollator-walking pattern in healthy subjects was characterized by increased hip flexion and decreased ankle dorsiflexion and knee flexion; it significantly reduced the ankle and especially the knee joint moments, while the contribution from the hip extensors to producing movement was increased. However, the functional consequences of these changes and the long-term effects of rollator-walking are unclear, and further investigation in this field is needed.
Marek’s Disease Virus Infection Induced Mitochondria Changes in Chickens
Mitochondria are crucial cellular organelles in eukaryotes and participate in many cell processes, including immune response, growth and development, and tumorigenesis. Marek's disease (MD), caused by the avian alpha-herpesvirus Marek's disease virus (MDV), is characterized by lymphomas and immunosuppression. In this research, we hypothesized that mitochondria may play roles in the response to MDV infection. To test this, mitochondrial DNA (mtDNA) abundance and gene expression in immune organs were examined in two well-defined and highly inbred lines of chickens, the MD-susceptible line 72 and the MD-resistant line 63. We found that mitochondrial DNA content decreased significantly at the transformation phase in the spleen of MD-susceptible line 72 birds, in contrast to the MD-resistant line 63. The mtDNA genes and the nuclear genes relevant to mtDNA maintenance and transcription, however, were significantly up-regulated. Interestingly, we found that POLG2 might play a role in the imbalance of mtDNA copy number and the altered gene expression. MDV infection induced an imbalance of mitochondrial content and gene expression, demonstrating the indispensability of mitochondria in virus-induced cell transformation and subsequent lymphoma formation, such as MD development in chicken. This is the first report on the relationship between virus infection and mitochondria in chicken, which provides important insights into the understanding of pathogenesis and tumorigenesis due to viral infection.
Introduction
Mitochondria, the well-known cytoplasmic organelles for energy production in eukaryotic cells, play important roles in many cell processes, such as small-molecule metabolism [1], ion homeostasis [2], immune response [3,4], cell proliferation, and apoptosis [5,6]. The mitochondrion is special in that it contains its own genome (mtDNA), which is a circular molecule and encodes a total of 13 proteins that are all core components of oxidative phosphorylation. There may be hundreds of mitochondria in one cell, and one mitochondrion may have multiple copies of mtDNA. It is speculated that the copy number of mtDNA plays a part in mitochondrial biogenesis and regulates mitochondrial functions. Diploid cells may contain a range of 1–10,000 mtDNA molecules depending on cell type, and this number can change over time; cells with greater energy needs usually have more mitochondria or mtDNA than cells with lesser needs [7,8]. The change in mtDNA contents is reported to be a useful clinical biomarker for disease diagnosis [9,10].
Mitochondria are essential in immune response because they are not only involved in immune and inflammation pathways but also regulate the activation, proliferation, and function of leukocytes, including macrophages and B and T cells [11][12][13]. Owing to these multifunctional characteristics, mitochondria usually serve as targets of pathogens, including viruses [14]. Viral infection can either activate or inhibit mitochondrial functions, alter mitochondrial contents, and influence gene expression [15]. In turn, mtDNA contents often correlate negatively with immune pathways [16] and subsequently influence virus infection and proliferation. Moreover, depletion of and damage to mtDNA can lead to inflammation and apoptosis, and ultimately trigger oncogenesis in the host.
Marek's disease (MD) is a highly infectious oncogenic disease of chicken, caused by Marek's disease virus (MDV), an alpha-herpesvirus. MDV is a DNA herpesvirus that integrates into the host genome upon infection [17][18][19], and MD is characterized by T cell transformation and fatal lymphomas in visceral organs [20,21]. It is now known that the genetic and epigenetic background of the host has significant effects on MD incidence [22][23][24]. Two highly inbred lines of chickens have been developed over more than half a century at the Avian Disease and Oncology Laboratory (ADOL) [25]. One of the inbred lines, line 63, was selected for resistance to tumors, while the other, line 72, was selected for susceptibility, providing valuable and unique models for immunity and tumorigenesis research. Recently, a long intergenic non-coding RNA, GALMD3, was identified as being highly expressed post MDV-infection, which might cause mitochondrial dysfunction and lead to MD in chickens [26]. Although a great amount of effort has been made to decipher the virus infection and the host response, the link between MDV infection and mitochondrial dynamics remains unclear.
To fill the gap concerning the function and regulation of mitochondria in MD, this study was designed to examine mitochondrial DNA copy number variation and changes in mitochondrial as well as mitochondria-related nuclear gene expression in three immune organs (bursa of Fabricius, thymus, and spleen). To our knowledge, this is the first study aiming to explore the role of mitochondria in viral infection in chickens.
Mitochondrial DNA Copy Number Variation
The relative content of mtDNA was determined using qPCR analysis of three mitochondrial genes, ND2, ND3, and COX1, which encode the NADH dehydrogenase subunits 2 and 3 and the cytochrome C oxidase subunit 1, respectively. The nuclear gene β-actin was used as a control. The standard curves showed that the three mitochondrial genes and the β-actin control had similar amplification efficiencies, with ND2 and β-actin having values of 0.90 and 0.95, respectively (Figure S1). Comparison analyses of mtDNA were performed separately for the three lymphoid organs. The copies of mtDNA per cell predicted with the three mitochondrial genes against the control β-actin showed a relative order of ND3 > COX1 > ND2, though the mtDNA variations generated from the three mitochondrial genes exhibited similar trends (see Figure 1 and Figure S2).
The mtDNA copy numbers based on the ND2 gene over three time-points in the three lymphoid organs are shown in Figure 1. In bursa, the copy numbers of mtDNA remained relatively constant over time in all groups and no difference was observed between the two lines (p > 0.05). Likewise, no statistically significant changes in mtDNA contents were detected in thymus (p > 0.05). Nevertheless, a significant difference was observed between the two MDV-infected groups at 21 dpi (p ≤ 0.01) in spleen, due to a continuous decrease of mtDNA contents in the susceptible birds and an increasing recovery in the resistant birds. These findings implied that 21 dpi was a very important stage for the mitochondria changes after MDV infection. Hence, transcriptome sequencing at this time-point was carried out to further explore the underlying mechanisms.
Figure 1. MtDNA abundance in bursa of Fabricius, thymus, and spleen. MtDNA copies per cell were generated with ND2 and β-actin qPCR data. Five birds were used in each group. The symbols * and ** indicate statistical significance at p ≤ 0.05 and p ≤ 0.01 levels, respectively, between lines or treatment groups.
The Expressions of Mitochondrial DNA-coding Genes
To ascertain whether mitochondrial gene expression was also altered in MD, the 13 mtDNA protein-coding genes were examined using RNA sequencing data (Figure 2). Two of the 13 genes, ATP6 and ATP8, which encode subunits of Complex V, showed the higher expression levels, while the genes ND1, ND2, ND3, ND4, ND4L, ND5, and ND6, which encode the NADH dehydrogenase (Complex I) subunits, showed the lower expressions. We also found that gene expression was obviously lower in spleen than in bursa and thymus. Additionally, the expression levels of the mtDNA genes were noticeably higher in spleen of the line 72 MDV-challenged birds than in the other three groups.
Specific mitochondrial genes with differential expression levels in different comparisons are given in Table 1. After MDV infection, seven and ten out of the 13 mitochondrial genes were up-regulated in spleen of the resistant and susceptible lines, respectively. In line 63, the expressions of ND1, ND2, ND4, ND5, COX1, COX2, and CYTB were significantly up-regulated, with fold changes all between 0 and 1. Besides these, ND3, ND6 and ATP6 were also expressed significantly higher after MDV infection in the line 72 birds than in the controls. All of the ten up-regulated genes in line 72 after infection showed large fold changes. Moreover, the expression levels of eight genes in line 72, namely ND1, ND2, ND3, ND4, ND5, CYTB, COX2, and ATP6, were higher than those in line 63 (fold changes > 1). However, no mitochondrial genes changed in bursa of either line after MDV infection in contrast to the non-infected controls. Uniquely, ND6 was the only gene down-regulated by MDV challenge in thymus of the line 72 birds.
Differentially Expressed MitoProteome Nuclear Genes
By comparison with the known human and mouse MitoProteome genes (the mitochondrial protein-encoding genes), 873 nuclear genes were identified in the chicken genome. These were further investigated based on the RNA sequencing data and the differentially expressed gene (DEG) analysis results obtained between lines and treatments.
Mitochondria-Related Nuclear DEGs Induced by MDV
The numbers of differentially expressed nucleus-encoded MitoProteome genes across the three lymphoid organ tissues between the two lines post MDV challenge are shown in Figure 3. The contrast between MDV-infection and non-infection indicated that MDV challenge induced significantly more DEGs in line 72 than in line 63, and in the line 72 birds most of the DEGs were up-regulated, especially in spleen (Figure 3A). Compared to non-infected control birds, only 27 genes changed in spleen of the line 63 birds, while 219 genes were differentially expressed in spleen of the line 72 birds, which constituted a quarter (219/873) of all the studied chicken MitoProteome nuclear genes. Among the 219 DEGs observed in spleen of line 72, 181 genes were up-regulated and 38 were down-regulated. In contrast, the number of genes in thymus that changed following infection was very small. Only three DEGs were identified in thymus of the line 63 birds, with the gene AMN being up-regulated and TDRKH and SUOX being down-regulated. Meanwhile, four genes (STOM, OSBPL1A, ACOT9 and C15orf48) were up-regulated and only one gene, TDRKH, was down-regulated in thymus of the line 72 birds (see Table S1). Interestingly, the TDRKH gene was down-regulated by a fold change lower than -2 in both lines. Similarly, the numbers of significantly changed genes in bursa of lines 63 and 72 also differed distinctly, with the DEG numbers being 15 and 106, respectively.
Since there were only a small number of DEGs identified in thymus, DEG comparison was only performed between the bursa and the spleen (Figure 3B). There were five DEGs in common between the two lines, while 10 and 101 DEGs were exclusive to line 63 and line 72 in bursa, respectively. The five common DEGs were TYSND1, ABCB8, MRPL17, MSRB3, and PDK4. The first three were up-regulated and the last two were down-regulated in line 63, while all were up-regulated in line 72 (Table S1). However, there were 11 DEGs in common between lines 63 and 72, with 16 and 211 DEGs being exclusively identified in line 63 and line 72, respectively, in spleen tissues. The 11 common nuclear DEGs were ABAT, COX11, MSRB3, FKBP10, ME3, CYP11A1, SLC25A30, BCL2, ACSS3, AIFM2, and PMP22. Additionally, we noticed that there were 3 and 24 genes in common between bursa and spleen tissues in the control and infection comparison subsets (L63 vs. L72), respectively. Of note, MSRB3 was the only gene dysregulated in both bursa and spleen upon MDV challenge in both lines, with opposite directions in the two tissues: in bursa the expression of MSRB3 was up-regulated, while in spleen it was down-regulated.
Mitochondria-Related Nuclear DEGs between Two Chicken Lines
The contrasts between the two chicken lines (Figure 4A) showed a total of 221 DEGs in spleen of the line 72 challenged birds compared to the line 63 ones; 188 of those DEGs were up-regulated. In contrast, most DEGs (24 out of 33) were expressed at lower levels in spleen of the line 72 control birds than of the line 63 birds. In spleen, there were eight DEGs in common between the line comparisons (L63 vs. L72) of the MDV challenge and control groups (Figure 4B). These genes were CMPK2, HK2, C15orf48, HEBP1, VDAC1, STOML1, ACSS3, and GLDC, among which the CMPK2 gene in line 72 was over 2-fold up-regulated.
No DEG was identified in thymus between the lines of the control groups. Four DEGs, STOM, STOML1, ACOT9, and C15orf48, were identified between the lines in the MDV-challenged groups of thymus tissues, all of which were up-regulated in the line 72 birds in contrast to line 63. Additionally, we compared DEGs between the bursa and spleen tissues and noticed that 24 DEGs were in common between the two tissues in the infected groups, while only three were in the control groups.
In bursa tissues, 56 and 57 DEGs were identified between the two lines in the MDV challenge groups and the control groups, respectively. Relatively more DEGs were up-regulated post MDV challenge in line 72 birds in contrast to the line 63 birds. Again, more DEGs from the comparison of control groups were expressed at lower levels in line 72 than in line 63. Five genes, SCCPDH, LAP3, CHCHD10, STOML1, and TDRKH, were in common between the MDV-challenged and control groups of bursa tissues.
Canonical Pathways Prediction
To better understand the biological functions of these differentially expressed mitochondria-relevant nuclear genes, DEGs from the bursa and spleen within the four comparisons were submitted to Ingenuity Pathway Analysis (IPA). Because the DEG numbers in thymus were very small, IPA analysis was not performed for the thymus. IPA predicts the significance of each pathway with a p-value and a z-score, which reflect, respectively, the percentage of genes in the database that are in the pathway and whether the pathway is activated or inhibited.
In total, IPA showed that DEGs in bursa and spleen were significantly enriched in 76 and 101 pathways, respectively (see Tables S2 and S3, p ≤ 0.05). After infection, DEGs in bursa of line 63 were enriched in four significant pathways (sirtuin signaling pathway, induction of apoptosis by HIV1, glycine degradation, and creatine-phosphate biosynthesis), in contrast to 51 enriched pathways in line 72. DEGs from spleen of lines 63 and 72 in response to MDV infection were significantly enriched in 20 and 66 pathways, respectively. When comparing the top pathways in bursa and spleen, we found a considerable number of pathways associated with mitochondrial function and metabolism, such as the sirtuin signaling pathway, mitochondrial dysfunction, oxidative phosphorylation (OXPHOS), and folate polyglutamylation (Figure 5A,B).
In bursa, the OXPHOS pathway was significantly inhibited in line 72 control birds compared to line 63 counterparts, while the sirtuin signaling pathway was activated (Figure 5C). Conversely, those two pathways showed opposite regulation directions in spleen, with the OXPHOS pathway being significantly activated and the sirtuin signaling pathway being inhibited in infected birds, especially in line 72. Furthermore, the genes involved in the two oppositely regulated pathways were checked. Four genes (NDUFA2, NDUFB2, NDUFS5, and NDUFV1), down-regulated in line 72 normal birds relative to line 63, were shared between the two pathways in bursa. These four genes encode subunits of mitochondrial complex I (NADH dehydrogenase), which is the first enzyme complex in the respiratory chain. Meanwhile, for the spleen, 11 genes were shared between the two pathways, including eight from complex I (NDUFA8, NDUFB1, NDUFB3, NDUFB4, NDUFB5, NDUFB8, NDUFB9, and NDUFS6), one from complex II (SDHA), and two from complex V (ATP5B and ATP5C1).
Very interestingly, we noticed that many genes encoding the mitochondrial oxidative phosphorylation complexes were up-regulated in spleen of line 72 with MDV infection. When compared to line 63, 28 genes were more highly expressed in line 72, including ten from complex I (NDUFA8, NDUFAB1, NDUFB1, NDUFB3, NDUFB4, NDUFB5, NDUFB8, NDUFB9, NDUFS6, and NDUFV3), two from complex II (SDHA and SDHA2), and five from complex III (UQCC1, UQCRB, …).
In addition, several other pathways associated with energy metabolism, such as the TCA cycle, gluconeogenesis, and fatty acid β-oxidation, were also activated in the spleen of the susceptible birds, indicating high-energy demands in this organ.
Meanwhile, two pathways related to apoptosis (induction of apoptosis by HIV1 and apoptosis signaling) were also activated in spleen of line 72 after infection with MDV, suggesting a non-negligible role of apoptosis in MDV-induced tumorigenesis.
Nuclear Genes Involved in mtDNA Replication, Transcription and Maintenance
Subsequently, some important nuclear genes, which play essential roles in mtDNA replication, transcription and viability maintenance, were further investigated. No relevant genes were found to be differentially expressed in thymus at 21 dpi, while many were found to be dysregulated in the other two organs (Table 2). In bursa, SLC25A4, also known as ANT1, was the only significantly up-regulated gene in both lines upon MDV infection, and the expression of this gene in line 72 was lower than in line 63, whether challenged or not. In spleen of the MD-susceptible and -resistant lines, infection had contrasting effects on the expression of genes related to mtDNA replication and transcription. Upon MDV infection, only POLG2 was up-regulated in line 63, while five genes (TWNK, SSBP1, DNA2, MGME1, and SLC25A4) showed up-regulation in line 72. Moreover, the expression of eight important genes (TWNK, SSBP1, DNA2, MGME1, SLC25A4, TFAM, MTERF2, and SUPV3L1) was significantly higher in line 72 infected chickens compared to the line 63 counterparts, indicating that MDV infection can significantly up-regulate genes that are closely related to mtDNA replication, transcription, and maintenance. Notably, the POLG2 expression level was lower in the spleen of line 72 infected birds compared to line 63, because this gene increased in line 63 while remaining unchanged in line 72.
MtDNA Content and Gene Expression
To our knowledge, this is one of few studies to explore the relationship between mtDNA and avian herpesvirus infection, especially covering the three phases of MDV infection. At 10 dpi, a slight decreasing tendency was observed in thymus with the infection in both lines, indicating that the latency period in thymus deserves further study. It has been reported that herpes simplex virus type 1 (HSV-1) and HSV-2 in human trigger mtDNA damage or loss, followed by mitochondrial dysfunction and depletion of the mRNA encoded by the mitochondrial genome [15,27]. Similarly, in our study the mtDNA contents decreased significantly in spleen of the MDV-infected line 72 birds. Interestingly, a significantly elevated mitochondrial gene transcriptional activity was observed. However, mtDNA content alone cannot be used as a surrogate for respiratory activity in abnormal situations, for example, tumors [28]. At 21 dpi, about 40% of cells in spleen were MDV-integrated, whereas only 3.7% of cells in thymus were MDV-integrated, and bursa had an intermediate number of integrated cells [18]. Importantly, the integration of the virus genome into the host genome is a key feature of tumor cell populations [17]. Many studies have implicated that oncoviruses, viruses that transform cells into tumors, can modulate mitochondrial functions and bioenergetics by altering mitochondrial pathways, for example, by reprogramming energy metabolism [29]. Hence, we consider that further studies need to explore the regulatory mechanisms by which transformed cells work together with mitochondria, which may manipulate cell signaling and energy metabolism of the host to fulfill the high-energy demand in the virus proliferation phase.
Mitochondria-Related Nuclear Genes and Pathway Analysis
Mitochondrial biogenesis involves a great number of proteins. Besides the 13 proteins encoded by its own genome, another 1000–1500 mitochondrial proteins are encoded by the nuclear genome and imported into mitochondria from the cytoplasm [30]. When comparing the expression of those mitochondria-related nuclear genes in the three immune organs, we found that the thymus had the smallest transcriptional response while the spleen possessed the maximum number of differentially changed genes, which is consistent with the results from others [31]. As expected, many genes and pathways were altered in spleen of the MD-susceptible birds compared to MD-resistant ones. First, oxidative phosphorylation (OXPHOS), one of the most important functions of mitochondria, was significantly activated in spleen of line 72 infected birds, in which 28 genes of the mitochondrial oxidative phosphorylation complexes were up-regulated. Besides, several other pathways associated with energy metabolism, including gluconeogenesis and fatty acid β-oxidation, were also significantly activated in the spleen in line 72. Cancer cells use glucose and glutamine to promote cell growth and proliferation, a process known as metabolic reprogramming [32]. In this process, OXPHOS is essential not only for fulfilling the increased demands for energy to support the high rate of proliferation but also for the biosynthesis of macromolecules that are critical for enhanced tumor growth [33][34][35]. Coincidentally, the cholesterol biosynthesis pathway, often elevated in proliferating normal tissues and tumors [36], was also activated in line 72. Taken together, we speculate that the transformed lymphocytes in spleen of the MD-susceptible chickens rewired the metabolic processes in mitochondria to fulfill their high energy demands.
Another important pathway in MDV infection is the sirtuin signaling pathway, which is well known for its roles in metabolism, aging, and cancer [37][38][39]. Sirtuins are nicotinamide adenine dinucleotide (NAD+)-dependent deacetylases and can deacetylate metabolic proteins, such as tricarboxylic acid (TCA) cycle enzymes, fatty acid oxidation enzymes, and subunits of OXPHOS complexes, in response to metabolic stress [40]. Mammals have seven sirtuins, three of which, SIRT3, SIRT4, and SIRT5, are located in mitochondria [41]. The genes SIRT3, SIRT4, and SIRT5 were detected to be dysregulated in bursa and spleen of chickens, illustrating their importance in basic mitochondrial biology upon MDV infection. Additionally, apoptosis signaling is also an important part of the response in spleen upon MDV infection, in which mitochondria play a pivotal role as well. A series of genes responsible for programmed cell death or tumorigenesis were found to be dysregulated, for example, MSRB3, PNPT1, and AIFM2. It is considered that down-regulation of MSRB3 could increase the levels of cellular reactive oxygen species (ROS) and activate the intrinsic mitochondrial pathway through increasing the Bax to Bcl-2 ratio and cytochrome c release, finally inducing cell apoptosis [5]. Interestingly, the MSRB3 gene showed opposite regulation patterns in bursa and spleen, being down-regulated in spleen and up-regulated in bursa. Coincidentally, MDV infection in line 72 increased the expression of BAK1 (pro-apoptotic) and decreased the expression of BCL2 (anti-apoptotic) in spleen, while the CYCS expression was up-regulated, indicating an active apoptosis process in this organ. Another gene, PNPT1, recently documented to be released from mitochondria coordinately with CYCS and to possess a new pro-apoptotic role, was similarly increased in line 72 spleen samples after infection. Additionally, the AIFM2 gene, a gene with pro-apoptotic function that is often down-regulated in various cancers [42], was indeed significantly decreased in spleen of the line 72 infected birds. Moreover, two mPTP genes, VDAC1 and VDAC2, which were reported to be activated by linc-GALMD3, an up-regulated long intergenic non-coding RNA in MDV infection leading to apoptosis and cell death [26], were also found significantly up-regulated in spleen samples of the line 72 infected birds. It has been reported that several viruses can induce apoptosis of lymphoid cells through multiple pathways. MDV replication in the infected B and T cells may induce apoptosis of various cells, including virus-infected or transformed cells, resulting in a depletion of lymphocytes and transient immunosuppression in the host [43].
The Mitochondrial DNA Replication and Transcription upon MDV Infection
Although the mitochondria have their own genome, the replication and transcription of mtDNA are completely controlled by the nuclear genome. It is estimated that 250–300 nuclear proteins are dedicated to the replication, transcription, maintenance, and copy number control of this multicopy genome [44]. Many of the well-characterized genes, e.g., TWNK, SSBP1, DNA2, MGME1, and SLC25A4, were detected to be significantly up-regulated in spleen of the line 72 infected birds. TWNK is a mitochondrial 5′–3′ helicase, which binds to and unwinds double-stranded DNA and is necessary for replication of mtDNA [45,46]. SSBP1 is the mitochondrial single-stranded binding protein, whose function is to restrict initiation of light-strand mtDNA synthesis to the specific origin of light-strand DNA synthesis [47]; SSBP1 also interacts with TWNK and polymerase gamma (Polγ), the only DNA polymerase in mitochondria, to ensure their functions [48]. DNA2, MGME1, and SLC25A4 are three genes essential for mitochondrial genome processing, maintenance, and stability, and mutations in these genes are often responsible for the loss of mitochondrial copy numbers [49][50][51][52]. It has also been implicated that high expression of DNA2 may promote cancer cell proliferation [53]. Besides, three other important genes, MTERF2, SUPV3L1, and TFAM, were also highly expressed in the line 72 infected birds. The MTERF2 gene belongs to the mitochondrial transcription termination factor (MTERF) family, which has been reported to be linked with the regulation of mtDNA replication and transcription [54,55]. In human, MTERF2 is highly expressed in tissues that are strongly dependent on mitochondrial energy production and may regulate oxidative phosphorylation by modulating mitochondrial DNA transcription [56,57]. The MTERF family in mammals has four members, named MTERF1 to MTERF4, of which MTERF1 has been explored most widely. MTERF1 is considered a "contrahelicase" in mtDNA replication and may prevent collisions between mtDNA replication and transcription [58]. Thereby, it is possible that MTERF2 has a similar role, and further work is warranted to explore MTERF2 functions in chicken. Additionally, the mitochondrial helicase SUV3 (encoded by SUPV3L1) is predominantly required for the processing of mitochondrial polycistronic transcripts [59]. It is known that SUV3 can interact with polynucleotide phosphorylase (PNPase), encoded by PNPT1, to form the SUV3·PNPase complex and modulate mt-mRNA poly(A) tail lengths in response to changes in the energetic states of mitochondria [60], suggesting its crucial role in the control of the amount and translation of each mitochondrial mRNA. The protein encoded by the TFAM gene is one of the essential components for mitochondrial DNA transcription, replication, organization, and maintenance [61,62], which can bind, unwind and bend DNA to initiate mitochondrial transcription. T cells with TFAM depleted proliferated less than wild-type T cells [63]. In spleen of the line 72 infected birds, the TFAM gene was significantly up-regulated in contrast to the counterparts of line 63. Meanwhile, it has been shown that TFAM expression is regulated by PPARGC1A. Interestingly, the expression of PPARGC1A was also up-regulated in spleen of the line 72 infected birds. Combined with the higher expression of mitochondrial genes, it is conceivable that the proliferation of T cells was activated at 21 dpi in spleen of the line 72 birds infected with MDV.
Of note, the POLG2 gene was significantly up-regulated only in spleen of the MD-resistant line 63 infected birds. POLG2 encodes the accessory subunit of DNA polymerase gamma (Polγ), which is the only replicative DNA polymerase in human mitochondria and is crucial for the replication and repair of mtDNA [64,65]. Polγ has two subunits: a catalytic subunit and an accessory subunit, encoded by POLG and POLG2, respectively. POLG2 enhances interactions with the DNA template and increases both the catalytic activity and the processivity of POLG, suggesting it is the major regulator of polymerase activity. In Drosophila melanogaster, over-expression of POLG2, rather than POLG, can definitely increase the amount of mtDNA within individual cells [66]. In human, mutations in POLG2 have a dominant negative effect and lead to multiple mtDNA deletions [67]. In neuronally-differentiated (ND)-PC12 cells quiescently infected with herpes simplex virus type 1 (HSV-1), POLG2 was also noted to be down-regulated [68]. Accordingly, we speculate that the up-regulation of POLG2 played a key role in mtDNA maintenance in line 63.
Ethics Statement
The study protocols for animal experiments were in strict accordance with the Animal Care and Use Committee (ACUC) Guidelines approved by USDA, ADOL (April 2005, Project Number 6040-31320-009-00-D) and the Guide for the Care and Use of Laboratory Animals by Institute for Laboratory Animal Research (Eighth Edition, 2011).
Chickens, Treatment, and Samples
Chicks were obtained from the specific-pathogen-free (SPF) parent flocks of lines 63 and 72, and were housed in a BSL-2 facility on the farm of the Avian Disease and Oncology Laboratory (ADOL, USDA, East Lansing, Michigan). On the fifth day after hatching, young birds were randomly selected and divided into challenge and control groups in each line. The birds of the challenge groups for both lines were each given a dosage of 500 plaque-forming units (PFU) of 648A passage 40 MDV intra-abdominally. Chicks of different treatment groups were housed separately in negatively pressured isolators under uniform conditions. Bursa of Fabricius (referred to as bursa in this paper), thymus, and spleen samples were collected at 5, 10, and 21 days post-infection (dpi) from 5 birds per line per group at each of the time-points, placed individually in RNAlater (Qiagen, Valencia, CA, USA) immediately and stored at −80 °C until further analysis. Firstly, mtDNA copy number was detected in all 60 samples (3 tissues × 5 individuals × 4 groups). Then, according to the mtDNA variation results, tissues at 21 dpi were selected for the gene expression experiment, and two individuals were randomly selected from each group.
Quantification of mtDNA Copy Number
Genomic DNA was isolated from all of the sampled tissues using the Wizard Genomic DNA Purification Kit (Promega, Madison, WI, USA). The DNA concentration was measured using a Synergy HTX Multi-Reader (BioTek, Winooski, VT, USA) and adjusted to 50 ng/µL. The relative amounts of mtDNA were determined by qPCR. The β-actin gene was used as the reference nuclear gene with the primers β-actin_F GAGAAATTGTGCGTGACATCA and β-actin_R CCTGAACCTCTCATTGCCA. Three mitochondrial genes, ND2, ND3, and COX1, were selected in this study, as described by Reverter et al. [7]. The PCR amplicons were generated on a C1000 Touch™ thermal cycler (BioRad, Hercules, CA, USA) in a 10 µL reaction, which contained 5 µL of 2× SYBR Green PCR mix (BioRad), 3 µL of ddH2O, 1 µL of primers (10 pmol/µL per primer) and 1 µL of DNA template (50 ng/µL). The reactions for each sample were carried out in triplicate along with a negative control (without template). To construct the standard curves, a pooled template was prepared with equal amounts of DNA from each of the 30 individual samples (10 for each tissue), and a 1:10 dilution series ranging from 1 mg to 0.001 ng was made and used.
Relative mtDNA copy numbers were calculated following the equation [7]: mtDNA copy number = 2^(1 + (Ct_nuclear gene − Ct_mt gene)), where Ct represents the average cycle threshold. The mtDNA copy number data from the three tissues were analyzed separately. The PROC GLM procedure in SAS 9.4 was used to carry out the analysis.
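To make the calculation concrete, the following short Python sketch (an illustration, not code from the study; the sample Ct values are invented) computes the relative mtDNA copy number per cell from the mean cycle thresholds of a mitochondrial gene and the nuclear reference gene:

```python
def mtdna_copy_number(ct_nuclear: float, ct_mito: float) -> float:
    """Relative mtDNA copies per cell: 2^(1 + (Ct_nuclear - Ct_mito)).

    The 2^1 factor accounts for the two nuclear gene copies in a diploid
    cell; the 2^(delta Ct) term converts the qPCR cycle difference into a
    fold ratio, assuming near-100% amplification efficiency for both
    amplicons (as supported by the standard curves above).
    """
    return 2 ** (1 + (ct_nuclear - ct_mito))

# Hypothetical triplicate Ct means for one spleen sample
ct_beta_actin = 24.8   # nuclear reference gene
ct_nd2 = 17.1          # mitochondrial gene ND2

print(mtdna_copy_number(ct_beta_actin, ct_nd2))  # ~416 copies per cell
```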
RNA Sequencing
Total RNA samples extracted from all three tissues at 21 dpi from two randomly selected individuals in each group were used for deep sequencing. The RNA extraction, cDNA synthesis, library preparation, transcriptome sequencing, and qPCR validation were carried out following the reported protocols [69]. Raw RNA-seq data were first treated with Trimmomatic for quality control and mapped onto the chicken reference genome (Gallus gallus Galgal 5.0) using HISAT2. Differential expression analyses were performed with the Cuffdiff tools for comparisons between the treatment groups within each line, and also between the lines within each treatment group. Fragments per kilobase of transcript per million mapped fragments (FPKM) was used as the relative gene expression level. Genes showing a p-value ≤ 0.05 and a false discovery rate (FDR) ≤ 0.1 were considered differentially expressed genes (DEGs).
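As an illustration of the final filtering step, the sketch below (hypothetical; the file name and column labels follow Cuffdiff's typical gene_exp.diff output layout rather than anything specified in the paper) selects DEGs by the stated thresholds:

```python
import csv

def select_degs(diff_file: str, p_max: float = 0.05, fdr_max: float = 0.1):
    """Return genes passing the DEG thresholds used in the study:
    p-value <= 0.05 and FDR (Cuffdiff's q_value column) <= 0.1."""
    degs = []
    with open(diff_file, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            if row["status"] != "OK":
                continue  # skip tests Cuffdiff could not perform
            p = float(row["p_value"])
            q = float(row["q_value"])
            if p <= p_max and q <= fdr_max:
                degs.append((row["gene"], float(row["log2(fold_change)"])))
    return degs

# Hypothetical usage: one pairwise comparison, e.g. spleen infected vs. control
for gene, lfc in select_degs("gene_exp.diff"):
    print(f"{gene}\t{lfc:+.2f}")
```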
MitoProteome Gene Differential Expression Analysis and Pathway Analysis
To make full use of the available mitochondrial information, 1158 human MitoProteome genes were downloaded from https://www.broadinstitute.org/files/shared/metabolism/mitocarta/human.mitocarta2.0.html, which released the nuclear and mitochondrial DNA genes encoding proteins with strong support of mitochondrial localization [70]. Gene names were directly compared with those in chicken. Finally, 886 genes were matched, including the 13 mtDNA-encoded protein genes as well as 873 mitochondria-related nuclear genes. The gene expression and differential expression data of those MitoProteome genes were extracted from the RNA-seq results for further analysis. The Ingenuity Pathway Analysis (IPA) software was used for gene pathway analysis.
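The cross-species matching described above amounts to a case-insensitive intersection of gene symbols; a minimal sketch (illustrative only; the file names and the assumption that orthologs share symbols are ours, not the paper's) could look like this:

```python
def match_symbols(human_mitocarta: set[str], chicken_genes: set[str]) -> set[str]:
    """Intersect human MitoCarta symbols with chicken gene symbols,
    ignoring case, on the assumption that orthologs share names."""
    chicken_upper = {g.upper(): g for g in chicken_genes}
    return {chicken_upper[h.upper()]
            for h in human_mitocarta if h.upper() in chicken_upper}

# Hypothetical inputs: one gene symbol per line in each file
with open("human_mitocarta2.txt") as fh:
    human = {line.strip() for line in fh if line.strip()}
with open("chicken_galgal5_genes.txt") as fh:
    chicken = {line.strip() for line in fh if line.strip()}

matched = match_symbols(human, chicken)
print(len(matched), "MitoProteome genes matched in chicken")
```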
Conclusions
In summary, in this study we investigated the variability of mtDNA copy number and gene expression levels in the three lymphoid organs in response to MDV challenge. We found that MDV challenge had little impact on mtDNA contents in chickens of the MD-resistant line, but the mitochondrial DNA abundance and gene expression levels were obviously altered at the transformation phase, especially in spleen, in chickens of the MD-susceptible line. MDV infection significantly increased the mitochondrial gene expression in the spleen tissue of the MD-susceptible birds, albeit a significant decrease of the mtDNA copy number was observed. Meanwhile, many of the nuclear genes related to mitochondrial genome maintenance and gene expression were up-regulated, except for POLG2, which was instead up-regulated in the MD-resistant line. The data indicated that the POLG2 gene may be a potential regulator of the conflict between the mtDNA copy number and the gene expression of mitochondria in the MD-susceptible birds, directly resulting in an imbalance between metabolism and cell signaling and finally in MD pathogenesis and oncogenesis. Further work is warranted to look into mtDNA replication and gene transcription as well as the mitochondrial regulation mechanism in relation to MDV infection in chicken.
The genome sequence of the yellow-tail moth, Euproctis similis (Fuessly, 1775)
We present a genome assembly from an individual male Euproctis similis (the yellow-tail; Arthropoda; Insecta; Lepidoptera; Lymantriidae). The genome sequence is 508 megabases in span. Over 99% of the assembly is scaffolded into 22 chromosomal pseudomolecules, with the Z sex chromosome assembled. The complete mitochondrial genome, 15.5 kb in length, was also assembled.
Introduction
Euproctis similis, the yellow-tail moth, is widespread across temperate Europe and Asia. In the UK, the moth is relatively common across much of England and Wales, with scattered records from southern Scotland and Northern Ireland. The larvae of E. similis feed on a range of deciduous trees and shrubs, including Crataegus, Prunus, and Betula, in some situations becoming a pest on ornamental and fruit trees. Larvae are also notable for bearing long hairs that can cause skin irritation in humans, although the effects are rarely as serious as those caused by larvae of the closely related Euproctis chrysorrhoea (brown-tail). A genome sequence for E. similis, therefore, may have agricultural and biomedical relevance, in addition to its use in evolutionary biology, ecology and genome biology. The karyotype of E. similis has been previously recorded as n=22 or 23 (Belyakova & Lukhtanov, 1994). This is not unexpected, since Lepidoptera exhibit considerable variation in chromosome number, although n=31 is the most common karyotype (Ahola et al., 2014). The genome of E. similis was sequenced as part of the Darwin Tree of Life Project, a collaborative effort to sequence all of the named eukaryotic species in the Atlantic Archipelago of Britain and Ireland. Here we present a chromosomally complete genome sequence for E. similis, based on one male specimen from Wytham Woods, Oxfordshire (biological vice-county: Berkshire), UK.
Genome sequence report
The genome was sequenced from a single male E. similis (Figure 1) collected from Wytham Woods, Oxfordshire (biological vice-county: Berkshire), UK (latitude 51.772, longitude -1.338). A total of 70-fold coverage in Pacific Biosciences single-molecule long reads (N50 17 kb) and 78-fold coverage in 10X Genomics read clouds were generated. Primary assembly contigs were scaffolded with chromosome conformation Hi-C data. Manual assembly curation corrected 40 missing joins or mis-joins and removed 3 haplotypic duplications, reducing the assembly length by 0.10% and the scaffold number by 42.00%, and increasing the scaffold N50 by 14.24%.
Figure 1. Image of the Euproctis similis specimen (ilEupSimi1) used for genome sequencing. The image was captured during preservation and processing; the specimen is shown below a FluidX storage tube 43.9 mm in length.
The final assembly has a total length of 508 Mb in 30 sequence scaffolds with a scaffold N50 of 24 Mb (Table 1). Over 99.9% of the assembly sequence was assigned to 22 chromosomal-level scaffolds, representing 21 autosomes (numbered by sequence length) and the Z sex chromosome (Figures 2–5; Table 2). The assembly has a BUSCO (Simão et al., 2015) v5.1.2 completeness of 98.6% using the lepidoptera_odb10 reference set. The complete, unbroken mitochondrial genome was assembled and is 15.5 kb in length. While not fully phased, the assembly deposited is of one haplotype. Contigs corresponding to the second haplotype have also been deposited.
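For readers unfamiliar with the N50 statistic quoted above, the following sketch (illustrative, with invented scaffold lengths) shows how it is computed from a list of scaffold lengths:

```python
def n50(lengths: list[int]) -> int:
    """N50: the length L such that scaffolds of length >= L together
    cover at least half of the total assembly span."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Hypothetical scaffold lengths in megabases
print(n50([30, 24, 20, 10, 8, 6]))  # -> 24
```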
Methods
A single male E. similis, ilEupSimi1, was collected from Wytham Woods, Oxfordshire (biological vice-county: Berkshire), UK (latitude 51.772, longitude -1.338) by Douglas Boyes, University of Oxford, using a light trap. The specimen was snap-frozen on dry ice using a CoolRack before being transferred to the Wellcome Sanger Institute (WSI).
DNA was extracted at the Tree of Life laboratory, WSI. The ilEupSimi1 sample was weighed and dissected on dry ice, with tissue set aside for RNA extraction and Hi-C sequencing. Thorax/abdomen tissue was cryogenically disrupted to a fine powder using a Covaris cryoPREP Automated Dry Pulveriser, receiving multiple impacts. Fragment size analysis of 0.01–0.5 ng of DNA was then performed using an Agilent FemtoPulse. High molecular weight (HMW) DNA was extracted using the Qiagen MagAttract HMW DNA extraction kit. Low molecular weight DNA was removed from a 200-ng aliquot of extracted DNA using a 0.8X AMPure XP purification kit prior to 10X Chromium sequencing; a minimum of 50 ng of DNA was submitted for 10X sequencing. HMW DNA was sheared to an average fragment size of 12–20 kb in a Megaruptor 3 system with speed setting 30. Sheared DNA was purified by solid-phase reversible immobilisation using AMPure PB beads with a 1.8X ratio of beads to sample, to remove the shorter fragments and concentrate the DNA sample. The concentration of the sheared and purified DNA was assessed using a Nanodrop spectrophotometer and a Qubit Fluorometer with the Qubit dsDNA High Sensitivity Assay kit. Fragment size distribution was evaluated by running the sample on the FemtoPulse system.
RNA was extracted from thorax/abdomen tissue in the Tree of Life Laboratory at the WSI using TRIzol (Invitrogen). The 10X and RNA-Seq libraries were sequenced on Illumina instruments, the latter on an Illumina HiSeq 4000. Hi-C data were generated from head tissue using the Qiagen EpiTect Hi-C kit and sequenced on HiSeq X.
Assembly was carried out with HiCanu (Nurk et al., 2020); haplotypic duplication was identified and removed with purge_dups (Guan et al., 2020). The assembly was polished with the 10X Genomics Illumina data by aligning reads to the assembly with longranger align and calling variants with freebayes (Garrison & Marth, 2012); one round of Illumina polishing was applied. Scaffolding with Hi-C data (Rao et al., 2014) was carried out with SALSA2 (Ghurye et al., 2019). The assembly was checked for contamination and corrected using the gEVAL system (Chow et al., 2016) as described previously (Howe et al., 2021). Manual curation was performed using gEVAL, HiGlass (Kerpedjiev et al., 2018) and PretextView. The mitochondrial genome was assembled using MitoHiFi (Uliano-Silva et al., 2021). The genome was analysed and BUSCO scores were generated within the BlobToolKit environment (Challis et al., 2020).
Data availability
European Nucleotide Archive: Euproctis similis (yellow-tail), accession number PRJEB42127 (https://identifiers.org/ena.embl:PRJEB42127). The genome sequence is released openly for reuse. The E. similis genome sequencing initiative is part of the Darwin Tree of Life (DToL) project. All raw sequence data and the assembly have been deposited in INSDC databases. The genome will be annotated using the RNA-Seq data and presented through the Ensembl pipeline at the European Bioinformatics Institute.
Raw data and assembly accession identifiers are reported in Table 1.
An Integrated Decision Approach with Probabilistic Linguistic Information for Test Case Prioritization
This paper focuses on an exciting and essential problem in software companies. The software life cycle includes testing software, which is often time-consuming and is a critical phase in the software development process. To reduce the time spent on testing and to maintain software quality, a systematic selection of test cases is needed. Attracted by this claim, researchers presented test case prioritization (TCP) by applying the concepts of multi-criteria decision-making (MCDM). However, the literature on TCP suffers from the following issues: (i) difficulty in properly handling uncertainty; (ii) systematic evaluation of criteria by understanding the hesitation of experts; and (iii) rational prioritization of test cases by considering the nature of criteria. Motivated by these issues, an integrated approach that could circumvent these problems is put forward in this paper. The main aim of this research is to develop a decision model with integrated methods for TCP. The core importance of the proposed model is to (i) provide a systematic/methodical decision on TCP with a reduction in testing time and cost; (ii) help software personnel choose an apt test case from the suite for testing software; and (iii) reduce human bias by mitigating the intervention of personnel in the decision process. To this end, probabilistic linguistic information (PLI) is adopted as the preference structure, which can flexibly handle uncertainty by associating an occurrence probability with each linguistic term. Furthermore, an attitude-based entropy measure is presented for criteria weight calculation, and finally, the EDAS ranking method is extended to PLI for TCP. An empirical study of TCP in a software company is presented to certify the integrated approach's effectiveness. The strengths and weaknesses of the introduced approach are discussed by comparing it with the relevant methods.
Introduction
Multi-criteria decision-making (MCDM) is an attractive concept that involves a set of options evaluated against a set of criteria, either linguistically or numerically. Each criterion is associated with an importance value utilized by the ranking approach to form the ranking order [1]. Zadeh [2] introduced the philosophy of linguistic decision-making and discussed its merits and the flexibility it offers to decision-makers (DMs) in expressing preference information. Later, Herrera et al. [3] fine-tuned the notion and made it more applicable to MCDM. Rodriguez et al. [4] identified a crucial weakness of linguistic term sets (LTSs) and proposed hesitant fuzzy linguistic term sets (HFLTSs) to resolve it. As stated, the HFLTS has the ability to accept more than one rating for a particular alternative-criterion pair.
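To illustrate the kind of preference structure the paper builds on, the sketch below (our illustration, not the authors' formulation; the term scale, probabilities and score function are invented for the example) models a probabilistic linguistic element as linguistic terms paired with occurrence probabilities and computes a simple expected score:

```python
# A probabilistic linguistic element (PLE): each linguistic term s_k on a
# seven-term scale (s_0 = "very poor" ... s_6 = "very good") carries an
# occurrence probability, so an expert can hesitate between terms.
PLE = dict[int, float]  # term subscript -> probability

def expected_score(ple: PLE) -> float:
    """Probability-weighted mean of term subscripts, normalized so that
    partial probability information (probabilities summing to < 1) is
    still usable."""
    mass = sum(ple.values())
    if mass == 0:
        raise ValueError("empty probabilistic linguistic element")
    return sum(k * p for k, p in ple.items()) / mass

# Hypothetical rating of one test case on one criterion:
# "good" (s_4) with probability 0.6, "fair" (s_3) with probability 0.3
rating: PLE = {4: 0.6, 3: 0.3}
print(expected_score(rating))  # ~3.67 on the 0-6 scale
```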
Literature Review on TCP
With the adoption of the agile paradigm by many software enterprises, we discern increasing attention to continuous integration (CI) settings. Such settings permit more frequent integration of software changes, making software development quicker and more cost-effective [17]. The outcomes are utilized to tackle issues and find faults, and speedy feedback is essential to reduce development costs [18]. Within an integration cycle, regression testing (RT) is an activity that takes a significant amount of time. Often, a test set comprises thousands of test cases whose execution takes numerous hours or days [19].
To assist in the RT task, the literature offers various methods, which are generally categorized into three key types [20]: minimization, selection, and prioritization. Test case minimization (TCM) models generally eliminate redundant test cases, reducing the test set based on several attributes. Test case selection (TCS) chooses a subset of test cases, the vital ones for testing the software. Test case prioritization (TCP) endeavors to re-order a test suite to find an "ideal" order of test cases that maximizes specific goals, namely early fault detection. TCP methods are well known in industry and remain an active subject of study.
Modern software systems constantly evolve through the fixing of detected bugs, the addition of new functionality, and architectural refactoring. RT is used to confirm that the modified source code does not introduce new defects. Running a whole RT suite can become expensive, since its size naturally grows through software maintenance and evolution, as reported for an industrial case by Rothermel et al. [21]. For instance, the execution time for running the whole test suite could take some weeks. RT case prioritization (RTCP) has become one of the most effective approaches to reduce the overhead of regression testing [22][23][24][25]. RTCP procedures reorder the execution sequence of RT cases, aiming to execute the test cases that increase the likelihood of detecting faults as early as possible [26,27]. Tahvili and Bohlin [28] proposed a novel decision model for TCP from an industrial perspective.
Traditional RTCP techniques [21,29,30] typically utilized a code coverage criterion (CCC) to guide the prioritization procedure. Naturally, a CCC specifies the percentage of selected code units covered by a test case. The expectation is that test cases with greater code coverage have a greater chance of detecting faults [31]. Furthermore, Li et al. [22] proposed two search-based RTCP models that explore the search space for a sequence with a better fault detection rate. Jiang et al. [23] explored adaptive random models [32] to rank test cases by CCC. To bridge the gap between the two greedy schemes, Zhang et al. [29] suggested an integrated model using the fault detection probability of each test case. Pradhan et al. [33] introduced and conducted an overall empirical assessment of a rule mining and search-based dynamic prioritization methodology with three key components to detect faults earlier. Shrivathsan et al. [34] developed two fuzzy-based clustering methods for TCP using a similarity coefficient and a dominancy measure. Additionally, they adopted the weighted arithmetic sum product assessment (WASPAS) model for ranking with both inter- and intra-perspectives.
Next, Khatibsyarbini et al. [35] classified and critiqued the current state and trends of TCP models using a systematic literature review (SLR). Within the SLR structure, several applicable research questions (RQs) were framed based on the study's goal. Banias [36] proposed a dynamic programming model for handling TCP assessment problems with low memory consumption and pseudo-polynomial time complexity. Chi et al. [37] presented a relation-based TCP technique called additional greedy method call sequence (AGCS), based on method call sequences. The developed approach leverages dynamic relation-based coverage as a measurement to extend the original additional greedy coverage procedure in TCP techniques. Lima and Vergilio [38] presented the outcomes of a systematic mapping investigation on TCP in continuous integration (TCPCI) settings that reported the key features of TCPCI models and their assessment facets. The mapping was used as a plan containing the definition of RQs, assessment attributes, a search string, and an evaluation of search engines. Huang et al. [39] proposed a coverage attribute, code arrangement coverage, that combines the concepts of code coverage and combination coverage. Mahdieh et al. [40] proposed a model that enhances coverage-based TCP procedures by incorporating the fault-proneness distribution over code units. Additionally, they presented the outcomes of a case study showing that the approach significantly improves the additional strategy, a widely utilized coverage-based TCP model.
Research Challenges in TCP
Driven by the inference from the literature review conducted above, some research challenges are put forward.
• Uncertainty in preference elicitation is not properly handled in TCP.
• Preference information from different software personnel is not presented holistically for better TCP. A flexible structure to depict the views holistically is lacking in the state-of-the-art models.
• Weights of criteria that are conflicting and competing with each other are not calculated systematically by capturing DMs' hesitation. Besides, the attitude of DMs is also not taken into consideration during weight calculation.
• The nature of criteria is not considered during the prioritization of test cases, affecting the decision process. Besides, broad rank values are missing in the state-of-the-art models that could promote proper backup management.
Contributions of the Integrated Approach
Driven by the research challenges presented in Section 1.2, certain key contributions are put forward.
• PLI [8] is adopted as the preference structure that handles uncertainty better and provides a holistic view of the data from different software personnel. This concept resolves the first and second challenges.
• An attitude-based entropy measure is proposed with PLI for criteria weight calculation that would capture DMs' hesitation and consider the attitude of DMs during preference elicitation. This resolves the third challenge.
• Further, an evaluation based on distance from average solution (EDAS) approach is extended to PLI for rational prioritization of test cases. The approach considers the nature of criteria during the ranking of test cases and produces broad rank values that promote effective backup management. This resolves the fourth challenge.
• Finally, the integrated approach is exemplified with a real case study of test case prioritization in a software project. The advantages and weaknesses of the introduced method are discussed by comparing it with diverse TCP models.
Outline of This Paper
The paper is organized as follows. Section 2 provides the basic concepts that form the base for the research. Section 3 describes the core idea of the research, which begins with data collection and transformation, followed by criteria weight calculation and ranking of test cases. To clearly demonstrate the usefulness of the proposed work, Section 4 presents a numerical example of test case prioritization in a software company. Section 5 focuses on a comparative analysis that clarifies the strengths and weaknesses of the proposed work. Finally, Section 6 offers concluding remarks and future directions for the research.
Preliminaries
This section is committed to describing certain elementary concepts related to the LTS and its generalized structures.

Definition 1. (Herrera and Herrera-Viedma [3]) $T$ is an LTS of the form $T = \{s_c \mid c = 0, 1, \ldots, \beta\}$. Here, $\beta + 1$ is the cardinality of $T$, $s_0$ is the first element, and $s_\beta$ is the last element of $T$. Certain features of $T$ are: if $c_1 > c_2$, then $s_{c_1} > s_{c_2}$; and $\mathrm{neg}(s_{c_1}) = s_{c_2}$ with $c_1 + c_2 = \beta$ is called the negation operation.
Definition 2. (Rodriguez et al. [4]) $T$ is defined as before. An HFLTS is an ordered finite subset of $T$, given by $h_{TH}(x) = h(x)$, whose terms come from $T$ and which is represented as $h(x) = \{s_c^k \mid c = 0, 1, \ldots, \beta;\ k = 1, 2, \ldots, \#h(x)\}$. Here, $\#h(x)$ refers to the total number of instances.
Definition 3. (Pang et al. [8]) $T$ is defined as before. A probabilistic linguistic term set (PLTS) is an ordered finite subset of $T$ with an associated probability for each term, given by $th(p) = \{s_c^k(p^k) \mid k = 1, 2, \ldots, \#th(p)\}$, where $p^k$ is the probability associated with each term and $\#th(p)$ refers to the total number of instances. For convenience, $th(p) = th = s_c^k(p^k)$ is the PLI, and the collection of PLI yields the PLTS.
Definition 4. (Gou et al. [15]) Let $th_1$ and $th_2$ be two PLI as stated above. Their operational laws are defined through the equivalent transformation functions $f$ and $f^{-1}$ described in [15].
Data Transformation to PLI
This section focuses on the process of transforming Likert scale-based rating information into PLI without loss of generality. To achieve this, data from different people/personnel/experts are collected as Likert-scale ratings. These are linguistic ratings that are natural and easy from the human point of view. To form a holistic decision/preference matrix without loss of generality, the occurrence probability of each linguistic term is determined. The instances are formed based on the descending order of probability values. For ease of understanding, let us consider an example. Six personnel (experts) rate a car with respect to its safety measures using a five-point Likert scale and give the values $E_1 = good$, $E_2 = fair$, $E_3 = good$, $E_4 = bad$, $E_5 = fair$, and $E_6 = fair$. The occurrence probability of each linguistic term is calculated as $good = 2/6 = 0.3333$, $fair = 3/6 = 0.5$, and $bad = 1/6 = 0.1667$. Since the committee has planned to use two instances for analysis, the PLI is constructed with two instances and is given by $\{good(0.3333); fair(0.5)\}$. Clearly, it follows Definition 3. Likewise, the entire preference matrix is constructed.
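A minimal Python sketch of this tallying step is shown below; the function name and the two-instance default are illustrative choices for this example, and the call reproduces the car-safety ratings above.

```python
from collections import Counter

def to_pli(ratings, instances=2):
    """Tally Likert ratings into (term, probability) pairs and keep the
    top `instances` terms by occurrence probability (descending)."""
    counts = Counter(ratings)
    total = len(ratings)
    pli = [(term, round(n / total, 4)) for term, n in counts.most_common()]
    return pli[:instances]

# The worked example above: six experts rate a car's safety.
print(to_pli(["good", "fair", "good", "bad", "fair", "fair"]))
# -> [('fair', 0.5), ('good', 0.3333)]
```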
Attitude-Based Entropy Measure
This section focuses on a new approach for criteria weight calculation that properly captures the hesitation/confusion arising during the elicitation of preferences. Further, the DMs' attitudes are gathered from top officials and used in the formulation for the determination of weights. Generally, weights are determined either with completely unknown information or with partially known information. The latter context requires additional information on each criterion, which is sometimes difficult to obtain and creates additional overhead. To resolve this issue, the former context is adopted. The popular method under the former context is the analytical hierarchy process [41], which faces the problem of consistency maintenance and follows a pairwise comparison that complicates the weight calculation process.
In this study, a novel attitude-based entropy measure is suggested for the rational computation of criteria weights to avoid such issues. The Shannon entropy measure is extended to PLI for rational weight calculation; entropy measures the expected information content and the degree of uncertainty in a probability distribution. In general, MCDM concepts can effectively adopt entropy [42,43], as there is an intrinsic average information transfer between DMs, and variation in the preference information can be captured effectively. Driven by the effectiveness of entropy measures, a stepwise process for assessing the criteria weights by extending the Shannon entropy measure to PLI is provided below.
Step 1: Generate a matrix of order de × n with PLI that is called the criteria weight calculation matrix, where de and n are the number of DMs and criteria, respectively.
Step 2: Convert each PLI into a single value by applying Equation (5), forming a matrix of the same order, where $c$ is the subscript of the linguistic part and $p$ is the probability value associated with each term. Here, $s_c^{*k}$ is a weighted linguistic value and $p^{*k}$ is a weighted probability associated with the linguistic value, calculated as $s_c^{*k} = s^k_{\zeta_l c}$ and $p^{*k} = 1 - (1 - p^k)^{\zeta_l}$, where $\zeta_l$ is the attitude value associated with the $l$th expert; the attitude values lie in the unit interval and sum to unity.
Step 3: Define the deviation value for each criterion by applying Equation (6), which forms a matrix of order $de \times n$, where $v_j$ is the mean value determined for the $j$th criterion.
Step 4: Determine the information entropy by adopting Equation (7), which produces a vector of order $1 \times n$, where $D^{tot}_j$ is the total deviation value determined for the $j$th criterion.
Step 5: Normalize these entropy values by using Equation (8) to obtain a vector of order $1 \times n$, which denotes the criteria weights, where $wt_j$ is the $j$th criterion weight, $\sum_j wt_j = 1$, and each weight lies in the unit interval.

Note: The entropy measure in Equation (7) is inspired by the well-known Shannon entropy, whose validity for PLI is verified using the theoretical foundation from [44,45]. Readers are requested to refer to these articles for clarity. As a novelty, in this section, we adopt the measure to calculate the weight of each criterion by considering not only the deviation in the distribution but also the attitude of each expert. Readers may refer to Appendix A for theoretical aspects of the entropy measure.
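The following Python sketch illustrates one plausible reading of Steps 2-5; since Equations (5)-(8) are not reproduced in the extracted text, the conversion to single values is taken as given, Equation (6) is assumed to be a squared deviation from the column mean, Equation (7) a Shannon-style entropy over the deviation shares, and the attitude scaling is likewise an assumption.

```python
import numpy as np

def entropy_weights(scores, attitudes):
    """Attitude-adjusted entropy weighting (sketch of Steps 2-5).
    `scores` is the de x n matrix of single values from Eq. (5);
    `attitudes` holds one zeta value per expert."""
    de = scores.shape[0]
    adj = scores * np.asarray(attitudes)[:, None]       # attitude scaling (assumption)
    dev = (adj - adj.mean(axis=0)) ** 2                 # Step 3: de x n deviation matrix
    with np.errstate(divide="ignore", invalid="ignore"):
        share = dev / dev.sum(axis=0)                   # D_lj / D_tot_j
        ent = -np.nansum(share * np.log(share), axis=0) / np.log(de)  # Step 4
    div = 1.0 - ent                                     # diversification values
    return div / div.sum()                              # Step 5: normalized weights

wts = entropy_weights(np.array([[5.0, 4.0, 6.0],
                                [4.0, 4.0, 5.0],
                                [6.0, 3.0, 5.0]]), attitudes=[0.4, 0.3, 0.3])
print(wts.round(3), wts.sum())   # weights lie in [0, 1] and sum to 1
```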
PLI-Based EDAS Method
This section focuses on the issue of prioritization of test cases. For this reason, in this study, a new extension is made to the EDAS method with PLI. Ghorabaee et al. [46] presented the EDAS approach with the core idea of assessing alternatives based on average values and applied it for inventory classification. Karaşan and Kahraman [47] extended EDAS to the neutrosophic set and used it for ranking sustainable goals. Peng and Liu [48] came up with a neutrosophic soft EDAS approach with a new similarity measure for solving a software project's investment problem. Mishra et al. [49] also used intuitionistic fuzzy EDAS for healthcare waste management. Liang et al. [50] used the intuitionistic fuzzy EDAS approach to select energy-saving green building projects. Feng et al. [51] proposed an HFLTS-based EDAS approach with a weighted arithmetic operator for project evaluation in a company.
Inspired by the literature, it is observed that (i) EDAS is a powerful ranking method that uses a distance measure in its formulation; (ii) to the best of our knowledge, TCP using the EDAS approach has not been attempted; and (iii) EDAS is a simple and straightforward approach that can be effectively integrated with PLI for rational decision-making.
The step-by-step procedure for the PLI-based EDAS approach is given below.

Step 1: Obtain the holistic decision matrix of order $m \times n$ with PLI from Section 3.1, where $m$ and $n$ are the number of test cases and the number of evaluation criteria, respectively.
Step 2: Obtain the weight vector of order 1 × n by utilizing the process in Section 3.2.
Step 3: Transform the matrix values into weighted single-value elements by using Equation (9), keeping the same order, where $c$ is the subscript of the term and $p$ is the probability associated with the term.
Step 4: Determine the average value for each criterion to form a vector of order $1 \times n$, where $wsv_j$ is the average value for the $j$th criterion. Use these averages together with the matrix values to compute the positive and negative distances from average (PDA and NDA) by using Equations (10) and (11). Though Equations (10) and (11) look similar, the matrix values are normalized and then transformed to adhere rationally to the nature of the criteria: benefit-type criteria are complemented in Equation (11) and cost-type criteria are complemented in Equation (10) before calculating the distance measure. The distance $d(a, b)$ is given by $|a - b|$, where $a$ is the normalized value and $b$ is the mean of the set of normalized values.
Step 5: Test cases are prioritized by taking a linear combination of the vector values from Equations (10) and (11), as given in Equation (12), where $\theta$ is a strategy value in the unit interval.
Arrange the values of $RTCP_i$ in descending order to form the prioritization order of the test cases. Figure 1 depicts the proposed decision model for the rational prioritization of test cases. Initially, complex linguistic expressions are obtained as opinions from experts (software test personnel) on each test case over the criteria. These values are transformed into PLI by using the procedure proposed in Section 3.1. Experts also share their opinions on each criterion, which are used as input for criteria weight calculation by utilizing the method proposed in Section 3.2. Finally, the method presented in Section 3.3 is used for the prioritization of test cases by acquiring input from Sections 3.1 and 3.2. A vector is obtained as output that denotes the test cases' order and aids software test personnel in making rational judgments.
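To make the ranking computation concrete, the sketch below implements the PDA/NDA logic under stated assumptions: the weighted single values are taken as given, normalization is column-wise, cost columns are complemented up front, and the classical EDAS appraisal score is used as a stand-in for Equations (10)-(12), which are not reproduced in the extracted text.

```python
import numpy as np

def pli_edas(wsv, weights, benefit, theta=0.5):
    """EDAS-style prioritization over an m x n matrix of weighted
    single values. Cost-type columns are complemented so that every
    column reads as benefit-type; the appraisal step follows classical
    EDAS as a stand-in for Eqs. (10)-(12)."""
    norm = wsv / wsv.sum(axis=0)                  # column-wise normalization (assumption)
    x = np.where(benefit, norm, 1.0 - norm)       # complement cost criteria
    avg = x.mean(axis=0)
    pda = np.maximum(0.0, x - avg) / avg          # positive distance from average
    nda = np.maximum(0.0, avg - x) / avg          # negative distance from average
    sp = (weights * pda).sum(axis=1)
    sn = (weights * nda).sum(axis=1)
    nsp = sp / sp.max() if sp.max() > 0 else sp
    nsn = 1.0 - (sn / sn.max() if sn.max() > 0 else sn)
    rtcp = theta * nsp + (1.0 - theta) * nsn      # theta-weighted combination, Eq. (12)
    return np.argsort(-rtcp)                      # highest appraisal first

# Shape matches the case study below: 5 test cases, 6 criteria
# (first four benefit-type, last two cost-type), equal weights.
rng = np.random.default_rng(1)
print(pli_edas(rng.random((5, 6)), np.full(6, 1 / 6),
               np.array([True, True, True, True, False, False])))
```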
Numerical Example-TCP in an SME
In this section, the reasonableness and usefulness of an integrated decision model are revealed by considering a software company's real case study. PTX (name anonymous) is a popular software company in Tamil Nadu that develops software related to the banking and commercial sector with core financial concepts. The company is a small and medium enterprise (SME) that has 52 personnel working across various platforms of software development and actively participate in the horizontal and vertical growth of the company. The company's main office is in Chennai, and it is spread around Tamil Nadu with varying workforces and projects. Around eight private commercial sectors are customers of PTX, and they have built a harmonious relationship with the company for around a decade.
A team of seven members works on the testing phase of the software development life cycle, with five to six years of experience in software testing and maintenance. The new software is close to its launch, and the company decided to conduct comprehensive software testing before it is delivered to the market (customer) for live use. Based on their experience, the software personnel created a test suite with five crucial test cases that could identify critical faults of varying grades. Though the software needs many such test cases to identify different faults, these five test cases are substantial for the software under consideration. The faults identified by these test cases hinder the reliability of the software. To make an apt call of test cases from the test suite, six criteria were put forward by the software test team. We conducted a detailed discussion with the personnel and, based on a voting strategy, six potential criteria for evaluating the test cases were finalized. As part of our research work, we requested interview slots with the seven personnel, of whom five agreed. During this session, we asked the five members to share their preferences/grades on the five test cases' abilities over the six criteria. Each expert/member shared his/her grades linguistically. We clarified the core reason behind the data collection and how their data would be used in the research without deviating from the company's ethical policies.
To achieve integrity in the data collection process, we held two interview sessions and clarified our doubts with the software test team. In addition, the test team personnel clarified their queries related to our research and how the inferences would benefit the test team. For the sake of confidentiality, the names associated with the test cases and faults are kept anonymous. Let TC = (TC1, TC2, TC3, TC4, TC5) be a set of five test cases that are evaluated against six criteria, viz., reliance, fault coverage, agility of execution, tractability, memory utilization, and cost, from the set F = (F1, F2, F3, F4, F5, F6). The first four criteria are benefit type, and the rest are cost type. Here, SP = (SP1, SP2, SP3, SP4, SP5) is a set of five software test personnel who provided their grades for TCP. Each person shared grade values, which were transformed to PLI using the procedure presented in Section 3.1. This procedure provides a holistic view of the data, retains data integrity, and adheres to ethical practices. Figure 2 elaborates the proposed research model depicted in Figure 1 and presents a clear view of the workflow of the proposed model. Initially, a holistic data matrix (decision matrix) is obtained from the data collected from software personnel. This decision matrix adopts PLI as the preference structure, as it reduces information loss and provides flexibility to software personnel during preference elicitation. Then, the criteria weight vector is determined with the help of the data provided by the personnel on each criterion. Using the holistic decision matrix and the weight vector, ranking is performed to determine the suitable test case for the process. The popular EDAS approach is extended to PLI for TCP. Average values for each criterion are determined, which are further used to calculate the PDA and NDA values for each test case. Finally, by adopting the linear combination principle, the rank values are calculated for each test case, and the prioritization/ranking order is obtained. A detailed stepwise procedure for rational TCP is given below for clarity.
Step 1: Collect data related to the test cases' performance with respect to each fault from the five personnel involved in the interview session. They provided data linguistically in the form of a seven-point Likert scale.
Step 2: The procedure described in Section 3.1 is adopted to transform the data to PLI to clearly gain potential information with a holistic view of the data. Table 1 shows the PLI-based preference matrix obtained by transforming the linguistic data from the experts using the procedure provided in Section 3.1. In this way, a holistic view of the data is obtained for TCP.

Table 1. Transformed probabilistic linguistic information (PLI) for decision-making: TCP data.
Step 3: Three of the five software personnel also provided their data as complex verbal expressions describing each criterion's importance. In the interview session, we asked the experts (software personnel) to share their opinions on each criterion's importance. Through heuristics and discussion with the experts, these expressions were transformed to PLI. Table 2 depicts the opinions of the experts on each criterion in the form of PLI. By applying the procedure described in Section 3.2, entropy values are determined and diversification values are obtained, which are normalized to give the weight vector 0.174, 0.169, 0.164, 0.169, 0.152, and 0.172.

Table 2. Weight calculation matrix for criteria with PLI.
Step 4: From Step 2, we obtain a matrix of order 5 × 6, which is used by the ranking method in Section 3.3 to form a prioritization order for the test cases. Similarly, Step 3 yields a matrix of order 3 × 6 for the weight calculation of the criteria, where three software personnel provide their grades on the six criteria as complex linguistic expressions.
Step 5: A vector of order 1 × 6 is obtained from Step 4, along with a matrix of order 5 × 6. By applying the procedure proposed in Section 3.3, test cases are prioritized to obtain a vector of order 1 × 5. Table 3 shows the parameter values of the EDAS approach under the PLI context. We obtain three vectors of order 1 × 5; the vector in the last column of Table 3 depicts the rank values, and the order is given by $TC_3 \succ TC_1 \succ TC_2 \succ TC_4 \succ TC_5$.

Step 6: Conduct a sensitivity analysis of the criteria weight values by generating six sets of weight vectors with a single left shift operation. Figure 3 (x-axis: the different weight sets from the shift operation) depicts the test cases' ranking order for all six sets. The figure shows that the ranking order does not change; the proposed model is unaffected by criteria weight alteration and is highly robust to weight changes.
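The six weight sets can be generated with a simple circular left shift, as in the sketch below, which starts from the weight vector reported in Step 3 above.

```python
def left_shift_sets(weights):
    """Six weight sets for the sensitivity analysis: each set is a
    single circular left shift of the previous one."""
    return [weights[i:] + weights[:i] for i in range(len(weights))]

# Weight vector reported in Step 3.
for ws in left_shift_sets([0.174, 0.169, 0.164, 0.169, 0.152, 0.172]):
    print([round(w, 3) for w in ws])  # re-run the ranking with each set
```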
Comparative Investigation with Existing Models
This section tackles the comparative investigation of the introduced model with extant models. We conduct a comparison with respect to TCP and PLI models. State-of-the-art TCP models from Pradhan et al. [33], Shrivathsan et al. [34], and Banias [36] are considered for investigation with the proposed model. Table 4 provides the investigation with respect to different characteristics gathered from experts' intuition and the literature. The ranking order obtained from Zhang and Xing's [14] model is given by $TC_3 \succ TC_2 \succ TC_1 \succ TC_4 \succ TC_5$. Using Spearman correlation, the coefficient values and two-tailed significance values are determined for the ranks; they are given by (1, 0; 0.6, 0.285; 0.6, 0.285; 0.9, 0.037) for the proposed work versus the other methods. From these values it is inferred that the second and third pairs need additional samples to support further arguments, which is planned for future work. Figure 4 shows that the introduced work is consistent with the extant models, with the correlation (rho) values shown above. On the x-axis, the labels 1, 2, 3, and 4 represent proposed vs. proposed, proposed vs. Sivagami et al.'s [13] model, proposed vs. Krishankumar et al.'s [12] model, and proposed vs. Zhang and Xing's [14] model, respectively.
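The reported pair (0.9, 0.037) for the proposed order versus Zhang and Xing's [14] can be reproduced with SciPy from the two rank orders; the rank encodings below are derived from the orders stated in the text.

```python
from scipy.stats import spearmanr

# Ranks of TC1..TC5 implied by the two prioritization orders.
proposed   = [2, 3, 1, 4, 5]   # TC3 > TC1 > TC2 > TC4 > TC5
zhang_xing = [3, 2, 1, 4, 5]   # TC3 > TC2 > TC1 > TC4 > TC5
rho, p = spearmanr(proposed, zhang_xing)
print(round(rho, 2), round(p, 3))   # 0.9 0.037, matching the reported pair
```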
Some key outcomes of the introduced framework are given below.

• The preference structure used in the paper for TCP is an innovative and flexible structure that allows experts to share complex linguistic expressions and associate an occurrence probability with each term. This enhances uncertainty handling and rational MCDM by providing a holistic view of the data from different experts.
• The criteria considered for evaluating TCP are competing and conflicting with each other; hence, weights are systematically calculated to mitigate bias and better capture hesitation.
• Moreover, test cases are prioritized systematically with broad rank values that make backup plans easy.
• Information loss is mitigated by avoiding the transformation of data, which promotes rational prioritization of test cases.
• The proposed model also reduces computational overhead by not acquiring additional data from experts in the form of constraints.

The model's limitations are (i) matrices are assumed to be complete, and systematic imputation of missing values is not considered; and (ii) cross-functional views of test cases from diverse software test teams are not considered in the present study.
Conclusions
This paper develops a new model for TCP by properly handling uncertainty with the help of PLI, which is a flexible structure for handling complex linguistic expressions. Opinions from different software test personnel are holistically represented in a preference matrix, and systematic prioritization of test cases is carried out. An entropy measure is proposed to calculate weights with reduced bias and proper handling of hesitation. Test cases are prioritized by using the EDAS approach, which promotes backup management during catastrophes. The proposed model is highly robust to weight alteration and produces consistent rankings compared with other models, as evident from the sensitivity analysis and Spearman correlation. Furthermore, the model produces broad rank values for effective backup plans, as evident from the deviation analysis.
Some managerial implications of the study are (i) the proposed model is a readymade framework for deployment that could rationally perform TCP; (ii) experts involved in the process must be trained with the PLI structure and framework for effective utilization of the systematic tool; (iii) the model not only helps in TCP but also allows for the effective building of test cases for designing high-quality software; and finally, (iv) the tool could be flexibly adapted by the IT sector for other crucial decision-making problems such as strategy and risk management. As a future research direction, the model's limitations are planned to be addressed and machine learning approaches could be integrated with the proposed framework for enhanced decision-making in terms of test case evaluation and management. Additionally, the idea of comparative linguistic expressions [52,53] could be integrated with PLI for efficient handling of uncertainty during preference elicitation with ease and flexibility for experts to provide their preferences in a natural cognitive way that would promote rational decision-making.
Author Contributions: All authors have read the paper and accept its submission to the journal. The authors' contributions are as follows. The first three authors, A.D.S., R.K., and A.R.M., prepared the initial design of the proposed research model, which was fine-tuned by K.S.R. and S.K. Furthermore, K.S.R. and S.K. provided valuable advice and suggestions for coding, and A.D.S., R.K., and A.R.M. developed the complete code for the model. V.B. gathered the data needed for the validation of the code and helped us refine the data with sufficient pre-processing. K.S.R., S.K., and V.B. discussed the model's overall workflow and provided crucial improvements undertaken by A.D.S., R.K., and A.R.M. A.D.S., R.K., and A.R.M. prepared an initial draft of the paper, which was refined with presentation improvements by K.S.R. and V.B. S.K. and V.B. gave suggestions for improving the results section and the presentation of tables and figures. K.S.R. and V.B. edited the language of the paper along with the fifth author. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no funding.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
It must be noted that the base idea for the proofs is adapted from [44,45].

Theorem A1. The formulation in Equation (7) is a valid entropy measure, satisfying the boundary and monotonicity properties expected of an entropy measure.
Proof. For the proof, let $v_i = D_{lj}/D^{tot}_j$. Given that $v_{\beta/2} = 0.5$, and based on the binary entropic measure $\eta_j(s_{\beta/2})$, the boundary property follows. For the monotonicity property, under the stated ordering of the terms $s_i$, it holds that $En_j\big(th^{(1)}(p)\big) \le En_j\big(th^{(2)}(p)\big)$. The second part may be proved in a similar fashion.
|
v3-fos-license
|
2016-10-08T01:47:31.943Z
|
2016-09-01T00:00:00.000
|
13154648
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "84891fae01f9b3c3834874af3c46b31bacc9e127",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45611",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "84891fae01f9b3c3834874af3c46b31bacc9e127",
"year": 2016
}
|
pes2o/s2orc
|
Satisfaction with Hearing Aids Based on Technology and Style among Hearing Impaired Persons
Introduction: Hearing loss is one of the most disabling impairments. Using a hearing aid as an attempt to improve the hearing problem can positively affect the quality of life of these people. This research aimed to assess the satisfaction of hearing-impaired patients with their hearing aids regarding the employed technology and style. Materials and Methods: This descriptive-analytic, cross-sectional research was conducted on 187 subjects with hearing loss who were using a hearing aid. The subjects were over 18 years of age and had been using a hearing aid for at least 6 months. The Persian version of the Satisfaction with Amplification in Daily Life (SADL) questionnaire was the instrument used for assessing satisfaction with the hearing aid. Cronbach's alpha was calculated to be 0.80 for instrument reliability. Results: A significant difference was observed among satisfaction subscales' mean scores with hearing aid technology. A significant difference was also observed between the total satisfaction score and the hearing aid model. With respect to the analysis of satisfaction with the hearing aid and its style, cost and services was the only subscale which showed a significant difference (P=0.005). Conclusion: Respondents using hearing aids with different technology and style were estimated to be quite satisfied. Training audiologists in fitting more appropriate hearing aids, in addition to using self-report questionnaires like the SADL for estimating patients' social condition and participation in their life, can substantially change their disability condition and compensate for their hearing loss.
Introduction
The World Health Organization has estimated that 360 million people in the world suffer from disabling hearing loss (1). In recent years, hearing loss has not been evaluated solely from a biological approach. Rather, it has also been considered from economic, social, and personal approaches where involvement in communication with others and participation in the community are concerned (2). Hearing loss can consequently lead to social isolation, less activity, and decreased quality of life (3). Hearing aids are the first practical step in the aural rehabilitation process for the majority of those who suffer from hearing loss (1,4).
Fewer than 25% of individuals who could benefit from hearing aids actually use them, and this rate is lower in developing countries (5). This can be due to factors such as the stigma of being labelled for using hearing aids, consumers' dissatisfaction, and the high costs of hearing aids and rehabilitation services (6).
The aim of using a hearing aid is to amplify signals so that sounds become audible for hearing-impaired people. Basically, all hearing aids use analogue technology to amplify sounds (6). Although every hearing aid contains a microphone and a receiver system, the main difference lies in their function. Analogue hearing aids include some limited controls. Programmable hearing aids use digital control circuits and usually allow a more accurate fitting than analogue hearing aids. In digital hearing aids, analogue input signals are converted to digital signals before processing continues (7).
Advancements in digital technology and the rising speed of speech signal processing have driven recent developments in the features of modern hearing aids. However, hearing aid users still have complaints about hearing speech in noisy environments and while talking on the phone (8).
Digital hearing aids are more flexible for fitting and include more complex processing (7). They also have extra features, such as being multi-programmable and having automatic feedback control, in comparison with conventional hearing aids (9). From another perspective, digital hearing aids are getting smaller in size and consume less power compared with analogue ones (7). However, success in the hearing aid adaptation process depends on the user's satisfaction with the hearing aid's results (10). Consumer satisfaction assessment is a key part of comprehensive assessment programs in health care (11). Satisfaction is a subjective phenomenon that reflects patients' perception of the structures, processes, and outcomes of delivered services (12). Studying the efficacy of rehabilitation services and satisfaction with hearing aids among hearing-impaired people can result in delivering more appropriate services adjusted to their needs (2). Since desirable sound amplification influences the efficacy of aural rehabilitation, self-report questionnaires can be considered appropriate instruments for assessing the outcomes of using hearing aids and users' satisfaction (12). The Satisfaction with Amplification in Daily Life (SADL) questionnaire is a self-report questionnaire, developed by Cox and Alexander (1999), to evaluate users' satisfaction in various dimensions of using such a device. This research aimed to assess satisfaction with hearing aids based on technology and style among hearing-impaired people.
Materials and Methods
This research was an analytical, cross-sectional study. The participants were hearing-impaired individuals who were referred to an audiology clinic in the south of Bushehr province, Iran. The inclusion criteria were: referral to the clinic in the last two years, age over 18 years, and at least 6 months' experience in using hearing aids. There were no exclusion criteria. The population comprised 187 people, all of whom consented to the research procedures, including 100 male and 87 female participants aged between 18 and 90. Initially, audiology evaluations were performed on the subjects. Then, they filled in the questionnaire (for illiterate individuals, a reviewer read the questions and marked their answers on the questionnaire). Subject evaluations included two phases:
Audiological Evaluation Including:
Otoscopy was performed to examine the ear, and pure tone audiometry was performed with a calibrated audiometer in an acoustic room. In this evaluation, absolute auditory thresholds were determined for air conduction (at octave and half-octave frequencies of 250-8000 Hz) and bone conduction (at octave and half-octave frequencies of 250-4000 Hz). Speech audiometry evaluated speech recognition thresholds (SRT) and word recognition scores (WRS). Immittance audiometry was used to evaluate the function of the sound transmission system to the inner ear.
Satisfaction Assessment Instrument:
A standard Persian version of the SADL and a demographic questionnaire were used for data gathering. The questionnaire was validated for face and content validity by 5 experts. Cronbach's alpha was calculated as 0.80 for the reliability of the data-gathering instrument (13). The SADL questionnaire was developed for assessing the satisfaction of hearing-impaired people with their current hearing aid. The questionnaire contains 15 questions and four subscales: 1) positive effects, 2) negative features, 3) services and costs, and 4) personal image. The positive effects subscale includes 6 questions about the acoustic and psychological advantages of the hearing aid. Negative features encompasses three questions about the amplification of background noise and acoustics as well as phone use. Three questions about the skills of the prescribing specialist, the hearing aid price, and repair times are included in the cost and services subscale. Personal image is assessed in the last subscale, involving three questions about motivation, cosmetics, and labelling factors associated with using the hearing aid. The mean score of these four subscales was used to assess a respondent's overall satisfaction and is called the global score. A seven-option Likert scale was used for ranking the answers, ranging from "strongly disagree" to "strongly agree". In 11 questions, choosing "strongly agree" meant complete satisfaction and scored 7, while choosing "strongly disagree" meant complete dissatisfaction and scored 1. Four questions were scored reversely, and for these, choosing "strongly disagree" meant complete satisfaction and scored 7. The questionnaire's validity was approved by its developers in 2001, who reported an instrument reliability of more than 0.83 for all of the questions. Demographic characteristics included age, sex, education, experience with hearing aids, and the daily use of hearing aids.
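A small Python sketch of this scoring scheme is given below; the item-to-subscale mapping and the indices of the reverse-scored items are hypothetical placeholders, since the official SADL key is not reproduced in the text.

```python
import numpy as np

def sadl_scores(answers, reverse_items, subscales):
    """Score one SADL response: 15 items on a 1-7 scale; the four
    reverse-scored items are mapped as 8 - x; each subscale score is
    the mean of its items, and the global score is the mean of the
    four subscale scores."""
    a = np.asarray(answers, dtype=float)
    a[reverse_items] = 8 - a[reverse_items]   # reverse scoring on a 7-point scale
    scores = {name: a[items].mean() for name, items in subscales.items()}
    scores["global"] = float(np.mean(list(scores.values())))
    return scores

# Hypothetical item-to-subscale mapping and reverse-item indices.
subscales = {"positive_effects": [0, 1, 2, 3, 4, 5],
             "negative_features": [6, 7, 8],
             "services_costs": [9, 10, 11],
             "personal_image": [12, 13, 14]}
print(sadl_scores([6] * 15, reverse_items=[6, 7, 8, 12], subscales=subscales))
```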
This project was conducted under the ethics committee of Ahvaz Jundishapur University of Medical Sciences (Ethics Approval No. ajums.rec.1393.5). All respondents formally signed their consent. The data were analyzed with descriptive statistics (mean and standard deviation) and inferential statistics (independent t-test, ANOVA, and LSD post hoc tests) in SPSS.
Results
One hundred males and 87 females (range: 18 to 90 years old) were evaluated in this study. Demographic characteristics of the participants are presented in Table 1. As seen in Table 1, the majority of subjects were illiterate, and most were diagnosed with moderate hearing loss.
The majority of the participants (79.20%) used a hearing aid for 8-16 hours per day; 14.40% and 6.40% of subjects used their hearing aid for 4-8 and 1-4 hours daily, respectively. Most respondents had been using their current hearing aid for 1 to 10 years. Overall, 107 (57.21%), 22 (11.76%), and 58 (31.03%) subjects were using digital, programmable, and analogue hearing aids, respectively. Satisfaction assessment results based on hearing aid technology are shown in Table 2. The ANOVA test showed a significant difference in SADL subscales across the different hearing aid technologies. In the cost and services subscale, significant differences were seen between participants who used a digital hearing aid and the other two groups. In the personal image subscale, significant differences were observed between subjects with a digital hearing aid and those with analogue and programmable hearing aids (using the LSD post hoc test).
In the global score, a significant difference was observed between people who used digital hearing aids and those with analogue hearing aids; otherwise, satisfaction levels with the different technologies were estimated to be similar. The maximum level of satisfaction was in the positive effect subscale, where a high degree of respondent satisfaction was observed, and the minimum level was observed in the negative features subscale. Users were estimated to be relatively satisfied in the other two subscales.
Fifty (26.73%) respondents were using ITE hearing aids and 137 (73.27%) were using BTE ones. The results of assessing the satisfaction level based on hearing aid model are shown in Table 3. We found a significant difference between hearing aid models in the global score (Table 3). Subjects with ITE hearing aids were significantly more satisfied in all subscales except for negative features. The maximum level of satisfaction was seen in the positive effect subscale. Nineteen (10.16%) subjects were using hearing aids binaurally and 168 (89.84%) monaurally. The results of assessing the satisfaction level based on hearing aid style are shown in Table 4.
Our analysis demonstrated that users with the binaural style of hearing aid were significantly more satisfied in the cost and services subscale; the other subscales showed no significant difference. Figure 1 shows that the maximum level of satisfaction with hearing aids was in the positive effect subscale and the minimum satisfaction level was observed in the negative features subscale.
Discussion
Subjects were estimated to be relatively highly satisfied based on their mean global score in this study. This result is in agreement with the results of the studies by Cox and Alexander, Viega et al., and Carvalho (14-16).
In the present work, no significant difference in satisfaction with hearing aids was seen with respect to the respondents' sex and age, which is similar to Uriarte et al.'s study (12). In the studies of Hosford-Dunn and Halpern, and Jerram and Purdy, however, there was a significant difference between male and female participants (17,18).
The results of the present study showed less satisfaction across age groups in comparison with Kochkin's research (19). Although Cox and Alexander did not report any correlation between satisfaction and age (14), it must be considered that their study population comprised respondents over 60 years old. Jerram and Purdy studied patients between 30 and 88 years of age (18), and Uriarte et al.'s study was conducted on age groups between 29 and 104 years of age (12). However, this study was done on subjects between 18 and 90 years of age, and this difference in the age groups sampled may explain why a similar result was not observed compared with other investigations on the relationship between participants' age and satisfaction with hearing aids.
Significant differences were seen between all subscales of satisfaction considering the different technologies: people with digital hearing aids were significantly more satisfied in the cost and services, personal image, and negative features subscales. Also, patients with analogue hearing aids were significantly less satisfied in the positive effects subscale and in global satisfaction. Yet, all respondents were estimated to be satisfied with their hearing aids, which is similar to the findings of Vuchrialho et al. and Uriarte et al. (12,20). They explained that technical development in hearing aid technology could cause more satisfaction and reported that the percentage of real users of hearing aids was higher than 20 years earlier. In this study, subjects were using one of three kinds of hearing aid technology: digital, programmable, and analogue, while all of the subjects in Cox and Alexander's study were using analogue hearing aids, so their results cannot be compared with this study. According to Kochkin, a hearing aid's programmability was accompanied by more satisfaction (19). This can explain the differences between the high satisfaction scores in the present research and the results of other studies, such as those of Cox and Alexander, Arlinger, and Kaplan-Neeman et al. (14,21,22). Finally, despite all the advances in hearing aid design and quality improvements, it seems that factors such as users' dissatisfaction arising from unmet high expectations, in addition to the high cost of modern hearing aids, lead to less use of hearing aids.
Moreover, a significant difference was observed between different models of hearing aids in the global score as well as in the positive effect, cost and services, and personal image subscales in this study. Dillon et al. and Kochkin reported a correlation between high satisfaction in the personal image subscale and ITE hearing aids, which supports our findings (19,23). In this research, no difference was observed in global satisfaction (12,19). The positive effect subscale showed the highest mean among the subscales, indicating the high satisfaction of hearing-impaired people with hearing aids in their social life. Considering the hearing aid's sound quality, only a few subjects reported dissatisfaction with the acoustical specifications and psychological effects of their hearing aid. This result confirms the outcomes of Cox and Alexander, the developers of the study's questionnaire (14).
Conclusion
In this study, the satisfaction level with digital hearing aids was estimated to be higher than with other types of hearing aids. However, these types of hearing aids impose higher costs on users. Policies to remove financial barriers to accessing these types of hearing aids need to be studied. Since the prevalence of hearing loss, as well as the need for its rehabilitation, is growing, these rehabilitation services need to be supported by social security and retirement funds. These organizations must specify which groups have priority for using these resources and which patients gain more advantage from hearing aids.
Given the lower satisfaction level with hearing aids among illiterate subjects in this study, more counselling meetings for these patients and spending more time instructing them in using their hearing aids are recommended.
Besides, providing an educational protocol for using amplification in daily life can lead to better results.
|
v3-fos-license
|
2020-09-10T10:24:31.853Z
|
2020-09-05T00:00:00.000
|
225336968
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4907/11/9/968/pdf",
"pdf_hash": "840a5e8ec50aeb96a18d5831c8eb454d3c371ebb",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45612",
"s2fieldsofstudy": [
"Business"
],
"sha1": "b0c37f5d7fc0eb17be27dc21c619f0fdd3a85735",
"year": 2020
}
|
pes2o/s2orc
|
A Two-Stage DSS to Evaluate Optimal Locations for Bioenergy Facilities
Research Highlights: A set of 128 potential bioenergy facility locations is established and evaluated based on transport cost to select optimal locations. Background and Objectives: The identification of optimal facility locations to process recovered forest biomass is an important decision in designing a bioenergy supply chain at the strategic planning level. The result of this analysis can affect supply chain costs and the overall efficiency of the network, due to the low density and dispersed nature of forest biomass and the high costs associated with its logistics operations. In this study, we develop a two-stage decision support system to identify the optimal site locations for forest biomass conversion based on biomass availability, transport distance, and cost. Materials and Methods: In the first stage, a GIS-based analysis is designed to identify strategic locations of potential bioenergy sites. The second stage evaluates the most cost-effective locations individually using a transportation cost model, based on the results from stage one. The sensitivity of inputs, such as the maximum allowable transport cost, the distance of transport and its relation to the profit balance, and changes in fuel price, is tested. The method is applied to a real case study in the state of Queensland, Australia. Results and Conclusions: The GIS analysis resulted in 128 strategic candidate locations being suggested for bioenergy conversion sites. The logistics analysis estimated the optimal cost and transportation distance of each of the locations and ranked them according to overall performance between capacities of 5 and 100 MW.
Introduction
In Australia, native forests, timber plantations, and wood products absorbed 56.5 M tonnes of carbon dioxide in the year 2005, which reduced the total emissions by almost 10% [1]. Australia has 134 M hectares of forest, the seventh-largest reported forest area worldwide. Only one percent of this area is harvested for commercial timber and wood products [2]. The leftover material, forest biomass, can provide additional revenue streams for forest managers and supply a bioenergy market, while further contributing to climate change mitigation efforts [3]. Using forest biomass for bioenergy should be promoted to become an integrated part of forestry and a priority for all biomass utilization projects [4]. Sustainably sourced forest biomass can be combusted to generate heat, steam, and electricity [5,6]. However, this bioenergy trend receives little public attention and political support in Australia [7]. The bioenergy market represents only 4% of total energy production in Australia [8] and, of this, forest biomass is 25% and bagasse or sugarcane residue is 29% [8,9].

The Biomass Resource Assessment Version One (BRAVO) was the first reference GIS-based DSS developed for bioenergy facility locations [25]. The BRAVO model was then altered by Voivontas et al. [20] to successfully implement the use of suitability and optimality analyses, in a GIS DSS, for locating facilities. The suitability analysis evaluated the centroids of administrative areas as potential facility locations. Shi et al. [16] converted remotely sensed biomass data for the supply of resources in a service-area model, using potential facility locations on a road network as demand points. Zhan et al. [38] established a delivery cost surface and treated every point of the surface as a potential location for energy conversion. Guilhermino et al. [39] applied a suitability analysis to different municipalities in a case study in Portugal to find the best location for energy conversion. Ranta [19] applied a resource location-allocation model to supply logging residues at the lowest possible cost in a case study in Finland. Frombo et al. [40] described an environmental DSS (EDSS) to minimize the overall cost of planning woody biomass logistics while taking into account the environmental impact. Freppaz et al. [42] applied a DSS to minimize the cost of transport and to maximize the capacity of six candidate facilities in a case study in Italy. Nord-Larsen and Talbot [41] estimated the total delivery cost of forest fuel resources in Denmark using a linear programming model under different supply-demand scenarios. Zhang et al. [27] applied a reduced transportation cost model to find the optimal location for biomass conversion and tested its sensitivity to changes in fuel price, biomass availability, and transportation distance. Woo et al. [33] applied an MCA in combination with lowest-cost linear programming to find the best location to convert woody biomass in a case study in Tasmania. Each of these studies uses spatial components to visualize and calculate distances between supply points and candidate facilities. The spatial component is then combined with non-spatial data for transport, supply and demand quantities, and other constraints in order to satisfy the objective function.
This paper presents a two-stage DSS approach that identifies the optimal location of forest biomass-to-bioenergy facilities based on available biomass, transport distance, and transport cost. The objectives of the DSS are (1) to identify strategic locations to convert forest biomass into bioenergy products based on biomass availability using an advanced GIS analysis, and (2) select the optimal bioenergy facility locations by reducing the distance and cost of transport using a transportation cost model. The state of Queensland, Australia, is used as a case example to demonstrate the developed DSS.
The rest of the article is organized as follows: Section 2 describes an outline of the two-stage DSS approach and briefly details the study area of Queensland, Australia. The results of the model implementation and sensitivity analysis are given in Section 3. Section 4 discusses the key findings. Finally, Section 5 presents concluding remarks and possible extensions for future studies.
Materials and Methods
The research applies a two-stage DSS to optimize the location of bioenergy facilities. The first stage of the DSS is a GIS-based analysis to identify strategic facility locations based on the availability of forest biomass and the suitability of a strategic location for bioenergy conversion. The second stage is a transportation model analysis to identify optimal facility locations. An overview of the objects and attributes for strategic and optimal facility locations is presented in Figure 1.
GIS Analysis
GIS analysis is used to identify strategic locations to convert forest biomass into bioenergy products based on the availability of forest biomass. The method was previously described and tested by Van Holsbeeck and Srivastava [43] and consists of availability and suitability analyses. The availability analysis considers the following attributes, as outlined in Figure 1: forest area, type, log harvesting volumes, residue types, residue ratios, sustainability ratio, energy content, administrative area, and relative footprint. The estimated forest footprint and the available forest biomass are combined in an energy heatmap. The suitability analysis applies the Local Index of Spatial Autocorrelation (LISA) [44] to the heatmap as a technique to identify significant hotspots or clusters of high forest biomass energy in the forest. Centroids of significantly high-biomass areas are delineated and refined in an exclusion analysis. Hotspots or clusters that are not located within 200 m of the road network are eliminated from further analysis. The remaining locations are identified as strategic facility locations due to their high forest biomass availability and proximity to the forest resource and road network, and serve as biomass demand points for the second stage (model analysis).
Transportation Model Analysis
The transportation cost model uses forest locations as a source of supply, the strategic facility locations (GIS analysis) as a source of demand, a set of transportation cost formulae and the one-way supply-demand distances to find the optimal facility location. The shortest path distance between forest and strategic locations is calculated by applying the Dijkstra algorithm [45]. To find the minimum cost associated with the shortest path for each forest and strategic facility location, a distance-dependent cost formula is developed for the transportation network.
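To make the shortest-path step concrete, the sketch below builds a small origin-destination distance matrix with Dijkstra's algorithm, in the spirit of the supply-demand distances described above. It is a minimal illustration rather than the authors' implementation: the toy network, edge lengths, and the node indices for supply and facility points are all hypothetical.

```python
# Minimal origin-destination sketch using Dijkstra's algorithm (SciPy).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Toy road network: 6 nodes, edge weights are road-segment lengths in km.
edges = np.array([
    [0, 1, 12.0], [1, 2, 8.0], [2, 3, 15.0],
    [1, 4, 20.0], [4, 5, 9.0], [2, 5, 11.0],
])
graph = csr_matrix(
    (edges[:, 2], (edges[:, 0].astype(int), edges[:, 1].astype(int))),
    shape=(6, 6),
)

supply_nodes = [0, 3, 5]   # forest supply points (hypothetical indices)
facility_nodes = [2, 4]    # candidate strategic locations (hypothetical)

# Shortest one-way distances from each candidate facility to every node,
# treating roads as traversable in both directions.
dist = dijkstra(graph, directed=False, indices=facility_nodes)

# l_ij: one-way supply-demand distance matrix (facilities x supply points).
l_ij = dist[:, supply_nodes]
print(l_ij)
```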
Transportation Cost Formula
The development of a transportation cost formula is based on a What-IF analysis in Microsoft Excel [46], and the resulting formula is used to compute the cost of transportation. The analysis includes a range of parameters for the tractor-trailer, operator, utilization, operation distance, speed, fuel consumption, and operating cost, based on experimental trucking data adjusted to operations in Australia according to the National Heavy Vehicle Regulator (2019) [47]. The following parameters and values are used for the What-IF analysis:
• Tractor-trailer: a B-train vehicle with a gross weight of 62.50 tonnes; a maximum volume of 300 m3; a tractor purchase price of AUD 300k and a trailer price of AUD 85k; a tractor salvage value of AUD 60k and a trailer salvage value of AUD 8.5k; a 5-y tractor life and a 10-y trailer life.
• Utilization: 230 operating days y−1 with 1 shift day−1; a maximum of 12 h shift−1 with 0 h shift−1 overtime; an operational delay time of 5% and 95% available time.
• Operating cost: a loan interest rate of 10%; registration costs of 8k AUD y−1, insurance of 18k AUD y−1 and miscellaneous costs of 3k AUD y−1; profit and overhead costs of 8%.
A weighted linear equation is established in correspondence with the parameters of the What-IF analysis described above, as shown in Equation (1):

C_T = 9150.77 + 179.34 · l_ij (1)

where C_T is the one-way transportation cost per unit (AUD MW−1) and l_ij is the one-way transportation distance (km) between nodes. The equation consists of two components: a fixed cost and a variable (distance-dependent) cost. The fixed cost of 9150.77 AUD MW−1 covers the cost of salaries, maintenance, depreciation and interest, registration and insurance, and profit and overheads. The constant coefficient associated with the distance-dependent cost, 179.34 AUD MW−1 km−1, corresponds to the average fuel cost of 1.42 AUD L−1 [48] and compensates for both loaded and unloaded travel.
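As a minimal sketch of Equation (1), the helper below computes C_T from the fixed and distance-dependent components quoted above; the function name and the example distance are illustrative only.

```python
# Equation (1): one-way transport cost per unit of biomass energy.
FIXED_COST = 9150.77   # AUD MW^-1: salaries, maintenance, depreciation, etc.
VAR_COST = 179.34      # AUD MW^-1 km^-1: fuel at 1.42 AUD L^-1, both legs

def transport_cost(l_ij_km: float) -> float:
    """C_T in AUD MW^-1 for a one-way distance l_ij in km."""
    return FIXED_COST + VAR_COST * l_ij_km

# Example: a supply point 40 km from the candidate facility.
print(round(transport_cost(40.0), 2))  # 9150.77 + 179.34 * 40 = 16324.37
```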
Transportation Model
A linear transportation cost model is formulated with the aim of calculating the total transport cost between the available biomass supply points and each candidate facility location. Facilities are not competing for the biomass; only one facility is assumed to be present at any given time. The model consists of a set of m potential sites (forest nodes) from which a set of n potential facilities (strategic locations) can satisfy their demand. The index set of all candidate forest sites is denoted by I = {1, . . . , m} and the index set of all potential facilities by J = {1, . . . , n}. For each potential facility j ∈ J a set of capacity levels is defined. Each forest site has a certain amount of biomass available, Q_i (MW), and each facility has a given demand, d_j (MW), supplied from the m supply points. The unit transport cost between a source node and a destination node is represented by c_ij (AUD MW−1), found using the cost formula in Equation (1). The distance between harvesting site i and bioenergy facility j is denoted by l_ij (km). For each demand point j, the supply points are ranked by shortest distance, and the available quantities of forest biomass Q_i are added until the demand d_j of facility j is fulfilled. When this condition is met, the demand of the facility is defined as displayed in Equation (2):

d_j = Σ_{i=1}^{m_j} Q_i (2)

where m_j is the number of nearest supply points required to meet the demand. The transportation cost for the jth demand point (TC_j) is the sum product of the transport cost c_ij for each of these supply points and the amount of biomass Q_i associated with that point. The total transport cost for demand point j is given by Equation (3):

TC_j = Σ_{i=1}^{m_j} c_ij · Q_i (3)
The average transportation cost per unit of biomass (AUD MW−1) for the jth demand point, ATCU_j, is calculated using Equation (4) by dividing the total transportation cost TC_j by the total demand d_j at demand point j. ATCU_j is a normalized value of TC_j that improves readability:

ATCU_j = TC_j / d_j (4)

The total transportation distance for the jth demand point (TD_j), in km, is the sum of the transport distances l_ij for the supply points serving that demand point, calculated according to Equation (5):

TD_j = Σ_{i=1}^{m_j} l_ij (5)

Finally, the average transportation distance per unit of biomass (km MW−1) for a given facility site, ATDU_j, is calculated using Equation (6) by dividing the total transportation distance TD_j by the total demand d_j:

ATDU_j = TD_j / d_j (6)
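One way to read Equations (2)-(6) is as a greedy fill of each facility's demand from its nearest supply points. The sketch below follows that reading; it is not the authors' code, and the supply list and demand value are hypothetical.

```python
# Greedy evaluation of a single demand point j, following Equations (2)-(6).
def evaluate_demand_point(supplies, demand_mw, cost_fn):
    """supplies: (l_ij in km, Q_i in MW) pairs for one candidate facility j."""
    supplies = sorted(supplies, key=lambda s: s[0])   # rank: nearest first
    remaining = demand_mw
    tc_j = 0.0   # total transport cost, Equation (3)
    td_j = 0.0   # total transport distance, Equation (5)
    for l_ij, q_i in supplies:
        if remaining <= 0:
            break
        taken = min(q_i, remaining)        # fill demand d_j, Equation (2)
        tc_j += cost_fn(l_ij) * taken      # c_ij * Q_i contribution
        td_j += l_ij                       # one-way km for this supply point
        remaining -= taken
    atcu_j = tc_j / demand_mw              # Equation (4), AUD MW^-1
    atdu_j = td_j / demand_mw              # Equation (6), km MW^-1
    return tc_j, atcu_j, td_j, atdu_j

cost = lambda l: 9150.77 + 179.34 * l      # Equation (1)
supplies = [(12.0, 2.0), (25.0, 3.0), (40.0, 4.0)]   # hypothetical (km, MW)
print(evaluate_demand_point(supplies, demand_mw=5.0, cost_fn=cost))
```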
Study Area and Data Management
To demonstrate the developed DSS, this study uses the state of Queensland, Australia. Queensland is the second-largest state in the country, with a total forest area of 51 M ha [2]. An area of 20 M ha of state-owned native forest is commercially available for timber harvest, together with 1 M ha of private native forest and 216,000 ha of plantations [49,50]. The total timber volume processed in the financial year 2017-2018 equalled 3,153,000 m3 [51]. The majority of harvest operations take place in softwood plantations, mostly located in southeast Queensland. Previous studies by the Australian government estimate a total annual production of 600,000 m3 of forest harvest residues and 950,000 m3 of sawmill residues [52]. There are 33 softwood sawmills and 61 hardwood sawmills in Queensland [50]. A densified fuel pellet production facility is located in southeast Queensland with a production capacity of 125,000 tonnes per year [53]. Most of these pellets are shipped overseas for energy production and consumption and do not contribute to the Australian renewable energy target. Twenty-three percent of the renewable energy in Queensland is derived from biomass resources (2% of the total energy produced in Queensland) [54]. In September 2018, a total of 49 bioenergy projects were in operation in Queensland, largely sourced from municipal waste (57%) and agricultural residue (27%) [8], while wood waste, or forest biomass, is an underutilized renewable energy feedstock in Queensland that supplies only 6% of renewable energy projects [8].
For the GIS availability analysis, log harvest volume databases published by the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) [55] and the Queensland Department of Agriculture and Fisheries [49] are combined with a range of conversion factors derived from the literature [43] and a mapping dataset [56-60]. The log harvest database for Queensland includes a range of species across softwood and hardwood plantations and native forests. For the GIS suitability analysis, the Queensland road network and the Australian Statistical Geography Standard are combined with the LISA analysis [61,62]. The availability and suitability analyses are performed in ArcGIS Desktop Version 10.7 (ESRI Australia Pty. Ltd., Brisbane, QLD, Australia) [63].
For the model analysis, a total of 128 demand points or strategic facility locations, 80,920 forest supply points and the Queensland road network are combined to calculate the shortest path distance using the Origin-Destination Cost Matrix in ArcGIS [45,63]. The quantity Q i of forest biomass available in megawatts (MW) at each supply point is determined in the availability study using the data described in [43]. The results of the transportation cost model are analysed using What'sBest! Version 16.0 (LINDO Systems Inc., Chicago, IL, USA) [64].
Sensitivity
The model analysis is based upon a number of assumptions and the establishment of a base case scenario, which includes 100% availability of forest biomass and a fuel price of 1.42 AUD L−1. The maximum transport distance and cost of the base case are established according to a reference scenario that includes a harvest cost of 48.25 AUD odt−1, a stumpage cost of 0 AUD odt−1, and a gate price of 64.80 AUD odt−1. In reality, this might not always be the case: fuel prices fluctuate, and not all forest biomass may be available, as some may find its way to alternative uses. Furthermore, other costs such as stumpage and harvesting costs might change as the use of forest biomass for bioenergy intensifies. With this in mind, the sensitivity of the transportation cost and distance and of the optimal facility location should be tested against changes in these key parameters. The sensitivity analysis focuses on the following three parameters:
• a combination of the gate price, harvest and stumpage cost;
• biomass availability;
• fuel price.
Each of these parameters is tested separately for deviations from the base case scenario. The ATDU j and ATCU j from the different scenarios are compared to the reference base case. The combined effect of the gate price, harvest and stumpage cost does not affect the calculation of ATCU j and ATDU j according to Equations (4) and (6) but affects the maximum transportation distance by the incorporation of Equations (1) and (5). The availability of biomass and the cost of fuel directly impact the formulation of TC j in Equation (3), which is carried through into the calculation of ATCU j and ATDU j in Equations (4) and (6). The sensitivity analysis for biomass availability and fuel price is only tested on the ten best-performing facilities according to the model analysis of the base case.
Gate Price, Harvest and Stumpage Cost
As described earlier, the cost of transportation is defined by Equation (1), which corresponds to a relationship between distance and the attributes of transportation, e.g., trailer, operator, and utilization. However, for economically feasible solutions, the cost of transport is mostly limited by a profit margin defined by other costs in the supply chain. In order to be profitable, the price a contractor receives for the delivered biomass at the bioenergy facility (gate price) needs to outweigh the costs associated with the delivery of biomass. These costs comprise a stumpage cost (the price paid to the forest biomass grower), a harvest cost for the harvest of forest biomass, and a transport cost. Thus, the maximum transport cost (TC_max) can be defined according to Equation (7):

TC_max = gate price − harvest cost − stumpage cost (7)

When substituting Equation (1) into Equation (7), the maximum transport distance (TD_max) can be calculated according to Equation (8):

TD_max = (TC_max − 9150.77) / 179.34 (8)

Values for gate price, harvest cost, and stumpage cost can vary significantly based on the type of harvest system, forest type, type of forest biomass or tree species, the amount of biomass, equipment, or even the deployment of biomass use in the area. A list of the values used is outlined in Appendix A.
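The scenario screening of Equations (7) and (8) can be sketched as follows. The odt-to-MW conversion factor is an assumption chosen so that the base case margin of 16.55 AUD odt−1 maps to roughly 25,200 AUD MW−1, as reported later; the non-base price and cost values are hypothetical placeholders for the Appendix A list.

```python
# Scenario screening for Equations (7) and (8).
from itertools import product

ODT_TO_MW = 25200.0 / 16.55              # assumed AUD odt^-1 -> AUD MW^-1
FIXED_COST, VAR_COST = 9150.77, 179.34   # Equation (1) components

gate_prices = [64.80, 80.00]       # AUD odt^-1 (second value hypothetical)
harvest_costs = [48.25, 30.00]     # AUD odt^-1 (low-cost value hypothetical)
stumpage_costs = [0.00, 10.00]     # AUD odt^-1 (non-zero value hypothetical)

for gate, harvest, stumpage in product(gate_prices, harvest_costs, stumpage_costs):
    tc_max_odt = gate - harvest - stumpage           # Equation (7)
    if tc_max_odt <= 0:
        continue                                     # unprofitable, discard
    tc_max_mw = tc_max_odt * ODT_TO_MW
    td_max = (tc_max_mw - FIXED_COST) / VAR_COST     # Equation (8), km
    print(f"gate={gate:.2f} harvest={harvest:.2f} stumpage={stumpage:.2f} "
          f"TCmax={tc_max_mw:,.0f} AUD/MW TDmax={td_max:.0f} km")
```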
The decision to include a low-cost scenario for the harvest cost is based on the motivation that the cost of the harvest, extraction and chipping of forest biomass can be reduced once more efficient supply chains are established or low-cost harvesting methods are applied in the case study area. On the other hand, a higher cost for harvest is included based on the motivation that infield chipping requires less machinery compared to typical extraction and roadside chipping operations but tends to have a higher cost of operation. The decision to include a moderate and high-cost scenario for stumpage cost is based on the growing interest in forest biomass. Increasing interest in biomass will add value to the material, which allows for landowners to create additional revenue. Increasing interest in forest biomass also justifies the reason for including a higher gate price scenario.
The effect of price and cost changes is reflected in the maximum allowable transport cost and distance according to Equations (7) and (8). All possible combinations of gate price, harvest and stumpage cost are tested in Equation (7), and only positive values of TC_max are allowed, to secure a profitable supply chain. The remaining scenarios allow for the calculation of the maximum allowable transport distance (TD_max) in Equation (8). The maximum allowable capacity for the average facility is defined as the intersection between the average ATDU_j of all facilities according to Equation (6) and the TD_max calculated for the remaining scenarios.
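The capacity cut-off can be sketched as an interpolation problem: CAP_max is the capacity at which the mean ATDU_j curve crosses TD_max. The two endpoint ATDU values below are taken from the reported results (86 km MW−1 at 5 MW and 341 km MW−1 at 100 MW); the intermediate points are assumed for illustration.

```python
# Intersection of the mean ATDU_j curve with TD_max to obtain CAP_max.
import numpy as np

capacities = np.array([5, 10, 20, 40, 60, 80, 100])       # MW
mean_atdu = np.array([86, 120, 160, 210, 260, 300, 341])  # km MW^-1
# (curve shape assumed; endpoints taken from the reported results)

def cap_max(td_max_km: float) -> float:
    """Capacity (MW) at which the mean ATDU_j reaches TD_max."""
    if td_max_km < mean_atdu[0]:
        return 0.0   # no supportable capacity within this distance limit
    # np.interp needs increasing x, so interpolate the inverted curve.
    return float(np.interp(td_max_km, mean_atdu, capacities))

print(cap_max(89.0))   # base case TD_max: a small capacity, as in the text
```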
Biomass Availability
In the base case situation, the forest biomass quantities are calculated according to a previous study [43] and an assumption based on harvested log volumes in the state of Queensland. The estimated availability of the biomass already captures losses of biomass due to sustainability and technical constraints. However, it is unlikely that the remaining biomass will all be harvested and transported, or solely used for bioenergy purposes. Paper industries, horticulture or pellet exports will result in a reduction in forest biomass that will not be available for the Australian energy market [52]. To consider that less than 100% of the forest biomass will be converted to bioenergy in Queensland, several lower biomass availability scenarios are established and compared with the base case where 100% availability was used. The effect of reduced availabilities from 100% to 50% is tested on ATCU_j and ATDU_j in decrements of 10%. The reduced availability is applied to the parameter Q_i in Equations (2) and (3).

Fuel Price

The distance factor in Equation (1) is established in correspondence with the average fuel price in 2019 for Queensland, Australia (1.42 AUD L−1) [48]. In order to evaluate the effect of fuel price on the total cost of transport, the minimum and maximum fuel prices for 2019 are compared against the average. The minimum fuel price for 2019 is 1.12 AUD L−1, which corresponds to an adjustment of Equation (1) according to the What-IF analysis in Microsoft Excel [46] and gives Equation (9); similarly, the maximum fuel price of 1.73 AUD L−1 in 2019 results in Equation (10). Both Equations (9) and (10) are used to substitute c_ij in Equation (3) of the transportation model for the calculation of ATCU_j and ATDU_j. The sensitivity is tested only for the ten best-performing facilities and compared with the results of the base case model analysis. The ATCU_j and ATDU_j are calculated for capacities ranging between 5 and 100 MW.
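The exact coefficients of Equations (9) and (10) cannot be recovered here, so the sketch below rests on a stated assumption: the whole distance-dependent term of Equation (1) is scaled in proportion to the fuel price, whereas the authors' What-IF recalculation would adjust only the fuel share of that term.

```python
# Fuel price sensitivity under an assumed proportional scaling of the
# distance-dependent cost term (the true Equations (9)/(10) differ).
BASE_VAR = 179.34    # AUD MW^-1 km^-1 at the 2019 average of 1.42 AUD L^-1
FIXED = 9150.77      # AUD MW^-1

def cost_at_fuel_price(l_km: float, fuel_aud_per_l: float) -> float:
    var = BASE_VAR * (fuel_aud_per_l / 1.42)   # assumption: linear in price
    return FIXED + var * l_km

for price in (1.12, 1.42, 1.73):               # 2019 minimum / mean / maximum
    print(price, round(cost_at_fuel_price(40.0, price), 2))
```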
GIS Analysis: Strategic Facility Locations
The total harvestable area of forest in Queensland is estimated to be 13.6 M ha, including both plantations and native forests. The total amount of forest biomass energy is estimated to be 732 MW. This energy is produced annually in plantations and native forests combined and includes the production of pulp logs, sawmill residues and field residues. The total harvestable forest footprint and total biomass energy are aggregated in an energy heatmap and used as inputs for LISA in the suitability analysis. This results in 844 administrative area polygons with significantly high biomass energy. The centroids of these 844 locations act as potential locations for bioenergy facilities. Testing proximity to the road network (200 m) leaves only 131 eligible locations. Three of the 131 locations are situated on Bribie Island and were excluded from further analysis due to connectivity issues. This leaves 128 strategic facility locations for bioenergy purposes in the state of Queensland. Table 1 presents the mean and standard deviation (Std Dev) of the TC_j, ATCU_j, TD_j and ATDU_j values of the 128 strategic locations over the different capacities. Potential demand levels from small capacity (5 MW) to high capacity (100 MW) are tested for each location. At each capacity level, we can determine the average, minimum and maximum transport cost or distance per MW if we were to open a facility there.
Model Analysis: Optimal Facility Location
TC j and TD j values increase exponentially with increasing capacity. Normalizing the TC j and TD j values by the capacity creates a linear trend and smaller values for interpretation.
Because each strategic location is evaluated separately, possible facility locations can be ranked according to their ATCU_j or ATDU_j values, with the lowest value considered the most optimal location. From the results shown in Figure 2, we can identify which of the 128 facility locations has the lowest ATCU_j and ATDU_j across all capacities (5-100 MW) based on hierarchical ranking. The labels of the 10 best facilities are shown. The dots represent the 128 strategic locations, ranked by colour from lowest (green) to highest (red) ATCU_j. In Figure 2a, the ATCU_j value is averaged across all capacities for each facility. In Figure 2b, the ATCU_j of a 5-MW facility is presented for each facility. Figure 2a indicates that the most optimal locations for energy conversion are located in the southeast (green), while other locations (red) are not feasible. However, looking at one particular small capacity (5 MW) in Figure 2b, it appears that some locations in the north can also be considered optimal in addition to the southeast. Location "476" is the best performing location, with the lowest ATCU_j and ATDU_j across the range of capacities (Figure 3). The extent of the network required to satisfy location "476" at a range of capacities is presented in Figure 3; as the most optimal location, its network connections are minimal but grow with increasing capacity.

Table 1. Mean and standard deviation of the total transportation cost for the jth demand point (TC_j), the average transportation cost per unit of biomass (ATCU_j, AUD MW−1), the total transportation distance (TD_j) and the average transportation distance per unit of biomass (ATDU_j, km MW−1) of the 128 strategic locations for different capacities in Queensland.
Gate Price, Harvest and Stumpage Cost
The maximum transport cost is calculated for the base case (Scenario 1) as shown in Table 2. The base case scenario includes a stumpage cost of 0 AUD odt−1, a harvest cost of 48.25 AUD odt−1, and a gate price of 64.80 AUD odt−1. The maximum transport cost is 16.55 AUD odt−1, or 25,200 AUD MW−1 after conversion. The maximum transportation distance is 89 km, measured as the return distance between the facility and the forest. Different combinations of gate price, harvest and stumpage cost are summarized in Table 2. Only the positive values of TC_max in Equation (7) are allowed, to secure a profitable supply chain.

Table 2. Estimated maximum transport cost (TC_max) and maximum allowable transport distance (TD_max) values for a range of scenarios based on changes in gate price, stumpage and harvest costs.
Negative TC_max values are removed from Table 2. The remaining scenarios allow for the calculation of TD_max in Equation (8) and are presented in Table 2. We notice that only one scenario (10) with a stumpage cost of 28.27 AUD odt−1 remains; combinations of this stumpage cost with other harvest costs and gate prices resulted in a negative TC_max. We also notice that none of the high harvest costs (77.16 AUD odt−1) appear in the table, which means that these costs outweigh the price received for the biomass and result in a loss in the supply chain.
In Figure 4, the maximum transport cost calculated in Table 2 for the base case scenario is compared to the mean ATCU_j values (J = 128) from the foregoing analysis (Table 1). For further comparison, the mean ATCU_j of the ten best strategic locations (J = 10, Figure 2) at each capacity is added to Figure 4. The graph shows that, based on the calculated average, the 10 best locations remain under the maximum transport cost threshold (y = 25,200); only at capacities above 90 MW does the curve exceed TC_max. On the other hand, the mean of all strategic locations exceeds TC_max at every capacity, indicating that the average facility location in Queensland would not be profitable. However, Table 3 presents the number of locations with ATCU_j under TC_max in more detail and indicates that, especially at lower capacities, a large set of locations can be feasible. The maximum allowable capacity for a facility can be defined as the intersection between the average ATDU_j of all strategic locations according to Equation (6) and the TD_max calculated for the remaining scenarios (Table 2). In Figure 5, the mean ATDU_j of all strategic locations is compared to the TD_max of the ten best scenarios. The dotted line y = 89 is the TD_max value of the base case scenario in Table 2.
The point of intersection is at y = 89, x = 6, meaning that, based on the maximum return travel distance of 89 km MW−1, we could supply the average location in Queensland with up to 6 MW of forest biomass energy (CAP_max). With an increasing or decreasing gate price, stumpage or harvest cost, this maximum allowable capacity changes. The different values for TD_max in Table 2 are presented in Figure 5 (TD_max = y) and the corresponding CAP_max values are indicated. Notice that at a maximum transport distance of 4 km (scenario 7), a negative capacity was retrieved and therefore this scenario cannot be sustained. For scenario 5, we find a maximum capacity of 0 MW, which also indicates that there is not enough supply within this distance cut-off to satisfy a facility. Table 4 presents the percentage of strategic locations (j) that fall under the maximum allowable transport distance (TD_max) for each scenario and the range of capacities. From the table, we can see that 99% of locations can be supplied with 5 MW if we were able to transport biomass over a total distance of 302 km, as in scenario 3. This percentage drops with increasing capacity and decreasing distance and becomes 0% when the maximum transport distance is reduced to 4 km and capacities exceed 5 MW. Despite this, 1% of the locations are able to produce 5 MW of energy under a maximum transport distance of 4 km.
Biomass Availability
The effect of reduced availabilities from 100% to 50% is tested on the mean ATCU_j and ATDU_j of the 10 best strategic locations in decrements of 10%. For the sake of simplicity, the mean ATCU_j and ATDU_j are only calculated for capacities of 5, 10, 15 and 20 MW and are presented in Figure 6. For example, the ATCU_j of location "476" at 20 MW and 100% availability is 20,300 AUD MW−1 and increases to 21,200 AUD MW−1 at the same capacity with only 50% of the biomass available. For location "801", the ATDU_j is 3.73 km MW−1 at 100% biomass availability and 5 MW capacity and increases to 5.99 km MW−1 at the same capacity with only 50% of the biomass available. From the foregoing analysis and according to Figure 2, location "476" is the overall best performing location across capacities (5-100 MW). However, according to Table 5, location "801" is the most optimal at 100% biomass availability and low capacities (5-20 MW). With decreasing biomass availability, longer distances need to be covered to satisfy the potential demand of the facility. At reduced biomass availability and increasing capacity, locations "125" and "476" become more favourable (Table 5).

Table 5. Optimal locations at a range of capacities and reduced biomass availability.

Availability   5 MW   10 MW   15 MW   20 MW
100%           801    801     801     801
90%            801    801     801     125
80%            801    801     801     125
70%            801    801     801     476
60%            801    801     125     476
50%            801    801     476     476
Fuel Price
The ATCU_j of the ten best facilities is calculated for the low and high fuel price scenarios (Equations (9) and (10)) and compared with the average fuel price in Figure 7. For example, the ATCU_j of location "476" at 20 MW and the average fuel price is 20,300 AUD MW−1; it increases to 20,700 AUD MW−1 with a high fuel price and drops to 19,800 AUD MW−1 with a low fuel price. At a low capacity (5 MW), both the high and low fuel prices result in a 1% change in the ATCU_j value. At a high capacity (100 MW), this results in a 7% decrease in ATCU_j for the low fuel price and a 6% increase in ATCU_j for the high fuel price.
Discussion
This research combines GIS methods with a transportation cost model to find the optimal location of forest biomass-to-bioenergy facilities. In considering the development of a profitable bioenergy facility that can sustainably produce bioenergy from forest biomass: (i) there has to be enough biomass to supply the bioenergy facility, and (ii) the biomass has to be sourced within a sensible distance at the lowest possible cost. The two-stage DSS helps to address this biomass logistics problem, and this research demonstrates the method in the large study region of Queensland, Australia. The spatial component preceding the transportation cost model indicated that 732 MW of forest biomass is available per year and identified 128 strategic locations to convert this biomass into energy products. Similar quantities for biomass availability can be found in the Australian literature for the state of Queensland [65-67], roughly converting to one million dry tonnes of forest biomass. Each of the strategic locations served as an input for the transportation cost model and further location optimization. In the literature [19,20,27], GIS-based methods have been used numerous times to assess biomass and to identify locations for biomass conversion, but seldom have they been applied at such a large scale. Since the transportation of biomass is an overwhelming contributor to biomass supply chain costs [25], further refinement of strategic locations based on transport cost and distance provides a more optimal solution to the facility location problem.
By calculating the average transportation cost and distance of the strategic locations retrieved from the GIS analysis, this research can identify the most optimal location for a forest biomass-to-bioenergy facility if we were to establish one. A similar approach was applied by Zhang et al. [27] to find the optimal location to convert forest biomass to biofuel. The most optimal location for a facility would be the one with the lowest transport cost and distance at the required capacity. However, capturing the entire supply with one facility over an area as large as Queensland is a complex and challenging task. Even though the rationale of the transportation cost model is to minimize the cost of transport, our additional motivation is to maximize the capacity within the study area. In the ideal scenario, the cost of supply is kept minimal and the biomass that is produced within the forest is utilized to the greatest possible extent by the bioenergy facility. The combination of GIS and a transportation cost model allows us to tackle this problem. The ATCU_j that is calculated for each facility at a range of capacities is low when either the cost of transport is minimal or the supplied capacity is great (AUD MW−1). Thus, the ATCU_j value is an indicator for the optimal facility location problem. For comparison, the study by Zhang et al. [27] found a total cost value of USD 4.32 M for a facility using 635,000 tonnes (50% moisture) of forest biomass; the ATCU_j value from their result converts to 31,100 AUD MW−1 for the best facility. In our research, location "476", which lies in the most biomass-dense area of the study region, was found to be the best performing across a range of capacities, with an ATCU_j value of 22,300 AUD MW−1.
To understand the effect of price and cost changes in the biomass supply, the results of the transportation cost model were compared to the maximum cost and distance of the biomass supply chain. Values for gate price, harvest cost, and stumpage cost can vary significantly based on the type of harvest system, forest type, type of forest biomass or tree species, amount of biomass, equipment or even the deployment of biomass use in the area. The cost of harvest, stumpage and the gate price of forest biomass are not included as integral parts of the transportation model; however, they are incorporated as constraining elements to secure the profit of the forest biomass market. Under the different cost-price scenarios in the sensitivity analysis, the maximum allowable transport cost and distance vary and are independent of the capacity. This research estimated the maximum allowable transport cost to be 25,200 AUD MW−1 and the maximum allowable transport distance to be an 89-km return journey. We found one other research example in Australia that indicated a range of costs between 48.32 AUD odt−1 and 63.25 AUD odt−1 for delivering hardwood chip biomass residues over a transport distance of 90 km [68]. When converted, the 89-km maximum allowable transport distance in this research corresponds to a maximum transport cost of 16.55 AUD odt−1. When we add the cost of harvest (48.25 AUD odt−1) to this transport cost, we obtain a comparable value of 64.80 AUD odt−1. The point of intersection between the 89-km maximum allowable transport distance and the average transportation distance per megawatt according to the transportation analysis is at a capacity of 6 MW. Thus, under the base case scenario, the average facility in Queensland becomes less profitable beyond this capacity, keeping in mind that this is based on the average of the best and worst performing locations. The introduction of a maximum allowable transport cost creates a benchmark for the strategic facility locations from the transportation cost model: when the benchmark cost or distance is exceeded, a strategic location can be rejected as an optimal solution.
To examine the sensitivity of decisions to changes in biomass availability and the cost of fuel, the transportation cost model was also executed with these parameters changed. Biomass availability can suffer considerable losses due to alternative uses; hence, this research investigated 10% decrements of biomass availability. Reducing the availability of biomass supply shifts the preference of location: with less biomass available, more biomass needs to be collected from additional forest locations. This affects the result of the transportation cost model by increasing the cost due to longer transport distances or by reducing the supplied capacity. The optimal facility location is therefore characterized by an increasing ATCU_j value. On average, for a 5-MW facility, a 10% reduction in biomass availability resulted in an ATCU_j increase of 90 AUD MW−1. The cost of fuel can change significantly throughout the year. Cyclical changes in fuel price, sometimes up to 20%, are not uncommon and have a significant impact on the cost of transport and the fleet. Changes in fuel price do not interfere with the capacity level of a facility or the supply of biomass to it. However, changes in fuel price affect the ATCU_j value from the transportation cost model and simultaneously impact the maximum allowable transportation distance. Changes in the ATCU_j value of up to 7% were recorded in this study based on changes in fuel price. Similar to the results found in Zhang et al. [27], a change in fuel price did not affect the optimal location, as opposed to changes in biomass availability.
There are several possible opportunities for future research to extend and enhance the developed DSS. One key opportunity is to take the single-facility transportation problem to a multi-facility optimization scenario, thereby focusing on utilizing as much of the forest biomass as possible throughout the study area by selecting multiple optimal locations for the conversion of biomass. It is important to consider that each potential facility can be supplied with biomass from within its service area at the lowest possible cost without competition between facilities. Such models can reinforce the decision on the number of facilities and the capacity required in the study area to maximize the demand while minimizing the cost. Often, there will be limits to the capacity of a facility, so it is worth considering minimum and maximum capacity levels based on the technology and biomass resource. When optimizing the supply chain, research should consider economic, environmental and social aspects to arrive at the best possible scenario. Depending on the geographical location, interruptions of the forest biomass supply occur annually due to spring/breeding seasons, snow or fire seasons. It is therefore recommended to improve the knowledge of biomass availability by simulating annual harvesting cycles and biomass storage solutions.
Conclusions
This paper demonstrated the use of a two-stage decision support system that finds the optimal location of forest biomass-to-bioenergy facilities based on available biomass, transport distance, and transport cost in the study area of Queensland. In stage 1, the method identified 128 strategic locations using a GIS approach. In stage 2, the optimal location for forest biomass-to-bioenergy conversion was based upon reduced transport costs. The influence of fuel price, biomass availability, and cost and pricing of the biomass supply chain was evaluated through a series of sensitivity analyses.
From the case study, we can conclude that:
• Location "476" was identified as the optimal location for bioenergy production from forest biomass across a range of facility capacities.
• The ATCU_j of the average facility in Queensland ranges from 33,700 AUD MW−1 at 5 MW capacity to 79,400 AUD MW−1 at 100 MW, with an ATDU_j of 86 km MW−1 at 5 MW and 341 km MW−1 at 100 MW.
• The sensitivity analysis showed that fuel prices and biomass availability influence the transport cost. Biomass availability also influences the selection of the optimal facility location: at the lowest capacity level and 100% biomass availability, location "801" was the optimal location, while with increasing capacity or reduced availability, location "476" was the optimal site.
• The sensitivity analysis also showed that changes in the biomass price and the costs of the supply chain affect the maximum allowable transport distance and cost. In the base case, the maximum allowable transport distance for a facility in Queensland is 89 km MW−1 and the maximum allowable transport cost is 25,200 AUD MW−1.
The use of a two-stage DSS for the selection of an optimal facility location has been demonstrated. Additionally, the method evaluated every other possible location and created an average performance scenario for the case area, which enables future planning, investigation and investment from a strategic perspective. The single-facility transportation cost model can be extended to optimization methods that allow for multiple facilities to coexist. The method can potentially be extended to other regions and biomass scenarios.
|
v3-fos-license
|
2020-04-30T09:07:03.501Z
|
2020-04-25T00:00:00.000
|
218521551
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/cripe/2020/5292947.pdf",
"pdf_hash": "ed597d2ddee948b4044946d4b656438dced70302",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45614",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "21cceb98209745f7b46c539fd01e42d128ab5b1d",
"year": 2020
}
|
pes2o/s2orc
|
Successful Liposteroid Therapy for a Recurrent Idiopathic Pulmonary Hemosiderosis with Down Syndrome
Idiopathic pulmonary hemosiderosis (IPH) is a rare and life-threatening disorder. Early diagnosis and appropriate management are essential for a better prognosis and patients' quality of life (QOL). Down syndrome patients with IPH are considered to have a worse prognosis than other IPH cases. A 2-year-old girl with Down syndrome received the diagnosis of IPH after two episodes of massive pulmonary hemorrhage requiring assisted ventilation; she suffered from recurrent IPH during tapering of oral corticosteroid and was started on liposteroid therapy. We report here a case in which recurrent IPH was successfully controlled, and QOL improved enormously, with a tapering dose of corticosteroid after starting liposteroid therapy.
Introduction
Idiopathic pulmonary hemosiderosis (IPH) is a rare disorder that is characterized by the triad of hemoptysis, iron deficiency anemia, and diffuse pulmonary infiltrates on chest radiographs. It occurs more frequently in children and is usually diagnosed before the age of 10 years, although it may occur later in life [1,2]. The etiology of the disease remains unknown, and several hypotheses have been reported: autoimmune, allergic, genetic, or environmental [1,3,4]. The gold standard for IPH diagnosis is lung biopsy, but this method is challenging due to its invasive nature and potential complications, especially in young children [2,5]. Other diagnostic methods can be conducted for confirmation of hemosiderin-laden macrophages (siderophages) by bronchoalveolar lavage, sputum, or gastric lavage analysis [5-7]. Systemic corticosteroid is the first-line treatment of IPH for acute bleeding [2,6,8,9]. Immunosuppressants, including azathioprine, hydroxychloroquine, and cyclophosphamide, have been proposed in patients with an unfavorable response to corticosteroid [1,4,10,11]. However, long-term use of these drugs is associated with many adverse effects, and their use should be limited to the minimum duration and dosage necessary [12]. Various therapeutic trials have attempted to improve the prognosis of IPH. Nevertheless, no effective maintenance therapy has been established for children with refractory IPH [7,8]. Liposteroid, dexamethasone palmitate, has been introduced as a new effective therapy [8].
Through our case report, we discuss the importance of early diagnosis and management of refractory IPH in Down syndrome: our patient started liposteroid therapy after recurrent bleeding during tapering of oral prednisolone, and the disease was successfully controlled. A 2-year-old girl with Down syndrome was admitted to our hospital with weakness, pallor, and fever. She presented with dyspnea, tachypnea, and severe anemia with a hemoglobin level of 2.2 g/dl.
Case Report
She was born at term and was admitted to the neonatal intensive care unit (NICU) for five days with mild respiratory distress. She received the diagnosis of trisomy 21 before discharge from the NICU. There was no cardiac, gastrointestinal, or hematologic disease. Over the following two years, routine follow-up showed a good clinical course, although growth and development were below age-appropriate milestones.
Upon admission to our ward, physical examination revealed body weight: 7720 g (−2.9 SD), height: 76.3 cm (−2.9 SD), heart rate: 140 bpm, oxygen saturation in room air: 90%, and body temperature: 38.0°C. On laboratory examination, we observed severe anemia as mentioned above: red blood cells (RBC): 1.24 × 10^6/μl, hematocrit: 7.9%, reticulocyte count: 6%, with normal white blood cell (WBC) and platelet counts. The mean corpuscular volume (MCV): 63.4 fl, mean corpuscular hemoglobin (MCH): 17.7 pg/cell, mean corpuscular hemoglobin concentration (MCHC): 27.9 g/dl, and serum iron (10 μg/dl) were all very low. The plasma ferritin level (15.8 ng/ml) was within the normal range for the patient's age. Coagulation tests, renal function, electrolytes, and liver function were unremarkable. Antiglobulin tests were negative and the haptoglobin level was normal. Serum immunoglobulin levels were within the normal range, and the serologic tests for autoimmune diseases (antinuclear antibodies (ANA), anti-dsDNA antibodies, anti-cyclic citrullinated peptide (anti-CCP) antibodies, anti-Sm antibodies, and rheumatoid factor (RF)) were all negative. Chest X-ray indicated bilateral interstitial reticular infiltrates (Figure 1). Thoracic CT was performed and revealed nodular opacities in the right lung. A bone marrow aspiration revealed erythrocyte hyperplasia without malignant or hemophagocytic cells. Although no definite diagnosis was obtained, a packed red blood cell transfusion was administered and oral iron was started. Three months later, she was again admitted to the hospital for severe anemia with bilateral alveolar infiltrates on chest X-ray (Figure 2(a)). Repeated thoracic CT showed widespread ground-glass appearance throughout both lungs (Figure 2(b)). She presented with cough, tachypnea, an oxygen saturation of 60% on pulse oximetry, wheezing on auscultation, and respiratory failure. Subsequently, she was transferred to the pediatric intensive care unit (PICU) and required tracheal intubation and mechanical ventilation. The same episode occurred again 4 months later. At that time, IPH was suspected from the clinical manifestations and radiologic findings, and the diagnosis of IPH was confirmed by gastric lavage fluid that demonstrated the presence of hemosiderin-laden macrophages (Figures 3(a) and 3(b)). Intravenous prednisolone (2 mg/kg/day) and blood transfusion were given immediately for acute pulmonary bleeding and promptly improved the clinical symptoms and laboratory data. Then, oral prednisolone was started as maintenance therapy. However, weaning the dose of prednisolone failed to control the disease, and episodic pulmonary hemorrhages developed every 3-4 months. For this reason, the dose was increased again. She progressively became cushingoid due to prolonged moderate- to high-dose corticosteroid therapy.
After two and a half years from the onset of IPH, liposteroid was introduced in an attempt to reduce the dose of prednisolone. Liposteroid was intravenously infused at 0.06 mg/kg/day for 3 consecutive days, together with prednisolone, as acute bleeding therapy. After the first liposteroid course, single infusions of the same dose were given weekly while tapering prednisolone. She achieved remission three months after the initiation of liposteroid, even on a tapering dose of prednisolone, so the infusion interval was extended step by step; she is now on liposteroid every 4 weeks and low-dose prednisolone (0.18 mg/kg/day) with good control for 21 months. During the following 24 months, she suffered from minor alveolar hemorrhage twice, triggered by respiratory infections; however, her clinical condition improved promptly with 3 consecutive days of liposteroid infusion, without an increased dose of prednisolone.
Discussion
IPH is a rare disease with an incidence of 0.24-1.23 cases per million in selected populations [2,13-15]. IPH is a life-threatening condition, and early diagnosis is essential for early treatment in order to improve the prognosis and to avoid complications of recurrent alveolar hemorrhage. However, its diagnosis may be difficult and is usually delayed due to the absence of the classical triad (hemoptysis, iron deficiency anemia and diffuse infiltrates on chest X-ray), insidious onset, lack of awareness about the disorder, and variable clinical courses [2,10]. There is a long delay (4 months to 10 years) between onset of symptoms and diagnosis [2,5,6,10]. Especially in young children, who swallow hemorrhagic sputum, hemoptysis is not common as the first symptom of IPH, occurring in about 50% of patients [10,16]. In many reports, shortness of breath, iron deficiency anemia, and alveolar infiltration on chest X-ray are typically seen [17]. Iron deficiency anemia may be the first and only manifestation in the absence of hemoptysis. Also, pulmonary involvement may not be found at the onset of IPH, and the chest X-ray may appear normal [2]. The plasma ferritin level may be elevated or within normal limits because of alveolar synthesis and release into the circulation, and does not reflect the iron deposits of the body [2,14]. In our case, severe anemia preceded the typical clinical symptoms and radiologic findings of IPH, impeding a more rapid diagnosis. Greater awareness, such as clinical suspicion of IPH in patients with repeated iron deficiency anemia, may lead to earlier diagnosis and appropriate management, thereby lessening or entirely avoiding major complications.
Corticosteroids are suggested as the first-line treatment for acute episodes of alveolar bleeding. In many case reports, corticosteroids were initiated with rapid improvement in the clinical course [4,6,18]. On the other hand, their effect in the chronic phase is unclear and their effect on prognosis remains controversial. Furthermore, prolonged corticosteroid therapy results in cushingoid features, weight gain, osteoporosis, and growth retardation [9,11]. Immunosuppressive agents are the second choice of drugs, especially in steroid-dependent or steroid-resistant cases [5,7,10,15,19]. However, immunosuppressants alter the immune system, increase the risk of infection, which may trigger IPH, and carry a possible risk of malignancy [8,12,20]. As another treatment option, Ohga et al. proposed liposteroid therapy to improve the outcome of patients with refractory IPH [8,21].
In our case, corticosteroid therapy was effective for acute bleeding, but bleeding recurred even with corticosteroid prophylaxis. Moreover, she had Down syndrome. A French study reported that Down syndrome patients with IPH had a worse prognosis than others, including fatal outcomes and frequent relapses. The authors explained that the higher frequency of lower respiratory tract infections in Down syndrome patients is a possible reason for the worse prognosis [10]. It is also well known that individuals with Down syndrome are susceptible to infections, autoimmune disorders, and particular types of cancer due to their specific immune and anatomical characteristics [22,23]. We took into account her poor prognostic factors and immune status, and decided to introduce liposteroid to control her disease.
Dexamethasone has a higher affinity for the corticosteroid receptor than prednisolone or methylprednisolone and has stronger anti-inflammatory effects [8]. Liposteroid is dexamethasone palmitate, a lipid emulsion containing dexamethasone. Liposteroid has the same mechanism of action as dexamethasone but greater efficacy and a lower frequency of systemic adverse effects. In addition, liposteroid has a lipid base, palmitate; it is easily taken up by macrophages and strongly induces apoptosis of macrophages [24,25]. Doi et al. suggested that low-dose liposteroid therapy accumulates effectively in the hemorrhagic, inflamed sites of the lung, reduces the need for high-dose corticosteroid therapy, and prevents adverse effects [8]. In our case, liposteroid prevented acute and chronic bleeding and also contributed to weaning steroids. However, our patient has never used immunosuppressant agents, so we are unable to compare the effects of liposteroid and immunosuppressant agents.
Conclusion
In conclusion, we emphasize that physicians should suspect IPH, particularly regarding its diagnosis and management in pediatric patients. Although IPH is a life-threatening disease and causes serious complications, appropriate management can alter the prognosis and patients' quality of life in a positive manner. Our case showed that appropriate diagnosis and management largely changed the patient's clinical course for the better, even though Down syndrome conferred a poor prognosis. Liposteroid may be considered an effective and promising agent for refractory IPH that may limit a patient's cumulative long-term exposure to steroids and the resulting complications.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
The Role of Citizen Science in Landscape and Seascape Approaches to Integrating Conservation and Development
Initiatives to manage landscapes for both biodiversity protection and sustainable development commonly employ participatory methods to exploit the knowledge of citizens. We review five examples of citizen groups engaging with landscape-scale conservation initiatives to contribute their knowledge, collect data for monitoring programs, study systems to detect patterns, and test hypotheses on aspects of landscape dynamics. Three are from landscape interventions that deliberately target biodiversity conservation and aim to have sustainable development as a collateral outcome. The other two are driven primarily by concerns for agricultural sustainability, with biodiversity conservation as a collateral outcome. All five include programs in which management agencies support data collection by citizen groups to monitor landscape changes. Situations where citizen groups self-organise to collect and interpret data to aid landscape-scale decision making are rarer.
Introduction
Land and seascapes are emerging as an organising framework for reconciling biodiversity conservation with other competing land uses [1]. Landscape approaches are used to achieve spatial integration of production and conservation in land cover mosaics [2,3]. Some landscape configurations contribute more than others to the resilience of both human livelihoods and biodiverse natural systems in the face of threats such as climate change [4]. However, landscape outcomes do not emerge as a product of grand design but are determined by an array of individual decisions of numerous diverse and often conflicted stakeholders influenced by regulations, incentives and social pressures. This process has been characterised as muddling through [5,6]. Sectoral institutions and spatial planning are often inadequate in mediating these ongoing transitions and landscape outcomes generally emerge as a response to numerous conflicting drivers of change [7].
There are a number of different ways in which citizen science [8,9] can contribute to conservation at landscape scales. On the simplest level, it is common for amateur scientists or people from local communities to contribute to databases on the flora and fauna of particular areas. Interested volunteers record the presence or abundance of species in different habitats or under different conditions [10]. Bird counts are a common example of this [11]. Scientists may then interpret the data to add value to studies they are undertaking. This is citizen participation but not citizen-driven science. A deeper involvement can come through local people contributing knowledge and/or experience to the design of a study and the interpretation of results [12]. For example, the traditional knowledge of indigenous people may be used by scientists to erect hypotheses. Similarly, knowledge acquired by farmers over generations may help scientists to develop hypotheses and interpret results. This kind of involvement can extend further to a group of concerned citizens erecting their own hypotheses and pursuing the scientific process of collecting data and testing those hypotheses, either with or without the support of professional scientists. This is, perhaps, the ideal to which citizen science could aspire, but in reality there is probably a continuum from the simplest case of using local knowledge, observation and recording data through to the ideal of a totally citizen-driven process, with varying levels of involvement and co-development of knowledge in between.
The involvement of citizens across this entire continuum will surely be needed to achieve landscapes that balance conservation and development. The participating citizens are the people who experience the consequences of landscape change and in turn determine through their actions the evolution of the landscape. Lasting change will depend on the decisions they make and cannot be effected without them. Landscape approaches need to embrace the potential of citizen science more fully as a fundamental way of achieving the social learning that should be an essential driver of landscape change.
The Wet Tropics of Australia
The Wet Tropics region of Australia extends 500 km along the north Queensland coast between Townsville and Cooktown, forming a belt approximately 50 km wide. Although less than 1% of the State of Queensland in area and having a population of around 200,000 people, the region contains the highest biological diversity in Australia and is recognised as one of the world's mega-diverse regions [13]. As well as outstanding natural values it has significant tourism and other economic values [14]. A little less than half of the entire 2.2 million hectare Wet Tropics region was granted World Heritage status in 1988. The forests of the wet tropics border the Great Barrier Reef, providing a unique example of two World Heritage Areas interacting [15].
When this World Heritage area was established, conservation and production were treated as mutually exclusive activities [16]. The original perception was that conservation would take place in the World Heritage area under the management of the Wet Tropics Management Authority. Primary production and urban expansion were to happen elsewhere. This segregated view of the landscape is gradually changing [17]. There is increasing recognition that proactive management for multiple uses and values across boundaries is needed to address conservation and development challenges. The influence of land-based activities in the Wet Tropics on the adjacent Great Barrier Reef is also becoming a stronger focus of attention.
During the past decade, landscape approaches have emerged in Australia as the predominant paradigm for management of complex World Heritage areas such as the Wet Tropics [18]. In addition to the formal regulatory Wet Tropics Management Authority, a community-based regional natural resource management framework has been established, led by a parastatal body called Terrain NRM (www.terrain.org.au). Regional natural resources plans have explicitly integrated the concept of citizen science. An interface between science providers and community stakeholders has been used in the development and review of regional catchment-scale natural resource plans [19]. Several scientific papers have been jointly published by scientists and local people with knowledge of natural resource management issues [20]. Explicit investments have been made, for example, in involving citizen scientists in monitoring revegetation success, trends in the abundance of the spectacled flying fox Pteropus conspicillatus, the location of Cassowary, Casuarius casuarius, sightings, water quality, etc., to underpin management interventions.
Since the early 1990s a local non-governmental organisation, Kuranda Envirocare (Kuranda.envirocare.org.au), has been restoring rainforest corridors in areas that have been colonised by exotic grasses to link habitat fragments and provide habitat continuity for rainforest biota. During 2013 volunteers undertook a series of bird surveys in restored forests of different ages. The surveys showed that most species of rainforest-dependent birds began to use newly planted stands within 10 years of establishment and many species occupied reforested areas from year one. Counts of two species of fruit-dove, the Brown Cuckoo-dove, Macropygia amboinensis, and Emerald dove, Chalcophaps indica, peaked in five-year-old stands when their favoured fruit-bearing trees Alphitonia petriei and Homolanthus novoguineensis began fruiting. Two other fruit-doves, the Superb fruit-dove, Ptilinopus superbus, and Wompoo fruit-dove, P. magnificus, were much later in arriving (Figure 1) as their preferred fruits, laurels (Lauraceae), figs (Moraceae), and quandongs (Elaeocarpus spp.) only matured in older stands. Knowledge of trends in colonisation by some 30 rainforest bird species is now being used to establish hypotheses for testing in more recently planted rainforest stands where more intensive bird and habitat monitoring has been established. These studies determine the ages at which sites are being used for nesting and seasonal feeding and will elucidate the roles of remnant vegetation. While these studies are citizen-led, they have received methodological support from university scientists and funding from the non-governmental organisation BirdLife Australia.
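The kind of summary these volunteer surveys support is simple to reproduce computationally. The sketch below, in Python, aggregates survey records into mean counts per stand-age class and reports the age at which each species' use peaks; the species names follow the text, but all count values are invented for illustration and do not reproduce the Kuranda Envirocare data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey records: (species, stand_age_years, count).
# Values are invented; real inputs would come from volunteer survey sheets.
records = [
    ("Brown Cuckoo-dove", 1, 0), ("Brown Cuckoo-dove", 5, 7),
    ("Brown Cuckoo-dove", 5, 9), ("Brown Cuckoo-dove", 10, 4),
    ("Superb fruit-dove", 5, 0), ("Superb fruit-dove", 10, 1),
    ("Superb fruit-dove", 19, 5), ("Superb fruit-dove", 19, 6),
]

# Group counts by (species, stand age) so repeat surveys can be averaged.
grouped = defaultdict(list)
for species, age, count in records:
    grouped[(species, age)].append(count)

# Mean count per age class, then the age with the highest mean per species.
means = {key: mean(vals) for key, vals in grouped.items()}
for s in sorted({sp for sp, _ in means}):
    ages = {age: m for (sp, age), m in means.items() if sp == s}
    peak_age = max(ages, key=ages.get)
    print(f"{s}: peak mean count {ages[peak_age]:.1f} in {peak_age}-year-old stands")
```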
In addition to the above, citizen-led research is contributing to the delivery of the Reef Water Quality Protection Plan [16,21-24], while other projects have focused on the identification of linked biophysical and cultural indicators [25] and mapping the health of cultural ecosystem services [26].
Positive environmental and social outcomes from these research activities have been reported [20,23,27,28]; a major finding has been that citizen engagement is critical for social learning and has changed the behaviour of individual land managers within the landscape. Many citizens now actively value the biodiversity on their properties and manage their own land for wildlife. Citizens frequently mobilise to counter threats to nature, and property values are enhanced by the presence of rainforest species.
The Lake Eyre Basin of Central Australia
Lake Eyre is an ephemeral salt lake at the centre of an internally draining basin that covers more than 1.2 million square km in central Australia [29]. The Lake Eyre Basin is inhabited by fewer than 60,000 people. It cuts across multiple political and administrative jurisdictions including Queensland, South Australia and the Northern Territory. It is a unique desert river system with unregulated ephemeral rivers, permanent waterholes and artesian springs. The Basin's diverse landscapes range from sandy and stony deserts to rolling grasslands. The region contains many significant indigenous heritage sites. Land use is dominated by pastoralism, nature conservation, mining, and petroleum extraction [30].
In the late 1990s two complementary management processes were initiated: a community-led initiative across jurisdictional borders, and a joint government process between the Federal Government and the States. The community initiative was based upon an iterative process that enabled co-learning and dialogue between different stakeholders including community members, indigenous people and scientists.
A proposal for cotton farming on the Cooper Creek, upstream of Lake Eyre in Queensland, further encouraged parties to work together, in this case to challenge a potentially damaging impact on ecological and pastoral values in the catchment. Collaboration between pastoralists, scientists, traditional owners, and community members grew out of a community science workshop in 1995. Eventually the cotton proposal was defeated and citizen-led research gained credibility across the basin. In one instance a local pastoralist has become an internationally recognised citizen scientist himself, discovering a range of new species and contributing significant ecological insights (http://www.abc.net.au/radionational/programs/offtrack/angus-emmott/4648754, accessed on 17 February 2015).
The informal community and scientific collaborations described above led to the establishment of the Lake Eyre Basin Intergovernmental Agreement, supported by both a Community Advisory Committee and a Scientific Advisory Panel. These bodies determine priorities, share knowledge, and advise a ministerial forum. The Intergovernmental Agreement came into effect in 2001 to provide for the sustainable management of the Lake Eyre Basin. The Agreement states: "that the collective local knowledge and experience of the Lake Eyre Basin Agreement Area communities are of significant value; and that decisions need to be based on the best available scientific and technical information together with the collective local knowledge and experience of communities within the Lake Eyre Basin Agreement Area." In accordance with these principles the committees and Ministers agreed upon and instituted a Strategic Adaptive Management process for the Basin which recognises the role of citizens in monitoring, research, and land management (http://www.lakeeyrebasin.gov.au/resources/publications, accessed on 26 February 2015).
The Lake Eyre Basin experience represents one step further along the continuum of citizen engagement, with the capacity of citizens to contribute important scientific information being formally recognised in government-led processes. Citizens and scientists are co-learning to better understand the ecological and social processes that are shaping the landscape.
The Sangha Tri-National Landscape in the Congo Basin
The Sangha Tri-National landscape covers an area of 43,000 km² across the borders of Cameroon, the Central African Republic, and the Republic of Congo and has a population of 200,000 people. It is one of 12 landscapes identified as conservation priorities under the Congo Basin Forest Partnership adopted at the World Summit on Environment and Development in Durban in 2002 (http://www.cbfp.org, accessed on 20 December 2014). Several international conservation organisations are active in the landscape and all aspire to achieve improved livelihoods for local people and to conserve the rich biodiversity of the forests. In 2002 a process was initiated to engage local stakeholders in the scientific monitoring of changes in the environment and in local livelihoods [31]. A range of participatory tools were used to enable local people to identify future landscape scenarios that they believed would meet their livelihood needs and conserve biodiversity [32]. Indicators for both environmental and local livelihood values were identified by the different stakeholders in the landscape; these indicators allowed progress towards achieving the desired landscapes to be monitored. The indicators were assessed annually by local people and multistakeholder fora were held to discuss progress and assess the ways in which conservation and development interventions and external factors had impacted on landscape change [31].
Aid donors active in the landscape were strongly committed to small local interventions to improve agriculture, introduce new crops and livestock, and provide simple new agricultural technologies. It soon emerged that adoption rates for these micro-interventions were low but that macro-economic changes and large-scale investments had far higher impacts on local livelihoods and the environment [33,34]. Development initiatives such as mining, sawmills and, potentially, palm oil plantations were recognised by local citizens as the real drivers of change in the landscape. External investments brought jobs and social infrastructure and concentrated people in development poles, thus reducing human pressure on the remote hinterlands. Biodiversity values would be maintained in protected areas in these hinterlands.
The citizens engaged in this science compiled data and accumulated evidence supporting the case that micro-interventions were less effective in achieving conservation and development outcomes than larger scale initiatives [31,34,35]. The stakeholder forum included local citizens of both the Bantu groups and the Baka Pygmies who inhabit the forests, together with representatives of the aid agencies and conservation organisations active in the landscape. The results of the monitoring of indicators and the debates at the stakeholder forums were communicated to higher level decision-makers. Evidence for the ineffectiveness of micro-level interventions and the potential benefits of larger scale investments ran counter to the orthodoxy of the external agencies funding the programmes. These agencies appeared to be rooted in a political economy that saw all external commercial investment as a threat to biodiversity and to the traditional livelihoods of forest-dependent people. The rhetoric of the external conservation groups was about linking conservation with development but their actions tended to support the status quo. Citizen science has challenged these assumptions, although it remains to be seen whether this will result in significant policy changes.
The Sangha Tri-National example illustrates the potential weakness of a citizen science approach to social learning in a landscape. Citizen science can test hypotheses and build consensus amongst important groups of stakeholders, but if it fails to gain traction amongst higher level decision-makers it will not succeed. Citizen science should ideally engage with all decision makers, but if this is not possible it should at least be rooted in strategies for communicating with and influencing those in positions of power.
The Bird's Head Seascape, Indonesian Papua
The Bird's Head Seascape is located in eastern Indonesia, encompassing the waters and islands, and some of the coastal areas, of Papua Barat province on the Indonesian side of the island of New Guinea. Bird's Head refers to the shape of New Guinea Island at this, its western end. Over 40% of the 761,000 people living in the seascape fall below the poverty line [36], and the seascape has one of the highest poverty rates in Indonesia. Since the early 1960s the Indonesian government has implemented transmigration programs to encourage families from overpopulated islands further west to settle in West Papua, which is on the brink of experiencing accelerated economic growth based on its vast natural resource wealth. Today the seascape remains one of the few parts of Indonesia yet to experience major development.
The Bird's Head Seascape is the global epicentre of tropical shallow-water marine biodiversity, with over 600 species of corals and 1,638 species of coral reef fishes [37-39]. Since 2004, the Seascape has been the focus of a concerted long-term conservation and sustainable development initiative, which is the result of an inclusive partnership between donors, international NGOs, local NGOs, provincial and local governments, the State University of Papua, and marine tourism operators including resorts, operators of liveaboard diving boats, and local homestays.
The overall objective of this initiative is to secure the long-term management of coastal and marine resources in a manner that ensures food security and sustainable economic benefits while preserving the seascape's globally significant biodiversity and marine ecosystems. The initiative is founded on three key components: a sound, scientifically derived knowledge base; training to build capacity in marine protected area management; and new institutional arrangements for lasting stewardship.
Traditional fisheries management involving rotating no-take zones to replenish stocks is already in place. Management of no-take zones is now being modified using new knowledge derived from scientific studies of climate and oceanography, including currents and seasonal sea surface temperatures. Local citizens have been trained in marine survey techniques and now conduct systematic surveys of corals and other marine life. Local people are co-producing knowledge with external scientists on the distribution and species composition of different habitat types and the population dynamics of key species including cetaceans, sharks, turtles, and crocodiles [37]. Spawning aggregations and the dispersal patterns of propagules have been mapped. Based on this knowledge, a network of marine protected areas totalling nearly 3.6 million ha has been established that is designed for fisheries replenishment and protects 20%-30% of each of the critical coastal and marine habitats in the seascape.
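The design criterion mentioned here, protecting 20%-30% of each critical habitat, can be checked mechanically once candidate sites and their habitat areas are tabulated. The sketch below, with entirely hypothetical site and habitat data, greedily adds sites to a network until every habitat reaches a 20% coverage target. It illustrates the kind of coverage check involved, not the planning method actually used in the seascape.

```python
# Hypothetical habitat areas (ha) per candidate site and seascape-wide
# habitat totals; all numbers are invented for illustration.
sites = {
    "site_a": {"coral_reef": 1200, "mangrove": 300},
    "site_b": {"coral_reef": 800, "seagrass": 500},
    "site_c": {"mangrove": 900, "seagrass": 200},
}
habitat_totals = {"coral_reef": 8000, "mangrove": 3000, "seagrass": 2500}
TARGET = 0.20  # protect at least 20% of each habitat

protected = {h: 0.0 for h in habitat_totals}
network = []
remaining = dict(sites)

def shortfall(p):
    """Total fractional coverage still missing across all habitats."""
    return sum(max(0.0, TARGET - p[h] / habitat_totals[h]) for h in habitat_totals)

while shortfall(protected) > 0 and remaining:
    # Pick the site that reduces the remaining shortfall the most.
    def gain(name):
        trial = dict(protected)
        for h, area in remaining[name].items():
            trial[h] += area
        return shortfall(protected) - shortfall(trial)
    best = max(remaining, key=gain)
    for h, area in remaining.pop(best).items():
        protected[h] += area
    network.append(best)

print("selected:", network)
for h in habitat_totals:
    print(f"{h}: {protected[h] / habitat_totals[h]:.0%} protected")
```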
One important difference between the Bird's Head Seascape and other marine protected area networks is that local community leaders have been hired and trained as protected area managers and patrol team members, which generally ensures that they are passionate about what they do because they are protecting their own reefs. In 2009, an intensive protected area management course was launched to train teams and associated government officials to an international standard of competency. Some team members have received fellowships from the NGO RARE, which trains people in behavioural science and marketing techniques to inspire community action. These RARE fellows are now experts in the social marketing of the benefits of marine protected areas to fisheries. A parallel focus on marine conservation education of coastal school children, using a boat equipped as an educational resource centre, has helped develop a sense of ownership and pride in the seascape's spectacular marine resources.
The marine protected area management teams are now being transitioned into government agencies in order to ensure ongoing institutional commitment to the protected area network. Current efforts are underway to create a co-management body (termed 'Badan Layanan Umum Daerah' or regency technical unit), a model that has been successfully applied to hospitals in many parts of Indonesia. This public-private co-management model has two major benefits. First, it allows the management body to manage its own finances, including both government budget allocations and grants from aid agencies and private donors, as well as any revenues generated (e.g. tourism entrance fees). Second, it allows non-government partners to sit on the management board and private individuals to be recruited as protected area staff [37].
The process outlined above links traditional local scientific knowledge with the results of external scientific studies. The process trains local citizens in scientific management techniques for application across the seascape and develops new institutional structures to improve governance. The people who live in, and influence the use of, the Bird's Head Seascape, along with local and provincial government agencies, now have ongoing responsibility for, and a clear stake in, management that balances economic development with the protection of globally significant biodiversity. The initiative in the Bird's Head Seascape is not citizen science in the way the term is commonly understood. Local people are organized by outsiders to collect data, but their traditional knowledge is exploited and valued. Gradually these people may develop the skills needed to become pro-active citizen scientists. We consider that the Bird's Head Seascape provides an example of how a process of participatory action research could evolve towards genuine citizen involvement in knowledge generation and hypothesis development which, in turn, would lead to their empowerment to manage the seascape.
The Bali Rice Terraces World Heritage Area
The combination of forested hills, gardens and rice terraces results in the iconic Balinese landscapes that are a major international tourist attraction. Several hundred thousand people live in the landscape and depend upon rice cultivation for their livelihoods. Part of this landscape has recently been listed under the World Heritage Convention for its outstanding cultural values. The area also has outstanding natural values of forested landscapes and indigenous biodiversity, and these may also be recognised by World Heritage listing in the future. The balance in the landscape between forests, trees, and agriculture results from hundreds of years of community management where decisions are mediated by ceremonies held at Water Temples organised through the Balinese Hindu religion [40]. Religious ceremonies provide a forum at which conflicting demands for water and for use of the land are discussed, and the debates are mediated by Hindu priests. Participants in the ceremonies provide evidence on yields, pest challenges, soil fertility constraints, and hydrological performance of watersheds. This information is not collected or analysed in ways consistent with modern science; hypotheses are not tested according to Western scientific epistemologies. However, the ceremonies do allow the results of management interventions to be assessed and decisions to be taken on the basis of evidence provided by this informal action research. During recent interviews with farmers we were impressed by their understanding of the value of indigenous biodiversity in biological pest control. Local rules exist to conserve owls, snakes and other predators that control rodents. Local agreements limit hunting and the use of pesticides harmful to wild species. The rice terraces have been successfully managed in this way for several hundred years [40].
Government natural resource management agencies, sometimes supported by international aid agencies, have attempted to apply more rigorous science and technology to improve the productivity of this landscape but have failed. Technological water management models and government planning have not been able to deal with the complexity of the situation and the need for coordinated action by multiple stakeholders. The religious ceremonies that facilitate management of the Bali rice terraces do not meet the usual criteria for citizen science, but they do allow for experimentation, social learning, and adaptation in ways that are similar to those described in the other examples in this paper.
Discussion and Conclusions
Our review shows that citizens are involved in a diversity of scientific activities that support the conservation of tropical landscapes and seascapes, and that citizen knowledge and citizen science contribute significantly to shaping the landscapes. Only in the case of the Wet Tropics of Australia, with its scientifically literate population, is science being led by citizens. In the Sangha Tri-National and the Bird's Head Seascape local people are being mobilised by external scientists to collect data and monitor landscape changes. Citizen monitoring of this sort is widespread in the tropics and is a common component of landscape initiatives by conservation organisations [10]. We concur with Danielsen and others that the quality of citizen contributions to scientific monitoring can be high and carries the added benefit of securing citizen engagement in landscape conservation initiatives. In the case of the Bird's Head Seascape, this has been taken an important step further. Citizens have been trained and now monitor and manage their own protected areas. While at this stage we can only speculate on the long-term success of this approach, in the short term it has heightened interest and engagement among the local citizens who, after all, are the people who have the greatest stake in the successful management of their natural resources.
The examples that we give for production landscapes illustrate the power of science-based citizen engagement in large-scale landscape initiatives. Citizen groups play a role not so much in contributing data but in drawing upon their local knowledge to challenge the understanding of external scientists. Farmers, fishers, and graziers with generations of knowledge to draw upon have the ability to anticipate the impact of landscape-scale interventions on the complexity of their farming systems. With their ancient knowledge of how the landscape responds to disturbances like fire and flood, indigenous people in the Lake Eyre Basin can contribute understanding of the consequences of management interventions that alter such disturbance regimes. Production foresters and protected area managers in the Sangha Tri-National landscape have divergent views on desirable management interventions. Production foresters point to the biodiversity richness of forests subject to careful logging, and hence to the values such forests provide at a landscape scale [31]. In all of these cases the involvement of citizens at various levels is enriching the knowledge base required for management decisions.
Citizens vary in their level of scientific literacy. They may have rich and sophisticated traditional knowledge, as in the rice terraces of Bali, but little familiarity with modern science. The rainforests of the Australian Wet Tropics attract people with a strong interest in natural history and are a self-selected population with a high level of scientific literacy. Scientists from research institutes and academia may have greater competence in formal scientific methods within their disciplines or sectors but may be much less attuned to the complexity of local realities. Citizen science has the role of bridging this gap.
Citizen science does not provide a panacea. Good landscape outcomes require strong laws and institutions, transparency, effective negotiations, and credible leadership [41]. Citizen science cannot compensate for the absence of these preconditions, but it can contribute to the emergence of the context that is needed to enable landscape approaches to achieve their potential benefits. Landscape approach practitioners need to recognize this and nurture citizen science even in landscapes where scientific capacity may remain limited. Ultimately landscapes are the result of the knowledge and decisions of citizens, and the higher the capacity of citizens, the better the likelihood of achieving shared goals. We, therefore, argue against the prevailing paradigm of landscape approaches being expert-driven and argue for a more inclusive, citizen-driven process.
The recent literature on landscape approaches to achieving conservation goals suggests that a transition is underway. Landscape approaches are moving from an externally imposed process of spatial planning to a locally driven process of social learning, experimentation, and adaptation [2,5,41,42]. Landscapes are a social construct and their nature is determined by the decisions of individual actors. Engaging these actors in landscape research will strengthen their engagement in landscape conservation and exploit their often considerable knowledge and understanding of local landscape dynamics. We encourage conservation managers to recognise the potential power of citizen science in landscape initiatives, to create space for citizens to organise and contribute to landscape-scale learning, and to empower citizen groups to take the lead in determining the future of their landscapes.
Figure 1. Average counts (n = 4) of two species of fruit-doves in different aged planted sites up to 19 years old and older reference sites, Barron River. Unpublished data from Kuranda Envirocare.
Physical and functional interactions between Lyn and p34cdc2 kinases in irradiated human B-cell precursors.
Exposure of human B-cell precursors (BCP) to ionizing radiation results in cell cycle arrest at the G2-M checkpoint as a result of inhibitory tyrosine phosphorylation of p34cdc2. Here, we show that ionizing radiation promotes physical interactions between p34cdc2 and the Src family protein-tyrosine kinase Lyn in the cytoplasm of human BCP, leading to tyrosine phosphorylation of p34cdc2. Lyn kinase immunoprecipitated from lysates of irradiated BCP, as well as a full-length glutathione S-transferase (GST)-Lyn fusion protein, phosphorylated recombinant human p34cdc2 on tyrosine 15. Furthermore, Lyn kinase physically associated with and tyrosine-phosphorylated p34cdc2 kinase in vivo when co-expressed in COS-7 cells. Binding experiments with truncated GST-Lyn fusion proteins suggested a functional role for the SH3 rather than the SH2 domain of Lyn in Lyn-p34cdc2 interactions in BCP. The first 27 residues of the unique amino-terminal domain of Lyn were also essential for the ability of GST-Lyn fusion proteins to bind to p34cdc2 from BCP lysates. Ionizing radiation failed to cause tyrosine phosphorylation of p34cdc2 or G2 arrest in Lyn kinase-deficient BCP, supporting an important role of Lyn kinase in radiation-induced G2 phase-specific cell cycle arrest. Our findings implicate Lyn as an important cytoplasmic suppressor of p34cdc2 function.
B-cell precursor (BCP) leukemia is the most common childhood malignancy and represents one of the most radiation-resistant forms of human cancer (1-8). Recent studies demonstrated that >75% of clonogenic BCP leukemia cells from more than one-third of newly diagnosed patients and virtually all of the relapsed patients are able to repair potentially lethal or sublethal DNA damage induced by radiation doses that correspond to the clinical total body irradiation dose fractions (i.e. 2-3 Gy) (6). Consequently, the vast majority of BCP leukemia patients undergoing total body irradiation in the context of bone marrow transplantation relapse within the first 12 months and only 15-20% survive disease-free beyond the first 2 years (9,10).
Ionizing radiation and various DNA damaging agents cause an accumulation of cells in the G2 phase of the cell cycle (11-14). Several lines of evidence indicate that this transient G2 arrest allows the cells to repair potentially lethal or sublethal DNA lesions induced by radiation or other DNA damaging agents. Cells that are unable to show this response are more sensitive to DNA damaging agents, and drugs that abolish this response sensitize cells to DNA damaging agents (11, 15-22). A human lymphoma cell line that displayed markedly enhanced sensitivity to DNA damage by nitrogen mustard was found to be defective in the G2 phase checkpoint control (14). The elucidation of the mechanism by which ionizing radiation induces G2 arrest in BCP leukemia cells could lead to a rational design of radiation sensitizers that impair the repair of radiation-induced DNA damage by leukemia cells and improve the outcome after total body irradiation and bone marrow transplantation.
The molecular mechanism by which ionizing radiation induces G2 arrest in the human cell cycle and prevents entry into mitosis has not yet been deciphered, but preliminary evidence suggested that it may involve the inactivation of p34cdc2 kinase by inhibitory tyrosine phosphorylation on tyrosine 15 (23-25). p34cdc2 kinase is the catalytic subunit of mitosis promoting factor (MPF), and its activation is a prerequisite for induction of M phase (26-28). Recent studies demonstrated that exposure of BCP leukemia cells to γ-rays results in enhanced tyrosine phosphorylation of multiple substrates including p34cdc2 kinase (25,29). Furthermore, the protein-tyrosine kinase (PTK) inhibitor herbimycin A was able to prevent radiation-induced tyrosine phosphorylation and inactivation of p34cdc2-linked histone H1 kinase activity as well as mitotic arrest (25), supporting the notion that radiation-induced cell cycle arrest of BCP leukemia cells at the G2-M transition is likely triggered by inhibitory tyrosine phosphorylation of p34cdc2 kinase.
Several mitotic control genes encoding protein-tyrosine kinases or protein-tyrosine phosphatases have been shown to coordinately regulate MPF function by altering tyrosine phosphorylation of p34cdc2 kinase (30-33). Genetic experiments in fission yeast have shown that the WEE1 kinase negatively regulates mitosis by phosphorylating p34cdc2 on Tyr15, thereby inactivating the p34cdc2-cyclin B complex (32,33). Preliminary genetic studies in fission yeast initially suggested an important role for WEE1 kinase in radiation-induced mitotic arrest at the G2-M transition (34). However, a more recent study using Schizosaccharomyces pombe cells lacking functional wee1 gene product provided convincing evidence that fission yeast WEE1 kinase is not required for radiation-induced mitotic arrest (35). Furthermore, we detected no increase of human WEE1 kinase activity after radiation of BCP leukemia cells, as measured by autophosphorylation, tyrosine phosphorylation of (a) recombinant human p34cdc2-cyclin B complex isolated from lysates of insect cells coinfected with recombinant viruses encoding GST-cyclin B and [Arg33]p34cdc2, an inactive mutant of p34cdc2, (b) p34cdc2-cyclin B complex biochemically purified from starfish oocytes, or (c) a synthetic peptide derived from the p34cdc2 amino-terminal region, [Lys19]Cdc2(6-20)NH2 (25). Human WEE1 kinase isolated from unirradiated or irradiated BCP leukemia cells had minimal PTK activity toward the aforementioned substrates (25). Thus, the identity of the radiation-responsive kinases which inactivate MPF in human BCP leukemia cells remains unknown.
Lyn kinase is the predominant PTK in human BCP leukemia cells (36,37). The enzymatic activity of Lyn in human BCP leukemia cells is rapidly stimulated by ionizing radiation (38). Similarly, exposure of myeloid leukemia cells to ionizing radiation has been reported to cause Lyn kinase activation (39). Lyn kinase was shown to physically associate with p34cdc2 kinase in lysates of irradiated myeloid leukemia cells; however, the significance of Lyn kinase activation or its association with p34cdc2 kinase in myeloid cells has not been examined (39). These recent observations prompted the hypothesis that p34cdc2 kinase may associate with and serve as a substrate for Lyn in BCP leukemia cells.
Here, we show that the Lyn kinase associates physically and functionally with p34cdc2 in the cytoplasm of BCP. Immunoblotting of Lyn immune complexes with an anti-p34cdc2-Cter antibody (where Cter indicates COOH terminus) and immunoblotting of p34cdc2 immune complexes with an anti-Lyn antibody provided evidence for an association between Lyn and p34cdc2 kinases in lysates of BCP even before radiation exposure. Irradiation of BCP stimulated the Lyn kinase, and concomitant with Lyn kinase activation following radiation exposure, p34cdc2 became detectable in the Lyn immune complexes as a tyrosine-phosphorylated protein substrate. The abundance of the Lyn protein, as estimated by anti-Lyn Western blot analysis, did not change during the course of the experiment, suggesting increased enzymatic activity of Lyn. However, the abundance of the p34cdc2 protein in the same Lyn immune complexes, as determined by anti-Cdc2-Cter Western blot analysis, was significantly increased after radiation exposure, suggesting that the enhanced tyrosine phosphorylation of p34cdc2 which parallels Lyn activation is at least in part due to radiation-induced promotion of the physical association between Lyn and p34cdc2 in NALM-6 cells. Binding experiments with truncated GST-Lyn fusion proteins suggested a functional role for the SH3 rather than the SH2 domain of Lyn in Lyn-p34cdc2 interactions in BCP. The first 27 residues of the unique amino-terminal domain of Lyn were also essential for the ability of GST-Lyn fusion proteins to bind to p34cdc2 from BCP lysates. Lyn kinase immunoprecipitated from lysates of irradiated BCP, as well as a full-length GST-Lyn fusion protein, phosphorylated recombinant human p34cdc2 on tyrosine 15. The ability of the Lyn kinase to phosphorylate recombinant human p34cdc2 on Tyr15 was amplified following radiation exposure. Lyn kinase interacts with and tyrosine-phosphorylates p34cdc2 in vivo when these kinases are coexpressed in COS-7 cells. Ionizing radiation failed to induce p34cdc2 tyrosine phosphorylation or G2 arrest in Lyn kinase-deficient BCP leukemia cells expressing Fyn, Blk, and Lck kinases. These convergent observations constitute a strong argument for an important role of a cytoplasmic signal transduction pathway intimately linked to the Lyn kinase in radiation-induced G2 phase-specific cell cycle arrest of human BCP leukemia cells. Since the duration of the G2 arrest is a major determinant of radiation resistance in BCP leukemias, this knowledge may lead to the design of a leukemia-specific radiosensitization method.
Our findings implicate Lyn as an important cytoplasmic suppressor of p34cdc2 function. Lyn kinase may serve as an integral component of a physiologically important surveillance and repair mechanism for DNA damage by delaying the G2-M transition in cells exposed to mutagenic oxygen free radicals, thereby allowing them to repair their DNA damage prior to mitosis. Lyn kinase may also protect the cell from the potentially catastrophic consequences of premature cytoplasmic p34cdc2 activation by maintaining the p34cdc2-cyclin B complex in its inactive, tyrosine-phosphorylated state.
EXPERIMENTAL PROCEDURES
Irradiation of Cells: NALM-6 pre-B leukemia cells and Lyn kinase-deficient leukemic BCP from acute lymphoblastic leukemia (ALL) patients were obtained from the Cell Bank of the Childrens Cancer Group ALL Biology Reference Laboratory in Minneapolis, MN. Where indicated, cells (5 × 10^5/ml in plastic tissue culture flasks) were irradiated (137Cs irradiator; J. L. Shephard, Glendale, CA, model Mark I) with 1 Gy (= 100 rads) or 2 Gy at a dose rate of 1 Gy/min under aerobic conditions, as described previously (5-7, 40, 41).
Immunoblot Analysis of Tyrosine Phosphorylation of p34cdc2 and Its Interaction with Lyn Kinase in BCP Leukemia Cells: p34cdc2 kinase or Lyn kinase were immunoprecipitated from Nonidet P-40 lysates of BCP leukemia cells using an anti-Cdc2-Cter antibody (Upstate Biotechnology, Inc., Lake Placid, NY) or an anti-Lyn antibody, according to previously published procedures (25,29,36). In brief, cells (5 × 10^6 cells/sample) were solubilized in 0.5 ml of 1% Nonidet P-40 lysis buffer (50 mM Tris-Cl, pH 7.5, 150 mM NaCl, 1% Nonidet P-40, plus 1 mM EDTA) containing 0.1 mM sodium orthovanadate and 1 mM sodium molybdate as phosphatase inhibitors, and 10 µg/ml leupeptin, 10 µg/ml aprotinin, and 1 mM phenylmethylsulfonyl fluoride as protease inhibitors on ice for 30 min. Lysates were spun twice at 12,000 × g for 15 min at 4°C prior to immunoprecipitation. Indicated amounts of the cell lysates were immunoprecipitated with anti-Cdc2-Cter (10 µg of antibody/200 µg of lysate) or anti-Lyn (2 µl of antibody/200 µg of lysate) overnight at 4°C. The immune complexes were collected with 50 µl of a 1:1 (v/v) slurry of protein A-Sepharose (Repligen Corp., Cambridge, MA) in Nonidet P-40 buffer. The immunoprecipitates were washed four times with Nonidet P-40 buffer, resuspended in 2× SDS reducing sample buffer, and boiled. Samples were run on 10.5% SDS-polyacrylamide gel electrophoresis (PAGE) gels, transferred to PVDF membranes, and subsequently immunoblotted with either anti-phosphotyrosine (5 µg/ml) or anti-Cdc2-Cter (2 µg/ml) antibodies. 125I-Labeled protein A was used to detect tyrosine-phosphorylated proteins or p34cdc2 kinase. In some experiments, we used immunoblotting with anti-Lyn antibody to detect Lyn kinase in p34cdc2 immune complexes and immunoblotting with anti-Cdc2-Cter antibody to detect p34cdc2 kinase in Lyn immune complexes. Blots were incubated with 1 µCi/ml 125I-labeled protein A (specific activity = 30 µCi/µg; ICN Biomedicals) in blocking solution. After a 30-min incubation in 125I-protein A, blots were washed, as indicated above, dried, and autoradiographed using XAR-5 film (Eastman Kodak Co.). Molecular masses (in kDa) of the phosphotyrosyl protein substrates were calculated from prestained molecular size markers (Amersham Corp.) that were run as standards.
Immune Complex Kinase Assays: To evaluate the effects of ionizing radiation on the kinase activity of Lyn, exponentially growing cells (5 × 10^6/ml in α-minimal essential medium) were irradiated and lysed at the indicated time points in Nonidet P-40 buffer. 200 µg of cell lysate/sample were immunoprecipitated with a rabbit anti-Lyn antibody, as described previously (25,29,36). Samples were assayed for kinase activity during a 10- or 20-min incubation in the presence of [γ-32P]ATP (50 µCi/µmol) in the presence and absence of synthetic Cdc2 peptides or human p34cdc2-cyclin B complex as exogenous substrates (25). In initial experiments, kinase reactions consisted of 10 µl of Lyn immunoprecipitate, 5 µl of assay buffer (0.25 M Tris-HCl, pH 7.0, 0.125 M MgCl2, 0.025 M MnCl2, and 0.25 mM Na3VO4) and 5 µl of 1.5 mM substrate peptide [Lys19]Cdc2(6-20)-NH2 (sequence: KVRKIGEGTYGVVKK) (Upstate Biotechnology, Inc.), a synthetic peptide derived from p34cdc2 kinase. The reactions were initiated by the addition of 5 µl of 0.5 mM [γ-32P]ATP (specific activity = 10^5 cpm/pmol) and incubation for 30 min at 30°C. The reaction was terminated by the addition of 10 µl of glacial acetic acid, and then 25 µl of the reaction mixture was spotted onto a P-81 phosphocellulose disc. The discs were washed four times with 0.75% phosphoric acid and once with acetone. [Val12,Ser14,Lys19]Cdc2(6-20)-NH2 and [Phe15,Lys19]Cdc2(6-20)-NH2 (Upstate Biotechnology, Inc.) were included as control peptides. The PTK activity of Lyn toward the Cdc2 peptides was measured by incorporation of 32P into the peptide substrates and expressed as background-subtracted counts/min or pmol of PO4 incorporated/min. For subsequent experiments, human p34cdc2-cyclin B complex was isolated from lysates (100 µg/sample) of insect cells coinfected with recombinant viruses encoding GST-cyclin B and [Arg33]p34cdc2 using glutathione-agarose beads (Sigma). The in vitro phosphorylation of p34cdc2 by immunoprecipitated Lyn was assayed after a 20-min kinase reaction at 30°C in kinase buffer (50 mM Tris-Cl, pH 7.4, 5 mM MnCl2, 10 mM MgCl2, 1 mM DTT, and 50 µM ATP). The kinase reaction was initiated by the addition of p34cdc2 (50 µl of the GST-cyclin B-[Arg33]p34cdc2 precipitate/sample) and [γ-32P]ATP (20 µCi). Following the kinase reaction, these samples were fractionated on 9.5% polyacrylamide gels, detected by autoradiography, and incorporation of 32P was quantitated by 4-min liquid scintillation counting of excised bands.
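The unit conversion implied here, from background-subtracted counts/min to pmol of phosphate incorporated per minute, follows directly from the stated specific activity of the [γ-32P]ATP. A minimal sketch using the 10^5 cpm/pmol figure and the 30-min reaction time from the text is shown below; the raw and background counts are invented for illustration.

```python
# Convert 32P counts into a phosphate incorporation rate, following the
# quantities stated in the text: specific activity of [gamma-32P]ATP
# = 1e5 cpm/pmol, 30-min kinase reaction. Count values are invented.
SPECIFIC_ACTIVITY_CPM_PER_PMOL = 1e5
REACTION_MINUTES = 30

def phosphate_rate(sample_cpm: float, background_cpm: float) -> float:
    """Background-subtracted pmol PO4 incorporated per minute."""
    net_cpm = max(0.0, sample_cpm - background_cpm)
    pmol_incorporated = net_cpm / SPECIFIC_ACTIVITY_CPM_PER_PMOL
    return pmol_incorporated / REACTION_MINUTES

# Example: 96,000 cpm on the peptide disc against 6,000 cpm background.
print(f"{phosphate_rate(96_000, 6_000):.3f} pmol PO4/min")  # 0.030
```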
Phosphoamino Acid Analysis and Phosphotryptic Peptide Mapping: For phosphoamino acid analysis, protein bands were excised and hydrolyzed, as described previously (25,36,41). For two-dimensional phosphotryptic peptide mapping, 32P-labeled protein bands were excised and subjected to enzymatic digestion with 100 µg/ml trypsin (Sigma) overnight in 50 mM ammonium bicarbonate. Supernatants were dried by centrifugal evaporation, and dried samples were resuspended in 4 µl of a buffer containing 7.8% glacial acetic acid and 2.5% formic acid. Labeled peptides were separated on thin layer phosphocellulose plates (Kodak) by electrophoresis at pH 1.9 for 30 min at 1,000 V, followed by ascending chromatography in a buffer containing 37.5% n-butanol, 7.5% glacial acetic acid, and 25% pyridine. Subsequently, air-dried plates were exposed to Kodak XAR-5 film. Prior to phosphoamino acid analysis and tryptic peptide mapping, protein bands of interest were excised and Cerenkov-counted for 32P content.
Binding Assays with GST-Lyn Fusion Proteins: Truncated GST-Lyn fusion proteins corresponding to various domains of Lyn (44) were purchased from PharMingen, San Diego, CA. GST-Lyn fusion proteins were non-covalently bound to glutathione-agarose beads (Sigma) under conditions of saturating protein. In brief, 25 µg of each fusion protein was incubated with 50 µl of beads for 2 h at 4°C. The beads were washed three times with 1% Nonidet P-40 buffer. Nonidet P-40 lysates of NALM-6 cells were prepared as described above, and 250 µg of the lysate was incubated with 50 µl of fusion protein-coupled beads for 2 h on ice. The fusion protein adsorbates were washed with ice-cold 1% Nonidet P-40 buffer and resuspended in reducing SDS sample buffer. Samples were boiled for 5 min and then fractionated on SDS-PAGE, as described previously (36). SDS-PAGE gels were transferred to Immobilon-P (Millipore) membranes. Membranes were immunoblotted with anti-Cdc2-Cter (2 µg/ml), as described (25,36). 125I-Labeled protein A was used to detect p34cdc2 kinase.
In Vitro Kinase Assays Using GST-WEE1 and GST-Lyn Fusion Proteins: A highly purified preparation of Lyn was prepared for these experiments by cloning a lyn cDNA (45) into the expression vector pBMS-1 (46), which directs the production of a recombinant baculovirus encoding an 83-kDa GST-Lyn fusion protein in insect cells. The GST-Lyn protein was purified to homogeneity using glutathione-Sepharose chromatography (46). The ability of GST-Lyn (1:100 dilution) and GST-p49WEE1Hu (1:10 dilution, kindly provided by Dr. Laura Parker) to phosphorylate [Arg33]p34cdc2 was measured in a 20-min kinase reaction at 30°C in kinase buffer. The kinase reaction was initiated by the addition of p34cdc2 (50 µl/sample) and [γ-32P]ATP (20 µCi). Following the kinase reactions, samples were boiled in 2× SDS reducing sample buffer, and proteins were fractionated on 15% polyacrylamide gels and visualized by autoradiography. Two-dimensional phosphoamino acid analysis and phosphotryptic peptide mapping of p34cdc2 kinase were performed as described above.
Transfection Experiments: lyn and cdc2 cDNAs were expressed transiently in COS-7 cells by Lipofectamine lipid encapsulation (47,48). COS-7 cells were allowed to grow to >50% confluence by overnight incubation at 37°C in a humidified 5% CO2 atmosphere and washed with serum-free and antibiotic-free DMEM. COS-7 cells were transfected with eukaryotic expression vectors for Lyn kinase (pSV7c-lynA) or cdc2 kinase (pT7f1A-cdc2; generously provided by Dr. Giulio Draetta, Mitotix Inc., Cambridge, MA). Specifically, 4 µg of pSV7c/lynA, 3 µg of pT7f1A/cdc2, or a combination thereof was diluted in 0.6 ml of serum/antibiotic-free DMEM, mixed with 15 µl of Lipofectamine reagent (Life Technologies, Inc.) (47,48), and the mixture was incubated for 30 min at room temperature to allow binding of DNA to cationic liposomes. Subsequently, the DNA-liposome complexes were diluted by addition of 1.4 ml of DMEM to the mixture, and 2 ml of DNA-liposome complex was added directly to COS-7 cells. Cells were incubated for 6 h at 37°C, followed by addition of 2 ml of DMEM supplemented with 20% fetal calf serum. After an 18-h culture at 37°C in a 5% humidified CO2 atmosphere, the transfection mixture was removed and replaced with freshly prepared DMEM plus 10% fetal calf serum. COS-7 cells were harvested 72 h after the start of transfection and cell lysates were prepared using 1% Nonidet P-40 lysis buffer for immune complex kinase assays as well as immunoblotting with anti-Cdc2-Cter or anti-Lyn antibodies, as described (36).
Lyn Kinase-deficient BCP Leukemia Cells: Leukemic cells from all children with newly diagnosed BCP leukemia entered on the Childrens Cancer Group (CCG) treatment protocols are being examined in the CCG ALL Biology Reference Laboratory in Minneapolis for their Src family PTK profile (supported by National Cancer Institute Grant U01-CA-60437). These treatment protocols were approved by the National Cancer Institute as well as by the institutional review boards of the CCG-affiliated institutions. Informed consent was obtained from parents, patients, or both, as deemed appropriate for both treatment and laboratory studies according to the Department of Health and Human Services guidelines. Mononuclear cell fractions containing >90% leukemic cells were isolated from pretreatment bone marrow aspirate samples by centrifugation of the cell suspensions on Ficoll-Hypaque gradients. Leukemic BCP from 2 of 455 patients studied between 12/93 and 7/94 (designated as unique patient number (UPN) 1lyn− and UPN2lyn−) were found to be Lyn kinase-deficient. These cells were used in the current study to examine the role of Lyn kinase in radiation-induced tyrosine phosphorylation of p34cdc2 and cell cycle arrest.
Analysis of Radiation-induced Mitotic Arrest Using DNA Flow Cytometry: Cells were irradiated and then cultured at 5 × 10^5 cells/ml in clonogenic medium (RPMI 1640 medium + 1% penicillin/streptomycin + 10% heat-inactivated fetal bovine serum, 2 mM L-glutamine, and 10 mM Hepes buffer) for up to 28 h at 37°C, 5% CO2. At the indicated time points, cells were washed two times in fresh clonogenic medium and stained with the UV-excited dye Hoechst 33342 to quantify their DNA content as described previously (7,25). Quantitative DNA analysis was performed on a FACStar Plus flow cytometer equipped with a Consort 40 computer using the COTFIT program, which includes CELLCY, a cell cycle distribution function that fits DNA content histograms and calculates the percentages of cells in G0/1, S, and G2M phases of the cell cycle, as described (7,25).
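The cell cycle percentages reported from such DNA histograms can be approximated without the COTFIT/CELLCY software by simple gating on DNA content around the 2N and 4N peaks. The sketch below is a crude stand-in for that fitting procedure, with simulated fluorescence values; a real analysis models overlapping peak distributions rather than using the hard thresholds shown here.

```python
# Crude gating of a DNA-content histogram into G0/1 (2N), S, and G2/M (4N)
# fractions. Fluorescence values are simulated; the CELLCY fit mentioned
# in the text models overlapping distributions instead of hard cutoffs.
import random

random.seed(0)
# Simulate a 2N peak near 100, a 4N peak near 200, plus an S-phase plateau.
dna = ([random.gauss(100, 5) for _ in range(700)]       # G0/1
       + [random.uniform(115, 185) for _ in range(150)]  # S
       + [random.gauss(200, 8) for _ in range(150)])     # G2/M

g01 = sum(1 for x in dna if x < 115)
s = sum(1 for x in dna if 115 <= x < 185)
g2m = sum(1 for x in dna if x >= 185)
n = len(dna)
print(f"G0/1: {g01 / n:.1%}  S: {s / n:.1%}  G2/M: {g2m / n:.1%}")
```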
Preparation of Cytoplasmic, Membrane, and Nuclear Protein Fractions of BCP Leukemia Cells: Enucleated cytoplasmic fractions and plasma membranes were prepared by nitrogen cavitation and differential centrifugation on Percoll (Pharmacia Biotech Inc.) gradients, as described previously (38,42). No nuclei or nucleated cells were seen on the cytospin preparations of the cytoplasmic or membrane fractions, and no DNA was detectable by PCR amplification of a 110-base pair fragment from the first exon of the human β-globin gene (37). Nuclear proteins were extracted according to previously reported procedures (41). In brief, cells were lysed by vortexing at 4°C for 10 min in 10 mM HEPES, pH 7.9, 1.5 mM MgCl2, 10 mM KCl, 0.5 mM DTT, 0.1% Nonidet P-40. Nuclei were collected by centrifugation in a microcentrifuge at maximum speed for 5 min. Nuclei were then suspended in 20 mM HEPES, pH 7.9, 1.5 mM MgCl2, 0.5 mM DTT, 0.42 M NaCl, 0.2 mM EDTA, 25% (v/v) glycerol, 0.5 mM phenylmethylsulfonyl fluoride and incubated at 4°C for 15 min to allow the leakage of solubilized nuclear proteins. Higher salt concentrations were avoided to prevent release of DNA and histones. The mixture was briefly vortexed and centrifuged for 10 min at maximum speed in the microcentrifuge, and the supernatants were used for immunoprecipitations. NALM-6 cells express very high levels of the B-lineage-restricted CD19 antigen on their membrane and in their cytoplasm (36). Nuclear fractions of NALM-6 cells were free of CD19 antigen, as determined by Western blot analysis with a polyclonal anti-CD19 antibody raised against a GST-CD19 fusion protein corresponding to the cytoplasmic portion of CD19 (i.e. amino acids 410-540).
RESULTS AND DISCUSSION
Lyn Kinase Associates Physically and Functionally with p34cdc2 Kinase in the Cytoplasm of Human BCP Leukemia Cells: We investigated if Lyn kinase is capable of a physical association with p34cdc2 kinase in human BCP leukemia cells by first examining the in vitro kinase reaction products of p34cdc2 and Lyn immune complexes from the Nonidet P-40 lysates of unirradiated NALM-6 cells. Kinase reactions were performed in the presence of [γ-32P]ATP to allow autophosphorylation of the 53- and 56-kDa Lyn isoforms (i.e. p53lyn and p56lyn) that differ in the sequences of their "unique" region. As shown in Fig. 1A, autophosphorylated Lyn kinase isoforms were detected not only in the Lyn immunoprecipitates that were used as a positive control but also in the p34cdc2 immunoprecipitates. The degree of p34cdc2 phosphorylation in unirradiated NALM-6 cells was very low in these kinase assays, which favor Lyn autophosphorylation. In order to better document the presence of p34cdc2 in Lyn immune complexes, we subjected the Lyn immunoprecipitates to Western blot analysis with an anti-Cdc2-Cter antibody. The anti-Cdc2 antibody used in these experiments immunoprecipitates native enzyme better than it blots denatured enzyme. Therefore, in an attempt to increase the sensitivity of detection for p34cdc2, we used 5 times more cell lysate to prepare the Lyn immunoprecipitate than we did to obtain the p34cdc2 immunoprecipitate that was used as a positive control. As shown in Fig. 1B, p34cdc2 was detected not only in the p34cdc2 immunoprecipitate but also in the Lyn immunoprecipitate. Similarly, Western blot analysis of the p34cdc2 immunoprecipitate from the lysates of unirradiated NALM-6 cells with an anti-Lyn antibody raised against a GST-Lyn fusion protein corresponding to the 56-kDa isoform of Lyn confirmed the presence of Lyn kinase in p34cdc2 immune complexes (Fig. 1C). Taken together, these results demonstrated that Lyn kinase is capable of association with p34cdc2 in BCP leukemia cells and this association does not require exposure of cells to ionizing radiation. We next sought to determine the intracellular site of interactions between Lyn kinase and p34cdc2. To this end, we examined Lyn immune complexes from various fractions of Nonidet P-40 lysates of unirradiated NALM-6 cells in kinase assays for autophosphorylation of Lyn kinase (upper panel) and in Western blots for the presence of Lyn protein (middle panel) as well as for the presence of p34cdc2 protein (lower panel). As expected, Lyn kinase activity was detected in whole cell lysates as well as the cytoplasmic and membrane fractions, and the presence of Lyn protein in these immunoprecipitates was formally confirmed by anti-Lyn Western blot analysis (Fig. 1D). Lyn kinase was also detected in the nuclear fractions (Fig. 1D), a finding that was confirmed by immunofluorescent staining techniques. Anti-Cdc2-Cter Western blot analysis of Lyn immune complexes revealed no significant association between Lyn and p34cdc2 in the membrane or nuclear fractions. The detection of p34cdc2 in the Lyn immune complexes from the cytoplasmic fraction as shown in Fig. 1D suggested the cytoplasm as the primary site of association between Lyn and p34cdc2.
Partial Mapping of the Sites of Interaction between Lyn and Cdc2 Kinases in BCP Leukemia Cells-Src family PTK are composed of a unique amino-terminal domain, a regulatory carboxyl-terminal domain, an SH3 domain, and an SH2 domain (49). SH3 domains, which bind to proline-rich sequences, as well as SH2 domains, which bind to phosphotyrosine, have been shown to facilitate protein-protein interactions and formation of intracellular signaling complexes (44, 49, 50). The amino-terminal 27 residues of the unique domain of Lyn have been shown to mediate the association of Lyn with phospholipase Cγ2, mitogen-activated protein kinase, and GTPase-activating protein (44). We performed binding experiments with truncated GST-Lyn fusion proteins corresponding to various domains of Lyn kinase to generate preliminary information regarding the structural requirements for Lyn association with p34cdc2. Schematic diagrams and the inclusive amino acid sequences of these truncated GST-Lyn fusion proteins are depicted in Fig. 2A. Purified GST-Lyn fusion proteins, which were non-covalently immobilized on glutathione-agarose beads, were incubated with Nonidet P-40 lysates of unirradiated NALM-6 cells, and the adsorbates were analyzed for the presence of p34cdc2 kinase by immunoblotting with an anti-Cdc2-Cter antibody. As shown in Fig. 2B, p34cdc2 in NALM-6 lysates was able to bind to the GST-Lyn fusion protein Lyn 1-119, containing the unique amino-terminal domain plus the SH3 domain, but it did not bind to GST-Lyn fusion proteins corresponding to the SH2 domain (i.e. Lyn 131-243), the amino-terminal 27 residues (i.e. Lyn 1-27), the amino-terminal 61 residues (i.e. Lyn 1-61), or the amino-terminal domain plus the proximal portion of the SH3 domain (i.e. Lyn 1-92). The results of these experiments suggest a functional role for the SH3 rather than the SH2 domain of Lyn in Lyn-p34cdc2 interactions in leukemic BCP. Notably, GST-Lyn fusion protein Lyn 27-131 did not exhibit any binding activity to p34cdc2. Thus, the first 27 residues of the unique amino-terminal domain of Lyn, while not sufficient for the Lyn-p34cdc2 interaction, appear to be essential for the ability of Lyn to bind to p34cdc2 from NALM-6 cell lysates. Further studies will be required to elucidate the exact structural basis for the Lyn-p34cdc2 interactions.
FIG. 1. Lyn kinase associates with p34cdc2 in unirradiated BCP leukemia cells. A, unirradiated NALM-6 cells were lysed in Nonidet P-40 lysis buffer and 200-µg samples of the cell lysate were immunoprecipitated with a polyclonal rabbit anti-Lyn antibody or anti-Cdc2-Cter antibody. Immune complexes were assayed for kinase activity during a 10-min incubation in the presence of 0.1 mM [γ-32P]ATP to allow autophosphorylation of the 53- and 56-kDa Lyn isoforms. Samples were boiled in 2× SDS sample buffer and fractionated on 12.5% polyacrylamide gels. B, unirradiated NALM-6 cells were lysed as in A and 100 µg of the lysate was immunoprecipitated with anti-Cdc2-Cter antibody, whereas 500 µg of the lysate was immunoprecipitated with anti-Lyn antibody, as described in A. The immune complexes were collected, boiled in 2× SDS sample buffer, fractionated on 15% polyacrylamide gels, transferred to an Immobilon-PVDF membrane, and immunoblotted for 90 min with anti-Cdc2-Cter antibody. 125I-Labeled protein A was used to detect p34cdc2. C, unirradiated NALM-6 cells were lysed in Nonidet P-40 lysis buffer and 200-µg samples of the cell lysate were immunoprecipitated with anti-Cdc2-Cter antibody; immune complexes were collected, washed, boiled in 2× SDS sample buffer, fractionated on 15% polyacrylamide gels, transferred to an Immobilon-PVDF membrane, and immunoblotted with anti-Cdc2-Cter antibody (lanes 1 and 2) or with an anti-Lyn antibody raised against a GST-Lyn fusion protein corresponding to the 56-kDa isoform of Lyn (lanes 3 and 4).
Ionizing Radiation Promotes the Physical and Functional Interactions between Lyn Kinase and p34cdc2 in BCP Leukemia Cells-The kinase activity of Lyn immunoprecipitates toward a Cdc2-derived peptide substrate, measured in pmol of PO4− incorporated/min, was amplified by 30% within 30 s following exposure to 2 Gy γ-rays (data not shown). The specificity of this reaction was confirmed using a mutated Cdc2 peptide ([Phe15,Lys19]Cdc2(6-20)NH2; Tyr15 → Phe) as a negative control. The kinase activity of the Lyn immunoprecipitate toward this single amino acid substitution analog of the Cdc2 peptide, which does not contain a target Tyr15 residue, was only 0.005 pmol of PO4− incorporated/min. We next examined the effects of ionizing radiation on the ability of Lyn kinase to phosphorylate a recombinant human p34cdc2-cyclin B complex preparation in the presence of [γ-32P]ATP. This complex was isolated from lysates of insect cells coinfected with recombinant viruses encoding GST-cyclin B and [Arg33]p34cdc2, an inactive mutant form of p34cdc2 mutated at lysine 33 (25, 43). Lyn kinase was immunoprecipitated from unirradiated as well as irradiated NALM-6 cells and examined in kinase assays for autophosphorylation as well as its ability to phosphorylate recombinant human p34cdc2 on tyrosine. As shown in Fig. 3A, ionizing radiation resulted in a >4-fold increase in Lyn kinase activity, as measured by autophosphorylation. The increased autophosphorylation was accompanied by >1.8-fold increased phosphorylation of [Arg33]p34cdc2. Two-dimensional phosphoamino acid analysis of the excised Cdc2 bands confirmed that the increased label on p34cdc2 reacted with Lyn from irradiated cells was caused by enhanced tyrosine phosphorylation (data not shown). Thus, exposure of NALM-6 cells to γ-rays prior to the immunoprecipitation augmented the ability of Lyn kinase to utilize recombinant human p34cdc2 as an exogenous substrate during the in vitro kinase reactions.
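Fold-change figures of this kind reflect comparisons of band intensities between lanes of an autoradiograph. As a rough illustration of the densitometry arithmetic only (the intensity values below are hypothetical, not the study's measurements, and a real analysis would also normalize to a loading control), a minimal Python sketch might look like:

# Hypothetical raw band intensities (arbitrary units) and a flat background estimate.
background = 120.0
lyn_auto = {"0 Gy": 950.0, "2 Gy": 4200.0}   # Lyn autophosphorylation bands
cdc2_label = {"0 Gy": 400.0, "2 Gy": 780.0}  # phosphorylated [Arg33]p34cdc2 bands

def fold_change(signals, reference="0 Gy"):
    # Subtract background, then express each lane relative to the unirradiated lane.
    corrected = {lane: max(v - background, 0.0) for lane, v in signals.items()}
    return {lane: corrected[lane] / corrected[reference] for lane in corrected}

print(fold_change(lyn_auto))    # the 2 Gy lane comes out at roughly 4.9-fold here
print(fold_change(cdc2_label))  # roughly 2.4-fold with these illustrative numbers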
Subsequently, we evaluated the effects of ionizing radiation on the intracellular physical and functional interactions between Lyn and p34cdc2 in NALM-6 cells. To this end, NALM-6 cells were irradiated, lysed with Nonidet P-40 lysis buffer, and Lyn kinase was immunoprecipitated from the lysates of unirradiated as well as irradiated cells. In vitro kinase assays were performed to examine Lyn autophosphorylation as well as phosphorylation of any co-immunoprecipitated p34cdc2 kinase. As shown in Fig. 3B (b1), irradiation of NALM-6 cells stimulated the Lyn kinase, as measured by autophosphorylation. Concomitant with Lyn kinase activation at 10 or 20 min following radiation exposure, p34cdc2 became detectable in the Lyn immune complexes as a tyrosine-phosphorylated protein substrate (Fig. 3B, b1). The abundance of the Lyn protein, as estimated by anti-Lyn Western blot analysis, did not change during the course of the experiment, suggesting increased enzymatic activity of Lyn (Fig. 3B, b2). However, the abundance of the p34cdc2 protein in the same Lyn immune complexes, as determined by anti-Cdc2-Cter Western blot analysis, was significantly increased after radiation exposure (Fig. 3B, b3), suggesting that enhanced tyrosine phosphorylation of p34cdc2, which parallels the Lyn activation, is at least in part due to radiation-induced promotion of the physical association between Lyn and p34cdc2 in NALM-6 cells.
FIG. 2. GST-Lyn fusion proteins non-covalently bound to glutathione-agarose beads were used in binding assays to examine their ability to interact with p34cdc2 in NALM-6 cells, as described under "Experimental Procedures." Samples (250 µg) of the Nonidet P-40 lysates from unirradiated NALM-6 cells were incubated with the GST-Lyn fusion protein-coupled beads. The fusion protein adsorbates were washed, resuspended in SDS sample buffer, boiled, fractionated on 12.5% SDS-PAGE gels, transferred to Immobilon-P membranes, and membranes were immunoblotted with anti-Cdc2-Cter, followed by visualization using 125I-labeled protein A and autoradiography.
FIG. 3. Ionizing radiation promotes the interaction between Lyn and Cdc2 kinases in BCP leukemia cells. A, γ-rays stimulate the PTK activity of Lyn toward recombinant human [Arg33]p34cdc2. Lyn kinase was immunoprecipitated from Nonidet P-40 lysates of unirradiated (lane 1) and irradiated (lane 2, 5 min after 1 Gy γ-rays; lane 3, 5 min after 2 Gy γ-rays) NALM-6 cells. In vitro kinase assays were performed to examine the immunoprecipitated Lyn kinase for autophosphorylation as well as its ability to phosphorylate recombinant human p34cdc2-cyclin B complex, which was used as an exogenous substrate, on tyrosine. B, γ-rays promote the physical and functional interactions between Lyn and p34cdc2 in BCP leukemia cells. b1, Lyn kinase was immunoprecipitated from the Nonidet P-40 lysates (600 µg/sample) of unirradiated (lane 2) as well as irradiated (lane 3, 10 min after 2 Gy γ-rays; lane 4, 20 min after 2 Gy γ-rays) NALM-6 cells and in vitro kinase assays were performed using one-third of the samples, as described in the legend of Fig. 1A, to examine Lyn autophosphorylation as well as phosphorylation of co-immunoprecipitated p34cdc2 kinase. Arrows indicate the positions of the Lyn and p34cdc2 kinases. b2, another third of the samples from the Lyn immunoprecipitations shown in b1 were boiled in 2× SDS sample buffer, fractionated on 12.5% polyacrylamide gels, transferred to an Immobilon-PVDF membrane, and immunoblotted with an anti-Lyn antibody raised against a GST-Lyn fusion protein corresponding to the 56-kDa isoform of Lyn. 125I-Labeled protein A was used to detect the 56-kDa isoform of Lyn. b3, the remaining one-third of the samples from the Lyn immunoprecipitations shown in b1 were boiled in 2× SDS sample buffer, fractionated on 12.5% polyacrylamide gels, transferred to an Immobilon-PVDF membrane, and immunoblotted with anti-Cdc2-Cter antibody. 125I-Labeled protein A was used to detect the p34cdc2 kinase in the Lyn immune complexes. The purpose of the b2 portion of the experiment was to confirm that lanes 2, 3, and 4 contained equal amounts of Lyn and that the lane-to-lane differences in Lyn autophosphorylation or in the amount of Cdc2 kinase detected by immunoblotting were not caused by loading unequal amounts of Lyn immune complexes in each lane. In b1-b3, no primary immunoprecipitating antibody was added to the control samples shown in lanes 1.
Recombinant GST-Lyn Fusion Protein and Lyn Kinase Immunoprecipitated from Irradiated BCP Leukemia Cells Phosphorylate Recombinant Human p34cdc2 on Tyrosine 15-For further analysis of the interactions between Lyn and p34cdc2 kinases, we prepared a highly purified 83-kDa GST-Lyn fusion protein, as described under "Experimental Procedures." This GST-Lyn fusion protein was enzymatically active, as confirmed by its autophosphorylation and its ability to phosphorylate denatured rabbit enolase, which was used as an exogenous substrate, during a 10-min in vitro kinase reaction (Fig. 4A).
We next performed in vitro kinase assays using GST-Lyn in order to determine whether [Arg33]p34cdc2 can serve as a direct substrate for Lyn in the absence of other proteins or kinases that are associated with Lyn kinase (27-29). GST-Lyn effectively phosphorylated [Arg33]p34cdc2 (Fig. 4B), and two-dimensional phosphoamino acid analysis confirmed that the increased phosphorylation of GST-Lyn-treated p34cdc2 kinase was on tyrosine (Fig. 4C). GST-p49WEE1Hu, a positive control fusion protein of human WEE1 kinase with GST, which was reported to phosphorylate [Arg33]p34cdc2 on Tyr15 (25, 43), increased the p34cdc2-associated label >10-fold (Fig. 4D), and phosphoamino acid analysis confirmed that the increased phosphorylation was on tyrosine (data not shown). In some experiments, [Arg33]p34cdc2 was phosphorylated even in the absence of GST-Lyn (Fig. 4D). The threonine phosphorylation of untreated p34cdc2 seen in two-dimensional phosphoamino acid analyses, which is depicted in Fig. 4E, is caused by a kinase that sometimes coprecipitates from the insect cells and phosphorylates p34cdc2 on Thr-161 (25, 30). Similarly, phosphorylation of cyclin B in this substrate preparation is due to an endogenous insect cell kinase that binds to cyclin B and copurifies with the cyclin B-p34cdc2 complex, as kinase activity is associated with cyclin B when it is expressed in insect cells in the absence of p34cdc2 as well (25, 30). When observed, this base-line threonine phosphorylation of p34cdc2 in kinase reactions partially masked the magnitude of GST-Lyn-induced phosphorylation of p34cdc2 (Fig. 4D). However, two-dimensional phosphoamino acid analysis of GST-Lyn-phosphorylated [Arg33]p34cdc2 confirmed that the increased phosphorylation of GST-Lyn-treated p34cdc2 was on tyrosine (Fig. 4E), thereby unmasking and validating the potent PTK activity of the GST-Lyn fusion protein toward recombinant human [Arg33]p34cdc2.
To further evaluate the effects of GST-Lyn as well as of Lyn kinase immunoprecipitated from unirradiated and irradiated NALM-6 pre-B leukemia cells on the phosphorylation state of [Arg33]p34cdc2, we subjected p34cdc2 excised from the gels of kinase reactions to two-dimensional tryptic phosphopeptide mapping. As shown in Fig. 5, a single threonine-containing phosphopeptide was detected upon phosphotryptic mapping of untreated p34cdc2. Consistent with a previous report, which identified Tyr15 as the site of GST-WEE1-induced phosphorylation of [Arg33]p34cdc2 (43), one major tyrosine-containing phosphopeptide was detected after treatment of [Arg33]p34cdc2 with GST-WEE1. The position of this Tyr15-containing peptide in each phosphotryptic map shown in Fig. 5 is indicated with an arrowhead. Notably, treatment of [Arg33]p34cdc2 with GST-Lyn or with Lyn immunoprecipitated from irradiated NALM-6 cells resulted in increased phosphorylation of the same Tyr15-containing peptide.
FIG. 5. Top panel, samples from the kinase reactions shown in Fig. 4D were subjected to two-dimensional tryptic phosphopeptide mapping, as described under "Experimental Procedures." The position of the Tyr15-containing peptide was identified as the site of GST-WEE1-induced phosphorylation of [Arg33]p34cdc2 (30, 43). Bottom panel, [Arg33]p34cdc2 was also used as a substrate for Lyn kinase, which was immunoprecipitated from Nonidet P-40 lysates of unirradiated (N6, 0 Gy) and irradiated (N6, 1 Gy = 5 min after 1 Gy γ-rays; N6, 2 Gy = 5 min after 2 Gy γ-rays) NALM-6 cells. Two-dimensional tryptic phosphopeptide mapping was performed as for the samples shown in the top panel. In both the top and bottom panels, the position of this Tyr15-containing peptide in each phosphotryptic map is indicated with an arrowhead.
Taken together, these experiments provided direct evidence that Lyn kinase can directly phosphorylate p34cdc2 on Tyr15. The radiation-enhanced ability of Lyn kinase from NALM-6 cells to phosphorylate recombinant p34cdc2 on Tyr15 strongly supports the hypothesis that Lyn may be one of the PTK responsible for radiation-induced inhibitory tyrosine phosphorylation and inactivation of p34cdc2 kinase in human BCP leukemia cells.
In Vivo Tyrosine Phosphorylation of p34cdc2 by Co-expression with Lyn Kinase in COS-7 Cells-To further study the interaction of Lyn and p34cdc2 in vivo, cDNAs encoding these kinases were transiently co-expressed in COS-7 cells using the Lipofectamine reagent (47, 48). Compared with COS-7 cells transfected with cdc2 cDNA and mock-transfected with the empty expression vector, COS-7 cells co-transfected with cDNAs for both lyn and cdc2 showed markedly amplified expression of Lyn protein, as determined by anti-Lyn Western blot analysis of Nonidet P-40 lysates (Fig. 6, upper panel). Amplified lyn expression did not affect cdc2 expression in co-transfected COS-7 cells, as determined by anti-p34cdc2-Cter Western blot analysis of Nonidet P-40 lysates (Fig. 6, upper panel). Anti-Cdc2-Cter Western blot analysis of Lyn immune complexes from COS-7 cells co-transfected with cdc2 cDNA demonstrated the presence of p34cdc2 kinase (Fig. 6, lower left panel). Kinase assays of Lyn immune complexes from lysates of COS-7 cells co-transfected with cDNAs for both lyn and cdc2 demonstrated the presence of phosphorylated p34cdc2 (Fig. 6, lower right panel), and phosphoamino acid analyses confirmed that the p34cdc2-associated label was on tyrosine (data not shown). Thus, Lyn kinase associates physically with p34cdc2 kinase when these kinases are co-expressed in COS-7 cells, and elevated Lyn kinase activity is sufficient for induction of p34cdc2 tyrosine phosphorylation in vivo. These results confirmed the ability of Lyn kinase to interact with p34cdc2 in vivo.
FIG. 7. ...were used for immunoprecipitation and immune complex kinase assays of the indicated Src family PTK. C, p34cdc2 was immunoprecipitated from Nonidet P-40 lysates of unirradiated as well as irradiated (2 Gy delivered 5 min prior to lysis) BCP leukemia cells of UPN1lyn− and UPN2lyn− using a rabbit anti-Cdc2-Cter antibody. Samples were run on 10.5% SDS-PAGE gels and subsequently immunoblotted with either anti-phosphotyrosine or anti-Cdc2-Cter. 125I-Labeled protein A was used to detect tyrosine-phosphorylated proteins or p34cdc2 kinase. The position of p34cdc2 is indicated with arrowheads. D, for comparison, using the procedures outlined in C, radiation-induced tyrosine phosphorylation of p34cdc2 was also examined in Lyn kinase-positive NALM-6 pre-B leukemia cells.
Failure of Ionizing Radiation to Induce Tyrosine Phosphorylation and Inactivation of p34cdc2 in Lyn Kinase-deficient Human BCP Leukemia Cells-Lyn kinase has been consistently identified as the predominant member of the Src PTK family in leukemic cells from BCP leukemia patients (36, 37). In a survey of 455 BCP leukemia cases, we were able to identify only two patients, UPN1lyn− (Fig. 7A) and UPN2lyn−, whose leukemic cells did not contain any Lyn enzyme detectable by immune complex kinase assays or by Western blot analysis of Lyn protein expression. We used immune complex kinase assays to examine the relative abundance of other members of the Src PTK family in UPN1lyn− cells. As shown in Fig. 7B, Fyn and Blk were the predominant Src family PTK in these Lyn kinase-deficient cells.
We next compared the ability of ionizing radiation to trigger tyrosine phosphorylation of p34cdc2 in Lyn kinase-expressing NALM-6 cells versus Lyn kinase-deficient UPN1lyn− or UPN2lyn− cells by anti-phosphotyrosine Western blot analysis of p34cdc2 immunoprecipitates from the Nonidet P-40 cell lysates prepared 5 min after radiation exposure. Ionizing radiation induced tyrosine phosphorylation of p34cdc2 in NALM-6 cells, but not in UPN1lyn− or in UPN2lyn− cells (Fig. 7, C and D).
We next compared the ability of γ-rays to cause a G2 arrest in Lyn kinase-expressing NALM-6 cells versus Lyn kinase-deficient UPN1lyn− cells. Asynchronously dividing NALM-6 cells and uncultured UPN1lyn− cells were irradiated with 2 Gy γ-rays and then cultured at 5 × 10^5 cells/ml in a clonogenic medium, as described under "Experimental Procedures." At the indicated time points, cells were stained with Hoechst 33342 to quantify their DNA content on a FACStar Plus flow cytometer. Prior to radiation, 25% of NALM-6 cells and 29% of UPN1lyn− cells were in the G2 phase of the cell cycle, which corresponds to the second peak of the DNA histogram (Fig. 8). In NALM-6 cells, a radiation-induced accumulation in G2 phase was first detectable at 8 h after radiation, when the DNA flow cytometric analysis showed 38% of the cells to be in the G2 phase. The cell cycle arrest at the G2-M transition checkpoint was further evident from the decreased percentage of G0/1 cells. The percentage of cells accumulated in G2 phase was further increased to 54% at 22 h. This cell cycle arrest at the G2-M transition checkpoint was transient, as evidenced by the decreased percentage of G2 cells and increased percentage of G0/1 cells at 28 h after radiation. In contrast to NALM-6 cells, UPN1lyn− cells did not show any evidence of a cell cycle arrest at the G2-M transition after exposure to 2 Gy γ-rays (Fig. 8). These findings support the hypothesis that Lyn is the PTK responsible for radiation-induced inhibitory tyrosine phosphorylation and inactivation of p34cdc2 kinase in human BCP leukemia cells.
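Cell cycle percentages of this kind are derived from DNA content histograms. The COTFIT/CELLCY software referenced in the Fig. 8 legend fits the histogram properly; the underlying idea, however, can be illustrated with simple gating on normalized DNA content (all values and cutoffs below are hypothetical, and hard thresholds are a crude stand-in for peak fitting):

import numpy as np

# Simulated Hoechst 33342 fluorescence for 10,000 cells, normalized so the
# G0/1 (2N) peak sits near 1.0 and the G2/M (4N) peak near 2.0. Illustrative only.
rng = np.random.default_rng(0)
dna = np.concatenate([
    rng.normal(1.0, 0.05, 5400),    # G0/1 cells
    rng.uniform(1.15, 1.85, 2100),  # S-phase cells
    rng.normal(2.0, 0.08, 2500),    # G2/M cells
])

# Crude threshold gates; real cell cycle software fits overlapping distributions.
g01 = np.mean(dna < 1.15) * 100
g2m = np.mean(dna > 1.85) * 100
s_phase = 100.0 - g01 - g2m
print(f"G0/1: {g01:.1f}%  S: {s_phase:.1f}%  G2/M: {g2m:.1f}%")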
Radiation-induced G2 arrest allows the cells to repair potentially lethal or sublethal DNA lesions induced by radiation or other DNA damaging agents. Cells that are unable to show this response are more sensitive to DNA damaging agents, and drugs that abolish this response sensitize cells to DNA damaging agents (11, 15-22). Abrogation of radiation-induced G2 arrest by caffeine exposure induces premature mitosis before DNA repair is complete and results in enhanced cell death (8). Similarly, pentoxifylline, a caffeine analog that shortens the duration of G2 arrest, also displays radiosensitizing properties (40). A human lymphoma cell line that displayed markedly enhanced sensitivity to DNA damage by nitrogen mustard was found to be defective in the G2 phase checkpoint control (9).
We provide experimental evidence for an important role of a cytoplasmic signal transduction pathway intimately linked to the Lyn kinase in radiation-induced G2 phase-specific cell cycle arrest of human BCP leukemia cells. Because Lyn kinase maintains p34cdc2 in an inactive state in irradiated BCP leukemia cells, thereby allowing them to repair sublethal radiation damage, we postulate that inhibition of Lyn kinase in BCP leukemia cells may result in radiosensitization. To accomplish this goal, a PTK inhibitor could be targeted to Lyn kinase in BCP leukemia cells with a monoclonal antibody, which binds to and remains complexed with the CD19 receptor. The CD19 receptor is physically associated with the Lyn kinase (36). Our recent results show that treatment of CD19+ BCP leukemia cells with nanomolar concentrations of B43-Gen immunoconjugate causes sustained inhibition of CD19-associated Lyn kinase (37).
Role of Lyn Kinase in Surveillance and Repair of DNA Damage in Human B-lineage Lymphoid Cells-Our results implicate Lyn as an important cytoplasmic suppressor of p34cdc2 function and extend previous observations that Lyn kinase may play an important role in anti-IgM or anti-CD19-induced G1 arrest of B lymphoma cells (51). Recent studies indicate that p34cdc2 is activated in the cytoplasm and that premature activation of p34cdc2 at an inappropriate time during the cell cycle leads to apoptotic cell death, underscoring the importance of regulatory events governing p34cdc2 activation and deactivation (52). Lyn kinase may protect the cell from the potentially catastrophic consequences of premature p34cdc2 activation by maintaining the p34cdc2-cyclin B complex in its inactive, tyrosine-phosphorylated state.
FIG. 8. Ionizing radiation does not cause G2 arrest in Lyn kinase-deficient BCP leukemia cells. Lyn kinase-positive NALM-6 pre-B leukemia cells and Lyn kinase-deficient UPN1lyn− cells were irradiated with 2 Gy γ-rays and then cultured in clonogenic medium for 8, 22, and 28 h at 37°C/5% CO2. Cells were washed two times in fresh clonogenic medium and stained with the UV-excited dye, Hoechst 33342, as described previously (5, 25). Quantitative DNA analysis was performed on a FACStar Plus flow cytometer equipped with a Consort 40 computer using the COTFIT program, which includes CELLCY, a cell cycle distribution function that fits DNA content histograms and calculates the percentages of cells in G0/1, S, and G2/M phases of the cell cycle.
Several studies have documented the ability of B-lineage lymphoid cells to produce reactive oxygen intermediates in response to various activation signals (53-58). Recent evidence suggests that production of reactive oxygen intermediates in response to various mitogenic stimuli may regulate the proliferative responses of peripheral blood mononuclear cells (59). It has been proposed that generation of reactive oxygen intermediates upon activation of B-lineage lymphoid cells may contribute to somatic mutations (53, 55-57, 60). Lyn kinase may serve as an integral component of a physiologically important surveillance and repair mechanism for DNA damage by delaying the G2-M transition in cells exposed to mutagenic oxygen free radicals, thereby allowing them to repair their DNA damage prior to mitosis. Without this surveillance, the likelihood of malignant transformations leading to BCP leukemias, as well as of impaired survival and self-renewal capacity of BCP populations leading to immunodeficiency disorders, may be increased. Therefore, it will be important to conduct appropriate epidemiologic studies designed to test the hypothesis that low activity levels of Lyn in BCP populations may be associated with an increased risk of development of BCP leukemia or B-cell immunodeficiency during childhood.
|
v3-fos-license
|
2023-11-18T16:13:00.316Z
|
2023-11-15T00:00:00.000
|
265269074
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2411-5118/4/4/38/pdf?version=1700027803",
"pdf_hash": "04f5e7b00d8cd7dde9c14b22125afaf28bb2d199",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45625",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "009b69d2cc4a36f78c664508afa532bc5405c484",
"year": 2023
}
|
pes2o/s2orc
|
“As Long as It’s Not on the Face”: Pornography Viewers Discuss Male Ejaculation Perceptions and Preferences
Feminist scholars have suggested that male ejaculations in pornographic videos, particularly ejaculations on a sexual partner's face or in their mouth, are often used to symbolically debase and humiliate women. However, no previous study has asked pornography viewers about their perceptions and preferences regarding male ejaculation. In this article, I investigate these perceptions and preferences using a large sample of more than 300 pornography viewers representing diverse demographics and cultural backgrounds. I find that most viewers either did not care about the male ejaculation or its placement or preferred for it to be in the female partner's vagina. In contrast to common assumptions found in the literature, very few viewers expressed a preference for ejaculation on a woman's face or in her mouth and many of them found such practices disturbing.
Introduction
Research on visual portrayals of orgasm in pornographic videos has shown that men are much more likely than women to be shown reaching an orgasm and visibly ejaculating [1-3]. This disparity reflects a real-life orgasm gap, as a large host of studies have reported that men are much more likely than women to experience an orgasm during sexual intercourse [4-7] and that both men and women care more about the male orgasm than about the female orgasm [8-10]. Some scholars believe that visible male orgasms are important to viewers because they confirm male pleasure and its authenticity [11,12].
Studies on viewers' approaches and preferences regarding pornographic content have begun to proliferate in recent years, with many of them focusing on issues such as authenticity [13,14], pleasure [15,16], and aggression [17]. However, no previous study has focused on viewers' perceptions and preferences regarding the male ejaculation. In the absence of such research, the most common perception among scholars is that the male ejaculation is often used as a tool to symbolically debase women [18,19]. More specifically, feminist scholars have written about the symbolic meanings of ejaculating on a female sexual partner's face or in her mouth, which content analyses show are very common practices in mainstream pornography [20-25]. Most feminist scholars argue that such practices are misogynistic, humiliating, and degrading, as they celebrate gender inequalities through sexual domination [26-32]. In this view, producers, directors, and viewers of pornographic videos, who are mostly men, enjoy ejaculation on a female performer's body parts, particularly her face. This demand, in turn, drives the ubiquity of these practices in the pornographic industry.
In this article, I investigate these common assumptions using a large and diverse sample of more than 300 pornography viewers, both men and women, from a wide range of geographical locations, ethnicities, cultural backgrounds, and sexual orientations. These viewers were asked about their perceptions and preferences regarding male ejaculations, their importance, and their placement. In contrast to common assumptions in the literature, as well as the findings of content analyses of popular videos, very few viewers expressed a preference for ejaculation on a woman's face or in her mouth, with many more viewers (both men and women) finding these practices unappealing or even disturbing. Instead, most viewers either did not care about the male ejaculation or its placement or preferred for it to be in the female partner's vagina.
Previous Research on Male Ejaculation in Pornography
Content analyses of visual pornography consistently show that depictions of male orgasms and ejaculations are much more prevalent than those of female orgasms [1-3,22,24,33,34]. This gap in pornographic representations reflects a real-life orgasm gap. Indeed, studies have repeatedly demonstrated a clear gap between men and women in reported rates of orgasm, with men much more likely than women to report an orgasm during sexual intercourse [4-7,35,36]. Studies further suggest that experiencing an orgasm during a sexual encounter is more important for men than it is for women [8,9] and that heterosexual men also feel more entitled to orgasms than their female partners [37]. Furthermore, not only is the male orgasm very important for men, it is also very important for women, as both men and women believe that it is an essential aspect of sex and are concerned when men do not reach an orgasm, viewing the sex as abnormal or incomplete [10].
Given these cultural beliefs and scripts, it is not surprising that nearly all mainstream pornographic heterosexual videos culminate, and usually also terminate, with the male ejaculation, also often referred to as "the cum-shot" [19] or "the money shot" [1,33]. Under the conventions of mainstream pornography, the male ejaculation serves as the authentic affirmation of the male climax and represents "the visible truth" of sexual pleasure [11], which also confirms the authenticity of the pornographic texts themselves [12]. This conceptualization has led to the "money shot" becoming a fetishized feature of pornographic production from its very early days [11]. Indeed, even when male performers ejaculate within the female performer's mouth or vagina, the pornographic script often demands that the female performer make the semen visible, letting it spill out of her mouth or vagina, as evidence that the male climax indeed occurred.
The Meanings of Male Ejaculation and Its Placement
Some might believe that the form or placement of male ejaculations is of little academic interest or practical consequence, dismissing it as merely a fetish or a matter of personal taste. However, I argue that, similar to other sexual acts and scripts, the act and script of male ejaculation is in fact loaded with gendered and cultural meanings. Depending on both personal and cultural interpretations, the placement of the male ejaculation may be perceived as an act of passion, commitment, love, and acceptance, or alternatively of dominance, degradation, humiliation, and even aggression. Hence, it is important to assess the readings that both male and female viewers of pornography ascribe to this act, as well as their perceptions and preferences regarding male ejaculation in the videos they watch.
Scholars have offered various readings of the male ejaculation. The most common reading suggests that this ejaculation is often used as a tool to symbolically debase women [18]. Many feminist scholars believe that certain forms of male ejaculation, particularly ejaculation on a sexual partner's face or in their mouth, are inherently humiliating, degrading, and even dehumanizing [3,20,21,38]. According to Schauer [19], the location of the male ejaculation in both real life and pornography is a primary component of female degradation, as it links the male sexual imagination with misogyny and objectification. Schauer argues that "since male performers are depicted discharging on their victim's faces (this is by far the most common), breasts, or buttocks-i.e., on the bodily spaces that are signifiers of feminine difference-the cum-shot metaphorically debases femininity" (pp. 54-55, [19]). And while ejaculation on a woman's body in general may be seen as degrading, ejaculation on her face is particularly humiliating, because it "signifies not only extreme subordination, but also a disregard on the part of the dominant male partner for the identity and personal sentiments of his female or male partner" (p. 55, [19]). Despite these readings (or, some might argue, in large part because of them), content analyses of mainstream pornography suggest that ejaculations on a female performer's face or in her mouth are very common practices. For example, Cowan and Campbell [39], who analyzed popular X-rated pornography videocassettes, found that 43% of White women and 28% of Black women in interracial pornography were portrayed with men ejaculating on their faces. Bridges, Wosnitzer [21], who examined aggression in popular rental pornographic videos, found that the male character's ejaculation almost always occurred outside the female character's vagina, most frequently in her mouth (58.6%). Gorman, Monk-Turner [20], who analyzed a convenience sample from multiple internet websites, reported that nearly half (45%) of the videos in the sample included a scene where one or more male performers ejaculated onto the face of the female performer. Finally, Shor and Seida [22] found that 24.3% of the most watched videos on Pornhub included male ejaculation on a woman's face, while 35.7% of these videos featured ejaculation in her mouth.
The Current Study
Despite the rich research described above, we still know relatively little about the preferences of both men and women regarding male ejaculation in pornographic videos. Research on pornography viewers' preferences and perceptions has grown substantially in recent years. Most of these studies have focused on female viewers, utilizing primarily focus groups and in-depth interviews with a select group of respondents (typically fewer than 30) in a single locale [13-16,40-44]. Many of these studies examined perceptions regarding authenticity in pornography, finding that viewers are quite preoccupied with detecting genuine pleasure, including genuine orgasms [13,14,40]. However, none of them asked viewers directly about their perceptions and preferences regarding male ejaculation.
This study expands on these empirical efforts in several important ways. First, it relies on more than 300 interviews, a much larger sample than these previous studies, allowing for a much wider range of demographics, geographical distributions, ethnicities, cultural backgrounds, and sexual orientations. Perhaps most importantly, while many recent studies on the perceptions and preferences of pornography viewers have focused nearly exclusively on women, e.g., [13-15,40-42,45], the current sample includes an equal number of men and women, as well as several non-binary individuals.
Comparing the preferences and views of women and men regarding male ejaculation in pornography is particularly important given common claims that while these preferences vary significantly, men's preferences are the ones driving industry norms. Petersen and Hyde [46], who conducted a systematic review of gender differences in sexuality, concluded that men's attitudes toward sexuality tend to be more "liberal" than those of women, suggesting that they may find alternative depictions of sexuality, including ones that are arguably degrading, more appealing. In addition, since ejaculation on body parts, which some also find degrading, is conducted by men and directed toward women, one might expect that women who watch pornography would be more likely to identify with the perceived humiliation of the female performers and thus resent such acts.
Moreover, some pornography critics have suggested that male viewers might in fact have a preference for watching acts that are often perceived as humiliating for women, such as ejaculation on a woman's face, in her mouth, and perhaps also on other intimate body parts. This is because such acts celebrate the tension and thrill derived from sexualizing gender inequalities and contribute to the entrenchment of gender hierarchies. Thus, videos that revel in the degradation and abuse of women are for many men thrilling, providing greater sexual pleasure [26-32]. These feminist scholars argue that such preferences are the source of the great popularity and presence of facial and mouth ejaculations in pornography, as most producers, directors, and viewers are men. Accordingly, we might expect male pornography viewers to express particular affinity toward ejaculation on a female performer's body parts, particularly on her face.
Method
Sampling Strategy and Recruitment
Since there is no comprehensive list of online pornography viewers, I had to use a non-probability sampling method. While this method limits generalizability, it is still useful in obtaining rich descriptive data, revealing certain trends, and identifying preferences and views [47]. I used a mix of voluntary and purposive sampling techniques. First, I posted recruitment ads inviting participants over the age of 18 who had watched pornographic videos online at least once per month over the previous year to share their experiences and preferences. To encourage participation, I offered each participant a $20 compensation. Ads were posted to Craigslist, Kijiji, and to several Facebook groups, primarily those of current and former students in several North American universities. About 60% of eventual participants learned about the study through Kijiji or Craigslist, while about 40% reached it through various Facebook groups.
I then applied a theoretically driven purposive sampling strategy. This strategy was designed to increase variability in theoretically important factors, primarily gender, age, ethnicity, sexual orientation, and geographical location. For example, I sought to reach a roughly balanced number of men and women. Therefore, toward the end of the recruitment process, when realizing that the sample included more women than men, I gave preference to the recruitment of men who agreed to participate in the study and did not interview some of the women who wished to participate. Similarly, I gave preference in later stages of the recruitment process to older individuals (over the age of 25) and to non-North American participants, seeking to increase representation of these populations, which were harder to recruit. For this preliminary screening, potential interviewees were first sent a short questionnaire asking them to note their age, gender, place of residence, sexual orientation, ethnicity, and relationship status.
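To make the logic of this purposive stage concrete, the screening decision can be sketched as a simple quota check. The targets, running counts, and field names below are hypothetical (the actual screening was carried out by the research team from questionnaire answers, not by software):

# Hypothetical recruitment targets and running counts for harder-to-reach strata.
targets = {"men": 150, "women": 150, "over_25": 110, "non_north_american": 160}
current = {"men": 120, "women": 150, "over_25": 70, "non_north_american": 120}

def screen(applicant):
    """Accept an applicant if they help fill at least one open quota."""
    strata = [applicant["gender"]]
    if applicant["age"] > 25:
        strata.append("over_25")
    if applicant["region"] not in ("Canada", "US"):
        strata.append("non_north_american")
    return any(current[s] < targets[s] for s in strata)

print(screen({"gender": "men", "age": 28, "region": "Lebanon"}))   # True
print(screen({"gender": "women", "age": 22, "region": "Canada"}))  # False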
The final sample includes 302 interviewees. Of these, 149 identified as women (two of them transgender), 148 identified as men (one of them transgender), and 5 identified as non-binary or gender fluid. In Table 1, I present some of the key descriptive statistics of the sample of interviewees. Interviewees came from 55 different countries (Bahrain, Bangladesh, Belgium, Bolivia, Brazil, Canada, Chile, China, The Democratic Republic of Congo, Costa Rica, The Czech Republic, Denmark, The Dominican Republic, Ecuador, France, Germany, Greece, Guatemala, India, Indonesia, Iran, Ireland, Israel, Japan, Kenya, Korea, Lebanon, Mauritius, Mexico, Moldova, Montenegro, Morocco, Nepal, The Netherlands, Nigeria, Pakistan, Peru, The Philippines, Romania, Russia, Saudi Arabia, Singapore, Slovenia, Sri Lanka, Switzerland, Syria, Thailand, Tunisia, Turkey, the United Kingdom, the United States, Venezuela, Vietnam, and Zimbabwe) and a wide range of geographical regions, including substantial representation for interviewees from Europe, South Asia, East Asia, and Latin America. Still, nearly half of the interviewees were raised in North America (Canada or the US).
The interviewee list also includes a relatively high share of younger people, as nearly two thirds of these interviewees were 25 years old or younger, with the overall average age around 24. Finally, students (about 60 percent of all interviewees) and individuals from relatively affluent socioeconomic backgrounds (86.8 percent) were also over-represented in the sample. Any generalizations should therefore be made with caution. Nevertheless, the study includes a diverse group of interviewees, and most importantly, the sample captures some of the most theoretically relevant features and characteristics that could influence viewers' preferences, including gender, ethnicity (about half of the interviewees identified as visible minorities according to North American standards), sexual orientation (nearly 30 percent sexual minorities), and relationship status (the sample is almost evenly distributed between those who are in a steady relationship and those who are not).
Procedure, Coding, and Analysis
Following approval from a university research ethics board, all interviews were conducted in either French or English by two highly skilled and well-trained graduate research assistants (a man and a woman). Both research assistants met with the project leader multiple times before beginning and while conducting the interviews, undergoing careful training and discussing and resolving various issues that came up during interviews. In an attempt to reduce social desirability bias and increase interviewees' sense of confidentiality, both the interviewers and the interviewees were encouraged to avoid revealing their real names or any specific identifying details, and all interviews were conducted via Skype audio (without video). Interviewees were furthermore assured that their real names would not be revealed, and all names appearing in the findings section are indeed pseudonyms. As a result, most interviewees appeared to be open about their preferences and views, even when these did not seem to conform with social conventions. Still, since interviewees were not provided complete anonymity, it is possible that at least some of them were not fully candid when discussing their views and preferences.
Interviews lasted between 30 and 120 min. They were recorded (with the consent of the interviewees) and subsequently transcribed, coded, and analyzed using an open coding strategy, which is useful in gaining a rich understanding of under-researched phenomena [48]. For the current study, I primarily analyzed interviewees' responses to two primary questions: (1) "How important for you is it to see the men reach an orgasm?" and (2) "Where would you prefer the men to ejaculate?" I derived the codes directly from the text, first identifying preliminary themes and then re-categorizing and combining them to form major themes. Interviewees were presented with some specific questions about their views and preferences regarding male ejaculations in pornography but were also given freedom to speak more broadly about other sexual experiences and preferences.
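The step from preliminary codes to major themes amounts to a re-mapping and tally. A minimal sketch of that bookkeeping (with invented code labels and an invented mapping; the actual coding was done manually on the full transcripts) could look like:

from collections import Counter

# Hypothetical preliminary codes assigned to answers to
# "Where would you prefer the men to ejaculate?"
preliminary_codes = [
    "no_preference", "vagina", "vagina", "body_visible",
    "face", "mouth", "no_preference", "vagina",
]

# Re-categorization of preliminary codes into major themes.
theme_map = {
    "no_preference": "no preference/unimportant",
    "vagina": "internal ejaculation",
    "body_visible": "visible ejaculation on body",
    "face": "face or mouth",
    "mouth": "face or mouth",
}

themes = Counter(theme_map[code] for code in preliminary_codes)
total = sum(themes.values())
for theme, n in themes.most_common():
    print(f"{theme}: {n} ({100 * n / total:.1f}%)")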
Findings
In Table 2, I present viewers' preferences for male ejaculation, overall and by various demographics. When asked about their preferences regarding male ejaculations, most of the interviewees either did not care/had no preference (26.6%) or they preferred to see male performers ejaculate inside the female performer's vagina (37.8% of all interviewees and 48.35% of the women in the sample). Others (15.4% of the interviewees) preferred the ejaculation to be on the partner's body parts, mentioning primarily the breast/chest, the stomach, the back, or the buttocks. Only about 17% of all interviewees said they preferred to see ejaculation on a woman's face (9.0%) or in her mouth (8.2%). I examine these trends below, noting variations by gender, sexual orientation, age, cultural diversity, and relationship status.
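Breakdowns of this kind amount to cross-tabulating coded preferences against demographic variables. Assuming the coded interviews were assembled into a table (the column names and values below are illustrative, not the study's actual data file), one way to produce such percentages is:

import pandas as pd

# Hypothetical coded data; one row per interviewee.
df = pd.DataFrame({
    "gender": ["woman", "man", "woman", "man", "woman", "man"],
    "preference": ["vagina", "no_preference", "vagina", "body",
                   "face", "no_preference"],
})

# Row-normalized percentages: the preference distribution within each gender.
table = pd.crosstab(df["gender"], df["preference"], normalize="index") * 100
print(table.round(1))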
"Just Keep It to Yourself": Male Ejaculation as Unimportant/Unappealing
About half of the interviewees in the sample did not find displays of male performers' ejaculation at all important or stimulating. Of note, this was more common among men, with 58.9% of the heterosexual men in the sample declaring that seeing a male orgasm was not important to them, compared with only 37.7% of heterosexual women who held this view. Some of these interviewees still had preferences for where the ejaculation was placed when it did occur, while others (26.9% of the heterosexual sample) simply did not care whether ejaculation occurred or where it was placed. This was especially true for men (31.1%) vs. women (22.0%), for younger interviewees (31.8%), and for interviewees from Europe (41.2%) and from East Asia (45.5%). In contrast, none of the interviewees from the Middle East reported being uninterested in where ejaculations were placed.
Many of the interviewees, particularly heterosexual men, told us that they mostly stopped watching before the male orgasm occurred and that if they did reach that point, they simply did not care about the form or placement of the male ejaculation. For example, Liam, 25, a heterosexual unemployed Canadian, told us: "I don't know; I'm not a big watch-until-the-end kind of guy, so no preference." Similarly, Ivan, 22, a heterosexual student from Russia, said: "I don't reach that point. So I don't care." Fady, 25, a heterosexual/asexual student from Lebanon, felt the same: "I don't care really if [the] guy doesn't ejaculate." Mathilda, 23, an unemployed bisexual student from France, also had "no preference. Vagina if anything. But I don't care if they do [reach an orgasm] honestly." Other interviewees felt even more strongly about male ejaculation, finding it unappealing or even revolting. Christine, 19, a bisexual student from France, said that she did not care much for the male orgasm and preferred vaginal ejaculation, "where I don't see it." Julian, 20, a queer man from Canada working at a customer call center, said that male performers should ejaculate "on themselves; just keep it to yourself." Claudia, 22, a non-binary gay student from Saudi Arabia, wanted male ejaculations to be "nowhere near me, so no preference." Miguel, 22, a heterosexual student from Costa Rica, was even more explicit: I think I'm weird in this one. I don't like jizz; I hate it! It's sticky and annoying; it's ugh! Why does it have to be so gooey? To me, since I lost virginity, I have post-sex cleanup. So the less of that, the better. So if the girlfriend swallows, it avoids mess.
"Somewhere Where It's Seen": Desire for Male Ejaculations to Be Visible
In contrast to the sentiment expressed in the previous section, other interviewees felt that male ejaculations were not only important, but also that they needed to be discernible. They therefore preferred the ejaculation to be made visible by being placed on a partner's body part, including sometimes their face. This sentiment was significantly more common among heterosexual women (27.5% of respondents) than among heterosexual men (20.9%), and was particularly prevalent among interviewees from the Middle East, with nearly half of all Middle Eastern interviewees preferring to see visible ejaculations on the partner's body or face.
Selena, 23, a bisexual educational assistant from Canada, said that she generally had "no preference, but not in the vagina. On face or body." Jamila, 20, a heterosexual student from India, similarly wanted to see male performers ejaculate "all over the face or on the body, cause that's more visible." Josh, 25, a heterosexual performer from Canada, preferred male ejaculation to be "on chest; somewhere where it's seen." James, 22, a heterosexual construction worker from the United Kingdom, agreed: "Nowhere in particular. On body though; I like to see it." Celeste, 23, a bisexual student from St. Martin, summarized this sentiment: I would say that any part of the body [is fine], just as long as it's outside. If he cums on the genital area, it's more arousing than like inside, where you don't see any of it, or on the face, where it's like, 'go wipe it off'.
"Hottest Is Inside": Internal Ejaculations as a Sign of Passion, Intimacy, and Authenticity
As mentioned above, the most common specific preference among interviewees (37.8% of all interviewees) was for male performers to ejaculate inside the female partner's vagina. This preference was especially prevalent among women, with nearly half of all women (46.2%, and 48.4% of heterosexual women) expressing a preference for vaginal ejaculation. South Asian (54.3%) and African (44.4%) interviewees also showed particular affinity for this option, but it was quite popular across all demographics (at least 30% for all demographic groups).
For some, primarily among male interviewees, the portrayal of vaginal ejaculations was favored because these indicated that the woman was willing to accept the male semen, implying that she was more engaged and involved in the sexual act and that there was some reciprocity and mutual respect. They therefore preferred ejaculations to be in "either the mouth or the vagina" (Antonio, 32, a heterosexual interior designer from Brazil, and Brandon, 24, a heterosexual student from Canada). Paul, 24, a heterosexual army officer from the United States, similarly thought that "hottest is inside if she's telling him to [do it]. That's the biggest plus." Like Paul, others also stressed that they wanted the ejaculation to be "wherever she wants it" (Rajesh, 21, a heterosexual student from Bangladesh, and Liana, 20, a sexually fluid student from Canada). For Jace, 25, a pansexual customer service representative from China, who also preferred vaginal ejaculations, this was a very important point: I prefer inside, but only if [the] woman wants it. I don't like it when [the] woman says 'don't cum in me', but [the] guy still does it. That's not respectful. It's men showing their macho. So it depends on [the] wish of the female [performer].
Others viewed internal ejaculations as a sign of passion, intimacy, and authenticity. Elijah, 22, a gay student from Canada, wanted to see ejaculations "inside the other person. It seems more like, genuine, it mirrors my experiences." Daria, 23, a "sexually questioning" student from Romania, also said that she preferred ejaculation to be "inside" because "it feels more personal. I feel less like, oh, I'm just watching two people doing this for money." Jessica, 22, a heterosexual part-time administrative worker from Canada, agreed, while also mentioning another common theme: the thrill of a potential impregnation: I think [ejaculating in the] vagina is the one I like most... Vagina is interesting because I always think if they'll get pregnant, and it [makes it] more sensual when they do. So that's more arousing with that. But otherwise, [I would prefer that they ejaculate on the] body, not on the face.
Of note, several interviewees who said that they preferred ejaculations to be in the sexual partner's mouth mentioned vaginal ejaculations as a close second. The reasoning given was that in both cases the female performer was accepting the male performer's semen into her. For example, Megan, 20, a bisexual student from Canada, said that she liked to see ejaculations in the mouth but "I also don't like it in porn when the girl opens her mouth and the cum drips out. If it's a swallow, I'd like it most, because it shows she's fully accepting him." Felix, 27, an unemployed heterosexual American of Chinese descent, similarly said that he liked ejaculations "in the mouth. When they swallow it and it goes down without an issue, it's a sign that they enjoy and accept it."
"As Long as It's Not on the Face": The Undesirability of Facial Ejaculations
As noted above, less than 10% of both the male and female interviewees in the sample indicated that they preferred to see the male performer ejaculate on the female performer's face. Moreover, when excluding non-heterosexual (that is, gay, bisexual, and queer) men from the sample (these were more likely to express an affinity for facial ejaculations, at 18.5%, as some other options like vaginal ejaculations were mostly not relevant), only 6.7% of the remaining interviewees favored facial ejaculations. Another interesting finding was that facial ejaculations were in fact slightly more popular among female heterosexual interviewees (8.8%) than among male heterosexual interviewees (5.3%). However, this difference was not statistically significant, and the large majority of both men and women were uninterested in this practice. Other groups that were relatively less likely to express interest in facial ejaculations included younger interviewees (only 5.5%, compared with 12.1% among older interviewees), interviewees who were in a relationship (7.6%), and interviewees from South and Central America (5.3%), from Europe (2.9%), and particularly from Africa, where none of the interviewees expressed a preference for this practice.
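Whether a gap such as 8.8% versus 5.3% could plausibly arise by chance can be checked with a two-proportion z-test. The counts below are assumptions chosen only to match the reported percentages at roughly plausible subgroup sizes (the exact denominators are not reported here):

from math import sqrt
from scipy.stats import norm

# Assumed counts: about 9 of 102 heterosexual women vs. 5 of 95 heterosexual men
# preferring facial ejaculations; illustrative, not the study's actual denominators.
x1, n1 = 9, 102
x2, n2 = 5, 95

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test
print(f"z = {z:.2f}, p = {p_value:.2f}")  # p is well above 0.05: not significant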
Beyond the fact that facial ejaculations did not come up as a popular option for most interviewees, they were also the one practice that often elicited a strong negative reaction. A large portion of the interviewees singled out "facial ejaculations" as an unwanted practice, emphasizing that they disliked it and did not wish to encounter it. Of note, most interviewees mentioned this aversion of their own volition, prior to any solicitation or explicit mention of the practice in the interview questions. Interviewees repeatedly used expressions such as "not on the face," "don't like face," "below the face," "as long as it's not on the face," and "anywhere other than the face or mouth" when discussing ejaculations by male performers. Some of them also referred to ejaculation on a partner's face as "gross," "weird," "demeaning," "degrading," "abusive," "unsanitary," and "uncomfortable," adding that they "never understood it." Viktoria, 36, a heterosexual student from Germany, summarized the sentiments of many viewers (both male and female): I mean, I don't like it when it's on the face or in the mouth. I often watch porn [to see] things I wouldn't do. But sometimes you put yourself in [the] place of [the] person. And I wouldn't enjoy it all over my face.
Of note, among those interviewees who did express an affinity for facial ejaculations, most did not view this as an aggressive or degrading practice. More often, they said that they preferred it due to the visibility of the ejaculation ("I'd like to see it"), with many of them noting that ejaculation on the body, which is also noticeable, would be their secondary preference. For example, Elodie, a bisexual student from Canada, said that she liked ejaculations to be "on the face. Also on the body. I just like it being shown versus inside." For some interviewees, facial ejaculations also signaled acceptance, showing that the female performer was not repulsed by her male partner's semen. For example, Cameron, 22, a heterosexual Masters' student from the United States, said that he likes facials, particularly when "the girl is kind of asking for it versus other videos where she isn't. Maybe because that shows she's in to it and accepts him." Henrietta, 24, a Black lesbian student from Canada, who also expressed an affinity for facial ejaculation in pornography, similarly thought that these could be interpreted in different ways: "It depends on the person receiving and how they interpret [the facial ejaculation]. Are they able to breathe and talk? It depends on what you hear from the person."
Discussion and Conclusions
In this paper I examined the preferences of both men and women regarding male ejaculation in pornographic videos.I found that a relatively large portion of interviewees did not deem the male orgasm as important, and thus many of them had no clear preference about where ejaculations would be placed or in fact whether or not ejaculations would even be shown.Among those interviewees who did express a preference, ejaculations inside the female partner's vagina were clearly the most popular option, with nearly half of the women and about one third of the men showing a preference for this option and many others mentioning it as their secondary option.In contrast, ejaculations on the partner's face or in their mouth were much less popular, particularly among heterosexual interviewees.
Notwithstanding these general tendencies, sub-group analyses showed some interesting variability among interviewees, particularly by gender and by culture (as measured by region of residence).One notable difference between men and women was that in comparison to heterosexual male interviewees, heterosexual female interviewees were significantly more likely to see depictions of male orgasms in pornography as important.This finding challenges the perception that the fetishization of "the money shot" and the desire to watch it in pornography is almost entirely led by male viewers.In contrast, it is in line with recent studies showing that most women find it important that their male partner ejaculates during intercourse [49] and that the male orgasm is very important for women, as they often see it as an essential aspect of normal heterosexual sex and a confirmation of their own sexual appeal and attractiveness [10].
Findings regarding cultural variability in ejaculation preferences are also interesting to explore. Perhaps most notably, while ejaculations inside a female performer's vagina were generally quite popular among all groups of interviewees, this preference was especially salient among South Asian and African interviewees. One possible explanation for this preference may be the importance that these cultures ascribe to male semen and the common belief that spilling semen in vain weakens men [50,51]. These beliefs, in turn, may drive a desire to avoid wasteful spilling of semen, noting that vaginal ejaculations at least hold some potential to produce meaningful results in the form of impregnation.
In contrast, visible ejaculations (ejaculations on the female performer's face or body) were especially popular among Middle Eastern interviewees. This finding is particularly interesting given the fact that nearly 80% of all Middle Eastern interviewees in the sample identified as heterosexual men and that heterosexual men in general were actually significantly less likely than heterosexual women to express a preference for such visible ejaculations. Of note, this difference was not due to an aversion to vaginal ejaculations, as Middle Eastern interviewees' preference for vaginal ejaculations was on par with that of interviewees from Europe and the Americas. Instead, it is mostly due to the fact that all Middle Eastern interviewees expressed a clear preference regarding ejaculations, with none of them saying that they did not care about the location of such ejaculations (compare this with more than 40% of European and East Asian interviewees who said that they did not really care). Clearly, Middle Eastern cultures ascribe great importance to the male orgasm, and these viewers wanted visible proof that it occurred.
Turning to the more general implications of this study's findings, it is interesting to note that these results are somewhat counterintuitive given the extant literature on pornography and male ejaculation. First, they are not compatible with content analyses of popular mainstream pornographic videos. These analyses reported a relatively high prevalence of facial ejaculations, ranging from 24.3% of all videos [22] to 45% [20], while ejaculations in a partner's mouth were reported to be even more prevalent, ranging from 35.7% [22] to 58.6% [21]. Viewers, however, seem much less partial to these acts. Fewer than 15% of heterosexual viewers said that they preferred either of these options. Moreover, facial ejaculations were the one practice that many of the interviewees singled out, often unsolicited, as it elicited a strong negative reaction and a clear desire to avoid seeing it. Of note, this aversion was by no means limited to female viewers. In fact, heterosexual female viewers were somewhat more likely than heterosexual male viewers to express a preference for facial ejaculations, although interviewees who expressed this preference were a small minority among both men and women.
Second, the low popularity of facial ejaculations and the frequent overt rejection of this practice do not support arguments by radical feminist scholars suggesting that viewers, particularly male viewers, largely enjoy and seek the degradation of women in pornography [26-28,31,32]. These scholars concluded that since aggressive and degrading practices are common in mainstream pornography, they must be popular among male viewers. Instead, there seems to be a disconnect between the low popularity of content that is often perceived as degrading and humiliating to women (primarily ejaculation on a performer's face or in her mouth) and the relatively high frequency of these acts in pornographic videos and films. This disparity seems to suggest that producers, directors, and more generally industry conventions about the preferred pornographic script may be out of touch with what most viewers actually wish to watch. Furthermore, even those interviewees who did express an affinity for facial ejaculations mostly did not perceive this as a degrading or humiliating act. Instead, many of them explained this preference with the desire for the ejaculation to be visible.
Somewhat similarly, ejaculations in a female performer's mouth were also not very popular among viewers, with female viewers finding these particularly unattractive. Here, even more than with facial ejaculations, one could see viewers' perception that this is not a degrading practice. Instead, most interviewees, whether they wanted to see it or not, perceived it as an act of acceptance by the female performer, as long as she was consenting and not visibly repulsed by it. Some, particularly those who found male semen unattractive, even saw mouth ejaculations as a relatively elegant way (when the female performer swallows the semen) to avoid watching the semen altogether. Of note, there was substantial cultural diversity among viewers regarding both facial and mouth ejaculations. These two related practices were most popular among North American viewers, with nearly a quarter of them expressing a preference for one of the two. However, among interviewees from Europe and from Africa, nearly no one expressed a preference for either of these practices. These differences may be a result of varying cultural sexual norms and/or of greater exposure to pornographic industries that might be less likely to feature these practices. Regardless, they demonstrate that there is nothing "natural" about such sexual fantasies and preferences.
Finally, the most popular ejaculation option among interviewees was inside the female partner's vagina. Viewers spoke about different reasons for this preference. Male viewers often emphasized the feeling that this meant the female performer accepted the male semen (rather than being repulsed by it), especially when she was encouraging the male performer to ejaculate inside her, and interpreted this as her being more engaged in the sexual act. Female viewers (as well as some men) who expressed particular affinity for vaginal ejaculations often felt that this act signified intimacy, passion, sensuality, and authenticity. It showed that both sexual partners were fully engaged in the act and that they both cared about each other's feelings and enjoyed the sex rather than just performing for payment.
Funding: This research received no external funding.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of McGill University (392-0219).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Table 1. Descriptive statistics of the sample.
Imaging and Quantitative Analysis of the Interstitial Space in the Caudate Nucleus in a Rotenone-Induced Rat Model of Parkinson’s Disease Using Tracer-based MRI
Parkinson’s disease (PD) is characterized by pathological changes within several deep structures of the brain, including the substantia nigra and caudate nucleus. However, changes in interstitial fluid (ISF) flow and the microstructure of the interstitial space (ISS) in the caudate nucleus in PD have not been reported. In this study, we used tracer-based magnetic resonance imaging (MRI) to quantitatively investigate the alterations in ISS and visualize ISF flow in the caudate nucleus in a rotenone-induced rat model of PD treated with and without madopar. In the rotenone-induced rat model, the ISF flow was slowed and the tortuosity of the ISS was significantly decreased. Administration of madopar partially prevented these changes of ISS and ISF. Therefore, our data suggest that tracer-based MRI can be used to monitor the parameters related to ISF flow and ISS microstructure. It is a promising technique to investigate the microstructure and functional changes in the deep brain regions of PD.
The interstitial space (ISS) within the brain provides an immediate accommodation space for neural cells and contributes to the physiological and functional homeostasis of the brain, where the flow of interstitial fluid (ISF) is important for nutrient supply, waste removal and intercellular communication [1]. Parkinson's disease (PD) is the primary neurodegenerative disease of the basal ganglia and is characterized by neurotransmitter alterations in the caudate nucleus and selective loss of dopaminergic neurons in the substantia nigra [2][3][4]. Studies have demonstrated microstructural changes in the ISS of PD [5,6]. However, changes in ISF flow within the caudate nucleus of PD have not been reported. The caudate nucleus is vulnerable to the effects of PD and is the target region of several promising therapeutic strategies [7,8]. Therefore, investigating the changes in ISS and ISF flow in the caudate nucleus is necessary to comprehensively understand the mechanisms underlying pathogenesis of PD and optimize the efficacy of therapeutic strategies. The width of the ISS ranges from 38-64 nm, which, therefore, is challenging to image in vivo [9]. To our knowledge, the tracer-based MRI technique is a unique in vivo method that can measure both the microstructure of ISS and ISF flow in deep brain regions [10,11]. In tracer-based MRI, the tracer is introduced into the ISS of the target region, and the radiofrequency signal is assessed using MRI. The tracer concentration is calculated by the signal intensity of the images, and the parameters of the microstructure and flow can be calculated according to the diffusion equation. In the present study, we investigated the changes of ISS in the caudate nucleus in a rotenone-induced model of PD, with or without madopar treatment, using tracer-based MRI.
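For reference, the diffusion equation used in such tracer studies is commonly written in Nicholson's form; the exact formulation used by the authors is not reproduced in this excerpt, so the following is a standard-form sketch rather than the paper's own equation:

\[
\frac{\partial C}{\partial t} = D^{*}\,\nabla^{2}C - k'C + \frac{Q}{\alpha},
\]

where \(C\) is the tracer concentration, \(D^{*}\) the effective diffusion coefficient in the ISS, \(k'\) the clearance (nonspecific uptake) rate constant, \(Q\) a source term for the infused tracer, and \(\alpha\) the ISS volume fraction.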
Rotenone-induced injury, madopar treatment rat model and behavioral testing
Thirty 8-week-old male Sprague-Dawley rats (280-320 g) were randomly divided into three groups (n = 10 each): (a) a rotenone-induced PD group that received daily subcutaneous injections of rotenone solution (1.5 mg/kg/day) [12], (b) a madopar-treated group that received both rotenone injections and intragastric administration of madopar (50 mg/kg/day), and (c) a sham group that received subcutaneous injections of saline. All rats were administered the above treatments for 4 weeks. Motor impairment in the rotenone-induced group was verified using the hanging-wire and inclined plane tests [13,14]. All animal experimental procedures were reviewed and approved by the Peking University Institutional Animal Care and Use Committee and the Peking University Committee on Animal Care (No. LA2012-016).
[Figure 1. The average hanging time in the hanging-wire test and the average inclination angle in the inclined plane test four weeks after rotenone-induced damage and madopar treatment. Both the hanging time and the inclination angle were significantly decreased in the rotenone-induced group compared to the other two groups. Data are the mean ± SEM (n = 10). One-way ANOVA and SNK tests were performed; * represents P < 0.05.]
Tracer-based MRI technique
The tracer-based MRI technique was performed according to the protocol previously described by Han [10,15]. Rats were anesthetized with a combination of pentobarbital sodium, ethanol, chloral hydrate, magnesium sulfate and propylene glycol (3 ml/kg) via intraperitoneal injection. Anesthesia was subsequently maintained with additional injections over the course of the experiment (approximately 0.7 ml/kg/h). The rats were fixed in a stereotactic apparatus, an incision was made in the scalp along the sagittal suture, and the bregma was exposed. A small trephine hole was made in the skull according to the stereotactic coordinates of the caudate nucleus (bregma: +1.0 mm, lateral: 3.5 mm, vertical: 5.0 mm) [16]. 2 μl of 10 mmol/L gadolinium-diethylenetriamine pentaacetic acid (Gd-DTPA) was injected into the caudate nucleus of each rat by a syringe pump (rate: 0.2 μl/min). After the injection, the needle was left in place for an additional 5 min and then slowly withdrawn. MR scanning was performed in a 3.0-Tesla MRI system (Magnetom Trio; Siemens Medical Solutions, Germany) with an eight-channel wrist coil using T1 3D MPRAGE sequences. MR scanning was performed sequentially pre-injection and post-injection (0.5, 1, 1.5, 2, 3, 4, 5, 6, 7 and 8 h). The scanning parameters were as follows: repetition time = 1500 ms, echo time = 3.7 ms, flip angle = 12°, inversion time = 900 ms, field of view = 267 mm, voxel = 0.5 × 0.5 × 0.5 mm³, matrix = 512 × 512 and acquisition time = 290 s. The axial images of the same anatomical site from different time points were imported into the processing workstation for quantitative analysis. The increment in the signal intensity of all of the pixels in the image was recorded after registration and subtraction processing using Matlab (MathWorks, Inc., Natick, MA) and converted to the concentration of Gd-DTPA [17]. We extracted the parameters related to the microstructure (effective diffusion parameter D* and tortuosity λ) and the clearance of Gd-DTPA (clearance rate constant k' and half-life t1/2). D* represents the diffusion ability of substances in the ISS and is less than the free diffusion parameter (D) because of the diffusion barriers in the ISS. λ is related to the microstructure of the ISS and can be calculated by the equation λ = √(D/D*). k' refers to nonspecific uptake and represents the loss of Gd-DTPA due to cell metabolism. t1/2 is the amount of time needed for the concentration of Gd-DTPA to decrease by half compared to its initial value. Almost all Gd-DTPA is cleared from the ISS through the paravascular spaces surrounding large draining veins and is drained into the cerebrospinal fluid and the lymphatic system [18]. Therefore, t1/2 is primarily related to the diffusion and bulk flow of ISF.
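To make the parameter definitions concrete, the following minimal Python sketch computes the tortuosity from the two diffusion coefficients and, under a purely illustrative first-order decay assumption, a half-life from a clearance constant. The free-diffusion value D and the use of ln(2)/k' for t1/2 are assumptions for illustration only; in the study itself, t1/2 is measured directly from the concentration decay and reflects diffusion and bulk flow, not just the uptake constant k'.

```python
import math

# D is the free diffusion coefficient of Gd-DTPA; the value below is a
# placeholder, not a number reported in this study. D_star is the
# effective coefficient in tissue (here, the sham-group estimate from
# the Results section).
D = 6.0e-4        # mm^2/s, hypothetical free diffusion coefficient
D_star = 2.77e-4  # mm^2/s, sham-group D* quoted in Results

# Tortuosity as defined in the text: lambda = sqrt(D / D*)
lam = math.sqrt(D / D_star)

# Illustrative only: if clearance were a simple first-order process,
# the half-life would follow from k'. The paper instead measures t1/2
# directly from the concentration decay, so this is a simplification.
k_prime = 0.648e-4  # 1/s, sham-group clearance rate constant from Results
t_half_min = math.log(2) / k_prime / 60.0  # seconds -> minutes

print(f"tortuosity lambda = {lam:.2f}")
print(f"first-order half-life = {t_half_min:.0f} min")
```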
Statistical analysis
One-way analysis of variance (ANOVA) followed by Student-Newman-Keuls (SNK) test was performed for comparisons among the groups with SPSS 18.0. Statistical significance was set a priori at 0.05.
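As an illustration of this analysis pipeline, the sketch below runs a one-way ANOVA on synthetic group data shaped like the hanging-wire results. SNK is not available in scipy or statsmodels, so Tukey's HSD is used as a common stand-in post-hoc test; all numbers are placeholders generated to match the quoted group means and SDs, not the study's raw data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical hanging-time data (s) for the three groups (n = 10 each);
# means/SDs loosely follow the values quoted in the Results section.
rng = np.random.default_rng(0)
sham = rng.normal(8.16, 0.91, 10)
madopar = rng.normal(7.81, 0.85, 10)
rotenone = rng.normal(5.97, 1.07, 10)

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(sham, madopar, rotenone)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise post-hoc comparisons (Tukey HSD as a stand-in for SNK)
values = np.concatenate([sham, madopar, rotenone])
labels = ["sham"] * 10 + ["madopar"] * 10 + ["rotenone"] * 10
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```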
RESULTS
Compared with the sham group (8.16 ± 0.91 s) and the madopar-treated group (7.81 ± 0.85 s), the average hanging time in the rotenone-induced model group was significantly decreased (5.97 ± 1.07 s) in the hanging-wire test (P < 0.05). In addition, compared with the sham group (43.1 ± 3.8°) and the madopar-treated group (40.7 ± 3.8°), the average inclination angle in the inclined plane test was significantly decreased in the rotenone-induced group (34.5 ± 3.9°, P < 0.05). There were no significant differences between sham group and the madopar-treated group in either the hanging-wire or the inclined plane test (Fig. 1).
In the sham and madopar-treated groups, the maximum spreading region was reached at 1 h after the introduction of Gd-DTPA, and the Gd-DTPA had almost completely disappeared by 4 h. In the rotenone-induced group, the maximum spreading region was observed 3 h after the introduction of Gd-DTPA, and the tracer was eliminated by 8 h. The maximum spreading region was not significantly different among the three groups (P > 0.05) (Fig. 2). In this study, we measured parameters related to the microstructure of the ISS (D*, λ) and the clearance of Gd-DTPA (k', t1/2). Compared with the sham group, D* was significantly increased (5.828 ± 0.727 vs 2.770 ± 0.506 × 10⁻⁴ mm²/s, P < 0.05), and λ was significantly decreased (0.916 ± 0.209 vs 1.560 ± 0.320, P < 0.05) in the rotenone-induced group. However, compared with the sham group, the amplitude of variation was smaller in the madopar-treated group (D* = 3.645 ± 0.451 × 10⁻⁴ mm²/s, λ = 1.157 ± 0.128, P > 0.05). Moreover, a significant difference was found between the rotenone-induced group and the madopar-treated group (P < 0.05).
The results showed that k' in the rotenone-induced group was significantly decreased (k' = 0.333 ± 0.093 × 10⁻⁴/s) and t½ was significantly prolonged (t½ = 156.6 ± 10.2 min) in comparison with the sham group (k' = 0.648 ± 0.082 × 10⁻⁴/s, t½ = 114.6 ± 8.7 min) and the madopar-treated group (k' = 0.500 ± 0.070 × 10⁻⁴/s, t½ = 128.8 ± 7.1 min) (Fig. 3).
[Figure 3. (A) Compared to the sham group, the effective diffusion parameter (D*) was significantly increased in the rotenone-induced group; no differences were observed between the sham group and the madopar-treated group. (B) Compared to the sham group, tortuosity (λ) was significantly decreased in the rotenone-induced group; no differences were observed between the sham group and the madopar-treated group. (C) Compared to the sham group, the clearance rate constant (k') was significantly decreased in the rotenone-induced group; no differences were observed between the sham group and the madopar-treated group. (D) Compared to the sham group, the half-life (t½) was significantly prolonged in the rotenone-induced group; no differences were observed between the sham group and the madopar-treated group. Data are the mean ± SEM (n = 10); SNK test was performed. * represents P < 0.05.]
DISCUSSION
In this study, using tracer-based MRI allowed us for the first time to visualize the changes in ISF flow in the caudate nucleus in the rotenone-induced rat model of PD. We demonstrated that the ISF flow slowed and that the tortuosity of the ISS significantly decreased in the rat model. Madopar administration partially prevented these changes of ISS and ISF.
Tracer-based MRI was used in the current study, because this technique has the ability to image dynamic ISF flow in the brain from a global view. To date, three techniques have been developed to investigate the ISS in vivo, including real-time iontophoresis (RTI), integrative optical imaging (IOI) and tracer-based MRI. RTI can accurately evaluate the diffusion parameters of the ISS across a distance of approximately 100-200 µm, but it cannot visualize ISF flow. Similar to tracer-based MRI, IOI can visualize ISF flow using a fluorescent tracer to calculate the parameters. However, IOI is limited to superficial (200 µm) brain regions and cannot be used to investigate the ISS of deep brain regions, where the structure produces too much light scattering [9,19]. Tracer-based MRI utilizes a magnetic tracer (Gd-DTPA), which can shorten the spin-lattice relaxation time of hydrogen nuclei in water molecules within an effective distance of 2.5 angstroms. This extracellular probe increases the signal intensity of MRI.
According to our findings, the microstructural changes in the ISS measured using tracer-based MRI are consistent with previous reports using RTI [6]. Furthermore, we have verified that there are biophysical changes in the ISS of the deep brain nuclei in PD. Importantly, our recent report using a 6-hydroxydopamine-induced rat model of PD demonstrated that there was decreased tortuosity and a reduced clearance rate in the ISS of the substantia nigra [5]. The changes were partially mediated by dopaminergic neuron loss and reactive astrogliosis. In the current study, we demonstrated similar changes in the ISS of the caudate nucleus using a rotenone-induced rat model of PD. However, unlike the pathological changes in the substantia nigra, no extensive cell loss or reactive astrogliosis has been observed in the caudate nucleus in PD [13,20]. We hypothesize that the changes in the ISS of the caudate nucleus may be caused by the degeneration of dopaminergic axons and the up-regulation of spontaneous neuronal discharge activity [13,21]. The boundary structure of the ISS, which is made of the cell membrane and extracellular matrix, hinders the diffusion of substances within the ISS. Moreover, unpublished work from our lab showed that myelinated fibers act as a barrier to ISF flow. Axon degeneration can alter brain structures and influence diffusion parameters. Additionally, in a recent study, we demonstrated that functional neuronal discharge activity in normal rats can slow ISF flow and reduce the clearance of Gd-DTPA [22].
In the future, application of this method will provide new insight into the mechanisms of PD. For example, this technique may be used to answer the following question: "Where does PD begin at the cellular level, in the neuronal soma of the substantia nigra or at the axonal terminals in the striatum?" Moreover, current treatment of PD is not clinically satisfactory, and pharmaceutical drugs offer only time-limited symptomatic relief, becoming less effective as the disease progresses [23]. Emerging ISS administrations that involve gene therapies and cell transplantation have shown promise in the treatment of PD [24,25]. ISS administration provides a number of advantages over conventional administration, including bypass of the blood-brain barrier, minor systemic toxicity, and enhanced efficacy [26]. Real-time monitoring and quantitative analysis of the changes in the ISS are essential both to optimizing these treatment strategies and to evaluating their efficacy. Our results validated the practicality and sensitivity of tracer-based MRI for monitoring the changes in the ISS of the deep brain nuclei of treated or untreated PD.
It is important to note that we report only a novel application of tracer-based MRI in this study; future pathological and histological investigations are required.
Druggable targets for Parkinson’s disease: An overview
Parkinson's disease (PD) is one of the most crippling conditions affecting the brain, and its progression causes neurodegeneration. The disease is characterized by the accumulation of α-synuclein in Lewy bodies and the loss of dopaminergic neurons in the substantia nigra, ultimately causing a reduced ability to perform voluntary movements. The main symptoms of PD include tremor, bradykinesia and rigidity. Although various symptomatic treatment options are available targeting both motor and non-motor signs, none of them claims to improve the quality of life of PD patients. Recent studies have identified targets for PD such as glutamate receptors, α-Syn, c-Abl, molecular chaperones, GPR109A and metals, and some drugs targeting them are already on the market. The effectiveness of these pharmacological targets in treating PD has to be confirmed by larger-scale trials. Effective PD therapy may also target pathways mediated by autophagy. Gene therapy and gene editing have strong therapeutic effects and provide fresh PD medication targets. Additionally, the therapy of PD is more effective when a multi-target approach is used. Further research should be conducted to validate and explore new targets for the treatment of PD.
Introduction
Parkinson's disease (PD) is a neurodegenerative disease that belongs to the synucleinopathies (a class of neurodegenerative conditions marked by an aberrant buildup of soluble α-synuclein in glial and neuronal cells). It develops gradually, and there is not yet a good technique for early detection and treatment. PD is a brain disorder that causes uncontrollable or unintended movements, such as stiffness, shaking and difficulty with balance and coordination. Over time, the disease progressively worsens. Many people have difficulty walking and talking as the disease progresses. Changes in mental behavior, depression, sleep problems, fatigue and memory difficulties are common problems in PD. PD mostly occurs at an advanced age, and some research also shows that it is more common in elderly men than in elderly women. PD can be inherited or can arise from genetic mutations. Parkinson's signs and symptoms can include tremor, bradykinesia, tight muscles, poor posture and balance, loss of automatic movements, and changes in speech and writing. The concept of druggable targets has been used to assess the potential for pharmacological activity on a novel, predicted protein from the genome. GPCR families, protein kinases, and several enzymes are notable instances of this [1,2].
Prevalence
There are 1-2 cases of PD for every 1000 people; however, PD prevalence rises with age and affects 1% of those over the age of 60. More people worldwide are becoming disabled and dying from Parkinson's disease (PD) than from any other neurological condition. In the last 25 years, PD prevalence has doubled. According to 2019 estimates, there were approximately 8.5 million people worldwide with PD. PD is estimated to have caused 329,000 deaths in 2019, a rise of over 100% since 2000, and 5.8 million disability-adjusted life years, an increase of 81% since 2000. The rising estimates of PD prevalence highlight the growing personal and societal burden and the urgent need for action to tackle this difficult disease [3].
Symptoms
Lewy bodies containing α-Syn and dopaminergic neuron loss in the substantia nigra, which manifests as lessened facilitation of voluntary movements, are the primary neuropathological findings. As PD worsens, Lewy body disease spreads to the cortex and neocortex. The three primary signs of Parkinson's disease are tremor, rigidity, and bradykinesia. Postural instability is no longer included as a fourth characteristic in the diagnostic criteria, which also describe supporting criteria, absolute exclusion criteria, and red flags [4].
In PD, non-motor symptoms are receiving more attention, and both motor and non-motor symptoms are now considered supportive criteria. In most cases, the cause of PD is unknown. There are known genetic risk factors, including uncommon monogenetic causes in unselected populations. In 5-10% of patients, a genetic component can be detected. Several environmental factors are linked to an increased risk of PD. According to post-mortem studies, a significant number of people do not have their clinical Parkinson's disease diagnosis verified at autopsy. The accuracy of the clinician's diagnosis of PD is anticipated to increase with the revised diagnostic criteria. In the near future, it is likely that growing awareness of the genetic and environmental PD risk factors will reveal the disease's underlying cause [4].
Drug Targets for PD
Glutamate Receptors
Glutamate receptors control neural transmission in the basal ganglia. This property makes them candidate targets for PD treatment. By postponing the neurodegenerative processes, compounds acting on these receptors can slow the progression of PD. α-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors play a role in neuroprotection; additionally, levodopa-induced dyskinesias (uncontrolled, involuntary movements of the face, arms or legs) can be effectively treated with the AMPA antagonist perampanel. Pharmacological modulation of metabotropic glutamate receptors (mGluRs) can regulate neurotransmission and help delay PD. Among drugs targeting glutamate receptors, antagonists of mGluR5 can treat motor dysfunctions, while activators of group II mGluRs and of mGluR4 can prevent neurodegeneration and thus help delay the progression of PD [5,6].
Alpha-synuclein (α-Syn)
The α-Syn protein is usually insoluble in blood, and in PD it accumulates and enhances disease progression. The α-Syn protein is encoded by the SNCA gene. It has been observed that this accumulation ultimately causes Lewy body disease (a disease associated with abnormal deposits of a protein). The precise mechanism by which the α-Syn protein undergoes point mutations, triplications, and duplications in PD is still unknown. Oligomerization (a chemical process that, through a limited amount of polymerization, transforms monomers into macromolecular complexes) of α-Syn forms toxic aggregates that spread from one cell to another. Four ways of preventing the development of the α-Syn protein's harmful effects have been identified and documented so far: reducing α-Syn aggregation, boosting its clearance, limiting its multiplication, and stabilizing its current conformation. The aggregation of the α-Syn protein has been recorded to extend across cell types and cell populations in the brain, and two key mechanisms, autophagy and the ubiquitin-proteasome system (UPS), have been implicated in reaching these different types and populations of brain cells. The UPS degrades proteins and thus can also degrade the α-Syn protein, thereby influencing PD pathogenesis. Using both innate and adaptive immunity, the clearance of pathogenic aggregates can be increased by reducing inflammation brought on by the α-Syn protein and by proteotoxic processes [10-14].
Several α-Syn-related methods to slow the progression of PD have been proposed: aggregation of α-Syn can be suppressed by immunotherapy; motor impairments can be restored by preventing the formation of axonal α-Syn; antibodies that prevent C-terminal truncation can reduce cell-to-cell proliferation of the α-Syn protein; and oxidation and nitration of α-Syn can degrade its aggregates and also block oligomerization. Some of these methods are still in the validation process, but all of the above approaches have already shown reduced aggregation of the α-Syn protein in PD patients [15].
Gene Therapy
Gene therapy with disease-modifying and non-disease-modifying transgenes has revealed convincing results for PD treatment in both animals and humans. Some of the factors that halt the progression of PD at the preclinical level are cerebral dopamine neurotrophic factor, glial cell line-derived neurotrophic factor, brain-derived neurotrophic factor, and neurturin. Mutations in mitochondrial genes are also implicated in the development of Parkinson's disease. A potential gene treatment for PD involves targeting certain mitochondrial genes, such as Parkin and Pink1, whose mutation results in decreased activity of the electron transport system, abnormal mitochondrial dynamics, impaired mitochondrial permeability, and altered membrane potential. Clinical studies have also demonstrated an association between α-Syn buildup and decreased miR-7 levels. Gene therapy that substitutes for miR-7 activity also slows the onset of PD [16-19].
Gene Editing
The clustered regularly interspaced short palindromic repeats (CRISPR) technology is particularly useful for discovering fresh PD research pathways and gene-gene interactions. In addition to greatly reducing oxidative stress and the neuroinflammatory load, the CRISPR-Cas9 gene editing method also significantly reduced the progression of Parkinson's disease. CRISPR/Cas9-mediated gene editing is a valuable method for identifying and tracking dopaminergic neuronal defects, and it may also be useful for creating knockout cell lines that can be used to study the illness more thoroughly. Therefore, gene editing might also carry potential treatment for PD [20-23].
Metals
Researchers have discovered connections between metals and PD [24-26]. On the one hand, because they can cause neuronal death through oxidative stress, metals, particularly heavy metals, are typically viewed as neurotoxins [27]. For instance, both copper and iron can lead to oxidative stress and harm neurocytes; both the peripheral and the central nervous systems experience substantial swelling and neuronal death as a result of lead exposure; PD patients have considerably more aluminum than controls in their substantia nigra; and cerium has been shown to have a detrimental impact on DNA methylation, i.e., cerium is likely to contribute to PD [28]. Cerium oxide nanoparticles (CeO2 NPs), a different cerium compound, have shown promising results and may be able to treat several neurological illnesses, including Parkinson's disease [29]. The pathophysiology of various metals is reasonably well understood [30]. On the other hand, current research has revealed that metals can control epigenetics in Parkinson's disease. Understanding the functions that metals play in the epigenetics of the disease may aid in finding a cure for PD [31]. A larger-scale trial is required to confirm whether these pharmaceutical targets are useful in treating PD. Effective PD therapy may also target pathways mediated by autophagy. Gene therapy and gene editing have strong therapeutic effects and provide fresh PD medication targets. Additionally, the therapy of PD is more effective when a multi-target approach is used. Further research should be conducted to validate and explore new targets for the treatment of PD.
[3] https://www.who.
Multiple α-Syn oligomers cause damage to specific areas of the brain in PD [7-9].
Table 1. Some heavy metals and their role in PD with their respective targets [2
The Relationship between Renewable Energy and Economic Growth in a Time of Covid-19: A Machine Learning Experiment on the Brazilian Economy
This paper examines the relationship between renewable energy consumption and economic growth in Brazil, in the Covid-19 pandemic. Using an Artificial Neural Networks (ANNs) experiment in Machine Learning, we tried to verify if a more intensive use of renewable energy could generate a positive GDP acceleration in Brazil. This acceleration could offset the harmful effects of the Covid-19 global pandemic. Empirical findings show that an ever-greater use of renewable energies may sustain the economic growth process. In fact, through a model of ANNs, we highlighted how an increasing consumption of renewable energies triggers an acceleration of the GDP compared to other energy variables considered in the model.
Introduction
The world has been facing an unprecedented humanitarian, social, and economic crisis since February 2020 due to the Covid-19 pandemic. The Organisation for Economic Cooperation and Development (OECD) is warning the entire international economic system about the negative impact that the Coronavirus will have on the world. The estimates made so far exceed the worst economic forecasts. Therefore, the OECD recommends urgent economic and fiscal policy measures. As early as March 2020, the OECD had released a report estimating that the Covid-19 crisis could halve the growth of the world economy in 2020. Subsequently, the adverse effects of the pandemic could go on until 2022. Now, as the weeks go by, the scenario is getting worse. Therefore, the economic emergency of Covid-19 requires targeted economic policy intervention by all countries. This pandemic is the third major economic, financial, and social "shock" of the 21st century, following the attacks of 11 September 2001, and the global economic-financial crisis of 2008. Among the adverse effects of the crisis, the collapse of production in all countries affected by the pandemic could occur. This situation would generate damage to global value chains, with adverse outcomes on consumption and consumer confidence. Furthermore, although the severe measures being implemented are essential to contain the virus, the situation is pushing economies into an unprecedented deep-freeze state, from which the recovery will not be direct or automatic. Thus, in addition to taking action to minimize the loss of human lives, a coordinated effort against this new great economic crisis is also a priority, one which will continue even when the worst of the health crisis has passed.
On a theoretical level, we can say that this pandemic will affect world economies through three different channels: (1) mortality, which affects production, as it permanently removes some people from the workforce; (2) illness, hospitalization, and absenteeism, through which production is temporarily penalized; (3) efforts to avoid contagion, as people change their behavior in the event of an epidemic, with quarantine preventing travel to/from infected regions and reducing the consumption of services, e.g., restaurants, tourism, entertainment, public transport, and offline purchases. This state energy research company provides support to the Brazilian Ministry of Mines and Energy (BMME).
As regards wind energy, Brazil is the most promising market for wind energy in the Latin American region. In recent years, wind energy has become an increasingly essential component of the national electricity grid. During the energy shortage of 2015, 10% of the energy came from wind farms, helping to offset the costs of turning on thermal plants. Its efficient local production chain produced most of the equipment and machinery used by wind farms in the country. Moreover, 2017 was a positive year for the global wind industry, with annual installations over 52 GW. Brazil returned to the Latin American markets, installing more than 2 GW, compared to 116 MW in Chile, and 295 MW in Uruguay (the only other Latin American countries with more than 1 GW of installed capacity). At the end of 2017, only nine countries had more than 10 GW of cumulative installed capacity, and Brazil was among them. In 2018, the installed capacity reached 14 GW, which corresponds to 8% of the Brazilian electrical matrix. The sector employs over 190,000 people, supplies electricity to around 22 million homes per month, and reduces CO2 emissions by approximately 21 million tons per year. With nearly 8000 km of coastline, Brazil has enormous potential for wind power generation, particularly on the Northeastern coast, where there is wind all year round, and many wind farms have already been built.
Conversely, solar energy is a very young industry in Brazil. It started only after the source was included in the electricity auctions (the first took place in 2013). The volume contracted since then is almost 4 GW. The installed capacity in photovoltaic systems was only 90 MW in early 2017 but has already reached 1 GW. This result placed Brazil in the top 30 globally. According to the report on solar energy, Brazil ranks ninth among the solar additions in the period 2016 to 2020. The best scenario points to 9.5 GW of additional volume. Brazil's share of photovoltaic power plants is expected to grow significantly in the future as incentives drive investment and technology prices drop. Today, solar parks generate only 0.01% of national energy. By 2026, the government plans to increase solar energy production to 9660 MW, resulting in 4.5% of the total matrix (considering only the utility scale). The federal program "Luz para Todos" was created to guarantee universal access to electricity, having benefited more than 16 million people in a decade. The program provided the installation of photovoltaic panels in isolated communities and is a big buyer of this technology.
By contrast, concerning biofuels, Brazil is an ideal market for this energy resource, and it is still a world leader in the sector. Brazil is the second-largest producer of this type of clean fuel in the world. According to the Brazilian National Oil, Natural Gas and Biofuels Agency (ANP), which is responsible for regulating fuel, 18% of all fuel consumed today in Brazil comes from renewable sources, mainly ethanol and biodiesel. As the demand for control of greenhouse gas emissions grows stronger all over the world, Brazil has also consolidated its potential as a great exporter.
Sugar cane (for ethanol) and vegetable oil (for biodiesel) are the primary sources of Brazilian biofuels. Ethanol production grew 27.5% from 2012 levels, reaching 28.5 million cubic meters in 2017. Biodiesel, incipient in Brazilian refineries at the beginning of the 21st century, increased production threefold in 10 years, ending 2017 with 4.3 million cubic meters produced. Some specific characteristics of the internal market have allowed this expansion of biofuels. Brazil is a world leader in flexible-fuel vehicles, with approximately 90% of the cars produced in the country capable of running on petrol, ethanol, or any mixture of the two (directly at the pump). Public policies also play an essential role, providing incentives and special programs beyond production, as well as establishing minimum market consumption. The minimum required dilution of ethanol in gasoline was increased to 27% in 2015 by the ANP. The blend of biodiesel in standard diesel became a requirement in 2008, with a share of 2%, under a policy of gradual increases over time. At the beginning of 2017, the minimum biodiesel mix required throughout Brazil was 8%, with a gradual increase of 1% until 2019, as determined by the ANP, with possible further increases in the near future. Biofuel production in Brazil is concentrated in the Southeast and Northeast regions, with most sugar cane and mill companies located in São Paulo.
Energy consumption is a growth thermometer. The macro-factors that underlie economic and, therefore, energy growth primarily concern demography. Just thirty years ago, 4.8 billion people lived on the planet, while, according to the World Bank, in 2017 the world had seven and a half billion human beings. Such 55% growth naturally has a strong effect on production, consumption, and energy demand. Moreover, the trend, although it has faded over the last decade, continues to be positive, showing global demographic growth rates of 1.2% per year. In addition to the quantitative phenomenon, there is a qualitative one.
Another element strongly connected to economic growth concerns the expansion of the middle class. There are vast regions of the world (as in Brazil) where the middle class (the backbone of every growing economic system) expands rapidly, bringing with it a load of desires, possibilities, and needs that, to be satisfied, cannot ignore an increase in energy consumption. In Asia alone, for example, the middle class could go from one and a half billion people today to 3.5 billion in the next 15 years. This explains why, at a historical level (from 1985 to present), the countries that recorded a middle class expansion, as a result of demographic explosion, are the same ones showing the most significant increases in terms of energy demand. In Brazil, energy consumption increased by 55%; in India, by 100%; while in a country like France, only by 2%. The combination of these effects led, in the three decades considered above, world GDP to grow by 600%, against a demographic increase of 55%. Therefore, the growth in energy consumption replicates that of wealth, being its determinant and effect at the same time. This thesis is confirmed even if we leave the purely quantitative parameter of GDP and adopt correction indices, such as the United Nations Human Development Index (HDI). It is therefore not surprising that plotting population growth, energy consumption, and the HDI trend on the same chart over the past 25 years returns a consistent growth pattern. Of course, there is a well-known downside. Like any human activity, if not regulated, the goal of pursuing growth causes several undesirable effects.
We can think of phenomena such as the growing concentration of wealth in the hands of a small minority of individuals, the risks of social dumping, and the effects on climate change and the environment. The energy sector and the environmental sector, in particular, are intrinsically linked, and it is known that the actions of governments, supranational institutions, sector players, and NGOs are aimed at attempting to reconcile economic growth with related emissions. By emissions, we mainly mean greenhouse gas emissions, and in particular CO2, whose abundance in the atmosphere generates a climate-altering effect, which facilitates the phenomenon of global warming.
During 2014-2016, a promising substantial stability of emissions was recorded, even with a slight decrease in 2015, while wealth continued to grow worldwide. The decoupling between emission trends and growth is undoubtedly a desirable goal to pursue. However, the virtuous trend of emissions in this period has reversed, according to the surveys of the last few years: CO2 emissions have started to rise again by about 1.5%. This situation sounded an alarm for the international community that, for years, has (almost) unanimously embraced the cause of the fight against climate change and emissions. This result is explained by the fact that, in 2017, energy demand grew by 2.1%, against an average of 0.9% over the last five years. All its components have increased and contributed to the increase in demand, including coal, reversing the recent downward trend (Figure 1).
Global energy efficiency slowed significantly in 2017. Energy intensity, measured as the energy consumed per unit of economic output, decreased by 1.7% against 2.3% in the previous three years and, even more worryingly, is far from what would be required to stay on the path drawn by the Paris agreements of 2015.
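In formula form, as a restatement of the definition just given (not an equation taken from the paper):

\[
I_t = \frac{E_t}{Y_t},
\qquad
\Delta I_t \,(\%) = \frac{I_t - I_{t-1}}{I_{t-1}} \times 100,
\]

where \(E_t\) is total energy consumed in year \(t\), \(Y_t\) is GDP, and \(\Delta I_t\) is the annual percentage change in energy intensity (the -1.7% figure above).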
Literature Review
The literature on the economic growth-energy consumption nexus has been summarized in [1] and [2], while [3] and [4] report an overview of the electricity demand-GDP relationship. Since the 1970s, several empirical papers have analyzed the link between energy consumption and economic growth. The first study that examined this causal link was [5]. However, despite the number of studies on this research topic, [6], analyzing 264 scientific analyses (from 1978 to 2014), found that some of them were not conclusive. In particular, there were paradoxes in the results obtained, since energy consumption can favor economic growth by improving productivity while also generating damage to the environment (negative externalities). Therefore, in light of these contrasting results, scientific research analyzes the link between energy consumption and economic growth by structuring it into four alternative hypotheses: growth, conservation, feedback, and neutrality. For example, [7] analyzed the relationship among real GDP, CO2 emissions, and energy use in South Caucasus countries and Turkey over the period 1992-2013. Causality results suggest that the conservation hypothesis holds for Armenia, while, for Azerbaijan and Georgia, mixed results emerge, since both the feedback hypothesis and the growth hypothesis received support from empirical findings. Finally, no evidence of causality exists for Turkey (neutrality hypothesis).
As regards the growth hypothesis, it suggests that energy represents a determinant of economic growth. Therefore, an increase in the use of energy generates a direct effect on it. On the contrary, if the contribution of energy use decreases, economic growth is negatively affected [8]. Ref. [9] investigated the nexus between renewable energy consumption and economic growth in Italy. Long-run estimations reveal that, if renewable energy consumption increases by 1%, real GDP decreases by 0.23%. Ref. [10] examined the renewable energy consumption-economic growth nexus in Italy over the period 1970-2007. The Toda and Yamamoto approach shows a unidirectional causal flow, running from renewable energy consumption to aggregate income. Ref. [11] demonstrated this adverse effect as a reduction in energy consumption due, for example, to an energy-saving policy, which generates a very negative impact on the cyclical trend of the economy. Ref. [12] analyzed the US case between 1961 and 2011, demonstrating a causal link among biomass energy use, employment, and capital that supports the growth hypothesis. The econometric analysis showed how biomass energy consumption generated a direct and positive causal link to US economic growth. Ref. [13] confirmed the existence of the growth hypothesis by analyzing the case of the OECD countries and the effect of the use of renewable energies on the economic growth of the area. The study concluded that the use of renewable energy, or a mix of it with other energy sources, generated economic growth effects for OECD countries with statistically significant values. Ref. [14] made an econometric estimate of the impact of renewable energy consumption on German economic growth. It showed that a 1% increase in renewable energy consumption generates an increase in economic growth of 0.22%. Ref. [15] used a panel estimation technique to examine energy consumption in a time series from 1990 to 2012, considering 38 renewable energy-producing countries. Their findings indicate that renewable energy consumption hugely impacted the long-run economic growth of 57% of the sample countries. Ref. [16] analyzed the relationship among economic growth, carbon dioxide emissions, and energy use for six Association of South-East Asian Nations (ASEAN) countries over the years 1971-2007. Using a panel Vector AutoRegression (VAR) technique, the empirical findings show that the response of economic growth to energy use is positive and statistically significant. Thus, the results suggested that for this panel the "growth hypothesis" holds.
According to the conservation hypothesis, a one-way causality effect running from economic growth to energy consumption exists. This result suggests the absence of an adverse impact on economic growth when energy consumption decreases. Ref. [17], studying the Israeli case over the period 1980-2013, found evidence of a unidirectional causality running from economic growth to primary energy consumption. Ref. [18] analyzed the case of the Baltic States, through a time-series approach, from 1990 to 2011. Applied results highlighted that economic growth generated an increase in the consumption of renewable electricity, but not vice versa. Ref. [19] examined the link between renewable energy consumption and economic growth for 17 emerging countries. They concluded that the conservation hypothesis could hold only in the case of Peru, while the growth hypothesis was confirmed in most other countries. Ref. [20] inspected the relationship among economic growth, energy use, and CO2 emissions in Israel over the period 1971-2006. Causality results suggest that real GDP drives both energy use and CO2 emissions. Ref. [21] used Granger causality tests in analyzing the US data between 1960 and 2007. The applied findings confirmed a unidirectional causality between the growth of GDP and renewable energy consumption. The conservation hypothesis was also validated in the Turkey case by [22,23]. Ref. [24], with panel co-integration estimations for 18 emerging economies, concluded that there is a direct correlation between economic growth and clean energy consumption, in which an increase in capital would result in a 3.5% increase in the use of renewable energy, implying that renewable energy consumption will increase tremendously as emerging economies grow.
The feedback hypothesis affirms that there is a bidirectional relationship between economic growth and energy use. Ref. [25] studied the case of Middle East and North Africa (MENA) countries, discovering a bidirectional causal link between the use of renewable energy and economic growth, but only in the long run. Ref. [26] analyzed the relationship among economic growth, renewable energies, and trade. The results showed the existence of a bidirectional link between renewable energy consumption and economic growth for both developing and developed countries. In addition, a 1% increase in renewable energy consumption allows a 0.873% variation in economic growth in developed countries, and 0.68% in developing countries. Ref. [27] confirmed the feedback hypothesis, analyzing the relationship between biomass energy consumption and economic growth in the BRIC (Brazil, Russia, India, and China) countries. The results showed a two-way link that ensured a balance between the two long-term variables. Ref. [11] demonstrated how, because of this bidirectional relationship, energy-saving policies could generate a negative effect on the economy, and vice versa. Ref. [28] analyzed the bidirectional causal relationship amongst nuclear energy consumption, CO2 emissions, renewable energy, and economic growth per capita between 1990 and 2013, through panel data methodologies. They discovered the presence of a long-term bidirectional relationship between renewable energy consumption and per capita GDP change. Ref. [29] assessed the relationship between disaggregate energy production and real aggregate income in Italy, using annual data from 1883 to 2009. Causality tests roughly confirm a bidirectional flow in the long run, so that energy production and economic growth complement each other. Ref. [30] discovered a two-directional connection between economic growth and renewable energy consumption, employing Panel Vector Error Correction (PVEC) models for OECD countries over the period 1985-2005.
The neutrality hypothesis denies the existence of a causal relationship between energy consumption and economic growth. Therefore, an adverse change in energy consumption will not cause a reduction in economic growth. The neutrality hypothesis is confirmed in empirical studies if an increase in economic growth does not cause an increase in energy consumption, and vice versa. Ref. [31] analyzed the relationship among economic growth, carbon dioxide emissions, and energy use for 19 Asia-Pacific Economic Cooperation (APEC) countries over the period 1960-2013. Using a panel VAR technique, a three-variable VAR was estimated. Empirical findings illustrate that no causal relationship emerges between real GDP and energy use, in line with the neutrality hypothesis. Ref. [32] investigated the relationship among economic growth, carbon dioxide emissions, and energy use for the South Caucasus area and Turkey in the years 1992-2013. The time-series techniques show results in line with the neutrality hypothesis. Ref. [33] studied the causal relationship between energy consumption and economic growth through a panel VAR on 82 countries over the period 1972-2002. They found that the neutrality hypothesis was valid for low-income countries. Ref. [34] analyzed the relationship between electricity consumption and economic growth for 12 countries of the European Union (EU). They concluded that there was no causal relationship between the variables of their study in the short term. This result confirmed the neutrality hypothesis. Ref. [35], through a panel co-integration model, analyzed the link between energy consumption and economic growth for 16 Asian countries. The results highlighted the absence of a causal relationship between energy consumption and economic growth in the short term. In the long run, however, the authors found a one-way causal link that supported the growth hypothesis.
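The four hypotheses above are typically adjudicated with Granger-causality tests on GDP and energy series. As a minimal, hedged sketch of how such a test is run (the variable names, lag order, and synthetic data below are illustrative assumptions, not taken from any of the cited studies):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical annual growth-rate series; a real study would use
# official GDP and energy consumption statistics.
rng = np.random.default_rng(1)
n = 40
energy = rng.normal(2.0, 1.0, n)
# Build GDP growth with a lagged dependence on energy so the test
# has something to detect (a "growth hypothesis" data-generating process).
gdp = 1.0 + 0.4 * np.roll(energy, 1) + rng.normal(0, 0.5, n)
df = pd.DataFrame({"gdp_growth": gdp[1:], "energy_growth": energy[1:]})

# H0: energy_growth does NOT Granger-cause gdp_growth.
# Column order matters: the second column is tested as the cause.
res = grangercausalitytests(df[["gdp_growth", "energy_growth"]].values,
                            maxlag=2, verbose=False)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```

Testing the reverse direction (swapping the two columns) distinguishes growth, conservation, feedback (both directions significant), and neutrality (neither direction significant).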
The papers using ML methodologies to estimate the link among energy, pollution, and economic growth are very recent. Ref. [36] investigated the causal relationship among solar and wind energy production, coal consumption, economic growth, and CO2 emissions for China, India, and the US. The findings, confirmed by three different ML procedures, showed that, while a reduction in overall carbon emissions is predicted in China and the US (resulting from the intensive use of renewable sources of energy), India displays critical predictions of a rise in CO2 emissions. Ref. [37] analyzed 104 countries in a time series from 1993 to 2014. Using Boosted Regression Trees (BRT) in ML, they stated that renewables are a precondition for sustainable development. Ref. [38], using an Artificial Neural Networks (ANNs) experiment, analyzed the relationship among energy price, energy supply, and economic growth in China, from 1980 to 2010. They show that the positive effects on economic growth are only visible in the short term. Ref. [39] studied the interactions of water and energy systems with economic sustainability. They used a variety of ML techniques, comparing results with statistical estimates. They concluded that future studies will need to be supported by empirical evidence in ML.
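As a hedged illustration of the kind of ANN experiment described here (the paper's actual architecture, features, and data are not specified in this excerpt), the following sketch trains a small multilayer perceptron to map energy variables to GDP growth on synthetic data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features: growth rates of renewable energy consumption,
# fossil energy consumption, and total energy use; target: GDP growth.
# All data below are synthetic placeholders.
rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 3))
y = 0.6 * X[:, 0] + 0.1 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.3, n)

# A small feed-forward network; scaling inputs first helps convergence.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])           # train on the first 150 observations
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```

In a study of this type, comparing the network's sensitivity to the renewable-energy input against the other energy variables is what supports a claim that renewables "accelerate" GDP; the coefficients in the synthetic target above merely mimic such a pattern for demonstration.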
Ref. [40] analyzed the relationship among economic growth, pollution, and the spread of COVID-19 in India, using a Causal Direction from Dependency (D2C) algorithm. A predictive link among economic growth, energy use, PM2.5, and the spread of Covid-19 in India emerges. Ref. [41], through an ANN, identified the threshold concentrations of PM2.5 and PM10 linked to the spread of the Covid-19 virus in three French cities.
In light of the analysis of how renewable energy sources have contributed to the development of the Brazilian economy, one fundamental point that every research study should focus on is the extent to which the mitigation of carbon emissions is integral to the country's growth. The economic literature, including contributions that are far from recent, has analyzed the connections between economic growth and environmental degradation through the so-called Environmental Kuznets Curve (EKC). It states that environmental degradation rises with economic growth up to a certain level, beyond which the quality of the environment increases as income per capita increases [20]. Based on the Intergovernmental Panel on Climate Change report, renewable energy sources can meet 77% of the energy needs of the world by 2050, even though the present figure is as low as 13% [42]. Most countries wonder about the role they can play in that drastic transformation; Brazil has already played a significant one, with clean sources contributing 44.8% of its energy in 2010. The projected increase in renewable energy production to 46.3% may look small, but it does not account for the enormous growth in raw energy demand, nor for the fact that, over the next decade, a foundation will be laid to facilitate the use of clean energy in the future.
Fossil fuels are the current primary sources of global energy, making up over 80% of the total power supplied to the world economy [13]. However, the use of fossil fuel across the globe has been met with numerous impediments that have prompted many countries, including Brazil, to find alternative reliable energy sources. Among the challenges that the Brazilian government faces in the use of fossil fuels are the widening gap between energy demand and supply in the global market, the growing depletion of oil reserves, and the emission of harmful gases into the atmosphere [43]. Carbon, a byproduct of the combustion of fossil fuels, is the leading cause of the present human ecological crisis. In response to the various crises that have rocked the energy sector, Brazil has lately displayed an increasing desire to develop clean and renewable energy sources. Besides the depletion of fossil fuel, the unabated rate of environmental degradation is another factor that has prompted the Brazilian government to embrace the Green Growth Agenda, which fosters the coexistence of economic growth and environmental conservation [13].
According to [44], renewable energy is highly suitable for reducing the carbon content of energy, which is an integral element of mitigating climate change. Based on [42], the consumption of renewable energy would reduce carbon emissions by approximately 8.2% by 2050. The use of clean energy technologies also contributes immensely to economic development. It helps minimize dependency on imported fuels and extends energy consumption and accessibility across Brazil to over 1.4 million people who are not fully served by renewable energy sources [43]. The spread of renewable energy can also help create job opportunities and facilitate the growth of industries in the country's underdeveloped areas. Ref. [21] estimated that a 1% increase in energy consumption raises Brazilian per capita GDP by 0.12%. In summary, Brazil is one of the very few nations in the world that have fully utilized renewable energy sources. Several factors prompted the country's attention to the generation and use of renewable energy: the growing Brazilian population that cannot be fully supplied with hydropower, the depletion of fossil fuel, and the increase in the number of motor vehicles and other equipment that require energy for operation. In general, in the face of rising energy consumption in Brazil, many researchers are continuing their work in order to establish a long-term causal relationship with the change in economic growth in Brazil.
Methods and Data
The empirical methodology of this study uses an ANN-based ML experiment implemented with Oryx 2.8.0 software (Hyderabad, India). The aim of this paper is to estimate a possible acceleration of Brazil's GDP with greater use of renewable energy consumption in a Covid-19 time. A combination of four variables is used, expanded through first differences, squares, and logarithmic transforms. The need for these transformations of the dataset is explained below. The performance of the NN is assessed by its accuracy. We must select and adequately prepare the input data and the structure of the network itself. Unlike standard econometric models, ANNs do not require data transformation to achieve stationarity; therefore, they do not have the typical stationarity problems of time series. However, since the operating logic of an NN is very different, we have to make the variables dimensionless and comparable, and we have to make sure that the weights are not influenced by the absolute value of the variables; otherwise, variables with a higher absolute value would be more influential than others with a lower one. As for the architecture of the constructed networks, we decided not to limit ourselves to a single intermediate layer, as instead suggested by [45,46]. We followed the approach of [47], according to whom the optimal number of nodes in the intermediate layer, which allows the network to obtain the best performance, is equal to the logarithm of the number of sample data used to estimate the network itself. In general, we followed an approach similar to [48-52]. See Figure A1 in Appendix A for the flowchart.
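To make the preprocessing step concrete, the following minimal Python sketch rescales a variable to a dimensionless range and applies the hidden-node heuristic just described. The min-max scaling method, the natural-log base, and the rounding are illustrative assumptions made here; the paper does not specify them.

```python
import numpy as np

def minmax_scale(x):
    """Rescale a series to [0, 1] so variables become dimensionless
    and no weight is dominated by a variable's absolute magnitude."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def hidden_nodes(n_samples):
    """Heuristic from [47]: the number of nodes in the intermediate layer
    equals the logarithm of the number of sample data (log base and
    rounding are assumptions made for this sketch)."""
    return max(1, int(round(np.log(n_samples))))

gdp = minmax_scale([1.2, 1.5, 1.1, 1.9, 2.3])
print(gdp, hidden_nodes(59))  # 59 instances, as in the Results section
```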
Although there are numerous approaches and algorithms for using ANNs, following [53] we used the Multilayer Perceptron (MP) approach in the ANN forecast experiment. Starting from the perceptron, the network has m neurons and d inputs, and each output y_j is given by

$$y_j = f\left(\sum_{i=0}^{d} w_{ji}\, x_i\right), \qquad j = 1, \dots, m, \qquad (1)$$

where x_i are the inputs and w_{ji} are the weights connecting each input to each output.
With this architecture, we use activation functions of the threshold type. The final outputs of the network are therefore given by

$$z_k = g\left(\sum_{j=0}^{m} w_{kj}\, y_j\right), \qquad (2)$$

where z_k is the final output, w_{kj} are the weights of each processing unit, and y_j is the signal sent by the hidden units. The bias terms, as the coefficients of x_0 and y_0 respectively, were handled by setting both equal to 1. Then, combining Equations (1) and (2), we can derive the final result:

$$z_k = g\left(\sum_{j=0}^{m} w_{kj}\, f\left(\sum_{i=0}^{d} w_{ji}\, x_i\right)\right). \qquad (3)$$

Figure 2 shows the graphical representation of the previous equation. We derived data on per capita GDP in 1990 US $ (converted at Geary-Khamis PPPs) (https://conference-board.org/data/economydatabase/total-economy-database-productivity), per capita energy consumption (kg of oil equivalent), renewable energy consumption (% of total final energy consumption) (https://www.iea.org/data-and-statistics), and combustible renewables and waste (% of total energy) (https://data.worldbank.org/). Using yearly data from 1990 to 2018, we obtained a total of 420 data points.
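A minimal numpy sketch of the forward pass in Equations (1)-(3) follows. The Heaviside step used as the threshold activation and the toy layer sizes are illustrative assumptions; they mirror the 13-input, one-target setup of the Results section rather than the exact Oryx implementation.

```python
import numpy as np

def step(a):
    # threshold activation function, as stated in the text
    return (a >= 0).astype(float)

def mlp_forward(x, W_hidden, W_out, f=step, g=step):
    """Forward pass of the perceptron network in Equations (1)-(3).
    x: (d,) inputs; a bias x_0 = 1 is prepended.
    W_hidden: (m, d+1) weights w_ji; W_out: (K, m+1) weights w_kj."""
    x = np.concatenate(([1.0], x))      # x_0 = 1 (bias input)
    y = f(W_hidden @ x)                 # Eq. (1): hidden signals y_j
    y = np.concatenate(([1.0], y))      # y_0 = 1 (bias unit)
    return g(W_out @ y)                 # Eqs. (2)-(3): final outputs z_k

rng = np.random.default_rng(0)
x = rng.normal(size=13)                 # 13 inputs, as in the experiment
z = mlp_forward(x, rng.normal(size=(10, 14)), rng.normal(size=(1, 11)))
print(z)
```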
Results
In this section, we analyze the results obtained through our ANNs experiment. In total, it considered thirteen inputs and one target. The variables were expanded through mathematical transformations (logarithms, ln, and first differences, dd). Figure 3 illustrates the use of the variables of our algorithm. The calculator chose 13 distinct inputs, compared with a single target. There are no omitted variables. Out of a total of 15 combined variables, one variable (which does not appear) represents the substrate. Through the pie chart in Figure 4, we can observe, in detail, the use of all instances in the dataset. The total number of instances is 59. The number of training instances is 37, the number of selection instances is 11, the number of testing instances is 11, and the number of unused instances is 0 (0%). Finally, after observing the behavior of the dataset with respect to the ML processing of our algorithm, we can analyze the result of the ANNs in Figure 5. The ANNs graph contains a scaling layer, an NN, and an unscaling layer. The yellow circles represent the scaling neurons, the blue circles the perceptron neurons, and the red circle the unscaling neuron. The number of inputs is equal to 13, and the number of outputs to 1. The degree of complexity, represented by the numbers of hidden neurons, is 10:7:4. As we can see from the result of the NN, the preset target is dGDP. It represents the best choice among 7371 possible combinations of inputs (computed as dispositions with repetition, D^R_{n,k}, where k, a positive integer, can also be greater than or equal to n) for generating a target suitable for the analysis. The Confusion Matrix in Table 1 confirms the results obtained through the ANNs, supporting the outcomes shown in Figure 4. Compared with the actual positive values, the predicted values produce a change in the target 96.47 times in every 100 input combinations made. Therefore, with respect to the actual positive values, there is only a 3.53% probability of choosing a different target than the one obtained in the ANNs analysis (dGDP).
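To make the reported rates concrete, here is a small sketch of the confusion-matrix arithmetic. The raw counts are hypothetical, back-solved so that they reproduce the 3.53% figure above and the 0.17% figure discussed in the next paragraph; the paper reports only percentages.

```python
import numpy as np

# Hypothetical 2x2 confusion matrix [[TP, FN], [FP, TN]];
# the counts are illustrative, chosen to match the reported percentages.
cm = np.array([[82.0, 3.0],
               [1.0, 573.0]])
(tp, fn), (fp, tn) = cm

error_vs_positives = fn / (tp + fn)   # 3/85  ~ 3.53%
error_vs_negatives = fp / (fp + tn)   # 1/574 ~ 0.17%
print(round(100 * error_vs_positives, 2), round(100 * error_vs_negatives, 2))
```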
We obtain the same result by comparing, in the Confusion Matrix, the predicted positive and negative values with the actual negative values. In this case, the probability of obtaining a different target drops to 0.17%. As regards the analysis of the goodness of learning and data generation with respect to the obtained target, we performed the Predictive Linear Regression test (Figure 6).
The best-fitting straight line minimizes the distance, on the ordinate axis, between all the points of the diagram for the predicted and real values. Therefore, the predicted and real variables of the study (with respect to the target) have a linear relationship between them, and the points of the scatter plot tend to arrange themselves along a straight line. The tests that we have carried out on the ANNs results confirm the generated target. The combination of inputs showed that the dGDP undergoes a change after several passages; it can be considered the output of the whole process. Despite this, we wonder which of the inputs generated a more significant variation of per capita GDP. In this way, we would have useful policy information for advising adequate economic actions to accelerate Brazil's GDP. To this end, we have given our algorithm the ability to predict, with respect to the total historical data, the variation of the target (dGDP) over four iterations (ITEs). Through this procedure, we can analyze the effects of inputs on the ANNs over time. Since we use annual data, the four ITEs can be interpreted as the next four years. In other words, we try to determine if one or more inputs can determine the variation of the target dGDP until 2022. This process is performed through the Levenberg-Marquardt algorithm error history (Figure 7) and the Importance test (Figure 8). The results of the test are very interesting. Our algorithm, which forecasts different variations over time (four years), shows that the squared errors of the ITE predictions decrease to a minimum by the fourth ITE. This result allows us to analyze the Importance test, since the forecast error is at its minimum. Therefore, in Figure 8, we show the result of the algorithm on the importance of each input variable in generating the target dGDP. Each positive value represents the predictive ability, over our four ITEs, to cause an acceleration of the target. Negative values, on the other hand, generate contraction effects on the dGDP.
The results obtained from the Importance test represent the positive and negative projections of our inputs with respect to the target dGDP. We can make some interesting observations. Of the 13 inputs in total, three show a positive variation, while 10 show a negative variation. More specifically, two of the three variables with a positive value are not transforms we created for the ANNs model. These two variables, therefore, represent linear inputs to our target, which is also linear (dGDP). We can eliminate the LndGDP variable from consideration because, being itself a logarithmic transformation of GDP, it would suffer from collinearity with the target. Thus, the inputs that generate a positive change in the dGDP are renewable energy consumption, with a value of 0.945, and energy use, with a value of 0.810. This result highlights that renewable energies could be the variables capable of accelerating Brazil's economic growth. Energy use (more generally) also has a positive effect on GDP growth; however, it is lower than that of renewable energies. An explanation of this result could be the following. The projections over the four ITEs could be based on the assumption that there will be no friction in the labor market following the transition of the Brazilian economy. In particular, the workforce will adapt to structural changes as regards skill requirements. This situation presupposes that funding is made available for the restructuring, and that the country's energy system moves towards renewable energy. In this scenario, we could have a positive effect not only on the climate, but also in terms of GDP growth and new employment, as a result of the investment activity necessary to achieve this transition, together with the impact of reduced spending on fossil fuel imports.
The shift towards the production of capital goods, such as renewable energy equipment and machinery, will lead to a significant increase in the demand for labor from the activities connected to it. We must also say that, while this acceleration of the energy transition will increase GDP and job opportunities globally, the gains are not automatic. In fact, some communities could suffer negative impacts, especially in areas of the country that have so far relied heavily on fossil fuels. It is, therefore, necessary to invest also in the training and retraining of workers and in social security measures for those who cannot be relocated. Thus, we can say that, by accelerating the spread of renewable sources, it is possible, at the same time, to fuel economic growth, create new job opportunities, improve human well-being, and contribute to safeguarding the future climate. The increase in distribution will be able to meet the energy needs of a growing population, guide development, and improve well-being, whilst reducing greenhouse gas emissions and increasing the productivity of natural resources.
Furthermore, our results provide empirical evidence that economic growth and environmental conservation are fully compatible and that the conventional consideration of the trade-off between the two is obsolete and erroneous. In addition, the acceleration effect of the input of renewable energies is the best choice for policymakers in a period of economic uncertainty such as the present time, due to Covid-19.
Conclusions and Policy Implications
Renewable energy is one of the critical drivers of the Brazilian economy after the government shifted its effort towards full utilization of clean energy. Numerous researchers are persuaded of the presence of a positive correlation between economic development and renewable energy consumption worldwide. Brazil is one of the largest consumers and producers of energy, and it is experiencing substantial economic growth. However, this growth could stall, giving rise instead to a real economic crisis caused by the Covid-19 pandemic. Therefore, economic policy solutions are needed. In particular, they should be based on investments in renewable energy capable of accelerating a long-term development process. Over the recent past, there has been a growing push in Brazil towards the production of renewable energy in the national energy sector. Renewable energy is currently touted as the fuel of the future, and Brazil is not left behind in the maximum utilization of this energy in its economic production. Empirical findings show that an ever-greater use of renewable energies may sustain the economic growth process.
In fact, through a model of ANNs, we have shown how an increasing consumption of renewable energies triggers an acceleration of the GDP, compared with the other energy variables considered in the model. Compared with standard econometric models, this experiment was able to show and select which input can generate the best target. The best output was per capita GDP. A positive variation of it, through a four-ITE predictive process, was due to an acceleration of renewable energies. Therefore, we can conclude that, during the international pandemic caused by the Covid-19 virus, Brazil might anticipate the adverse effects that will spill over onto the economic system by intensifying its energy structural change process and promoting a more intensive use of renewable energy, rather than resorting to defective policies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2019-01-22T22:35:07.611Z
|
2019-01-18T00:00:00.000
|
58593716
|
{
"extfieldsofstudy": [
"Biology",
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1093/bioinformatics/btz029",
"pdf_hash": "1faedda7e27eb33b01894d7c83e9383a07d9c292",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45637",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Computer Science"
],
"sha1": "c51bfb465795d02170f72d1219b85302cb40ea5b",
"year": 2019
}
|
pes2o/s2orc
|
AllerCatPro—prediction of protein allergenicity potential from the protein sequence
Abstract Motivation Due to the risk of inducing an immediate Type I (IgE-mediated) allergic response, proteins intended for use in consumer products must be investigated for their allergenic potential before introduction into the marketplace. The FAO/WHO guidelines for computational assessment of allergenic potential of proteins based on short peptide hits and linear sequence window identity thresholds misclassify many proteins as allergens. Results We developed AllerCatPro which predicts the allergenic potential of proteins based on similarity of their 3D protein structure as well as their amino acid sequence compared with a data set of known protein allergens comprising 4180 unique allergenic protein sequences derived from the union of the major databases Food Allergy Research and Resource Program, Comprehensive Protein Allergen Resource, WHO/International Union of Immunological Societies, UniProtKB and Allergome. We extended the hexamer hit rule by removing peptides with a high probability of random occurrence measured by sequence entropy as well as requiring 3 or more hexamer hits consistent with natural linear epitope patterns in known allergens. This is complemented with a Gluten-like repeat pattern detection. We also switched from a linear sequence window similarity to a B-cell epitope-like 3D surface similarity window which became possible through extensive 3D structure modeling covering the majority (74%) of allergens. In case no structure similarity is found, the decision workflow reverts to the old linear sequence window rule. The overall accuracy of AllerCatPro is 84% compared with other current methods which range from 51 to 73%. Both the FAO/WHO rules and AllerCatPro achieve highest sensitivity but AllerCatPro provides a 37-fold increase in specificity. Availability and implementation https://allercatpro.bii.a-star.edu.sg/ Supplementary information Supplementary data are available at Bioinformatics online.
1 Introduction

The assessment of the allergenic potential of novel proteins remains a challenge since there is no generally accepted, validated and broadly applicable method available (Verhoeckx et al., 2016). The current approach relies on the guidance for allergenicity assessment of genetically modified plant foods recommended by FAO/WHO (2001), which is based on single hexamer peptide hits and sequence identity thresholds to known allergens. However, this similarity approach leads to many proteins being wrongly classified as potentially allergenic (Stadler and Stadler, 2003), including up to 90% of all human proteins (Supplementary Fig. S1).
Many factors are known to contribute to protein allergenicity (Huby et al., 2000), including protein stability, cleavage sites, post-translational modifications and physico-chemical properties. However, allergenic proteins need to be recognized by T and B cells to trigger the development of protein-specific IgE and/or they need to react with IgE on basophils or mast cells to trigger the elicitation of an IgE-mediated allergic reaction. The basis for this specific immune recognition of the protein is its 3D structure and its amino acid sequence.
Here, we present a new model to predict the protein allergenicity potential starting from the protein sequence. We first gathered all available and reliable protein sequences associated with allergenicity (further abbreviated as 'known allergens') and analyzed these protein sequences and their corresponding 3D structures to identify and characterize features related to allergenicity and then combined these features with a biophysical model built on the union of available data sets to form one unique and comprehensive data set.
Merged database
The five major databases of known allergens online were accessed and sequences were retrieved directly or via accessions through the respective databases at NCBI or UniProt. Next, cd-hit (Li and Godzik, 2006) was used to create non-redundant subsets with the detailed resulting numbers of unique proteins for each database in Table 1.
In the case of Allergome, individual entries were accessed online and sequences retrieved with the additional criterion that the evidence of allergenicity includes at least one strong experimental test (without counting non-functional tests) or epidemiological support. Supplementary Table S1 lists the accessions of the entries considered from the respective databases.
k-mer hit criterion
First, a query protein was split into its respective 6-mers, and those of low complexity as defined by a sequence entropy < 0.34 (log2-based bit score) and those with ambiguous amino acids (BJOUXZ) were removed. Then the remaining query 6-mers were compared with the 6-mer database derived from our database of known allergens. A hit to a known allergen is found if at least three different 6-mers are shared between the known allergen and the query protein.
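The hexamer rule can be sketched in a few lines of Python. This is a schematic reconstruction: the exact entropy formula and the handling of duplicate 6-mers in AllerCatPro may differ in detail.

```python
from collections import Counter
from math import log2

AMBIGUOUS = set("BJOUXZ")

def entropy(kmer):
    """Shannon entropy (bits) of the amino acid composition of a k-mer;
    low values flag simple low-complexity repeats."""
    counts = Counter(kmer)
    n = len(kmer)
    return -sum(c / n * log2(c / n) for c in counts.values())

def hexamers(seq, min_entropy=0.34):
    """All 6-mers of a sequence, dropping ambiguous and low-complexity ones."""
    return {seq[i:i + 6] for i in range(len(seq) - 5)
            if not (set(seq[i:i + 6]) & AMBIGUOUS)
            and entropy(seq[i:i + 6]) >= min_entropy}

def kmer_hit(query, allergen, min_shared=3):
    """Triple-hexamer rule: flag a hit if >= 3 distinct filtered 6-mers
    are shared between the query and a known allergen."""
    return len(hexamers(query) & hexamers(allergen)) >= min_shared
```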
Gluten-like Q-repeat fingerprint score
From the Food Allergy Research and Resource Program (FARRP) AllergenOnline database, 1013 'Celiac disease peptides' were downloaded in March 2018. The smallest size of those peptides is nine residues, which is in agreement with most major histocompatibility complex Class I and II core-binding regions. The amino acid frequencies were calculated for every 9-mer window within the peptides and a composition fingerprint score was derived by using a log odds ratio of the frequency in the 'Celiac 9-mer' windows divided by a background database frequency (UniProtKB used here). This log odds score is used to score all 9-mers in a query protein, and if the score for a 9-mer is within one standard deviation of the average of the FARRP 'Celiac disease peptides', it triggers a hit as a Gluten-like Q-repeat.
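A schematic sketch of the fingerprint score follows. The pseudo-count handling and the exact form of the hit threshold ("within one standard deviation of the average") are our assumptions, since the text leaves them open.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def aa_freqs(windows):
    """Amino acid frequencies pooled over a set of 9-mer windows,
    with a pseudo-count of 1 to avoid zero frequencies (an assumption)."""
    joined = "".join(windows)
    counts = np.array([joined.count(a) + 1 for a in AA], dtype=float)
    return counts / counts.sum()

def fingerprint_score(ninemer, celiac_freq, background_freq):
    """Log-odds composition score: sum over residues of
    log(frequency in Celiac 9-mers / background frequency)."""
    logodds = np.log(celiac_freq / background_freq)
    return sum(logodds[IDX[a]] for a in ninemer if a in IDX)

def q_repeat_hit(seq, celiac_freq, background_freq, mean, sd):
    """Trigger a Gluten-like Q-repeat hit if any 9-mer window scores at
    least one standard deviation below the Celiac-peptide mean or higher
    (one plausible reading of 'within one standard deviation')."""
    return any(fingerprint_score(seq[i:i + 9], celiac_freq, background_freq)
               >= mean - sd for i in range(len(seq) - 8))
```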
3D structure/model database
Cd-hit (Li and Godzik, 2006) was used to cluster the known allergens into groups of 70% or more sequence identity. We next used BLAST (Altschul et al., 1997) and HHpred (Zimmermann et al., 2018) against PDB (Burley et al., 2017) to find templates for homology modeling for the approximately 1200 representatives. Approximately 900 models were created using MODELLER (Webb and Sali, 2017) in two steps. First, dynamic programming-based structural alignment between query and template was performed by using the salign class of MODELLER; then 100 structural models were built, the discrete optimized protein energy (DOPE) score of each model was calculated, and the one with the lowest energy was selected for Step 2, the loop refinement. Using the loop model class of MODELLER, 200 models with refined loops were built and the one with the lowest DOPE score was selected as the final model. Next, we further evaluated model quality visually and with ProQ2 (Ray et al., 2012), requiring quality thresholds of LGscore > 1.5 and MaxSub > 0.1. This resulted in 713 representative protein structures/models. To ensure that the resulting models are consistent with optimal protein geometry, we next used YASARA (Krieger and Vriend, 2014) to calculate Z-scores for deviation from normality of angles, bonds, dihedrals (Ramachandran plot) and planarity relative to the AMBER-FB15 force field (Wang et al., 2017). This identified 91 suboptimal models with Z-score < -2. Using the YASARA energy minimization protocol (Krieger et al., 2009) based on short simulated annealing molecular dynamics simulations with the AMBER-FB15 force field, we corrected the 91 models. Supplementary Table S3 lists details on template similarity and model quality.
Sequence to 3D epitope mapping
For every structure a table of epitope definitions was created following this procedure: First, all surface accessible residues were identified with YASARA (distance to solvent accessible surface < 2.55 Å, an empirically derived threshold that was consistent with the binding interface of known protein-antibody complexes); then each of the surface residues was taken as the hypothetical center of an epitope and all other surface residues within 12 Å distance from the center residue were included. This distance was chosen to match the binding interface size (Dall'antonia et al., 2014) seen in representative complexes of IgE antibodies with allergens. A minimum epitope size of at least 13 residues is further required. The procedure is implemented as a custom Yanaconda macro script in YASARA (Krieger and Vriend, 2014).
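The patch enumeration can be illustrated with plain numpy on per-residue coordinates. Here C-alpha positions stand in for the YASARA surface computation, which is an assumption of this sketch.

```python
import numpy as np

def epitope_patches(coords, is_surface, radius=12.0, min_size=13):
    """Enumerate epitope-sized surface patches: every surface residue is a
    hypothetical epitope center, and all surface residues within `radius`
    angstroms belong to its patch; patches below `min_size` are dropped.
    coords: (N, 3) array of one point per residue (e.g. C-alpha);
    is_surface: (N,) boolean mask of surface-accessible residues."""
    surface = np.flatnonzero(is_surface)
    patches = []
    for center in surface:
        dist = np.linalg.norm(coords[surface] - coords[center], axis=1)
        members = surface[dist <= radius]
        if members.size >= min_size:
            patches.append(members)
    return patches
```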
For the sequence to epitope mapping, the query protein is searched with BLASTP (Altschul et al., 1997) against our 3D structure/model database (E-value < 0.001). To compare the query protein to the closest known allergen in the context of the 3D structure, an additional BLASTP search (E-value < 0.001) is run with the query protein against our database of known allergens. Next, MAFFT (Katoh and Standley, 2014) is used with L-INS-I settings to create a multiple sequence alignment of the three sequences: query, best 3D hit and best allergen hit. The aligned residues of query and allergen with the structure are then assigned to the respective epitopes using the epitope definition table described above. Finally, a loop over all epitopes comparing the identity of epitope residues between query and allergen allows determination of the epitope with the highest identity. In case of equal identity values, the larger epitope is considered. This procedure is implemented as custom Perl scripts.
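The final comparison loop can be sketched as follows, assuming the three sequences come pre-aligned (e.g. by MAFFT) and the epitope table lists structure residue indices per patch; the actual Perl implementation may handle gaps and edge cases differently.

```python
def best_epitope_identity(aln_query, aln_allergen, aln_structure, patches):
    """Return the highest identity between query and allergen residues over
    any single epitope patch, given three gapped aligned strings and the
    epitope table (lists of structure residue indices)."""
    col_of, res = {}, 0
    for col, ch in enumerate(aln_structure):   # structure residue -> column
        if ch != "-":
            col_of[res] = col
            res += 1
    best, best_size = 0.0, 0
    for patch in patches:
        cols = [col_of[r] for r in patch if r in col_of]
        if not cols:
            continue
        ident = sum(aln_query[c] == aln_allergen[c] != "-" for c in cols)
        frac = ident / len(cols)
        # prefer higher identity; break ties by larger epitope, as in the text
        if frac > best or (frac == best and len(cols) > best_size):
            best, best_size = frac, len(cols)
    return best
```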
Comprehensive database of known allergens
Various in silico databases of protein allergens were reviewed to gather available allergenicity information on characterized proteins (Table 1) to help identify the allergenic potential of novel proteins. The most comprehensive database is Allergome (http://www.allergome.org/), which provides annotation details for each entry to characterize the degree of allergenicity based on available data from the literature and from the 'Real Time Monitoring of IgE-sensitization' database, which provides data from any contributor willing to share data. The most popular database for assessment of food allergen proteins is the AllergenOnline database from the FARRP (https://farrp.unl.edu/resources/farrp-databases). This database contains information collected and evaluated by a peer review panel of scientists and clinicians comparing peer-reviewed publications following pre-determined guidelines. The most recent Comprehensive Protein Allergen Resource (COMPARE, http://comparedatabase.org/database/) by the Health and Environment Science Institute comprises protein entries which result from an algorithm combined with a review of the corresponding literature and a final decision made by independent allergy experts. UniProtKB, although not specialized in allergens alone, is one of the most established sources for general protein annotations. It is based on a combination of manual curation and annotation by close similarity and uses the controlled keyword 'Allergen' to attribute allergenic potential to proteins. One of the most stringent databases regarding the criteria that need to be fulfilled to consider a protein as allergenic is organized by the Allergen Nomenclature Sub-Committee under the auspices of the WHO and the International Union of Immunological Societies (IUIS, http://www.allergen.org/). A protein is considered allergenic if protein-specific IgE-reactivity was demonstrated with sera from at least five patients allergic to the protein source and, moreover, the protein has been characterized in accordance with given WHO/IUIS criteria (Pomes et al., 2018).
Using this collection of major databases with most including various degrees of expert curation, we systematically compared their data overlap using 100% sequence identity as criterion to determine shared and unique proteins. There is a strong consensus between individual databases featuring only 33-69 protein entries unique to only one database except for Allergome which contains 1826 unique sequences (Fig. 1, D1). The total number of unique entries merged from the 5 databases comprises 4180 proteins with good support for allergenic potential, which we then use as known allergens for our computational workflow described here.
Improved k-mer matches with known allergens avoiding random hits
One of the traditional criteria that have been used for allergenicity assessment of genetically modified plant foods by experts for the FAO/WHO (2001) is the six-amino acid rule: A protein matching a k-mer of six amino acids in length with a known allergen should be further evaluated for potential allergenicity. The statistical distribution of k-mer matches between two sequences (Lippert et al., 2002) and for database searches (Tan et al., 2012) is well studied. In short, there is a critical k-mer length for a given database (depending on size and redundancy within). Below the critical k-mer length, random hits increase. Above the critical k-mer length, the k-mer becomes specific only for the respective protein, unless the k-mer represents a simple low complexity sequence repeat, which in turn produces random hits (Tan et al., 2012). In the case of allergen databases, the number of known allergens has grown dramatically in the last decade and the probability of finding a random 6-mer hit to an allergen, for example in the human proteome, is so high that 90% of human proteins would be classified as allergens with this rule (Supplementary Fig. S1).
A simple temporary solution is to increase the k-mer length, for example to at least eight, as suggested by Goodman (2006) and Hileman et al. (2002). Although the general value of k-mer-based hits is frequently questioned (Herman et al., 2009), one reason for their introduction was to evaluate potential immunogenic cross-reactivity which can occur at the T-cell epitope level (Westernberg et al., 2016). AllerCatPro strikes a conservative balance between the need for safety and the practicality of avoiding random hits by using a statistically informed k-mer hit criterion. First, low complexity sequence motifs are detected and filtered out with simple sequence entropy measures (Wootton and Federhen, 1996). Second, instead of increasing the k-mer length, a minimum number of k-mer hits within the same protein is required. For example, for a k-mer of length six, three consecutive hits (shifted by only one position) exactly fit into a sequence of eight residues and, hence, are a more flexible form of 8-mer matches that also allow matches to more relaxed patterns of homology seen in protein sequences. This approach is also in agreement with the rationale of similarity to T-cell epitopes, where there is usually not a single long epitope in a protein sequence but multiple short ones (in terms of the recognized core region) (Huby et al., 2000; Jahn-Schmid et al., 2005; O'Brien et al., 1995; Oseroff et al., 2012; Prickett et al., 2015).
In order to evaluate the best combination of the k-mer length and the parameters discussed earlier, the prospective predictive power of different k-mer lengths was estimated using the UniProt database of known allergens from 2005 to predict all allergens known in 2015, following the rule of having at least one k-mer hit to a known allergen (Fig. 2B). To estimate the rate of false positives, two 'false' control sets were used that should not be natural allergens and should not be detected as such. The first is the sequence reversal (same protein sequence in reverse direction) of the true set, which is a perfect nonsensical copy maintaining the number and length of sequences as well as amino acid composition. The second is a large set of 52,894 non-redundant human proteins that do not have any annotation in UniProt for words including 'allergen'. At low k-mer lengths, both the true and the two false sets give predictions for all input proteins, while at k-mer length six the predictive power increases and the excess of true over false detections remains stable (Fig. 2B). The fact that both negative set curves are very similar shows that, for k-mer studies, even the unnatural but simple to obtain sequence reversal may be a reasonable estimate for false positives. This is supported by studies showing that only in very limited cases would reversed sequences also represent similar structures (Carugo, 2010). Next, the effect of different k-mer based methods on the excess of % true minus false positives or, in other words, the distance of the true from the false curve in the first graph, was examined (Fig. 2C). The methods included: (i) the classical single k-mer hit required for prediction as allergen, (ii) at least three k-mer hits in the same sequence required for prediction as allergen, (iii) the single hit criterion but only if the k-mer is not a simple repeat motif as measured by a minimum sequence entropy threshold and (iv) the triple hit plus entropy criterion. At k-mer length six, the combined triple hit and entropy criterion performs best (Fig. 2C) and is used in AllerCatPro (Fig. 1, S5). Depending on future extensions of allergen databases, these criteria will have to be revisited.
Gluten-like Q-repeats
Filtering out peptides of low complexity to reduce random hits also prevents hits to Gluten-like repeats of Glutamine (Q-repeats), which are important to recognize especially for immune reactions such as Celiac disease (Hischenhuber et al., 2006; Mamone et al., 2011). To distinguish random hits from Gluten-like Q-repeats and account for their relevance for allergenicity risk assessment, a dedicated score was created based on the compositional fingerprint (Supplementary Fig. S2A) of peptides associated with Celiac disease in the FARRP database (Goodman et al., 2016).
Composition and physical property-based fingerprints have already been used for allergen assessments (Dimitrov et al., 2014b) and we believe this approach to be especially useful for short repeats of the same set of amino acid characters in different orders and combinations. Indeed, this compositional fingerprint score significantly separates peptides associated with Celiac disease ('Celiac peptides') from human non-allergen peptides (Supplementary Fig. S2B) and detects all known proteins associated with Celiac disease in the database via their short repeats, and is therefore included in the AllerCatPro prediction method (Fig. 1, S1 and D3).
Moving from a linear sequence window to 3D epitope similarity
The second traditional criterion that has been suggested for allergenicity assessment is that if a protein matches a known allergen over a linear window of 80 residues with at least 35% identity (FAO/WHO, 2001), then it is declared a potential allergen (Fig. 3A). The rationale behind this criterion is that at this level of similarity the 3D structure of the region may be identical, at least at the domain fold level, which could lead to similar recognition by B cells and IgE antibodies. However, the fast linear-window approach also has its limitations, as it ignores the fact that antibody recognition, especially of discontinuous epitopes, occurs in 3D, and surfaces can also differ among the same folds. At the same time, there have been good efforts to predict 3D cross-reactivity by comparing 3D structures (Ivanciuc et al., 2003; Negi and Braun, 2017). However, 3D structures are known for only a fraction of allergens and it remains a challenge to evaluate a query protein of unknown structure in a fast and automated manner.
To overcome this, AllerCatPro utilizes (i) comprehensive structure modeling to create a 3D database of all known allergens, (ii) a fast sequence-to-structure alignment method and finally (iii) a structural epitope-sized 3D window sliding over the structure to move from the previous linear sequence window approach into relevant 3D structure comparisons (Fig. 3B). To create a comprehensive 3D structural database of known allergens, the allergens were clustered into groups with at least 70% sequence identity and the best templates for structural modeling were predicted for every cluster representative (see Section 2). This clustering avoids overrepresentation of fold members and negative effects from potential modeling inaccuracies of highly similar sequences while still providing efficient coverage of the fold space among all allergens for fast comparison. Crystal structures were considered when available and highly reliable homology models were built utilizing methods successfully employed previously for protein structure prediction (Kraft et al., 2005; Kunze et al., 2011; Maurer-Stroh et al., 2003). With this approach, structures covering 74% of the 4180 known allergens were identified and modeled for AllerCatPro (Fig. 1, D2; see Section 2 for details). Furthermore, we compared the fold classification and distribution of our models to a recent review of known structures of allergens (Dall'antonia et al., 2014) as well as the crystal structures listed at the dedicated Structural Database of Allergenic Proteins (Ivanciuc et al., 2003). We observed that the relative ratios of the major fold classes like all alpha, all beta etc. are maintained but that these bigger classes are proportionally larger than the group of small proteins and peptides (Supplementary Fig. S6).
The next task was to solve the problem of a fast and automated structure-based comparison for query proteins of unknown structure. For every representative structure in our database, we first precalculated all possible epitope-sized 3D surface regions and then created structure-to-sequence maps assigning sequence positions to their respective epitopes. This allows quick recall of sequence-to-structure mappings from sequence-based alignments against the known structures. Therefore, sequence similarity can be used to identify, for a query protein sequence, both the closest known allergen and the closest structure representative and create a multiple sequence alignment of the three. Finally, using the sequence-to-structure maps of the epitopes, the similarity of the query and the closest known allergen over all possible 3D structural epitopes for the closest 3D surface match can be evaluated (Fig. 3B).
The initial 70% clustering is essential for our epitope approach, since we do not measure explicit 3D structural differences between crystal structures or models but we compare differences in the sequence alignment of a query with the closest known allergen in the context of 3D epitope residues over the same common family structure/model scaffold. If redundancy were to be retained it would create a mix of highly similar structures/models with small and not necessarily reliably modeled conformational fluctuations adding bias to the comparison.
It should be noted that we are not directly considering known B-cell epitopes at this stage because it is difficult to avoid bias towards the set of very well-studied allergens with complete experimental epitope data that is not available for the majority of other allergens. We focused here on a general approach that can be equally used on known and new proteins for the purpose of risk assessment. To exemplify that our epitope comparison includes relevant epitopes also without explicit bias towards experimentally known sites, we show an example of a rice Bet v 1 sequence being compared with known allergens on the well-studied dominant epitope also seen in crystal structures (Supplementary Fig. S7). In this case, the well-known dominant epitope is also the most similar epitope for this sequence, but it only has 80% identity in the epitope, consistent with the expected lower allergenicity potential of the rice Bet v 1.

Fig. 2. Prediction of protein sequence similarity towards protein allergens by the k-mer method. Screening for similarity between a query protein sequence and a sequence in the allergen database is based on identical k-mer hits (A). Evaluation of an appropriate k-mer length based on the predictive power of different k-mer lengths by using the UniProt database of known allergens from 2005 to predict all allergens known in 2015 (B). Differences in the excess of percent true minus false positives depending on the k-mer length and entropy degree (ent034 = entropy bit score > 0.34) (C).

Fig. 3. Prediction of linear sequence window and 3D epitope similarity. Screening for similarity between a query protein sequence and a sequence in the allergen database based on a sequence window of 80 residues with at least 35% identity (A). Matching of a query protein sequence with unknown 3D structure and the closest known allergen over all possible 3D structural epitopes within the created comprehensive 3D structural database of known allergens (B).
Combined workflow
Finally, the discussed methods and scores were combined into a decision workflow (Fig. 1A) that is guided by consistency with previous rules and recommendations. The input is a query protein and the output is the model's assessment of whether there is strong, weak or no evidence of allergenicity potential for the queried protein based on the different measures of similarity to known allergens. Presence of a Gluten-like Q-repeat is classified as strong evidence independent of other features and, hence, is evaluated first. Next, sequence similarity to representatives in our 3D structure database is checked. If there is significant sequence similarity, then the 3D surface epitope similarity is used to assign 'strong evidence' if above the benchmark threshold of 93% sequence identity or 'weak evidence' otherwise. The rationale here is that sharing the same fold is at least weak evidence for allergenicity potential, but only if surface epitopes are substantially similar would one expect cross-reactivity and hence strong evidence for allergenicity potential. The threshold was chosen to allow correct prediction of all known allergens and thereby maintain highest sensitivity (see benchmark below). If no structure hit is found (as is the case for about 26% of known allergens), the workflow defaults back to the classical linear-window approach with the established 35% identity over 80 residues rule, also resulting in a strong evidence call. Finally, if no hit was found with the linear-window approach either, the model falls back to evaluating by k-mer. This hierarchical staggering also ensures that the more relevant 3D and linear windows are given priority over the k-mers in the evaluation. Only if none of the methods give a hit is a 'no evidence' prediction assigned. The complete workflow was named 'AllerCatPro'.
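The decision hierarchy can be summarized in pseudocode-style Python. The helper calls are hypothetical stand-ins for the components described above, and the evidence level assigned to a k-mer-only hit is our assumption, since the text does not state it explicitly.

```python
def allercatpro_decision(query, db):
    """Hedged sketch of the AllerCatPro workflow (cf. Fig. 1A); `db` is a
    hypothetical object exposing the individual checks as methods."""
    if db.gluten_q_repeat(query):              # Q-repeat: strong, checked first
        return "strong evidence"
    hit = db.closest_structure(query)          # sequence hit to the 3D database
    if hit is not None:
        identity = db.best_epitope_identity(query, hit)
        # same fold = at least weak evidence; similar surface epitope = strong
        return "strong evidence" if identity >= 0.93 else "weak evidence"
    if db.linear_window_hit(query, identity=0.35, window=80):
        return "strong evidence"               # classical FAO/WHO window rule
    if db.kmer_hits(query, k=6, min_hits=3):   # entropy-filtered hexamers
        return "weak evidence"                 # evidence level assumed here
    return "no evidence"
```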
Performance benchmark
Since the aim of this work is to enable reliable safety assessments, the thresholds, such as the 3D epitope % identity, have been set to be able to give hits to all known allergens in our database. It needs to be noted, however, that among the 4180 there were 11 sequences (all <18 residues in length) that were missed because they are too short to be evaluated (length 5) or have ambiguous characters and do not trigger the model's k-mer rule. The formal sensitivity is therefore 99.7%. This must be seen as an upper boundary since the sequences to be predicted are also in the AllerCatPro database. To estimate performance on new sequences, first a jack-knife cross-validation was performed where the sequence to be predicted is removed from the database (Supplementary Fig. S3). This still yields 97.2% sensitivity. Extending the cross-validation to removing all sequences with >90% identity produces 93.8% sensitivity. The latter is a common scenario where remote family members of known allergen protein families from other species are being evaluated. It is important to point out that this type of cross-validation is more stringent and systematic than 5- or 10-fold cross-validation with random assignment to groups, since it makes sure that no closely related family member remains in the respective 'training' sets. Additionally, the sensitivity of the method was also evaluated on previous benchmark sets provided by other tools, where it ranges from 96.5 to 99.3% (Supplementary Fig. S4).
To test the performance of the new 3D structure similarity measure in detail, a benchmark set of 221 known allergens with structures in our database was created (selected to be structurally non-redundant using CLICK; Nguyen et al., 2011) and matched with 221 likely non-allergens with the same fold by finding the closest non-self hits in species like human, rice, yeast or E. coli (Supplementary Table S2). We emphasize that this set is small but well representative of protein allergens with structure in our set.
AllerCatPro achieves 84% overall accuracy (Fig. 4A) at 100% sensitivity (Fig. 4B) and 67% specificity (Fig. 4C). For other methods reported in the recent literature, results were generated for the same data set, including the classical FAO/WHO linear-window rule (FAO/WHO, 2001) but leaving out the ambiguous k-mer rule that would predict 100% of positives and negatives, PREAL (Wang et al., 2013), AllerHunter (Muh et al., 2009), AllergenFP (Dimitrov et al., 2014b) and AllerTOPv2 (Dimitrov et al., 2014a). The accuracy of these methods ranges from 51% for the old FAO/WHO rules to a respectable 73% (AllerTOPv2) (Fig. 4A). The same trends are seen when evaluating by the Matthews Correlation Coefficient (Supplementary Fig. S5). However, only the FAO/WHO window rule and AllerCatPro achieve 100% sensitivity (the safety rationale for conservative assessments), with the other methods typically ranging from 57% (AllerHunter) to 85% (AllergenFP) (Fig. 4B). When compared with the FAO/WHO window rule, AllerCatPro identified 3-fold fewer false positives, resulting in a 37-fold increase in specificity (Fig. 4C) at the same high sensitivity.
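Using the benchmark's mapping of strong/weak calls to confusion-matrix cells (detailed in the Fig. 4 caption below), the headline numbers can be reproduced with counts back-solved from the reported percentages. The counts themselves are our reconstruction, not taken from the paper, and the accuracy rounds to the reported 84%.

```python
def benchmark_metrics(tp, fn, tn, fp):
    """Benchmark metrics with strong/weak calls mapped to TP/FN/TN/FP
    as defined in the Fig. 4 caption."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# counts back-solved from the reported 100% sensitivity and 67% specificity
# on 221 allergens vs. 221 same-fold non-allergens (148/221 ~ 0.67)
print(benchmark_metrics(tp=221, fn=0, tn=148, fp=73))
```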
Implementation as webserver
AllerCatPro is accessible as a webserver (https://allercatpro.bii.a-star.edu.sg/). The input (Fig. 5A) is one or more protein sequences (up to 50) in FASTA format and the output is a table with the workflow results and decision for one protein per line (Fig. 5B). The results also include a link to view the most similar 3D surface epitope (Fig. 5C) when applicable. At the end there is a download link for the results, also in comma-separated format, which can be opened by popular spreadsheet programs.

Fig. 4. AllerCatPro performance. Performance of AllerCatPro is calculated as accuracy to predict allergens (n = 221) versus non-allergens (n = 221) with the same structural fold compared with FAO/WHO rules (window-rule only, no k-mer), PREAL, AllerHunter, AllergenFP and AllerTOPv2 (A). By our definition, sharing the fold with an allergen already results in a weak evidence prediction. Therefore, the calculation of accuracy here is based on strong prediction on a known allergen as true positive, weak prediction on a known allergen as false negative, weak prediction on a non-allergen as true negative and strong prediction on a non-allergen as false positive. For the same benchmark, the respective sensitivity (B) and specificity (C) is highlighted.
Conclusions
In this work, we build on and extend the work by several groups and expert panels with the aim to improve assessment of the allergenic potential of protein sequences. Our emphasis has been to retain earlier considerations and update or upgrade the approach and criteria. Starting with a comprehensive database comparison to derive the largest set of reliable known allergens, we propose an entropy-adjusted hexamer hit approach as well as switching from linear sequence window similarity to B-cell epitope-like 3D surface similarity with predicted structures for 74% of all known allergens in a workflow guided by safety rationale. At the highest sensitivity needed for conservative assessments, AllerCatPro increases specificity by 37-fold compared with the previous rules.

Fig. 5. Interface of AllerCatPro version 1.7. Submitting one or more protein sequences in FASTA format (A) leads to the AllerCatPro output table with the result for strong, weak or no evidence for allergenicity per protein based on corresponding workflow decisions and, in case of a hit, the possibility to view the most similar proteins (B) as well as the most similar 3D surface epitope via links (C). The structural view shows identical epitope residues as balls (colored blue for positive charges, red for negative charges and gray for all other amino acid types).
|
v3-fos-license
|
2020-11-26T09:06:44.094Z
|
2020-12-25T00:00:00.000
|
229670811
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.jstage.jst.go.jp/article/avd/13/4/13_oa.20-00136/_pdf",
"pdf_hash": "86184c0f0ccc1a1faa263d1e614a2b481a8f8208",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45639",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "beaab849ecdea6592cb96f5e76f99deff14d5175",
"year": 2020
}
|
pes2o/s2orc
|
Interface Pressures Derived from a Tubular Elastic Bandage
Objective: We sought to clarify the interface pressure (IP) when using a tubular elastic bandage (TEB) and examine the possibility for TEBs to provide IPs comparable to those provided by anti-thrombotic stockings. Materials and Methods: In 40 healthy patients, IPs were measured at the level of the calf at its maximum diameter (C) and at the transition of the medial gastrocnemius muscle into the Achilles tendon (B1) while a single or double layer of TEB (17.5 cm in circumference) was applied with the patient in a supine position. Results: Including both the C and B1 levels, circumferences and IPs showed a good correlation (single layer: r=0.72; double layer: r=0.75). The IP obtained with a single layer of TEB at the C level (median, 17 mmHg [range, 12–23 mmHg]) was higher than that at the B1 level (14 mmHg [11–18 mmHg], p<0.001). When double-layer TEB was used, the IP at the B1 level increased to 18 (14–23) mmHg (p<0.001 vs. single layer). Conclusion: Considering the characteristics of TEBs and using a single or double layer appropriately, creating a pressure profile mimicking that of an anti-thrombotic stocking seemed to be feasible when using a TEB.
Introduction
Although some controversies exist, anti-thrombotic stockings (ATS) are still considered to reduce the risk of venous thromboembolism in postoperative patients. [1][2][3][4] However, ATS are known to cause skin irritation and/or an uncomfortable feeling and, more seriously, can damage soft tissue and/or peripheral nerves. These complications are considered to be caused mainly by improper fitting and/or application technique. 5) However, this may not be simply because of the lack of experience of nurses and/or patients themselves. Postoperative leg sizes in patients who have undergone vascular or orthopedic surgery can vary greatly depending on the severity of postoperative inflammation and/or edema. Bandages can fit legs of any size and shape, but maintaining a proper interface pressure (IP) is difficult. The guideline recommends daily re-measurement of leg sizes and refitting of ATS whenever necessary; 4) however, this would lead to excessive economic costs and work overload for staff. For these reasons, the nurses at our institute started to use tubular elastic bandages (TEBs) instead of ATS. Indeed, TEBs are less costly and easier to apply than ATS. Moreover, their thick fabric is beneficial to avoid medical device-related pressure ulcers. However, they are not designed to generate the graduated compression shown to increase venous blood flow by Sigel et al. 6) and Lawrence et al., 7) i.e., 18 mmHg at the ankle and 14 mmHg at the calf. Since the IPs obtained with TEBs have not been studied well, we investigated IPs obtained when using TEBs and discussed whether TEBs could provide IPs mimicking those provided by ATS.
Materials and Methods
This prospective study was approved by the Institutional Review Board of Yamaguchi University Hospital (Center for Clinical Research, Ube, Yamaguchi, Japan; H2020-040). All patients provided signed, informed consent before enrollment. The TEB evaluated in this study was an Elutube® (NIPPON SIGMAX Co., Ltd., Tokyo, Japan), which consists of 84% cotton, 12% polyester, and 4% polyurethane; Elutube® was chosen because it has already been adopted and utilized in our institute. In this study, size E, which fits medium-sized calves (not defined by measurements), was used. The circumference of the TEB in its original shape is 17.5 cm. The study patients comprised 40 healthy volunteers with a median age of 38 (range, 23-60) years. The characteristics of the patients and their right legs are summarized in Table 1.
First, two air pack-type sensors were attached to the medial aspect of the right leg, one at the level of the calf at its maximum diameter (C) and another at the level of the transition of the medial gastrocnemius muscle into the Achilles tendon (B1) in each patient (Fig. 1). Because the sensor could not be attached properly around the ankle, the IP at this level was not measured. The patient first put on the TEB from the level of the fibular head to the ankle (single layer). With the patient in a supine position, IPs at the C and B1 levels were measured, followed by the same measurements with the patient in a standing position. Next, the patients were asked to put on another TEB over the first one (double layer), and IPs were measured as above. For the measurement of IPs, an analyzer (Model AMI-3037-SB, AMI Co., Tokyo, Japan) was used.
Statistical analysis
Results are expressed as the median (range) or count, unless otherwise indicated. In order to classify the leg sizes according to ATS fit, we used the brochure provided by the manufacturer (AT stocking®, NIPPON SIGMAX Co., Ltd.). The Mann-Whitney U-test was used to test differences in IPs obtained in different positions. The Wilcoxon signed rank sum test was used to test differences in IPs between single- and double-layer TEBs. The correlations between IPs and circumferences were tested using a linear regression analysis. Statistical analyses were performed using JMP 11.0 (SAS Institute, Cary, NC, USA). A p-value <0.05 was considered significant.
Results
The correlations between leg circumferences and IPs in a supine position are shown in Fig. 2. When the circumferences and IPs at both the C and B1 levels were pooled, there was a good linear correlation for both single-layer (r=0.72) and double-layer (r=0.75) TEBs. However, when the C and B1 levels were assessed separately, circumferences and IPs showed a similar linear correlation at the C level for both single- (r=0.59) and double-layer (r=0.52) TEBs, whereas no such correlation was found at the B1 level.
The IPs obtained using TEBs in various settings are listed in Table 2. IPs at the C level were higher than those at the B1 level, and IPs obtained in a standing position were higher than those obtained in a supine position when all other conditions were the same. The IP increased by approximately 1.3 times at both the C and B1 levels when a double-layer TEB was used. The median static stiffness index, which is defined as the difference between the IPs at the B1 level in the standing and supine positions, 8) increased from 5 to 7 mmHg (p<0.01) when a double layer of TEB was used.
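For concreteness, the index calculation can be written out as below; the B1-level readings are hypothetical, and the standing-minus-supine definition follows the convention cited above.

```python
# Static stiffness index (SSI) at the B1 level: standing IP minus supine IP.
# The readings below are hypothetical single-layer values in mmHg.
import numpy as np

b1_supine = np.array([12, 13, 11, 14, 12, 13])
b1_standing = np.array([17, 18, 17, 20, 18, 19])

ssi = b1_standing - b1_supine          # per-subject SSI
print(f"median SSI = {np.median(ssi):.0f} mmHg")
```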
Median IPs according to leg size based on the fit of the ATS are shown in Table 3. Using a single-layer TEB, the median IP at the C level was 15 mmHg for S-size legs, 17 mmHg for M-size legs, and 18 mmHg for L-size legs. Using a double layer of TEB, the median IP at the B1 level was 18 mmHg for S-size legs, 18 mmHg for M-size legs, and 19 mmHg for L-size legs. Accordingly, for S-size legs, the median IP at the B1 level was 18 mmHg using a double-layer TEB, and the median IP at the C level was 15 mmHg using a single layer of TEB, which was similar to the pressure profile reported by Sigel et al. 6) On the other hand, IPs at the C level were higher than 14 mmHg for M- and L-size legs.

Fig. 1 Points of measurements.
Discussion
The main findings of this study were as follows: 1) there was a significant linear correlation between the IPs obtained using TEBs and leg circumferences; 2) the IP increased approximately 1.3 times when the TEB layer was doubled; and 3) a pressure profile mimicking that of ATS might be created using TEBs in legs of a certain size. As expected, the simple application of either a single or double layer of TEB did not produce the pressure gradient recommended to increase venous return; namely, the IP was higher at the C level than at the B1 level. Interestingly, Bowling et al. reported that the desired pressure gradient for ATS was achieved in only 14% of legs, and that a positive pressure gradient from the calf to the ankle was observed in approximately 23% of legs. 9) In this context, if the TEB is doubled only below the calf, a pressure gradient mimicking that of ATS might be created in S-size legs. Considering that certain prophylactic effects of ATS against venous thromboembolism could be expected under such conditions, and that the National Institute for Health and Care Excellence guideline recommends the use of stockings producing a calf pressure of 14-15 mmHg to prevent venous thromboembolism in in-hospital patients, 4) the pressure profile required to prevent venous thromboembolism itself may need to be revised. In this study, the correlation between the circumference, i.e., the degree of stretching of the TEB, and the IP was linear at the C level, whereas there was no such correlation at the B1 level. These results might be interpreted as the correlations at the two levels representing different phases of hysteresis, which the separate linear regression fitted at each level could not capture. Another possible explanation for this result is floating of the TEB at the B1 level, because the B1 level is recessed between the calf and the ankle and the degree of this depression varies widely depending on leg shape.
Limitation
Since this study was a single-center study that included a limited number of patients, reaching a definitive conclusion is difficult. The validity of the pressure profile of ATS is generally determined using the IP at the ankle level. However, we could not find an appropriate place to attach the sensor around the ankle, where few flat, non-bony areas exist. This might have prevented us from obtaining conclusive results. Since there is a wide variety of commercially available TEBs of different sizes and materials, the current results may not be generalizable; therefore, the circumference-pressure relationship needs to be clarified for each TEB. Furthermore, because the pressure profile may not be the only factor determining anti-thrombotic properties, the validity of using TEBs instead of ATS should be tested in future clinical trials.
Conclusion
The IP achieved using TEBs correlated linearly with calf circumference, and using a double-layer TEB increased the IP by approximately 1.3 times. Given these characteristics, it seems feasible to create a pressure profile mimicking that of ATS using TEBs.
|
v3-fos-license
|
2019-03-27T13:03:17.340Z
|
2019-03-01T00:00:00.000
|
85516477
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/20/6/1461/pdf",
"pdf_hash": "24e40ca14fa3a006af9125db30637ed29fa3590b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45640",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "24e40ca14fa3a006af9125db30637ed29fa3590b",
"year": 2019
}
|
pes2o/s2orc
|
Molecular Mechanisms of Pulmonary Fibrogenesis and Its Progression to Lung Cancer: A Review
Idiopathic pulmonary fibrosis (IPF) is defined as a specific form of chronic, progressive fibrosing interstitial pneumonia of unknown cause, occurring primarily in older adults, and limited to the lungs. Despite the increasing research interest in the pathogenesis of IPF, unfavorable survival rates remain associated with this condition. Recently, novel therapeutic agents have been shown to control the progression of IPF. However, these drugs do not improve lung function and have not been tested prospectively in patients with IPF and coexisting lung cancer, which is a common comorbidity of IPF. Optimal management of patients with IPF and lung cancer requires understanding of pathogenic mechanisms and molecular pathways that are common to both diseases. This review article reflects the current state of knowledge regarding the pathogenesis of pulmonary fibrosis and summarizes the pathways that are common to IPF and lung cancer by focusing on the molecular mechanisms.
Introduction
Idiopathic pulmonary fibrosis is a progressive and usually fatal lung disease characterized by fibroblast proliferation and extracellular matrix remodeling, which results in irreversible distortion of the lung's architecture. Although its cause remains to be elucidated fully, advances in cellular and molecular biology have greatly expanded our understanding of the biological processes involved in its initiation and progression [1]. It is widely accepted that environmental and occupational factors, smoking, viral infections, and traction injury to the peripheral lung can cause chronic damage to the alveolar epithelium [2]. Based on recent in vitro and in vivo studies of IPF, the novel therapeutic agents pirfenidone and nintedanib were developed to slow the progression of this complex disease [3][4][5]. However, these drugs do not improve lung function and patients often remain with poor pulmonary function [6,7]. Furthermore, neither drug has been tested prospectively in patients with coexisting IPF and lung cancer [8]. In previous studies, 22% of patients with IPF developed primary lung cancers, corresponding with a five-fold greater risk than that in the general population [8][9][10][11][12]. Similarly, primary lung cancer risk is more than 20 times higher in patients who undergo lung transplantation for IPF than in the general population [13,14]. These observations warrant efforts to identify pathways that are common to both disorders. Questions regarding the proper and ideal management of patients who suffer from both IPF and lung cancer are also raised. It is assumed that pathogenetic similarities between IPF and lung cancer are a starting point for investigations of disease pathogenesis and the resulting insights will improve therapeutic approaches. This review article summarizes the current knowledge of the pathogenesis of pulmonary fibrosis and outlines the common molecular pathways between IPF and lung cancer.
Dysfunctional Epithelia Trigger Aberrant Wound Healing Processes
It is assumed that fibrosis advances over long periods of time in patients with IPF. Thus, at the time of diagnosis, modifications of lung structure have already been established by the disease and pathological features, such as various stages of epithelial damage, alveolar epithelial cell (AEC) 2s hyperplasia, dense fibrosis, and abnormally proliferating mesenchymal cells, are found. At this time, it is not possible to determine the course of events that have led to lung damage; however, it is accepted that dysfunctional epithelia are key to the pathogenesis of IPF [15].
Under normal conditions of lung injury, AEC1s are replaced with proliferating and differentiating AEC2 cells and stem cells, which restore alveolar integrity by stimulating coagulation, the formation of new vessels, activation and migration of fibroblasts, and synthesis and proper alignment of collagen. Chemokines, such as transforming growth factor (TGF)-β1, platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), and fibroblast growth factor (FGF), are central to these processes. Conversely, continued lung injury or loss of normal restorative capacity invokes an inflammatory phase of the wound healing process. The associated increases in the expression levels of interleukin-1 (IL-1) and tumor necrosis factor-alpha (TNF-α) create a biochemical environment that favors chronic flaws of regeneration and tissue remodeling [16].
TGF-β
TGF-βs are multifunctional cytokines that are present as three isoforms: TGF-β1, TGF-β2, and TGF-β3. Although the biological activities of these isoforms largely overlap, TGF-β1 plays the predominant role in pulmonary fibrosis [17]. The three TGF-β receptors, type I (TGFRI), type II (TGFRII), and type III (TGFRIII), have the potential to bind all three TGF-βs with high affinity. TGF-β is the best-characterized promoter of extracellular matrix (ECM) production and is considered the strongest chemotactic factor for immune cells, such as monocytes and macrophages. In these cell types, TGF-β activates the release of cytokines, such as PDGF, IL-1β, basic FGF (bFGF), and TNF-α, and autoregulates its own expression. Increases in TGF-β production are consistently observed in epithelial cells and macrophages from lung tissues of patients with IPF [18] and in rodents with bleomycin-induced pulmonary fibrosis [19]. Smad proteins are known as mediators of TGF-β signaling from the membrane to the nucleus [20]. Activated TGF-β receptors induce phosphorylation of Smad2 and Smad3, and complexes of these with other Smad proteins are translocated into the nucleus to regulate transcriptional responses. Studies show that deficiency of Smad3 attenuates bleomycin-induced pulmonary fibrosis in mice [21] and that the inhibitory Smad7 prevents the phosphorylation of Smad2 and Smad3 via activated TGF-β receptors [22,23].
TGF-β1 is considered the most important mediator of IPF. AEC2s produce TGF-β1 following actin-myosin-mediated cytoskeletal contractions that are induced by the unfolded protein response (UPR) after αvβ6 integrin activation. The αvβ6 integrin/TGF-β1 pathway is a constitutively expressed molecular sensing mechanism that is primed to recognize injurious stimuli. TGF-β1 is a strong profibrotic mediator that promotes the epithelial-mesenchymal transition (EMT); epithelial cell apoptosis; epithelial cell migration; other profibrotic mediator production; circulating fibrocyte recruitment; fibroblast activation and proliferation and transformation into myofibroblasts; and VEGF, connective-tissue growth factor, and other pro-angiogenic mediator production [24].
PDGF
PDGF is a potent chemoattractant for mesenchymal cells and induces the proliferation of fibroblasts and the synthesis of ECM. Activated homologous A and B subunits of PDGF can form three dimeric PDGF isoforms. Alveolar macrophages from patients with IPF produce higher levels of PDGF-B mRNA and protein [25,26]. AEC2s and mesenchymal cells also express abnormal levels of PDGF in animal models [27]. Moreover, PDGF-B transgenic mice develop lung disease with diffusely emphysematous lung lesions and focal areas of inflammation/fibrosis [28]. In agreement, intratracheal instillation of recombinant human PDGF-B into rats produces fibrotic lesions that are concentrated around large airways and blood vessels [29]. In another study, gene transfer of an extracellular domain of the PDGF receptor ameliorated bleomycin-induced pulmonary fibrosis in a mouse model [30]. Insulin-like growth factor (IGF)-1 also promoted fibroblast proliferation synergistically with PDGF [31]. Accordingly, alveolar macrophages from patients with IPF expressed IGF-1 mRNA and protein at greater levels than normal alveolar macrophages [31,32].
FGF
bFGF is a stimulator of fibroblast and endothelial cell proliferation that has been correlated with the proliferative aspects of fibrosis. In particular, bFGF expression is up-regulated at various periods of wound healing, and recombinant bFGF has been shown to accelerate wound healing. Accordingly, anti-bFGF antibody inhibited the formation of granulated tissue and normal wound repair. Alveolar macrophages are a predominant source of bFGF in intra-alveolar fibrotic areas following acute lung injury [33]. In a study of IPF, mast cells were found to be the predominant bFGF-producing cells, and bFGF levels were associated with bronchoalveolar lavage cellularity and with the severity of gas exchange abnormalities [34].
TGF-α
TGF-α induces proliferation in endothelial cells, epithelial cells, and fibroblasts, and is present in fibrotic areas [35]. In proliferative fibrotic lesions in rats with asbestos-or bleomycin-induced pulmonary fibrosis, AECs and macrophages had elevated expression levels of TGF-α [36]. Similarly, in transgenic mice expressing human TGF-α, proliferative fibrotic responses in interstitial and pleural surfaces were epithelial cell specific [37]. These results indicate that TGF-α is involved in cell proliferation under fibrotic conditions following lung injury.
Keratinocyte Growth Factor (KGF)
KGF is produced by mesenchymal cells, and the KGF receptor is expressed in the epithelial tissues of developing lungs. In rats, KGF accelerated the functional differentiation of AEC2s, and the intratracheal instillation of KGF significantly improved bleomycin-induced pulmonary fibrosis [38]. These data suggest that KGF participates in the maintenance and repair of alveolar epithelium and has potential in the treatment of lung injury and pulmonary fibrosis.
Hepatocyte Growth Factor (HGF)
HGF is produced by mesenchymal cells and has been identified as a potent mitogen for mature hepatocytes. The HGF receptor is a c-Met proto-oncogene product that is predominantly expressed in various types of epithelial cells. HGF levels are higher in bronchoalveolar lavage fluid and serum from patients with IPF than in samples from healthy people [39,40]. HGF is also highly expressed by hyperplastic AECs and macrophages in lung tissues of patients with IPF. In in vitro studies of epithelial cells, HGF promoted DNA synthesis in AEC2s [41]. The administration of HGF also inhibited fibrotic changes in mice with bleomycin-induced lung injury [42]. Promisingly, the combination of HGF and interferon-γ (IFN-γ) enhanced the migratory activity of A549 cells by up-regulating the c-Met/HGF receptor [43]. Based on these observations, HGF treatments may offer a novel strategy for promoting the repair of inflammatory lung damage in patients with pulmonary fibrosis.
Changes in AEC2s that Lead to Aberrant Tissue Repair
Repetitive exposures of alveolar epithelium to microinjuries, such as infection, smoking, toxic environmental inhalants, and gastroesophageal reflux, contribute to AEC1 damage. AEC2s normally regenerate damaged cells, but when dysfunctional, their ability to reestablish homeostasis is impaired. This condition is considered indicative of the pathogenesis of IPF [44,45].
UPR
High cellular activity leads to protein over-expression, and if unchecked, it can cause endoplasmic reticulum (ER) stress. The correcting protective pathway is stimulated by the imbalance between cellular demand for protein synthesis and the capacity of the ER to dispose of unfolded or damaged proteins. This protective pathway is known as UPR, and it re-establishes ER homeostasis. To this end, this pathway inhibits protein translation, targets proteins for degradation, and induces apoptosis when overwhelmed. The activation of UPR stimulates the expression of profibrotic mediators, such as TGF-β1, PDGF, C-X-C motif chemokine 12 (CXCL12), and chemokine C-C motif ligand 2 (CCL2), and thus, can lead to apoptosis [46].
Epithelial-Mesenchymal Transition (EMT)
EMT is a molecular reprograming process, and in AEC2s, it is induced by UPR and enhanced by profibrotic mediators and signaling pathways. Under these conditions, epithelial cells express mesenchymal cell-associated genes, detach from basement membranes, migrate, and down-regulate their typical markers. The most commonly used marker of these transitioning cells is alpha smooth-muscle actin (αSMA). EMT occurs during development and in cancerous and fibrotic tissues, but it is not involved in the restoration of tissues through normal wound healing processes [46].
Wnt-β-Catenin Signaling
Other key pathways of IPF are related to the deregulation of embryological programs, such as Wnt-β-catenin signaling, which has been associated with EMT and fibrogenesis following activation by TGF-β1, sonic hedgehog, gremlin-1, and phosphatase and tensin homolog. Deregulation of these pathways confers resistance to apoptosis and offers proliferative advantages to cells [47].
Endothelium and Coagulation
Damage to alveolar structures and the loss of AECs with basement membranes involves alveolar vessels and leads to increased vascular permeability. Wound clots form during this early phase of wound healing responses, and sequentially, new vessels are formed through the proliferation of endothelial cells and endothelial progenitor cells (EPCs). Patients with IPF with failure of re-endothelialization have significantly decreased numbers of EPCs, likely resulting in dysfunctional alveolar-capillary barriers, profibrotic responses, and compensatory augmentation of VEGF expression. This series of endothelial changes could stimulate fibrotic processes and abnormalities of vessel functions, contributing to cardio-respiratory decline and advanced disease. Furthermore, endothelial cells may undergo a mesenchymal transition with consequences similar to those of EMT [48].
Endothelial and epithelial damage also activates coagulation cascades during the early phases of wound healing. Coagulation proteinases have several cellular effects on wound healing. In particular, the tissue factor-dependent pathway is central to the pathogenesis of IPF and promotes a pro-coagulation state with increased levels of inhibitors of plasminogen activation, active fibrinolysis, and protein C. Under these pro-coagulation conditions, degradation of ECM is decreased, resulting in profibrotic effects and the induction of fibroblast differentiation into myofibroblasts via proteinase-activated receptors [16].
Immunogenic Changes that Lead to Pulmonary Fibrosis
The pathobiology of IPF is led by aberrant epithelial-mesenchymal signaling, but inflammation may also play an important role because inflammatory cells are involved in normal wound healing from early phases. Initially, macrophages produce cytokines that induce inflammatory responses and participate in the transition to healing environments by recruiting fibroblasts, epithelial cells, and endothelial cells. If injury persists, neutrophils and monocytes are recruited, and the production of reactive oxygen species exacerbates epithelial damage. The resulting imbalances between antioxidants and pro-oxidants may also promote apoptosis of epithelial cells and activation of pathways that impair function. Finally, monocytes and macrophages produce PDGF, CCL2, macrophage colony stimulating factor, and colony stimulating factor 1. These proteins may also have direct profibrotic effects [44,49].
The roles of lymphocytes in IPF are still unclear. However, some lymphocytic cytokines are considered profibrotic due to their direct effects on the activities of fibroblasts and myofibroblasts. Th-1, Th-2, and Th-17 T-cells have been clearly associated with the pathogenesis of IPF. The Th1 T-cell subset produces IL-1α, TNF-α, PDGF, and TGF-β1 and has net profibrotic effects. Th2 and Th17 responses appear more important in the pathogenesis of IPF. In particular, the typical Th2 interleukin IL-4 induces IL-5, IL-13, and TGF-β1 expression, leading to the recruitment of macrophages, mast cells, eosinophils, and mesenchymal cells and the direct activation of fibroblasts. Additionally, fibroblasts from patients with IPF are hyperresponsive to IL-13, which has a positive effect on fibroblast activity and enhances the production of ECM. The Th17 T-cell subset indirectly promotes fibrosis by increasing TGF-β1 levels. Th17 cells are also positively regulated by TGF-β1, suggesting the presence of a positive feedback loop [16]. Numbers of regulatory T-cells (Tregs) are reportedly lower in bronchoalveolar lavage fluid and peripheral blood samples from patients with IPF than in those of healthy subjects. Tregs play a crucial role in immune tolerance and the prevention of autoimmunity; deficiencies in the numbers and functions of these T-cells play an important role in the initial phases of the pathogenesis of IPF. Treg function in IPF is severely impaired owing to the reduced number of infiltrating Tregs in addition to Treg dysfunction. Interestingly, compromised Treg function in bronchoalveolar lavage is associated with parameters of disease severity in IPF, indicating a causal relationship between the development of IPF and impaired immune regulation mediated by Tregs [50]. Previous studies have demonstrated low IFN-γ levels in the lungs of patients with IPF. IFN-γ inhibits fibroblastic activity and abolishes Th2 responses. However, further studies are required to characterize the roles of inflammation in the pathobiology of IPF. Currently, the early stages of IPF are poorly understood, as are the mechanisms of disease progression [49,51]. Nonetheless, pirfenidone (5-methyl-1-phenyl-2-[1H]-pyridone) was designed to have anti-inflammatory and antifibrotic effects and was efficacious in the clinical setting [6].
Interactions Between ECM and Mesenchymal Cells, Fibrocytes, Fibroblasts, and Myofibroblasts
Contributions of mesenchymal cells, and particularly fibroblasts and myofibroblasts, are crucial for the pathogenesis of IPF. These cells are recruited, activated, and induced to differentiate and proliferate in the abnormal biochemical environments that are created by activated epithelial and endothelial cells. Although the initial trigger and source of mesenchymal cell recruitment remain unclear, the current published consensus defines fibroblasts and myofibroblasts as the key cell types for IPF. Circulating fibrocytes, pulmonary fibroblasts, and myofibroblasts have also been identified among mesenchymal cells that are involved in IPF [52]. The most recent studies of these processes are summarized in a well-integrated review [53].
Common Characteristics of IPF and Lung Cancer
Multiple studies compare IPF with cancer to provide insights into the pathogenesis of both diseases, for which survival rates are low. Arguments against the similarity of cancer and IPF include the presence of homogeneity, metastases, and laterality in cancers. However, cytogenetic heterogeneity has been shown in myofibroblasts, which do not metastasize to other organs. In addition, simultaneous involvement of both lungs is a definitive indication of IPF. However, this is primarily based on the generally accepted assumption that tumors are almost always monoclonal and grow in only one lung before metastasizing and invading other organs. From an anatomical viewpoint, patients with IPF mainly exhibit fibrosis in the lung periphery and in the lower lobes, which are the sites of lung tumors in a high percentage of cases [54]. Additionally, patients with lung transplants due to IPF have much higher rates of lung cancer, as stated above [13,14]. These observations warrant further studies regarding the molecular connections between these two lung diseases. Furthermore, epigenetic and genetic abnormalities, changed relationships between cells, uncontrolled proliferation, and abnormal activation of specific signal transduction pathways are pathogenic features of both diseases [55,56]. The principal fibrogenic molecules, signal transduction pathways, and immune cells that potentially participate in both diseases are shown in Table 2.
Epigenetic and Genetic Abnormalities
Hypomethylation of oncogenes and methylation of tumor suppressor genes are established pathogenic mechanisms for most tumors. Epigenetic responses to environmental exposures, including smoking and dietary factors, and aging have recently been identified in patients with IPF. Recent studies also demonstrated changes to global methylation patterns in patients with IPF that are reciprocal to those in patients with lung cancers [57]. Under the conditions of IPF, hypermethylation of the CD90/Thy-1 promoter region decreases the expression of the glycoprotein Thy-1, which is normally expressed by fibroblasts [58,59]. The loss of this molecule in patients with IPF also correlates with invasive behaviors of cancers and the transition from fibroblasts into myofibroblasts. Hence, pharmaceutical inhibition of the methylation of the Thy-1 gene may restore Thy-1 expression, suggesting a new therapeutic approach for this disease. Specific gene mutations have also been considered important to the origin and progression of cancer [60]. Similarly, abnormalities of the tumor suppressors p53 and fragile histidine triad, microsatellite instability, and loss of heterozygosity were observed in approximately half of the cases of IPF, frequently in the peripheral honeycombed lung regions that are specifically characteristic of IPF [60][61][62][63]. Additionally, mutations that are generally related to cancer occurrence and development, including those affecting telomere shortening and telomerase expression, have been observed in familial IPF [64][65][66]. Recently, circulating cell-free DNA has been considered as a diagnostic and prognostic biomarker of cancer [67]. In these studies, free circulating concentrations of DNA increased in patients with cancer and IPF compared with those in patients with other fibrotic lung diseases [68]. In addition to circulating DNA, abnormal expression levels of microRNAs (miRNAs) have been correlated with the pathogenesis of both diseases. These studies suggest that such short non-protein-coding RNAs regulate carcinogenesis-related genes that are involved in growth, invasion, and metastasis; these features are characteristic of cancer cells [69][70][71]. Recent papers show that 10% of miRNAs are aberrantly expressed in patients with IPF [72][73][74]. Among them, let-7, miR-29, miR-30, and miR-200 were down-regulated, whereas miR-21 and miR-155 were up-regulated. These changes corresponded with groups of genes that are associated with fibrosis, regulation of ECM, and induction of EMT and apoptosis. Some of these miRNAs may also affect and be affected by TGF-β expression, potentially speeding functional deterioration in patients with IPF.
Abnormal Cell-Cell Communication
Intercellular channels provide metabolic and electrical coupling of cells and are formed by proteins of the connexin (Cx) family. Cxs are necessary for the synchronization of cell proliferation and tissue repair [75]. Among them, Cx43 is the most abundant on fibroblast membranes and is involved in tissue repair and wound healing. At wound sites, the repression of Cx43 promotes repair of injured skin tissues with increased cell proliferation and migration of keratinocytes and fibroblasts. Accordingly, down-regulation of Cx43 is related to increased expression levels of TGF-β, increased production of collagen, and acceleration of the differentiation of myofibroblasts, which likely promotes healing. These changes contribute to the loss of control over the proliferation of fibroblasts that characterizes abnormal repair and fibrosis. This contention is supported by observations of lower expression of Cx43 in fibroblasts derived from keloids and hypertrophic scars than in those derived from normal skin tissues [76]. Although low expression levels of Cxs are often correlated with the progression of cancer and the loss of intercellular communication [77], human lung carcinoma cell lines with high expression of Cx43 showed reduced proliferation [78]. Reduced expression of Cx43 has been reported in primary lung fibroblasts from patients with IPF, and reduced intercellular communication was also identified in these cells [79]. Limited cell-cell communication is often reported in fibroblasts from patients with IPF and in cancer cells, reflecting common defects of contact inhibition and uncontrolled proliferation.
Abnormal Activation of Signaling Pathways
The Wnt/β-catenin signaling pathway regulates molecules that are related to tissue invasion, such as matrilysin, laminin, and cyclin-D1. However, arguably the most important function of the Wnt/β-catenin pathway is to mediate crosstalk with TGF-β. This pathway is abnormally activated in some tumors, as shown in lung cancer and mesothelioma [80]. Wnt/β-catenin pathway activation was also shown recently in fibroproliferative disorders of liver and kidney tissues [81]. The Wnt/β-catenin pathway is strongly activated in the lung tissues of patients with IPF [82], potentially reflecting the activities of TGF-β [83]. Specifically, TGF-β potentially activates extracellular signal-regulated protein kinases 1 and 2 (ERK1/2), and the target genes of this pathway activate other signaling pathways, including the phosphatidylinositol 3-kinase (PI3K)/Akt pathway, which regulates proliferation and apoptosis. The roles of PI3K in proliferation and differentiation into myofibroblasts have been demonstrated following stimulation with TGF-β [84]. In cancer cells, activation of the PI3K pathway contributes to the loss of regulatory control over cell proliferation. Therapeutic inhibitors have been developed using the PI3K pathway as a target, and their effects on tumor growth and survival are being assessed in many cancers [85]. Oral administration of a PI3K pathway inhibitor significantly prevented bleomycin-induced pulmonary fibrosis in rats [86]. Hence, clinical trials of such inhibitors are eagerly awaited for patients with IPF.
Tyrosine kinases are key mediators of multiple signaling pathways in healthy cells with demonstrated roles in cell growth, differentiation, adhesion, and motility and in the regulation of cell death. Tyrosine kinase activity is controlled by specific transmembrane receptors that mediate the activity of various ligands. Conversely, abnormal activities of these kinases have been associated with the development, progression, and spread of several types of cancer [87]. Recently, the activities of tyrosine kinase receptors have also been investigated in the wound healing process and in fibrogenesis.
TGF-β, PDGF, VEGF, and FGF are common mediators of carcinogenesis and fibrogenesis. Among them, VEGF may directly or indirectly promote cell survival and proliferation by activating ERK1/2 and PI3K. Accordingly, elevated expression levels of VEGF mRNA were shown in EPCs from patients with IPF. Furthermore, antifibrotic strategies using multiple inhibitors of tyrosine kinase receptors have been evaluated in a rat model of bleomycin-induced fibrosis; PDGF, VEGF, and FGF inhibitors produced significant improvement in fibrosis [48,[88][89][90]. In support of these in vitro and in vivo observations, the multiple tyrosine kinase inhibitor nintedanib showed highly favorable results for the treatment of IPF [7].
Abnormal Migration and Invasion Activities
TGF-β is the most important mediator both of the pathogenesis of IPF and of carcinogenesis. In tumor microenvironments, TGF-β, predominantly from cancer-derived epithelial cells, induces myofibroblast recruitment at the invasive front of the cancer tissue and protects myofibroblasts from apoptosis. These cells encircle tumor tissues and produce TGF-β. Together with inflammatory mediators and metalloproteinases, myofibroblasts break the basement membranes of surrounding tissues to facilitate tumor invasion [91,92]. Likewise, in IPF, myofibroblasts maintain proliferation through autocrine production of TGF-β, leading to their uncontrolled proliferation [93]. Moreover, the antifibrotic mediator prostaglandin E2 is down-regulated in myofibroblasts from IPF tissues [94]. TGF-β1 promotes the nuclear localization of myocardin-related transcription factor-A (MRTF-A), which regulates the differentiation and survival of fibroblasts, resulting in enhanced lung fibrosis [95][96][97][98]. MRTF-A has also been targeted as a mediator of tumor progression and metastasis [99][100][101].
In cancer cells, the capacity to invade surrounding tissue strongly correlates with the expression of various molecules, including laminin, heat shock protein 27, and fascin [102][103][104]. In IPF, epithelial cells around fibroblast foci also express these molecules [105]. However, these molecules are exclusively expressed by bronchiolar basal cells, which are located as a layer between luminal epithelial cell and myofibroblast layers. Hence, these molecules are likely contributors to the migration of cells and the invasion of bronchiolar basal cells into myofibroblasts and luminal epithelium and are expressed at the invasive front of tumors.
Matrix metalloproteases and integrins are strongly associated with invasion and migration of cells [106]. Integrins activate cancer cells through the KRAS/RelB/NF-κB pathway and lead to the development of stem cell-like properties, such as independent growth and drug resistance. These properties provide cell-cell communications between inflammatory cells, fibroblasts, and parenchymal cells through ECM. Under conditions of IPF, integrin promotes initiation, maintenance, and resolution of tissue fibrosis. Accordingly, integrin expression was reportedly high in myofibroblasts and AECs after lung injury. Integrin is also considered a strong regulator of TGF-β during the progression of lung fibrosis. A clinical study of the humanized antibody STX-100 has been conducted for IPF [107]. Other inhibitors, such as specific antibodies against αvβ6, have also been investigated in clinical trials, and these antibodies were tested in preclinical models of fibrosis and in the murine model of bleomycin-induced pulmonary fibrosis.
Inflammatory Environment
The inflammatory reaction has been described in some reports as a promoting factor in the development and progression of tumorigenesis [108]. As described above, some kinds of macrophages, such as fibrosis-associated macrophages, produce cytokines that contribute to inflammatory responses. These macrophages behave as M2 phenotype macrophages expressing arginase and CD206 [109]. M2 macrophages have been broadly identified as drivers of tumor progression [110][111][112]. Myeloid-derived suppressor cells are associated with poor prognosis in malignancies, and their accumulation in IPF is also correlated with disease progression [113]. On the other hand, infiltrating T lymphocytes play a crucial role in tumor progression and suppression, although their roles in IPF are still unclear [114]. Infiltrating Tregs are significantly correlated with tumor progression, whereas deficiencies in the numbers and functions of Tregs are observed in the initial step of IPF (Table 2) [50,115]. Further studies regarding the role of Tregs in IPF-related cancer are awaited.
Conclusions
In conclusion, cancer and fibrosis are both severe lung diseases, and they share biological pathways. Although the specific genetic and cellular mechanisms are not yet fully understood, several signaling pathways and microenvironments have been shown to disrupt tissue architecture and lead to dysfunction. At the same time, it is clear that lung tumorigenesis and fibrosis display highly heterogeneous behaviors, warranting personalized therapeutic approaches. Lung fibrosis may eventually be attenuated by therapies that are developed after considering the mechanisms that are common to cancer and IPF.
|
v3-fos-license
|
2021-03-19T13:12:06.692Z
|
2021-02-02T00:00:00.000
|
232271445
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/text-2019-0145/pdf",
"pdf_hash": "05c350e58de54fac04db11176c3c2d54b3870c5e",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45642",
"s2fieldsofstudy": [
"Linguistics"
],
"sha1": "6a7c4a6ef7e71b62e7d27a2321e876a9a38e7997",
"year": 2021
}
|
pes2o/s2orc
|
I’m thinking and you’re saying: Speaker stance and the progressive of mental verbs in courtroom interaction
This study investigates the use of progressives with mental verbs in courtroom talk and shows a range of subjective meanings which are not delivered by the simple form. Looking at data from a British libel trial, it explores patterned co-occurrences with first-person subjects vs. second- and third-person subjects, revealing emphatic, polite and interpretative uses of the analyzed items. In addition, context-sensitivity and speaker status (judge vs. other participants) are shown to be significant factors affecting both the choice of verbs and their interactional configurations. The findings reveal not only well-established uses of "progressive statives" (wonder and think) but also less conventional ones which convey intensity and expressivity (e.g., understand, remember and want). It is also revealed that the use of progressives with mental verbs differs from the deployment of progressives with communication verbs. In both groups of verbs, however, the interpretative meaning is common. In sum, the study situates progressives with mental verbs among stancetaking resources which speakers employ to share their thoughts, wishes and desires, and to position themselves against other interactants and their propositions.
Introduction 1
The past few decades have seen a great deal of interest in contextual analyses of the progressive construction, with corpus-based studies revealing its increased frequency and a growing range of subjective uses. Against this background, the aims of this study are threefold: 1) to investigate the use of progressives with mental verbs in courtroom interaction; 2) to explore their patterned co-occurrences with first-person vs. second- and third-person subjects; and 3) to compare patterns with such progressives and their pragmatic functions with those of progressives of communication verbs.
The remainder of the article is organized as follows. Section 2 introduces the notions of subjectivity, subjectification and stance as well as offers an overview of the literature on the subjective uses of the progressive. Section 3 describes the data and explains the theoretical background of the study. Section 4 looks at examples of the progressive of mental verbs in the data and compares them with earlier findings on the progressive of communication verbs. Section 5 closes with a discussion and conclusions.
2 Literature review
Subjectivity and speaker stance
Language is hardly ever neutral since it usually expresses somebody's point of view and (more or less explicit) evaluation. Linguistic subjectivity is thus "an expression – an incarnation, even – of perceiving, feeling, speaking subjects", or, to put it differently, it is "the intersection of language structure and language use in the expression of self" (Finegan 1995: 1-2). To identify and interpret subjective meanings in discourse, analysts apply various analytical concepts and methodological tools. Currently, there are two major strands of subjectivity research: synchronic (cognitive), represented, e.g., by Langacker (1987), and diachronic, focusing on the subjectification of language structures (see, e.g., Traugott and Dasher 2002). As for the difference between subjectivity and subjectification, the first term refers to "speakers' expression of self and the representation of perspective or point of view in discourse" whereas the latter denotes "the processes of linguistic evolution that lead to such strategies" (Finegan 1995: 1).
This study adopts the notion of stance(taking) in line with discourse-functional approaches which see it as a collaborative activity performed by co-present participants (Englebretson 2007). 2 It is also believed here that "[s]tance has the power to assign value to objects of interest, to position social actors with respect to those objects, to calibrate alignment between stance takers and to invoke systems of sociocultural value" (du Bois 2007: 139). In other words, stance is located in the linguistic resources with which speakers express their attitudes, beliefs and assessments related to discourse objects and subjects. In the remainder of the article, I argue that the use of progressives is one of such resources.
Subjective use of the progressive
As already noted, context-specific behavior of the progressive has been addressed in corpus-based studies, in contrast to grammar books which foreground the imperfective-aspectual dimension of this construction and its reference to "temporary situations, activities or goings-on" (Leech 2004: 19). Relevant to the focus of the current investigation, however, are not invented examples illustrating traditional uses of the progressive, but rather empirical works documenting its novel, interactional meanings. One such investigation of spoken British English suggests that progressives have two central features – 'continuousness' and 'repeatedness' – which may be variously combined to describe actions and events (Römer 2005: 86-90). 3 It also identifies seven additional functions: general validity, politeness or softening, emphasis or attitude, shock or disbelief, gradual change and development, old and new habits and, finally, framing (Römer 2005: 95). In her discussion of emphatic and attitudinal progressives, in particular, Römer (2005: 99) notes a high percentage of first-person subjects and the co-occurrence with the time adverbials always, now and all the time. She also notes that always collocates with second- and third-person subjects, and she links such instances to the speaker's annoyance or irritation. Among the verbs found in "emphasis/attitude" contexts, Römer (2005: 100) lists four mental verbs, i.e., hoping, meaning, seeing and wanting. Interestingly, the very same forms are also associated with "politeness or softening," just like thinking and wondering are (Römer 2005: 98).
Recent change in the verb phrase, including emergent patterns with progressives, has also been reported in several studies in Aarts et al. (2013). One of them concerns progressive verbs in American English and it discusses a recent rise in the BE being + ADJ. pattern (e.g., I'm being facetious) as well as the spread of the progressive to private verbs which typically resist this construction (Levin 2013). 4 In addition, the increase in progressives is attributed to four factors: subjectification, democratization, colloquialization and generalization (Levin 2013: 213). Levin (2013: 213) also argues that progressives with private verbs convey subjective meaning components such as intensification, tentativeness and politeness, which he sees as "a prime example of subjectification" (Levin 2013: 213). Commenting specifically on the current usage of the progressive with think, he distinguishes its four main meanings: 'cogitate,' 'intend,' 'quotative,' and 'interpretative' (Levin 2013: 209). Likewise, the progressive I'm thinking features in another contribution in the same volume. In his diachronic study, Kaltenböck (2013) focuses on "main clause-like" comment clauses (e.g., I think, I suppose, I guess) and looks at their possible extension to variant forms including progressives. Based on mostly written corpus evidence from American English, he notes a rise in progressives such as I'm thinking and I'm guessing, hypothesizing that they may be taking over the epistemic meaning of I think whose modal use is fading (Kaltenböck 2013: 311). Similar observations are shared by Freund (2016), who explores stative-progressive change in British English and who finds that progressives with think and love are salient in conversational data. Explaining the meaning of aspect, Freund (2016: 51) highlights the most significant feature of the progressive: the fact that "it is under the speaker's control." She goes on to say that speakers select "auxiliary and main verb inflections, in order to express a personal view of an event as complete, ongoing, beginning, continuing, ending or repeating" (Freund 2016: 51). For this reason, following Smith (1983: 479), she refers to the progressive as the "viewpoint aspect." At the same time, she admits that non-aspectual uses of this construction may seem problematic since they do not reflect the progressive's core meanings of "temporariness or incompletion" (Freund 2016: 52). To identify new patterns of use in informal interaction, Freund considers four semantic categories of stative verbs: relational, cognitive, affective and perceptional. Her findings reveal, however, that there is no correlation between the semantic categories of verbs and the increased frequency of certain progressives. 5 A cognitive-linguistic perspective on "progressive statives" is, in turn, offered in Prażmo (2018), who in her account of the BE + -ing construction points to its core meaning of "immediacy in temporal reality" and a number of peripheral "meaning potentials" (Norén and Linell 2007). Discussing the "modal potential" of the present progressive, Prażmo (2018: 46) validly observes that some of its meanings remain underspecified and "can only be substantiated in a certain linguistic context and pragmatic environment." 6 To illustrate the "modal colouring" of progressives with stative verbs, Prażmo focuses on combinations with want, positing that they encode the speaker's doubt and uncertainty towards the proposition as well as introduce a sense of tentativeness.
Thus, as she suggests, the form I am wanting results from the blending of the speaker's 'desire or wish for something' and the 'temporariness and relevance to the moment of speaking', resulting in his/her 'planning to, going to or even acting on that desire' (Prażmo 2018: 57). Taking a broader perspective, Prażmo enumerates unconventional meanings, or "extra effects" of "progressive statives" in general, suggesting that they can: 1) intensify the emotion expressed by the verb; 2) indicate current behavior as opposed to general description; 3) introduce change in states by focusing on differences in degree across time; 4) show limited duration; 5) emphasize conscious involvement; 6) show vividness; 7) express politeness; 8) mitigate criticism; and 9) avoid imposition (adapted from Prażmo 2018: 55).
Also, the cognitive view agrees with the idea that the core meaning of the interpretative progressive is contingency, i.e., clarification of a singular event which is 'not entirely obvious' and which is not applicable to other situations (cf. De Wit and Brisard 2014; Martinez Vazquez 2018). This is also in line with Nuyts's (2001: 363) observation that "bringing up one's commitment, of any type, to a state of affairs in a discourse implies that the status of the state of affairs in this regard is not obvious, e.g., because the hearer turns out to hold a different view, or because there is otherwise new information relevant for one's view." To sum up, however varied in their approaches and research foci, corpus-based analyses like those mentioned above provide ample evidence that the non-aspectual use of the progressive is spreading and that this construction conveys subjective meanings which are absent in its non-progressive counterpart. Therefore, in my view, the BE + -ing construction deserves a more in-depth analysis in a larger number of contexts, including legal genres which so far have received scant attention in this research area. To fill this gap, the study reported here explores the subjective use of the progressive in courtroom interaction and argues for its classification as a marker of stance.
Data and method
The data used in the analysis come from a British libel trial related to the portrayal of the Holocaust by revisionist historian David Irving (for a more detailed description of the trial, see Szczyrbak 2018). The analysis places itself within Corpus-Assisted Discourse Studies and adopts the distributional approach to phraseology which equates phraseological units with word combinations identified on the basis of their frequencies (Granger and Paquot 2008: 29). It also draws on Interactional Linguistics and owes its approach to spoken production to Conversation Analysis as well. It makes use of both quantitative and qualitative approaches to material drawn from transcribed courtroom data covering the whole of the 32-day trial (totalling around 1.5 million words). Given the limited scope of the study, 12 mental verbs, believed to be the most common ones, were selected for analysis. Thus, the list included the verbs: think, believe, wonder, guess, assume, suppose, hope, doubt, find, know, remember and want. The results of the semi-automated analysis aided by the concord function of WordSmith Tools (Scott 2012) are reported below.
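For readers without access to WordSmith Tools, the retrieval step can be approximated with a short script. The following is a rough sketch only: the transcript file name, the context window, and the crude -ing formation rule are assumptions, not details of the original procedure, and contracted auxiliaries (I'm, he's) are not handled.

```python
# Rough concordancer for progressives of mental verbs, approximating the
# semi-automated retrieval described above. The transcript file name is
# hypothetical, and contracted auxiliaries (I'm, he's) are not handled here.
import re

MENTAL_VERBS = ["think", "believe", "wonder", "guess", "assume", "suppose",
                "hope", "doubt", "find", "know", "remember", "want"]

def ing_form(verb):
    # crude -ing formation: drop a final 'e' (believe -> believing)
    return (verb[:-1] if verb.endswith("e") else verb) + "ing"

forms = "|".join(ing_form(v) for v in MENTAL_VERBS)
# subject word + BE auxiliary + up to two intervening words (just, not, really)
pattern = re.compile(
    rf"\b(\w+)\s+(am|is|are|was|were)\s+(?:\w+\s+){{0,2}}({forms})\b",
    re.IGNORECASE)

with open("trial_transcript.txt", encoding="utf-8") as f:
    text = f.read()

for m in pattern.finditer(text):
    left = text[max(0, m.start() - 40):m.start()]
    right = text[m.end():m.end() + 40]
    print(f"...{left}[{m.group(0)}]{right}...")
```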
Results
In total, 188 concordance lines with progressives of mental verbs were retrieved, out of which more than two-thirds turned out to be patterns with first-person subjects (132 tokens). Among the latter, 93 occurrences were present progressives whereas 39 represented past progressives. These figures clearly indicate speaker-orientedness and focus on the here-and-now context of communication. Progressives with second- and third-person subjects, on the other hand, returned 56 hits, out of which 41 were present progressives and 15 represented the past form. Strikingly, in this group, no single pattern was repeated more than five times. This shows that explicit references to second- and third-person subjects' wishes, hopes and attitudes were not relevant to the ongoing interaction and so they did not form any salient patterns.
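A minimal sketch of how such a person/tense breakdown can be tallied is given below; the example hits are invented, and the subject and auxiliary sets are deliberately simplified.

```python
# Toy tally of retrieved progressives by grammatical person and tense.
# The hits below are invented; real input would come from the concordancer.
hits = [("I", "am", "wondering"), ("I", "was", "thinking"),
        ("you", "are", "hoping"), ("he", "was", "wanting")]

FIRST_PERSON = {"i", "we"}
PAST_AUX = {"was", "were"}

counts = {}
for subj, aux, verb in hits:
    person = "1st" if subj.lower() in FIRST_PERSON else "2nd/3rd"
    tense = "past" if aux.lower() in PAST_AUX else "present"
    counts[(person, tense)] = counts.get((person, tense), 0) + 1

for (person, tense), n in sorted(counts.items()):
    print(f"{person} {tense}: {n}")
```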
In what follows, I examine selected combinations with first-, second-and third-person subjects in detail.
Progressives of mental verbs with first-person subjects
Progressives with first-person subjects (Table 1) proved to be important for the overall findings as they represented more than two-thirds of the analyzed items. This agrees with earlier studies, pointing to the speaker-centeredness and the here-and-now orientation of the progressive form. In the data at hand, interestingly, speaker status emerged as a significant factor. More than half of all the subjective uses of the BE + -ing construction were identified in the judge's turns (68 tokens vs. 64 tokens found in the turns of the other speakers). It might be speculated that as an authority figure, the judge was in a position to share his own thoughts, to convey emphasis and intensity, and even to signal his lack of understanding. The remaining participants (claimant, counsel, witnesses) focused less on their emotions and chose to mark tentativeness, politeness and a lower degree of imposition instead. Likewise, the progressive form appeared most useful at contentious moments, when the speakers' views were being (re-)interpreted and (re-)assessed. On the other hand, the presence of modifiers (e.g., just, quite, really, actually) in some of the patterns provides further evidence that the speakers relied on progressives to perspectivize their utterances.
Wonder
With 41 occurrences, wondering emerged as the preferred progressive with the firstperson subject. Interestingly, while the present progressive often co-occurred with just (17 tokens), there was only one such co-selection with the past progressive. Predictably, I am (just) wondering referred to the speaker's ongoing mental process (1) whereas I was wondering signaled tentativeness and acted as a conventional politeness marker lowering the degree of imposition (2). In sum, it was found, progressives with wonder represented well-established uses and this may explain why they were the most frequently selected forms among the analyzed items.
Think
Another frequent progressive with the first-person subject was, as might be expected, the progressive with think. According to the existing research, the functions of I'm/I am thinking vary greatly depending on the discourse type (Levin 2013; Martinez Vazquez 2018). In the courtroom data analyzed here, three meanings were distinguished: 1) 'cogitate'; 2) 'hold an opinion'; and 3) 'interpretative.' The first of the three uses expresses the "activation or arousal of thought processes", which is "equivalent to 'considering' or 'ruminating'" (Leech 2004: 29). As such, it points to the subject being in control of a mental process at a particular point in time. This use was most prominent in the data, and it was identified both in present and past progressives. In example (3), for instance, the judge refers to his thought processes while seeking to win the hearer's positive regard for his way of thinking with you know. The second type of progressive, in turn, similar to deliberative I think that, acts as an opinion marker. 7 In the data, there was only one such example (4), in which the speaker's preferred argument was preceded by yes marking agreement with the judge's stance. The third, "interpretative," use may well be paraphrased by 'I mean' and it resembles, e.g., the interpretative progressive with think found in political speeches (Martinez Vazquez 2018). 8 Worthy of attention in this context is preposition dropping, noted both in present and past progressives (5). 9 Also, several past progressives with think followed the I was thinking simply/more/actually of pattern associated with clarification, as shown in (6).

(3) [Judge] That is probably best. Anyway, I have given the hint yet again. Mr Rampton is going shortly to ask me to make a ruling about it and, if I have to make a ruling, you know the way I am thinking at the moment, so let us get on.
(4) [Judge] No. All I think is that sometime that is relevant.
[Counsel] It is obviously important.
[Judge] Both to the manipulation and also to Auschwitz.
[Counsel] Yes. I am thinking that the subject of Hitler's Adjutants is a long one with, I am afraid, probably quite a lot of documents to look at because of the records of what they said. That may take more than one day, which I do not have, so I was going to leave that until after Auschwitz.
[Judge] Well, I was thinking more of the camp official eyewitnesses, but take them, and I think there are probably about 10 or maybe a dozen of them, something like that.
Hope
Apart from the patterns discussed above, the data provided evidence of progressives with hope, a common desire operator representing volitive modality. Like other "verbs of attitude" (e.g., love, want), alternatively referred to as "anti-progressive" (Leech 2004), hope can turn from a static verb into a dynamic one once it is combined with the progressive form. Because of this, it can convey dynamism and a higher degree of intensity or expressivity, which is not delivered when the simple form is used (cf. De Wit and Brisard 2009: 15). As could be observed in the data, the person who tended to stress their wishes and desires most was the claimant (16 tokens), although some instances were identified in the turns of the counsel and the judge as well. Apparently, the witnesses felt no need to highlight their personal wishes and desires, and so progressives with verbs of attitude were not found in their turns. Furthermore, the witnesses' responses were more restricted as they were related to the "secondary reality" of the courtroom (the disputed actions and events) and not to the "primary reality" (the proceedings themselves) (cf. Gibbons on police interviews [2005: 142-150]). As regards its syntactic realizations, I was hoping was followed by the zero or that-complementizer, or by the to infinitive, and there was no evidence of its parenthetical use. As examples (7) and (8) demonstrate, progressives with hope, which were sometimes followed by negation, indicated the speaker's most immediate wish anchored in "temporariness" rather than a long-term desire, and they conveyed a greater level of intensity than would have been expressed by the simple form I hope. 10

9 Freund (2016: 58) also identified preposition dropping after I'm thinking in informal conversational data. However, in her dataset the pattern was realized as a list of three or more items preceded by I'm thinking (e.g., I'm thinking pub, no speeches, no first dance, around forty people and some decent grub.). In such instances, the speaker visualized a future event while expressing some uncertainty. In my dataset, no such uses were attested.

(7) [Claimant] It helps on numbers, my Lord, because we have numbers of items that had been collected from the victims by April 30th 1943.
[Judge] It does not say "from when".
[Claimant] I am hoping that the witness will assist us on this.
Understand
A brief note on progressives with understand also seems relevant to the discussion, although in the data there were only eight co-occurrences with first-person subjects and just one co-occurrence with a second-person subject. As a cognitive verb reflecting an "intellectual state," understand, similarly to believe, know and realize, still resists the progressive (unlike think found in the same category). When combined with the progressive form, however, it conveys emphasis and adds expressivity (cf. De Wit and Brisard's [2009: 15] "connotation of intensification"). As can be seen in example (9), the judge uses the progressive form of understand to stress his lack of understanding of the evidence at hand while mitigating the force of this statement with I am afraid and not really. In example (10), in turn, the counsel stresses his confusion as to which entry in the Goebbels' diaries is the subject of the ongoing discussion.

(9) [Judge] (…) I am afraid I am not really understanding the footnote cross-references. Am I going to be provided with them or not? That was a question.

(10) [Counsel] Sorry, I am not understanding, but I thought we had, unless I have gone completely mad, a discussion this morning about the entry for 13th December 1941?
Remember
Finally, example (11) contains the interpretative use of the progressive with remember embedded in a reporting utterance. Here, the claimant offers his own reading of the written evidence presented in court, producing his interpretation (rather than a verbatim report) of another witness's account, with the modifiers actually and just introducing a contrast between what, in the speaker's view, is real/factual and what is doubtful. In this instance, the progressive with remember may well be replaced by the progressive with recall, both of which indicate "animate agency" (Leech 2004: 28). 11 This is the only example of the interpretative progressive in the data in which the speaker adopts the I-perspective to represent another party's standpoint. 12

(11) [Claimant] Are you familiar with the passage where Eichmann, challenged about a particular episode, interrupted the interrogator 2 min later and said words to this effect: "I am sorry. You asked me 2 min ago about that episode, and I have to say now I cannot remember whether I am actually remembering it or just remembering being asked a question about it more recently"?

11 According to Prażmo (2016: 175), in some contexts, the progressive with remember may also convey the meaning of 'paying tribute to someone,' as her online data indicate (e.g., Today, I'm remembering our nation's fallen heroes […]). In the courtroom context, such meanings were not identified.

12 Other examples include third-person subjects referring to non-present witnesses' accounts, as in here he is remembering it in June 1947 or does this strike you as being something that he is really remembering? These examples may also be classified as "narrative" (Martinez Vazquez 2018) since they show the relevance of past events to the ongoing discourse and make them more vivid (cf. the "historic present" in Quirk et al. [1985: 181]).
Progressives of mental verbs with second- and third-person subjects
Although less frequent than the progressives discussed in Section 4.1, progressives of mental verbs with second-and third-person subjects in their own way contributed to the co-construction of meaning and, thus, the discursive making of evidence (Tables 2 and 3). Half of these forms (28 tokens) included third-person subjects who did not participate in the ongoing interaction and who, therefore, belonged to the "secondary reality" of the courtroom, revived through the accounts provided by the co-present speakers (cf. Section 4.1). This effect was achieved due to the interpretative progressive describing third parties' purported convictions and beliefs. On the other hand, patterns with second-person subjects tended to co-occur with the volitive modality markers want and hope (15 tokens). Modifiers such as really or just were attested as well, but they were used rather sporadically.
Think
In the group of progressives with second- and third-person subjects, patterns with think were quite visible (13 tokens). Their role in discourse differed, however, from the functions associated with first-person subjects. Some of these progressives, it was found, were interpretative (12) while others exemplified the conventional third-person reference typical of institutional contexts (13). Contrary to what was observed in patterns with first-person subjects, invocation of second-person subjects' thoughts and beliefs did not appear relevant and it was not very common.

(12) [Claimant] Yes, usually there is a line above the table talks saying who is present as the guests of honour. Usually three or four people are listed. Verna Kopen did the same in his records of the table talks.
[Judge] I am a bit puzzled about this, because if you interpret the table talk as meaning that Hitler really was thinking only in terms of deportation, I know it has been a long day, but how do you reconcile that with your acceptance, because I understand you do accept it -- [Claimant] It sets my teeth on edge, a lot of it.
(13) [Judge] It is not going to bulk very large in my thinking.
[Claimant] Your Lordship knows how your Lordship is thinking but, with respect, I do not. You have a poker face and a complete mask-like demeanour which keeps me totally in the dark. People ask me when I go home how have you done and I say I do not know.
Hope
As already mentioned, the only regularity which seemed to emerge from an examination of patterns with second-person subjects was that they attracted volitive modality markers. The first of them, i.e., hope, was combined with the progressive to refer to the co-present speakers' hopes and desires, which gave them more prominence. This, however, was not coupled with their positive assessment. It may in fact be argued that the emphasis was added to confront and challenge the views presented by the opposing party (see 14 and 15).
(14) [Claimant] Do liars not deserve to be exposed as such? If you saw the audience as you saw them in that film, did you see any skinheads or extremists or people wearing arm bands? I did not. They looked like a perfectly ordinary bunch of middle-class Canadians.
[Counsel] No doubt they too, Mr Irving, will spread the word, if I may use that terminology?
[Claimant] Is that evidence or are you asking me a question?
[Counsel] I am asking you a question. That is what you are hoping, is it not?
[Claimant] Spread the word that there are elements of the Holocaust story that need to be treated with scepticism, yes.
(15) [Claimant] (…) Characteristically of the weakness of their case, Professor Funke listed one entry in a diary where I noted "road journey with a Thomas" whose second name I never learned; Funke entered the name "Dienel?" So far as I know, I have never met a Dienel, but it illustrates the kind of evidence that the Defence were hoping to rely upon. (…)
Want
Progressives with the second volitive marker found in the data, i.e., want, behaved quite differently. In this case, all occurrences with the second-person subject you (just like the progressive I am (just) wanting) were identified only in the judge's turns. It was also possible to see that they performed the interpretative function, additionally signaled by the preceding I think (see 16 and 17). One might even go further and suggest, in agreement with Prażmo (2018), that you are/were wanting integrated the meaning of the subject's 'desire or wish for something' and 'temporariness and relevance to the moment of speaking.' This made these progressives similar to the be going to structure, expressing not only the subject's desire but also his/her 'intention to act on that desire' (Prażmo 2018: 57). The examples found in the courtroom data corroborate this interpretation (cf. I am just wondering whether he is not wanting to go off somewhere else).
Believe
The last excerpt illustrates the progressive with believe. Of all stative verbs, the "intellectual state" verbs such as believe, suppose and know resist the progressive the most. In the data, only one combination of believe with the progressive form was identified and it is presented below (18). 14 What can be seen here is, in fact, the juxtaposition of the progressive with the simple form (the people who are believing that the gas chambers were not used for homicidal purposes vs. the people who believed that they were used for homicidal purposes). It may well be the case that differentiating between the two forms, the witness implicitly evaluates the convictions held by the two groups of people. He seems to be assessing the beliefs held by the second group as more stable (and possibly rational) while indicating the temporary (and possibly reversible) nature of the beliefs represented by the first group. Though marginal, such uses lend support to the claim that the progressive construction is a marker of contingency.
(18) [Witness] It is difficult to say. It seems to be that the book buying habits of the people who are believing that the gas chambers were not used for homicidal purposes (…)

The first difference between the patterns reported in the current study and those described in Szczyrbak (2018) concerned the respective frequencies: communication verbs recurrently attracted the progressive form whereas the co-selection of mental verbs and the progressive was much rarer. As regards the discourse functions of the verbs, progressives with mental verbs foregrounded the speaker's perspective and they conveyed emphasis or, conversely, tentativeness. They specifically showed the relevance of the speaker's mental operations (thoughts, wishes and desires), and they were linked to epistemic and volitive modalities. 15 Communication verbs, in turn, focused more on the speaker's or the addressee's linguistic performance, and they were frequently employed to restate (or reframe) earlier claims, or to challenge alternative accounts. Both groups of progressives were interpretative; however, the difference lay in the patterns in which the verbs were found. Mental verbs were used chiefly in I-oriented declaratives (I am thinking) whereas communication verbs were frequent in you-oriented interrogatives (are you saying). Progressives with mental verbs were preferred by the presiding judge while progressives with communication verbs were favored by the opposing parties (the counsel and the claimant). The contrast can be summarized as follows:

Progressives of mental verbs: co-occurrence with I → focus on the speaker's mental disposition/process of thinking/epistemic position + temporariness/here-and-now situation; co-occurrence with you → interpretative use/volitive modality (you are hoping, you are wanting); co-occurrence with third-person subjects → interpretative use (mindsay); co-occurrence with modifiers; most of the progressives used by the judge; parenthetical use not attested; switches between the simple and the progressive form → describing vs. interpreting (he assumes vs. he is not assuming); some of the verbs still resist the progressive form: know, realize, believe, suppose, guess (only the meaning of 'conjecture' found).

Progressives of communication verbs: used for self- and other-reporting; discourse organization ("signposting"); epistemic/evidential use + evaluative overtones; co-occurrence with I → focus on the speaker's verbal performance/epistemic position + here-and-now situation; co-occurrence with you → querying the response/negative evaluation of competitive narratives (are you saying/suggesting); co-occurrence with third-person subjects → interpretative use (hearsay); co-occurrence with modifiers; most of the progressives used by the opposing parties (the counsel and the claimant); parenthetical use attested; switches between the simple and the progressive form → quoting/describing vs. interpreting (he talks vs. he is saying).

15 Volitive modality is classified as a subcategory of deontic modality.
Another thing to note is that both groups of verbs co-occurred with modifiers; however, tentative qualifiers (still, quite, just) were co-selected with mental verbs rather than communication verbs. The syntactic realizations of the two groups of verbs differed as well. Namely, progressives with communication verbs frequently functioned as comment clauses involving the qualification of the source of information (self vs. other). This was not the case with mental verbs, whose progressives were used predominantly for emphasis and intensification and were rarely associated with parenthetical use. Finally, in both groups of verbs, switches between the simple form and the progressive form were evidenced (e.g., he assumes vs. he is not assuming; he talks vs. he is saying), bringing out the difference between a mere description (or a verbatim report) and the speaker's own evaluation.
Discussion and conclusions
The foregoing analysis shows that, despite their relative infrequency, progressives with mental verbs provide insights into how participants in courtroom proceedings position themselves vis-à-vis other speakers; it also makes clear how their use differs from the use of communication verbs. At the same time, the study demonstrates that progressives are vehicles for subjective meanings which are not delivered by the simple form of verbs.
Overall, the analysis has revealed that the progressive of mental (or private) verbs was used predominantly by first-person subjects focusing on their thoughts, wishes and desires which were thus emphasized and given more prominence (e.g., I am hoping, I am wanting, I am understanding). The verbs wonder and think, it was found, were the most common choices: I was wondering performed the politeness function whereas I am thinking frequently indicated the act of cogitation. The "contingency" of progressives has also been evidenced, with some of the contextual readings going beyond the well-established meanings of individual verbs (e.g., whether he is not wanting to go off, people who are believing).
Furthermore, context-sensitivity and the role of speaker status (judge vs. other participants) emerged as significant factors affecting both the choice of verbs and their interactional configurations, which differed, on the one hand, from the patterns involving the progressive of communication verbs and, on the other, from the patterns with the progressive of mental verbs found in other settings (e.g., media or online discourse). Noteworthy was, for instance, the absence of parenthetical progressives (e.g., …, I'm thinking, …), "affective" progressives (e.g., I'm loving it) or the "intend" progressives referring to future plans (e.g., I'm thinking of going), which seems understandable given the institutional constraints of courtroom interaction. At the same time, other unconventional uses and meanings were attested, showing that trial discourse, despite its high degree of formality, mirrors to some extent global trends visible in informal conversational contexts. 16 All things considered, it should be reiterated that the English progressive, in its new incarnations and contexts of use, marks speaker stance and is increasingly subjective. This may be attributed to language change and the resultant extension of meanings of the BE + -ing construction, as well as to a global shift towards more colloquial and less authoritarian communication. These trends, as the data at hand demonstrate, are making their way also into spoken legal communication, which can perhaps no longer validly be described as conservative and resistant to change.
Prolonged Hyperoxygenation Treatment Improves Vein Graft Patency and Decreases Macrophage Content in Atherosclerotic Lesions in ApoE3*Leiden Mice
Unstable atherosclerotic plaques frequently show plaque angiogenesis, which increases the chance of rupture and thrombus formation leading to infarctions. Hypoxia plays a role in angiogenesis and inflammation, two processes involved in the pathogenesis of atherosclerosis. We aimed to study the effect of resolution of hypoxia using carbogen gas (95% O2, 5% CO2) on the remodeling of vein graft accelerated atherosclerotic lesions in ApoE3*Leiden mice, which harbor plaque angiogenesis. A single treatment resulted in a drastic decrease of intraplaque hypoxia, without affecting plaque composition. Daily treatment for three weeks resulted in a 34.5% increase in vein graft patency and increased lumen size. However, after three weeks intraplaque hypoxia was comparable to the controls, as were the number of neovessels and the degree of intraplaque hemorrhage. To our surprise, we found that three weeks of treatment triggered ROS accumulation and subsequent Hif1a induction, paralleled by a reduction in the macrophage content, pointing to an increase in lesion stability. Similar to what we observed in vivo, in vitro induction of ROS in bone marrow derived macrophages led to increased Hif1a expression and extensive DNA damage and apoptosis. Our study demonstrates that carbogen treatment improved vein graft patency and plaque stability and reduced intraplaque macrophage accumulation via ROS-mediated DNA damage and apoptosis, but failed to have long term effects on hypoxia and intraplaque angiogenesis.
Introduction
The (in)stability of atherosclerotic plaques determines the incidence of major cardiovascular events such as myocardial infarction and stroke [1]. Lack of oxygen within the plaque, or intraplaque hypoxia, has been identified as one of the major contributors to plaque instability [2,3]. It has been detected in advanced human atherosclerotic lesions [4] as well as in murine atherosclerotic lesions [5,6].
The intraplaque lack of oxygen is provoked by progressive thickening of the neointimal layer [7] and overconsumption of O2 by plaque inflammatory cells [4]. The key regulator of the cellular response to hypoxia is the transcription factor hypoxia-inducible factor 1α (HIF1α).
Mice
This study was performed in compliance with Dutch government guidelines and the Directive 2010/63/EU of the European Parliament. All animal experiments were approved by the animal welfare committee of the Leiden University Medical Center. Male ApoE3*Leiden mice, crossbred in our own colony on a C57BL background, 8 to 16 weeks old, were fed a diet containing 15% cacao butter, 1% cholesterol and 0.5% cholate (100193, Triple A Trading, Tiel, The Netherlands) from three weeks prior to surgery until sacrifice.
Vein Graft Surgery
Vein graft surgery was performed by interpositioning donor caval veins in the carotid arteries of recipient mice as previously described [5,18]. Briefly, thoracic caval veins from donor mice were harvested. In recipient mice, the right carotid artery was dissected and cut in the middle. The artery was everted around the cuffs that were placed at both ends of the artery and ligated with 8.0 sutures. The caval vein was sleeved over the two cuffs and ligated. On the day of surgery and on the day of sacrifice, mice were anesthetized with midazolam (5 mg/kg, Roche Diagnostics, Basel, Switzerland), medetomidine (0.5 mg/kg, Orion, Espoo, Finland) and fentanyl (0.05 mg/kg, Janssen Pharmaceutical, Beerse, Belgium). The adequacy of the anesthesia was monitored by keeping track of the breathing frequency and the response to toe pinching. After surgery, mice were antagonized with atipamezol (2.5 mg/kg, Orion, Espoo, Finland) and flumazenil (0.5 mg/kg, Fresenius Kabi, Bad Homburg vor der Höhe, Germany). Buprenorphine (0.1 mg/kg, MSD Animal Health, Kenilworth, NJ, USA) was given after surgery to relieve pain.
Carbogen Treatment
Acute reoxygenation was investigated in ApoE3*Leiden mice on day 28 after vein graft surgery. Mice were randomized into two groups, a control group (n = 13) and a carbogen treated group (n = 12), and exposed for 90 min to air (21% O2) or carbogen gas (95% O2, 5% CO2), respectively. Halfway through the treatment, the mice received an intraperitoneal injection of the hypoxia specific marker pimonidazole (100 mg/kg, Hypoxyprobe Omni kit, Hypoxyprobe Inc., Burlington, MA, USA) and anesthesia. Directly after the end of the treatment, mice were sacrificed after 5 min of in vivo perfusion-fixation under anesthesia. Vein grafts were harvested, fixed in 4% formaldehyde, dehydrated and paraffin-embedded for histology.
Chronic reoxygenation was investigated in ApoE3*Leiden mice starting on day 7 after vein graft surgery. The decision for this timepoint was based on our previous finding that intraplaque angiogenesis is detectable in ApoE3*Leiden mice starting from day 14 after vein graft surgery [5]. Mice were randomized, based on their plasma cholesterol levels (Roche Diagnostics, kit 1489437, Basel, Switzerland) and body weight, into two groups, a control group (n = 16) and a carbogen treated group (n = 16), and exposed daily for 90 min to air (21% O2) or carbogen (95% O2, 5% CO2), respectively, until the day of sacrifice. On day 28 after surgery, mice received the last treatment, and halfway through this last treatment they received an intraperitoneal injection of the hypoxia specific marker pimonidazole (100 mg/kg, Hypoxyprobe Omni kit, Hypoxyprobe Inc., Burlington, MA, USA) and anesthesia. Immediately after the end of the treatment, mice were sacrificed as previously described for the acute reoxygenation experiment.
Histological and Immunohistochemical Assessment of Vein Grafts
Vein graft samples were embedded in paraffin, and sequential cross-sections (5 µm thick) were made throughout the embedded vein grafts. To quantify the vein graft thickening (vessel wall area), MOVAT pentachrome staining was performed. Total size of the vein graft and lumen were measured. Thickening of the vessel wall (measured as intimal thickening + media thickening) was defined as the area between lumen and adventitia and determined by subtracting the luminal area from the total vessel area. The optimal lumen area was calculated by converting the luminal circumference, measured as the luminal perimeter, into luminal area.
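The text does not spell out the perimeter-to-area conversion; the standard assumption is a circular cross-section, under which the optimal lumen area follows as A = P²/(4π). A minimal sketch under that assumption:

```python
import math

def optimal_lumen_area(perimeter: float) -> float:
    """Area of a circle whose circumference equals the measured
    luminal perimeter, i.e. the lumen area if the (possibly
    collapsed) lumen were fully open: A = P**2 / (4 * pi)."""
    return perimeter ** 2 / (4.0 * math.pi)

# Example: a perimeter of 2.0 mm corresponds to ~0.32 mm^2.
print(f"{optimal_lumen_area(2.0):.3f} mm^2")
```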
Intraplaque angiogenesis was measured as the amount of CD31 + vessels in the vessel wall area and intraplaque hemorrhage (IPH) was monitored by the amount of erythrocytes outside the (neo)vessels and scored as either not present, low, moderate or high.
RNA Isolation, cDNA Synthesis and qPCR
Total RNA was isolated from 10 paraffin sections (20 µm thick; at least n = 6/group) following the manufacturer's protocol (FFPE RNA isolation kit, Qiagen, Venlo, The Netherlands). cDNA was synthesized using the Superscript IV VILO kit according to the manufacturer's protocol (ThermoFisher, Waltham, MA, USA).
RNA was isolated according to standard protocol using TRIzol® (Ambion®, ThermoFisher, Waltham, MA, USA), after which sample concentration and purity were examined by Nanodrop (Nanodrop Technologies, ThermoFisher, Waltham, MA, USA). Complementary DNA (cDNA) was prepared using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, ThermoFisher, Waltham, MA, USA) according to the manufacturer's protocol. For qPCR, commercially available TaqMan gene expression assays for the selected genes were used as explained above.
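The paper does not state how relative expression was quantified; a common choice for TaqMan qPCR is the Livak 2^(-ΔΔCt) method, sketched here under that assumption with hypothetical Ct values:

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2**(-ddCt) (Livak) method,
    normalizing the target gene to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# Hypothetical Ct values: target roughly 2-fold up in the treated group.
print(f"{fold_change_ddct(24.0, 18.0, 25.0, 18.0):.2f}")
```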
For ICC, cells were fixed in 4% formaldehyde, and antibodies directed at Mac-3 (BD Pharmingen, Franklin Lakes, NJ, USA), 8OHdG (bs-1278R, Bioss Antibodies, Woburn, MA, USA) and cleaved caspase 3 (9661-S, Cell Signaling, Danvers, MA, USA) were used for immunocytochemical staining. Tile-scans of stained slides were photographed using a fluorescent microscope (Leica AF-6000, Leica, Wetzlar, Germany) and Fiji image analysis software was used to quantify the mean grey value expression of the targets (ImageJ, Bethesda, MD, USA).
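The mean grey value reported by Fiji is simply the average pixel intensity over the measured region; a minimal Python equivalent (hypothetical file name, whole-image region) might look like this:

```python
import numpy as np
from skimage import io

# Hypothetical file; in Fiji the same readout is obtained via
# Analyze > Measure with "Mean gray value" enabled.
img = io.imread("stained_tilescan.tif").astype(float)
print(f"Mean grey value: {img.mean():.1f}")
```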
Statistical Analysis
Results are expressed as mean ± SEM. A two-tailed Student's t-test was used to compare individual groups. Non-Gaussian distributed data were analyzed using a Mann-Whitney U test in GraphPad Prism version 6.00 for Windows (GraphPad Software). Probability values < 0.05 were regarded as significant.
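The paper ran these tests in GraphPad Prism; the equivalent tests in Python, with hypothetical per-mouse measurements rather than the study's data, are:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for two groups (e.g., vessel wall area).
control  = np.array([0.42, 0.51, 0.47, 0.39, 0.55, 0.44])
carbogen = np.array([0.36, 0.41, 0.33, 0.45, 0.38, 0.30])

# Two-tailed Student's t-test for approximately normal data ...
t_stat, p_t = stats.ttest_ind(control, carbogen)

# ... and the Mann-Whitney U test for non-Gaussian data.
u_stat, p_u = stats.mannwhitneyu(control, carbogen,
                                 alternative="two-sided")

print(f"t-test p = {p_t:.3f}; Mann-Whitney p = {p_u:.3f}")
```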
Acute Carbogen Exposure Reduces Intraplaque Hypoxia
To evaluate the effect of acute carbogen treatment on advanced atherosclerotic vein graft lesions, ApoE3*Leiden mice that underwent vein graft surgery were exposed for 90 min to carbogen gas or normal breathing air. Mice exposed to carbogen gas (n = 8) showed a significant reduction of intraplaque (IP) hypoxia in the vein graft lesion compared to the air breathing group (n = 8), as shown in Figure 1.

Regarding vein graft patency, a single 90-min carbogen exposure directly before sacrifice did not affect vein graft patency (Figure 2A), vessel wall area, lumen perimeter, lumen area or optimal lumen area (Figure 2B,C). Furthermore, neither weight nor cholesterol levels were changed (Figure S1). The percentage of collagen present in the lesion was comparable between the two groups (Figure 2D) and, at a cellular level, the percentage of macrophages (Figure 2E) and SMCs (Figure 2F) was not altered by the acute exposure to carbogen.
Chronic Carbogen Exposure Does Not Influence Intraplaque Hypoxia
To evaluate the effect of hyperoxic carbogen treatment on plaque composition and remodeling, we performed a chronic carbogen treatment on ApoE3*Leiden mice with advanced atherosclerotic vein graft lesions. Mice were exposed daily for 90 min to carbogen gas (n = 13) or normal breathing air (n = 12) for 21 days. Neither weight nor cholesterol levels were affected by the treatment (Figure S2).

Surprisingly, chronic exposure to carbogen gas did not reduce intraplaque hypoxia in the treated group when compared to the air breathing group (Figure 3). In fact, the degree of pimonidazole staining in the vein graft area was not different between the two groups (Figure 3).
Chronic Exposure to Carbogen Plays a Protective Role Against Occlusions
Chronic carbogen treatment resulted in a beneficial effect on vein graft patency, increasing the rate of vein graft patency by 34.5% (Figure 4A). In fact, only 53% of the mice of the control group presented a patent vein graft (Figure 4A), while 87.5% of the mice exposed to carbogen gas had a patent vein graft.

Vessel wall thickening was not affected by exposure to carbogen gas, since no differences could be detected between the two groups when taking only the patent grafts into account (Figure 4B). More importantly, lumen size was affected by carbogen gas. In fact, carbogen treated mice presented a significant increase in the lumen perimeter when compared to control (Figure 4C,E, p-value = 0.048), and an increase in the optimal lumen area (Figure 4D, p-value = 0.067).
Chronic Carbogen Treatment Does Not Have an Effect on Intraplaque Angiogenesis and Intraplaque Hemorrhage
To see whether exposure to carbogen gas had an effect on the hypoxia-triggered IP angiogenesis, the amount of CD31+ vessels in the vein graft lesions (white arrows in Figure 5A zoom-in) was evaluated; no difference in the number of neovessels in the carbogen group was observed when compared to the control group (p-value > 0.99).
To determine the effects of hyperoxia on angiogenesis related genes the expression of Hif1a was analyzed. We surprisingly found a significant upregulation of Hif1a mRNA expression in the carbogen treated group when compared to the control ( Figure 5D, p-value = 0.05), while mRNA expression of Cxcl12, Vegfa and Epas1 were not altered ( Figure 5E-G). No differences between control and one-time carbogen treated group were found when analyzing gene expression in vein grafts from the acute carbogen treatment ( Figure S3). In addition, when corrected for intimal thickness no differences were observed between the groups (p-value = 0.91) ( Figure 5A,B). As a measure of the quality of the IP angiogenesis the degree of intraplaque hemorrhage was analyzed (yellow stars in Figure 5A zoom-in) as the amount of Ter119 + cells found outside the neovessels and quantified as not present, low, moderate or high, and no differences could be seen when comparing the two groups ( Figure 5C).
To determine the effects of hyperoxia on angiogenesis related genes the expression of Hif1a was analyzed. We surprisingly found a significant upregulation of Hif1a mRNA expression in the carbogen treated group when compared to the control ( Figure 5D, p-value = 0.05), while mRNA expression of Cxcl12, Vegfa and Epas1 were not altered ( Figure 5E-G). No differences between control and one-time carbogen treated group were found when analyzing gene expression in vein grafts from the acute carbogen treatment ( Figure S3).
Chronic Carbogen Treatment Induces Accumulation of Reactive Oxygen Species and Apoptosis
Although an effect on Hif1a upregulation was observed, surprisingly no effect on angiogenesis could be seen. Therefore, we looked into other mechanisms that could possibly regulate Hif1a. We hypothesized that the mRNA upregulation of Hif1a in the carbogen treated group ( Figure 5D) was caused by an accumulation of reactive oxygen species (ROS) induced by the carbogen treatment. ROS is known to be induced by prolonged hyperoxia [17] and to regulate the transcription of different genes involved in hypoxia and in inflammation such as Hif1a and Il6.
Il6 mRNA expression was studied as a representative for ROS induced factors and quantification of its expression showed a trend towards increased expression in the carbogen group of the chronic exposure study when compared to the control group ( Figure 6A, p-value = 0.09).
Next, the presence of ROS was studied in the vein graft lesions by quantifying the amount of ROS-mediated DNA damage, analyzed by 8-hydroxy-2'-deoxyguanosine (8OHdG) immunohistochemical staining. We determined the subcellular location of the 8OHdG staining. As observed in Figure 6C, strong 8OHdG-positive staining was found in the nuclei of the cells, with occasional staining outside of the nuclei in the mitochondria, seen as cytoplasmic staining (Figure 6C, right panel). This suggests that the main site of ROS-induced DNA damage is nuclear, and not mitochondrial. 8OHdG-positive staining could be seen as light blue staining in the nuclei of the cells as a result of co-localized DAPI and 8OHdG staining (Figure 6D zoom-in), and the quantification corrected for the vessel wall area revealed an increase of DNA damage in the carbogen treated group when compared to the control group (Figure 6B,D), supporting the idea that ROS levels are increased.
ROS is known to induce apoptosis as a consequence of DNA damage. Therefore, the amount of cells positive for cleaved caspase 3 (CC3) in the atherosclerotic vein graft lesions was determined in the carbogen treated and control groups (Figure 6E). Due to their high oxygen consumption, we hypothesized that macrophages could be the main cell type affected by ROS-induced DNA damage and subsequent apoptosis. As shown in the bottom panel of Figure 6D, macrophage-rich areas in the lesions of mice treated with carbogen were found to be strongly positive for CC3 when compared to control (Figure 6E bottom panel).
When looking at the total amount of cells positive for cleaved caspase 3 in the intimal area, an increase in apoptotic (CC3+) cells was found in the lesions of mice exposed to carbogen gas compared to the air breathing group (Figure 6E, p-value = 0.06).
Chronic Carbogen Exposure Reduces Inflammatory Cell Content
The effects of chronic carbogen gas treatment on intraplaque inflammation were studied in macrophages, since they produce high amounts of ROS, consume elevated amounts of O2 and are known to be hypoxic [6]. Interestingly, the group of mice exposed daily to carbogen for 21 days showed a significant reduction in macrophage content when compared to the control group breathing normal air (p-value = 0.0126).
When corrected for the differences in vein graft thickening, the relative percentage of macrophages was significantly decreased in the carbogen exposed group, by 15.2% (Figure 7A, p-value = 0.0044).
To study whether the decrease in macrophages was due to a reduced infiltration of macrophages or a reduced proliferation of resident macrophages, local cytokine expression in the vein grafts was studied and the proliferation of macrophages was analyzed.
First, the mRNA expression levels of Ccl2 and Tnf in the vein graft atherosclerotic lesions were examined. The mRNA levels of Ccl2 and Tnf did not differ between the carbogen treated group and the control (Figure 7B,C).
Using a triple IHC staining for Mac-3, Ki-67 and DAPI, the amount of proliferating macrophages was determined. As shown in Figure 7D, there was no difference in the number of proliferative macrophages corrected for the vessel wall thickening (p-value = 0.16).
Thus, the data suggest that the reduction of plaque macrophages could be due to enhanced macrophage apoptosis.
Chronic Carbogen Treatment Does Not Affect Plaque Size but Increases Plaque Stability
To evaluate the effect of prolonged carbogen treatment and accumulation of ROS on plaque composition, the amounts of collagen (positive collagen area in the total vessel wall) and smooth muscle cells (positive αSMA area in the total vessel wall), two main predictors of plaque stability, were analyzed.
The collagen content in the plaque was not affected by carbogen treatment and was comparable between the two groups (Figure 8A). Similarly, the SMC content in the carbogen group was not different from the control group (Figure 8B). Interestingly, when calculating the plaque stability index, defined as the amount of collagen and SMCs divided by the vessel wall area, the atherosclerotic plaques of the mice daily exposed to carbogen turned out to be more stable than the lesions of the control group (Figure 8C).
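A minimal sketch of the stability index exactly as defined above (the area values are hypothetical):

```python
def plaque_stability_index(collagen_area: float, smc_area: float,
                           vessel_wall_area: float) -> float:
    """Stability index as defined in the text: (collagen area +
    smooth muscle cell area) / vessel wall area."""
    return (collagen_area + smc_area) / vessel_wall_area

# Hypothetical areas in mm^2; a higher index indicates a more
# stable plaque.
print(f"{plaque_stability_index(0.08, 0.05, 0.40):.3f}")
```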
ROS Increases DNA Damage and Apoptosis in Bone Marrow Derived Macrophages In Vitro
To unravel the molecular and cellular mechanism underlying the observed changes in macrophage content, in particular whether this could be due to hyperoxia-induced ROS accumulation, we treated macrophages derived from the bone marrow of ApoE3*Leiden mice with t-BHP, a known ROS mimic [19]. t-BHP treatment increased the occurrence of DNA damage in BMM as measured by 8-OHdG immunocytochemical staining (Figure 9A), confirming its activity as a ROS mimic and the induction of DNA damage by ROS.
Quantification revealed a 2.2-fold increase in DNA damage in macrophages treated with 200 µM t-BHP (p-value = 0.006) and a two-fold increase in DNA damage in macrophages treated with 400 µM t-BHP (p-value = 0.006) when compared to control (Figure 9B).
We then evaluated the effect of the ROS mimic t-BHP on the expression of several genes. Similar to the changes in expression in vivo, we found that t-BHP-induced ROS caused a significant increase of Hif1a mRNA expression (p-value = 0.007 and 0.02, respectively) when compared to control (Figure 9C). Interestingly, we also found that ROS caused an increase in the expression of the pro-inflammatory genes Ccl2 and Tnf, but decreased Epas1 expression compared to control (Figure S4). To assess if ROS ultimately causes apoptosis in cultured BMM, we examined the expression of CC3 and found a significant and dose dependent increase in CC3 expression, thus apoptosis, in t-BHP treated BMM when compared to control (Figure 9F). The group treated with 200 µM t-BHP showed a 10% increase (p-value = 0.03) and the group treated with 400 µM t-BHP a 27% increase (p-value = 0.01) in CC3 expression when compared to control (Figure 9D). Moreover, we observed a drastic reduction in the total number of cells, by 72% and 70% in the groups treated with 200 and 400 µM t-BHP, respectively, when compared to control (Figure 9E, p-value = 0.01 for both groups). Combined, these data demonstrate that ROS directly affects gene expression in macrophages and causes DNA damage and apoptosis.
Discussion
The results of the present study show that carbogen treatment in an acute, short term setting resulted in a profound reduction of intraplaque hypoxia in murine vein graft lesions in vivo. Long term treatment with carbogen resulted in a beneficial effect on vein graft patency in ApoE3*Leiden mice but, surprisingly, had no effect on hypoxia, intraplaque angiogenesis and intraplaque hemorrhage. On the other hand, long term carbogen treatment resulted in hyperoxia-induced ROS formation with consequent effects on Hif1a mRNA levels and macrophage apoptosis. A reduction in macrophage content in the vein graft lesions was observed, resulting in more stable lesions. Moreover, comparable to what was observed in vivo, in vitro induction of ROS using the ROS mimic t-BHP in BMM resulted in a strong increase in DNA damage and apoptosis.
Carbogen inhalation is widely used in the oncological field [20,21]. It has been shown that the time to achieve a maximal increase in tumor oxygenation with carbogen inhalation depends on various factors such as the type of cell involved, the location, and the size of the tumor [22,23]. Moreover, Hou et al. observed an effect of carbogen treatment comparable to what was observed in the present study, in both the short and the long term experiments. Single carbogen inhalation significantly increased tumor oxygenation, while during multiple administrations of carbogen the effect was reduced, indicating that the response to chronic carbogen is not consistent over days [22]. Nevertheless, we showed that prolonged carbogen treatment has a protective role against vein graft occlusion. Vein graft occlusion is a phenomenon often seen after vein grafting, in which the vessel lumen is narrowed due to extensive intimal hyperplasia that progresses to stenosis and occlusion [24]. This phenomenon is also observed in ApoE3*Leiden mice that undergo vein graft surgery. Besides the reduction in vein graft occlusions, an increase in vein graft patency due to an increase in lumen perimeter and optimal lumen area of the hyperoxic vein grafts was observed, similar to the study by Fowler et al. [25]. In that study, carbogen was used in the treatment of central retinal artery occlusion to increase blood oxygen, maintaining oxygenation of the retina [25]. This effect of hyperoxygenation on retinal artery remodeling can be related to the effect of carbogen on patency and the increases in lumen perimeter and optimal lumen area found in the present study.
We did not observe a reduction in hypoxia nor an effect on intraplaque angiogenesis in the prolonged carbogen study. Furthermore, no changes in local gene expression of Vegfa were observed in the vein grafts, but interestingly Hif1a was upregulated in the prolonged carbogen exposure study and not downregulated as expected. In fact, following our initial hypothesis, we would have expected a reduction in intraplaque angiogenesis in parallel with a reduction in Hif1a and Vegfa expression.
For this reason, we studied other known processes that regulate Hif1a and observed an accumulation of ROS in the carbogen exposed group when compared to the control group. Repeated exposure to hyperoxia is known to be associated, at a cellular level, with an accumulation of ROS [26,27]. When the exposure is repeated too often, the oxidant insult is no longer compensated by the host's antioxidant defense mechanisms and therefore cell injury and death ensue [28]. Cell injury induced by ROS comprises lipid peroxidation, protein oxidation and DNA damage [29,30]. We observed an increase in DNA damage, measured as an augmented presence of 8OHdG staining, in the long term carbogen treated group when compared to the control group, indicating that daily long term treatment with carbogen gas results in an accumulation of ROS that in turn induces DNA damage in the atherosclerotic lesions. Moreover, we also observed an increase in DNA damage in bone marrow macrophages in vitro under ROS stimulation. It is known that DNA damage can be found in the nuclei and in the mitochondria [31,32]. Both in the vein graft lesions in vivo and in the cultured t-BHP treated macrophages in vitro, strong 8OHdG positive staining in the nuclei of the cells, with occasional cytoplasmic staining, could be seen. The subcellular location of the 8OHdG staining suggests that the main site of ROS-induced DNA damage is nuclear, and not mitochondrial. ROS generated by repeated hyperoxia treatment can alter gene expression by modulating the activation of transcription factors, like NF-κB, which then impact downstream targets [33]. It has been shown that hyperoxia also results in nuclear translocation of NF-κB and NF-κB activation in several cell types [34]. Our results show that long term carbogen treatment results in upregulation of Hif1a gene expression. In addition, in vitro BMM treated with the ROS mimic t-BHP also showed an upregulation of Hif1a gene expression. Interestingly, the transcription of this gene is known to be regulated by the NF-κB transcription factor. In fact, Bonello et al. demonstrated that ROS induced Hif1a transcription via binding of NF-κB to a specific site in the Hif1a promoter [35]. These findings could be further investigated in future experiments using antioxidants such as NAC, to see whether they can reverse the effects of the carbogen treatment.
We showed that the accumulation of ROS in the carbogen treated group caused an increase in apoptosis, concentrated in macrophage-rich areas, and resulted in a decrease in the amount of macrophages. Even though we cannot exclude that the association of macrophages with cleaved caspase 3 could be due to efferocytosis of apoptotic cells, macrophage efferocytosis is frequently hampered in atherosclerotic lesions; therefore, it is likely that these macrophages are apoptotic. Previously, in contrast with our findings, a strong correlation between macrophage content and hypoxia was shown by Marsch et al. [15]. Moreover, hypoxia potentiates macrophage glycolytic flux in a Hif1a dependent manner [36] in order to fulfill the need of ATP for protein production and migration. Taken together, this points to a high demand for and use of O2 by plaque macrophages and a consequent high exposure of these inflammatory cells to the ROS accumulated during hyperoxia. We demonstrated that ROS causes accumulation of DNA damage and subsequently an increase in apoptosis and cell death in BMM in vitro. The link between ROS-induced DNA damage and apoptosis detected in vitro might explain the observed apoptosis in macrophages in vivo. Moreover, a reduction in the number of macrophages is associated with plaque stability, and plaque stability is reflected in an increase in vein graft patency, as observed in the present study.
Previously, Marsch et al. showed that repeated carbogen treatment in LDLR-/- mice led to a reduction in intraplaque hypoxia, necrotic core size and apoptosis [15]. In the present study we showed that repeated carbogen treatment in accelerated vein graft atherosclerotic lesions in ApoE3*Leiden mice resulted in increased apoptosis and unaltered intraplaque hypoxia when compared to controls. Accelerated atherosclerotic lesions in ApoE3*Leiden mice highly resemble human atherosclerotic lesions and, differently from those in LDLR-/- mice, do present intraplaque angiogenesis. Our results show that, although we did not observe reduced intraplaque angiogenesis and IPH, daily hyperoxia treatment with carbogen gas in this murine model led to an accumulation of ROS that could not be cleared by anti-oxidant agents, and the ROS build-up led to DNA damage and induced apoptosis. In fact, differently from Marsch et al., who treated mice daily for five days followed by two days of no carbogen exposure, we performed the treatment daily and started our treatment seven days after mice underwent vein graft surgery, when the atherosclerotic lesions had already started forming. This starting time point was based on our previous findings [5], in which we found that intraplaque neovascularization in ApoE3*Leiden mice that underwent vein graft surgery is visible 14 days after surgery. Therefore, we were able to study the effect of carbogen treatment on lesion stabilization rather than on lesion formation.
One of the limitations of the current study may be the choice of the model used, the ApoE3*Leiden mouse vein graft. However, since intraplaque angiogenesis is absent in most mouse models of spontaneous atherosclerosis, and the lesions observed in ApoE3*Leiden vein grafts show many features that can also be observed in advanced human lesions, including intraplaque hypoxia, angiogenesis and intraplaque hemorrhage, we believe this model is suitable for the current studies. The fact that the most prominent effects observed relate to hyperoxygenation-induced ROS production, macrophage apoptosis and vein graft patency, whereas the experimental set-up was initially designed to identify effects on intraplaque angiogenesis, might indicate another limitation of our study set-up.
Based on the results obtained in the present study we can conclude that, although short term carbogen gas treatment leads to a profound reduction in intraplaque hypoxia, the treatment has mixed effects. Despite the beneficial effects of the hyperoxygenation treatment on vein grafts, i.e., improved vein graft patency and a strong trend towards an increased plaque stability index, chronic hyperoxygenation also induced Hif1a mRNA expression, ROS accumulation and apoptosis, all of which may harm the vein grafts in the current model under the current conditions. This indicates that, in order to define the potential therapeutic benefits of hyperoxygenation treatment, further research is needed to define the optimal conditions for this treatment in vein graft disease.
Conflicts of Interest:
The authors declare no conflict of interest.
Information Sharing in a Supply Chain with Horizontal Competition: the Case of Discount Based Incentive Scheme
Li [1] examined the incentives for information sharing in a two-level supply chain in which there are a manufacturer and many competing retailers. Li showed that the direct and leakage effects of information sharing discourage retailers from sharing their information, and identified conditions under which demand information sharing can be traded. The purpose of this note is to show that full information sharing is the equilibrium if the manufacturer adopts a discount-based incentive scheme instead of the side-payment scheme used by Li. The discount-based scheme eliminates the direct as well as the leakage effects. The discount-based scheme is attractive because similar schemes are commonly used in practice and because it results in a Pareto-efficient information sharing equilibrium with higher total social benefits and consumer surplus than the no information sharing scenario. Consequently, many of the key results of Li are critically dependent on the assumption that the manufacturer uses side payments for information.
Introduction
Li [1] examined the incentives for firms to share information vertically in a two-level supply chain in which there are a single upstream manufacturer and many downstream competing retailers. Li showed that while the manufacturer always benefits from retailers' demand information, retailers would not voluntarily share their information. The result arises because the manufacturer, being the leader in the game, is able to exploit retailers' information to its advantage ("direct effect"), and retailers are able to infer competitors' information from the manufacturer's price ("leakage effect"). However, when the manufacturer is allowed to compensate retailers for information disclosure, information sharing can be achieved when 1) the information each retailer has is relatively more informative in a statistical sense; or 2) there is a sufficiently large number of retailers. Li also showed that complete demand information sharing reduces both the expected total social benefits and the expected consumer surplus. The purpose of this paper is to show that the key results of Li regarding the conditions under which information sharing will occur, the social benefits, and the consumer surplus depend critically on the type as well as the timing of the information-sharing contract entered into by the manufacturer and retailers. Specifically, we show that if the manufacturer offers an appropriate schedule of discounts on the wholesale price to retailers, then full information sharing will occur under all conditions rather than only under the conditions given in Li. In addition, consumer surplus and total social benefits increase under full information sharing in the discount-based contract.
The intuition that underlies our result becomes clear by further analysis of the reasons for the direct and leakage effects described by Li. The direct effect occurs because the manufacturer acts as the leader and sets the wholesale price based on retailers' information. When retailers' information signals a demand higher than the mean demand, the manufacturer increases the wholesale price from what it would have charged in the absence of that information. When retailers' information signals a demand smaller than the mean demand, the manufacturer charges a lower wholesale price. However, retailers lose more from a higher wholesale price in the high demand scenario than they gain from a lower wholesale price under the low demand scenario. Consequently, retailers' expected profits decrease under information sharing. In the high as well as the low demand scenario the manufacturer benefits, because it is able to set the price that maximizes its profit based on the information. This insight suggests that if an information-sharing contract is designed such that retailers do not lose in the high demand scenario and gain in the low demand scenario, then the retailers as well as the manufacturer will be better off under information sharing. One such contract is based on discounts on the wholesale price when demand is expected to be low. A numerical illustration of the direct effect is sketched below.
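The following stylized two-state illustration is not Li's model: here every retailer observes demand perfectly and only the manufacturer's information set changes, and the demand values are assumptions chosen for the example. With linear demand p = a - Q and n Cournot retailers, each retailer earns ((a - P)/(n + 1))² at wholesale price P, and the zero-cost manufacturer's optimal price is half of its demand estimate.

```python
n = 2
states = [80.0, 120.0]          # low / high demand, equally likely
mean_a = sum(states) / len(states)

def retailer_profit(a: float, P: float) -> float:
    # Cournot best responses give q_i = (a - P) / (n + 1) and a
    # retail margin equal to q_i, so per-retailer profit is q_i squared.
    return ((a - P) / (n + 1)) ** 2

# No sharing: the manufacturer prices on the prior mean, P = mean_a / 2.
no_sharing = sum(retailer_profit(a, mean_a / 2) for a in states) / 2

# Sharing: the manufacturer prices on the realized state, P = a / 2.
sharing = sum(retailer_profit(a, a / 2) for a in states) / 2

print(f"expected retailer profit: {no_sharing:.1f} (no sharing) "
      f"vs {sharing:.1f} (sharing)")
# The retailers' loss in the high state outweighs their gain in the
# low state, so expected profit falls under information sharing.
```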
The leakage effect occurs because the wholesale price reveals the signals of retailers sharing the information to other retailers. This puts those retailers that share information at a disadvantage compared to those who do not. In Li's model, the manufacturer announces the wholesale price after receiving the signals from retailers that have entered into an information sharing agreement. In the contract we propose, the manufacturer announces only the discount schedule. The discount a specific retailer gets is private to the manufacturer and that retailer. This contract is consistent with industry practice 1. Thus the discount-based contract eliminates the leakage effect of information sharing.
We briefly present the model and key results of Li in Section 2. We present our model and derive the principal results in Section 3. We conclude with a summary in Section 4.
Li's Model and Principal Results
Li considers a two-level supply chain with one manufacturer and n retailers that sell a homogeneous product. The inverse demand function for the downstream market is given by p = a + θ − Q, where p is the price, θ is the uncertain component of demand, and Q is the total sales level in the downstream market; that is, Q = Σᵢ qᵢ, where qᵢ is the level of sales at retailer i. The marginal cost of production is assumed to be constant and zero. The manufacturer is the Stackelberg leader and first offers a price P. Then the retailers decide on their sales quantities qᵢ, and the manufacturer produces the quantity Q. The manufacturer is obliged to meet the retailers' orders and has the capacity to do so.
Li analyzes two kinds of uncertainty: demand and cost. In this paper we focus only on demand uncertainty. We can easily extend our analysis to the case of cost uncertainty and show similar results. Each retailer possesses some private information about the uncertainty. In each case, the sequence of events and decisions is as follows.
1) Each retailer decides whether to disclose his information and the manufacturer decides whether to acquire such information.
2) Each retailer observes his signal and the manufacturer observes only those signals shared by the retailers.
3) Based on the available information, the manufacturer sets the wholesale price.
4) The retailers choose sales levels after receiving the wholesale price.
5) The manufacturer produces to meet the retailers' sales levels.
Under demand uncertainty, the downstream demand curve is assumed to be p = a + θ − Q, where θ is a zero-mean random shock about which each retailer i observes a private signal Yᵢ; the informativeness of the signals is indexed by a parameter s. Under these assumptions, Li derives the optimal sales quantities, wholesale price, and profits of the manufacturer and the retailers when k retailers share their information with the manufacturer; the closed-form expressions are given in [1].
¹ The 1936 Robinson-Patman Act precludes sellers "from giving different terms to different resellers in the same reseller class," and any proffered discount schedule must be functionally available to all retailers. Our proposed contract does not violate the act. This discount scheme is similar to the quantity discount schedules widely used in practice.

Using these expressions, Li shows in Proposition 4 that the manufacturer is better off by acquiring information from more retailers, and each retailer is worse off by disclosing his information to the manufacturer in all circumstances. Therefore, no information sharing is the unique equilibrium. Li then proceeds to analyze whether information sharing can be achieved when the manufacturer is allowed to compensate retailers for information disclosure. Li considers the following contract-signing game in the first stage: in the contract, the manufacturer offers a payment for each retailer's private information.
All retailers simultaneously decide whether to sign the contract. Under this contract, Li shows in Proposition 5 that there exists a threshold (depending on the signal parameter s) such that the complete information sharing equilibrium Pareto dominates the no information sharing equilibrium if and only if the number of retailers n exceeds that threshold. That is, the information sharing equilibrium Pareto dominates the no information sharing equilibrium only when s is sufficiently small and/or n is very large. When n = 2, the information sharing equilibrium does not Pareto dominate the no information sharing equilibrium. In Proposition 7, Li shows that complete information sharing reduces both the expected total social benefits and the expected consumer surplus.
Our Model
It is worth noting that the contract of Li is based on a fixed payment and not on the wholesale price.
However, it is well known that the profits of the manufacturer, retailers, and the overall supply chain depend critically on the wholesale price because of the double marginalization effect [2]. In a deterministic demand situation, a higher (lower) wholesale price increases (decreases) the manufacturer's profit but reduces (increases) retailers' and the supply chain's profits. Li shows the intuitive result that information sharing will occur only when the supply chain profit increases as a result of information sharing. When the manufacturer sets the wholesale price first to maximize its own profit, the supply chain profit improves from information sharing only under certain conditions. When these conditions are satisfied, the manufacturer can indeed use the contract proposed by Li and realize higher profits. However, when the conditions are not satisfied, information sharing is not achieved under the side payment contract. We show in the following paragraphs that if the manufacturer uses a contract based on the wholesale price, then the information sharing equilibrium can be achieved, and both the manufacturer and the retailers benefit.
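To make the double-marginalization effect concrete, the following minimal Python sketch (our illustration with arbitrary numbers, not part of Li's analysis) computes the Stackelberg outcome for the deterministic benchmark with linear demand p = a − Q and zero cost:

```python
# Minimal sketch of double marginalization: one manufacturer (Stackelberg
# leader), n identical Cournot retailers, linear demand p = a - Q, zero cost.
a, n = 100.0, 2

# Retailers' Cournot response to a wholesale price P: each sells (a-P)/(n+1).
def total_sales(P):
    return n * (a - P) / (n + 1)

# The manufacturer maximizes P * Q(P); the optimum P* = a/2 is found here by
# a brute-force grid search for transparency.
P_grid = [a * k / 10000 for k in range(10001)]
P_star = max(P_grid, key=lambda P: P * total_sales(P))

Q = total_sales(P_star)
p_retail = a - Q
print(f"wholesale price   P* = {P_star:.2f}")            # 50.00
print(f"retail price      p  = {p_retail:.2f}")          # 66.67 for n = 2
print(f"manufacturer profit  = {P_star * Q:.2f}")
print(f"profit per retailer  = {(p_retail - P_star) * Q / n:.2f}")
# The retail price (66.67) exceeds the integrated-monopoly price (a/2 = 50):
# the double-marginalization distortion any wholesale-price contract inherits.
```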
The intuition for the contract we propose is based on a simple proposition.² If the manufacturer and retailers enter into a contract such that neither the retailers nor the manufacturer is worse off when information is shared, compared to when it is not, under all realizations of the random signals observed by the retailers, then the information sharing equilibrium will Pareto dominate the no information sharing equilibrium under all circumstances. For any set of realizations of the signals, a higher wholesale price under information sharing benefits the manufacturer and hurts retailers. Consequently, if the manufacturer agrees not to increase the wholesale price from what it would have charged under no information sharing, retailers will not be worse off. As for the manufacturer, if it deems that it will benefit from giving a discount after the information is shared, it will offer the discount. If the manufacturer neither gives a discount nor raises its price based on information shared by retailers, it is no worse off than in the no information sharing scenario. Such a contract results in a win-win situation for both the manufacturer and the retailers. We formally state our wholesale price scheme based on discounts as follows.
Discount scheme: a retailer that shares its signal is charged the wholesale price P − D if the shared signal is Y ≤ 0, where D > 0 is the discount rate; the discount is equal to 0 (the retailer pays P) if Y is not shared or Y > 0.
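A literal encoding of this schedule is straightforward; in the sketch below (a minimal illustration with names of our choosing, not Li's notation), the posted price P and discount D are treated as given:

```python
# Hypothetical encoding of the discount scheme above: a retailer pays the base
# wholesale price P unless it shared a signal Y <= 0, in which case it
# receives the discount D (granted privately to that retailer).
def wholesale_price(P: float, D: float, shared: bool, Y: float) -> float:
    if shared and Y <= 0:
        return P - D   # discount for a shared low-demand signal
    return P           # signal withheld, or signal indicates high demand

# Example: with P = 50 and D = 5, a sharing retailer reporting Y = -1.3 pays
# 45, while a non-sharing retailer (or one reporting Y = 2.0) pays 50.
assert wholesale_price(50, 5, shared=True,  Y=-1.3) == 45
assert wholesale_price(50, 5, shared=False, Y=-1.3) == 50
assert wholesale_price(50, 5, shared=True,  Y=2.0) == 50
```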
² It should be emphasized that the contract we propose is not the only possible wholesale price based contract that achieves the information sharing equilibrium. Several other contracts, based on the wholesale price as well as on side payments, can achieve this equilibrium. Our choice of the wholesale price based contract rests on the fact that it is simple to implement and captures discounts, a commonly employed method to "buy" retailer information.

We show that there exists a discount rate D such that, when the manufacturer offers this discount schedule, all retailers will share information and both the retailers and the manufacturer are better off in the information sharing equilibrium than in the no information sharing equilibrium. We also use the following sequence of actions in our analysis in order to make the above schedule available to all retailers prior to their making the decision on whether to share information.
1) The manufacturer offers the discount price schedule.
2) Each retailer decides whether to disclose his information and the manufacturer decides whether to acquire such information.
3) Each retailer observes his signal and the manufacturer observes only those signals shared by the retailers.
4) Based on the discount price schedule, the manufacturer offers the discount to those retailers that shared the information.
5) The retailers choose sales levels after receiving the wholesale price.
6) The manufacturer produces to meet the retailers' sales levels.
The rest of the model remains the same as that of Li.
Analysis of Our Discount-Based Price Scheme
We first derive the optimal sales quantities when k ≥ 0 retailers share their information with the manufacturer. In the last stage of the game, each retailer i chooses its sales level to maximize its expected profit given its information, and the equilibrium sales quantity must satisfy the corresponding first-order condition. As in Li, we use Bayesian Nash equilibrium to derive the optimal sales levels. A Bayesian Nash equilibrium is a set of strategies and a set of conjectures such that 1) each firm's strategy is a best response to its conjecture about the behaviors of its rivals; and 2) the conjectures are correct [3]. We assume that each retailer conjectures that each of the other retailers' sales quantities is a linear function of its own signal, and this conjecture turns out to be correct in equilibrium: the equilibrium sales quantity for retailer i is a linear function of Yᵢ. Substituting the retailers' equilibrium responses into the manufacturer's expected profit in the preceding stage, given her information, yields the equilibrium wholesale price. Note that the equilibrium wholesale price and sales quantity for a retailer depend only on that retailer's information and on whether the information is shared with the manufacturer. Specifically, they do not depend on how many retailers share their information. Consequently, the leakage effect of information sharing is eliminated.
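The linear-conjecture argument can be checked numerically. The sketch below assumes one common signal specification (θ and the signal noises jointly normal, so that E[θ | Yᵢ] = rYᵢ; this is an assumption for illustration, not necessarily the exact structure in Li). Matching coefficients in the first-order condition gives a linear equilibrium, and the code verifies that the closed-form coefficients are a fixed point of the best-response map:

```python
# Sketch of the linear-conjecture equilibrium under an assumed signal
# structure: theta ~ N(0, s_t2), Y_i = theta + e_i with e_i ~ N(0, s_e2),
# so E[theta | Y_i] = r * Y_i and E[Y_j | Y_i] = r * Y_i,
# where r = s_t2 / (s_t2 + s_e2). Retailer i's FOC, given wholesale price w
# and the conjecture q_j = A + B * Y_j for every rival j, reads:
#   q_i = (a - w + r*Y_i - (n - 1) * (A + B * r * Y_i)) / 2
a, w, n = 100.0, 50.0, 5
s_t2, s_e2 = 4.0, 1.0
r = s_t2 / (s_t2 + s_e2)

def best_response(A, B):
    """Coefficients of retailer i's best reply to rivals playing q_j = A + B*Y_j."""
    return (a - w - (n - 1) * A) / 2, r * (1 - (n - 1) * B) / 2

# Closed forms obtained by matching intercept and slope in the FOC:
A = (a - w) / (n + 1)
B = r / (2 + (n - 1) * r)

A_br, B_br = best_response(A, B)
assert abs(A_br - A) < 1e-12 and abs(B_br - B) < 1e-12  # (A, B) is a fixed point
print(f"equilibrium strategy: q_i = {A:.3f} + {B:.3f} * Y_i")
```

Because the slope B depends only on the retailer's own signal weight r, each equilibrium quantity is a function of that retailer's own information alone, which is the property exploited in the text.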
The retailer profits can now be computed accordingly. We can now show the following result about the number of retailers that will share information in the equilibrium under our discount price schedule.
Proposition 1: For any D > 0, full information sharing, in which all retailers share their information, is the unique equilibrium.
Proof: The proof is straightforward. It follows from the fact that, irrespective of the number of retailers that already share information, a retailer that does not share its information can earn a higher profit by sharing it. Q.E.D.

The following result shows that the manufacturer as well as the retailers prefer the full information sharing scenario to the no information sharing scenario.
Proposition 2: There exists a D such that both the manufacturer and retailers are better off under full information sharing than under no information sharing.
Proof: The proof for Proposition 1 shows that the profits of the retailers are higher under the full information scenario than under the no information scenario. We can also show that, for Yᵢ < 0, the manufacturer earns more at the discounted price than it would at the undiscounted price. Consequently, the manufacturer's profit is higher under the full information sharing scenario. Q.E.D.
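The win-win logic can be illustrated with a deliberately simplified numeric check (our construction, not Li's model): two equally likely demand states and perfectly revealing signals. The manufacturer never raises the price above its no-sharing optimum and grants the discount D = t/2 in the low state; all three quantities of interest, including the consumer surplus CS = Q²/2 that appears in Proposition 3 below, weakly increase:

```python
# Numeric check of the win-win logic in a simplified setting: two equally
# likely demand states theta in {-t, +t}, signals reveal theta perfectly,
# n Cournot retailers, zero production cost.
a, t, n = 100.0, 30.0, 3

def outcome(P, theta):
    """Cournot stage given wholesale price P and realized intercept a + theta."""
    Q = n * (a + theta - P) / (n + 1)      # total sales
    p = a + theta - Q                      # retail price
    return {"mfr": P * Q,                  # manufacturer profit
            "ret": (p - P) * Q / n,        # profit per retailer
            "cs": Q * Q / 2}               # consumer surplus, linear demand

P0 = a / 2          # optimal wholesale price on the prior (E[theta] = 0)
D = t / 2           # discount in the low state: P0 - D = (a - t)/2 is optimal

no_share = [outcome(P0, +t), outcome(P0, -t)]
share    = [outcome(P0, +t), outcome(P0 - D, -t)]  # price never raised,
                                                   # cut when demand is low
for key in ("mfr", "ret", "cs"):
    before = sum(o[key] for o in no_share) / 2
    after = sum(o[key] for o in share) / 2
    assert after >= before                 # everyone (weakly) gains
    print(key, round(before, 1), "->", round(after, 1))
```

Running the check, the manufacturer's expected profit rises from 1875.0 to about 1959.4, each retailer's from 212.5 to about 238.3, and consumer surplus from 956.3 to about 1072.3, consistent with Propositions 1-3.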
Having shown that full information sharing, in which both the manufacturer and the retailers are better off, is the equilibrium under our discount price based schedule, an interesting question for the manufacturer is which type of contract it will prefer: discount based, or fixed payment as in Li. Under the contract analyzed in Li, the manufacturer realizes an additional profit under information sharing ([2], p. 1204) that is non-negative if and only if the condition of Li's Proposition 5 holds. In our model, the additional profit is always non-negative. The manufacturer will prefer the discount-based contract when either of two conditions is satisfied. Under the first condition, information is not shared in Li's model, whereas under our discount scheme both the manufacturer and retailers benefit from information sharing. The second condition follows from a comparison of the manufacturer's benefits from information sharing under Li's side payment mechanism and under our discount scheme.
The conditions imply that when the number of retailers is small and/or the signal accuracy is large, so that the fixed payment based contract is unprofitable, or when the mean demand is sufficiently large, the manufacturer should use a discount on the wholesale price to induce retailers to share information.
Finally, we also analyze the effect of the discount-based scheme on social welfare and consumer surplus. With linear demand, the consumer surplus under our discount scheme is CS = Q²/2, and social welfare is the sum of the consumer surplus and the profits of the manufacturer and the retailers.

Proposition 3: Under the discount scheme, both the consumer surplus and social welfare are higher under full information sharing than under no information sharing.
Proof: The consumer surplus under full information sharing and under no information sharing can both be computed from the expression above. Since the discount (weakly) lowers the wholesale price and thereby (weakly) raises total sales, it follows that consumer surplus is higher under information sharing. The result that social welfare is higher under information sharing then follows from the results that the manufacturer's profit, the retailers' profits, and the consumer surplus are all higher under information sharing. Q.E.D.
Discussion
We showed in this paper that many of the key results of Li, especially those related to the conditions under which vertical demand information sharing will occur in a supply chain with horizontal competition and the effect of demand information sharing on consumer surplus and social welfare, depend critically on the type as well as the timing of the information-sharing contract entered into by the manufacturer and the retailers.
|
v3-fos-license
|
2017-10-28T03:01:09.357Z
|
2015-11-01T00:00:00.000
|
12249311
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.veterinaryworld.org/Vol.8/November-2015/7.pdf",
"pdf_hash": "4ed63d2981f8d53fb55e87952e592f88ce7c0118",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45646",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "4ed63d2981f8d53fb55e87952e592f88ce7c0118",
"year": 2015
}
|
pes2o/s2orc
|
Comparative efficacy of different estrus synchronization protocols on estrus induction response, fertility and plasma progesterone and biochemical profile in crossbred anestrus cows
Aim: To evaluate the estrus induction response and fertility, including the plasma progesterone and biochemical profile, following the use of three standard hormonal protocols in anestrus crossbred cows. Materials and Methods: The study was carried out on 40 true anestrus and 10 normal cyclic cows. Ten anestrus cows each were treated with a standard intravaginal controlled internal drug release (CIDR) device, the Ovsynch (GPG) protocol, or a Norgestomet ear implant, with fixed-time artificial insemination (FTAI). Ten anestrus cows were kept as untreated controls, while 10 cows exhibiting first estrus within 90 days postpartum without any treatment served as normal cyclic controls. Blood samples were obtained from treated cows on days 0, 7, and 9 (AI) of treatment and on day 21 post-AI, and from the control groups on the day of AI and on day 21 post-AI, for estimation of the plasma progesterone, protein, cholesterol, calcium, and inorganic phosphorus profile. Results: The use of the CIDR, Ovsynch, and Norgestomet ear implant protocols resulted in 100% estrus induction, with conception rates at induced estrus of 60%, 50%, and 50%, and overall conception rates over three cycles of 80%, 80%, and 70%, respectively. In the untreated anestrus control group (n=10), only three cows exhibited spontaneous estrus within 90 days of follow-up and conceived, giving first service and overall conception rates of 66.66% and 30.00%, respectively. In the normal cyclic control group (n=10), the conception rates at first service and overall of three cycles were 50% and 80%. The overall mean plasma progesterone (P4) concentrations in anestrus cows studied on days 0 (initiation), 7 (prostaglandin injection and/or removal of implant), and 9 (FTAI) of treatment and on day 21 post-AI revealed that the values on days 7 and 21 were significantly (p<0.01) higher than at the other two time points in all three groups. The concentrations were significantly (p<0.05) higher in the conceived than the non-conceived group on day 21 post-AI under CIDR (4.36±0.12 vs. 1.65±0.82 ng/ml) and Ovsynch (4.85±0.62 vs. 1.59±0.34 ng/ml), but not under the Norgestomet ear implant (4.50±0.53 vs. 3.02±1.15 ng/ml) or in the normal cyclic group (5.39±0.67 vs. 3.13±0.37 ng/ml). The cholesterol and protein levels, but not calcium and phosphorus, were significantly higher in the normal cyclic control than in the anestrus groups. The influence of treatment days and pregnancy status was not significant for any of the biochemical constituents in any of the groups. Conclusion: The Ovsynch and/or CIDR synchronization protocol can be effectively used to improve fertility up to 80% in anestrus cows, as compared to 30% in the anestrus control, combined with plasma progesterone estimation to delineate the reproductive status before and after treatment.
Introduction
Genetic upgradation of the indigenous cattle population through crossbreeding is the only immediate approach to meet the challenges of milk production demand in India. The optimum reproductive efficiency of these animals is equally important for economic productivity. Infertility is one of the pathological conditions qualified as a disease of production. It is widespread in modern farming, especially in crossbred animals. Infertility negatively affects productivity and the return on investment of the farmers. Anestrus forms the major condition, constituting about two-thirds of the infertility problems in crossbred cattle [1]. Various hormonal preparations and protocols are being practiced by field veterinarians to treat postpartum anestrus, the most prevalent reproductive problem in dairy animals, but the results are inconsistent [2][3][4]. Hormonal therapies have good therapeutic value to enhance reproductive efficiency in infertile animals only with good nutritional status [2,3,5]. The variable results obtained following hormonal treatments by different workers may be largely due to nutritional status, faulty management, ovarian changes, endocrine events, and even uterine infection. The use of hormonal protocols like Ovsynch, the controlled internal drug release (CIDR) device, and the Norgestomet ear implant can be helpful in inducing and synchronizing estrus and getting a better conception rate in anestrus dairy bovines with a lesser number of services per conception, and in making acyclic ones cycle normally, thereby achieving the ideal inter-calving interval of 12-13 months [1,2,6].
The progesterone hormone is responsible for stimulating cyclicity and follicular development and for maintaining pregnancy. The plasma protein, cholesterol, and mineral profiles denote the nutritional status of animals and are related to their fertility [5,7]. Cholesterol, being the precursor of steroid hormones, plays an important role in steroidogenesis, while calcium tones up the genitalia, and protein and inorganic phosphorus are involved at the cellular level in metabolic processes. These hormonal and nutritional profiles are disturbed by many metabolic and environmental factors, hampering the normal physiology of the animal body [7].
The comparative studies involving the use of different estrus induction/synchronization protocols at a time under an identical environment in crossbred cows are, however, meager and are mostly based on clinical response only without plasma progesterone or biochemical evaluation [7][8][9]. Hence, this study was planned to evaluate the comparative efficacy of CIDR, Ovsynch and Norgestomet ear implant protocols in anestrus crossbred cows under field conditions in terms of estrus induction response, fertility enhancement, and their influence on plasma progesterone and biochemical profile.
Ethical approval
Prior approval from the Institutional Animal Ethics Committee was obtained for the use of farmers' animals in this study.
Selection and treatment of animals
This study was carried out from November 2013 to March 2014 under middle Gujarat agro-climatic conditions. Forty postpartum (>90 days) anestrus crossbred cows and 10 normal cyclic cows of average body condition score were selected from villages of the Amul and Panchamrut milk-shed areas of Gujarat. The cows were screened gynaeco-clinically for their reproductive status. Detailed history and rectal palpation findings were recorded. Anestrus cows were confirmed by palpating small, smooth, inactive ovaries per rectum twice, 10 days apart. All the selected cows were dewormed using ivermectin, 100 mg S/C, and were supplied with multi-mineral boluses, one bolus daily for 7 days. They were randomly subjected to the following three estrus induction/synchronization protocols (viz., CIDR, Ovsynch, and Norgestomet ear implant; n=10 each) with fixed-time artificial insemination (FTAI) [2,5,7,10].
Treatment protocols
In 10 true anestrus crossbred cows, a CIDR device (1.38 g of progesterone in a silastic coil; Pfizer Animal Health, Mumbai) was inserted intravaginally on day 0. It was removed on day 7 together with an I/M injection of 25 mg prostaglandin F2α (PGF2α; dinoprost tromethamine). Injection GnRH 10 μg (buserelin acetate) was administered I/M on day 9, and FTAIs were performed twice, on day 9 and 10, as shown in Figure-1.
In the second group of 10 anestrus cows, the Ovsynch (GnRH-PGF2α-GnRH) protocol was followed: injection GnRH 10 μg (buserelin acetate) I/M on day 0, injection PGF2α 25 mg (dinoprost tromethamine) I/M on day 7, and a second injection of GnRH 10 μg I/M on day 9, with FTAIs performed twice on day 9 and 10 (Figure-2).

In another group of 10 anestrus cows, a Crestar implant (containing 3.3 mg norgestomet; Intervet India Pvt. Ltd.) was inserted S/C in the outer face of the ear-base together with 2 ml Crestar injection I/M (containing 3 mg norgestomet and 5 mg estradiol valerate) on day 0. The implant was removed on day 7 together with an I/M injection of 25 mg PGF2α (dinoprost tromethamine), and injection buserelin acetate 10 μg I/M was given on day 9, followed by FTAIs twice at 0 and 24 h later (Figure-3). Signs of estrus and rectal palpation findings were recorded for animals of all the groups at AI.
Another 10 anestrus cows were kept as an anestrus control without hormone therapy, and 10 normal cyclic cows that expressed spontaneous estrus within 90 days postpartum and were inseminated served as the normal cyclic control group. Cows in spontaneous or induced estrus were inseminated using good quality frozen-thawed semen. Animals detected in estrus subsequent to FTAI were re-inseminated for up to two cycles, and in non-return cases pregnancy was confirmed per rectum 60 days after the last AI.
Blood sampling
All the hormonally treated/untreated true anestrus and normal cyclic cows were studied for their reproductive status and plasma progesterone and biochemical profiles. For this, jugular blood samples were collected in heparinized vacutainers four times from the true anestrus animals, i.e., on day 0 (just before treatment, on diagnosis), on day 7 (at the time of PGF2α administration), on day 9 (induced estrus/FTAI; FTAI was done twice 24 h apart, i.e., on day 9 and 10 after initiation of treatment), and on day 21 post-AI. Blood sampling for the control groups was done on the day of spontaneous estrus, if any, and on day 21 post-AI. The samples were centrifuged at 3000 rpm for 15 min, and the plasma separated out was stored deep frozen at −20°C with a drop of merthiolate (0.1%) until analyzed.
Plasma assay
The plasma progesterone profile was estimated using the standard radio-immuno-assay (RIA) technique of Kubasic et al. [11]. Labeled antigen (I-125), antibody-coated tubes, and standards were procured from Immunotech, France. The sensitivity of the assay was 0.1 ng/ml. The intra- and inter-assay coefficients of variation were 5.4% and 9.1%, respectively. The concentrations of plasma total protein, total cholesterol, calcium, and inorganic phosphorus were determined by standard procedures with assay kits procured from Analytical Technologies Pvt. Limited, Baroda, on a chemistry analyzer.
Statistical analysis
The data on estrus response and conception rate (by Chi-square test) and the plasma profiles of progesterone and biochemical constituents were analyzed statistically through analysis of variance [12] using online SAS software, version 20.00.
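As an illustration only (not the authors' actual code; the paper reports Chi-square testing and ANOVA), a group comparison of this kind can be reproduced in Python. Using the overall pregnancy counts reported below for CIDR-treated anestrus cows (8/10) versus the untreated anestrus control (3/10), Fisher's exact test is the safer choice for such small cell counts:

```python
# Sketch of a conception-rate comparison on a 2x2 table of pregnancy outcomes.
from scipy.stats import fisher_exact

table = [[8, 2],   # CIDR group: pregnant, not pregnant
         [3, 7]]   # untreated anestrus control: pregnant, not pregnant
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```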
Estrus induction and conception rates
All the cows (100%) under the CIDR, Ovsynch, and Norgestomet ear implant protocols exhibited induced estrus, with intensity varying similarly to the normal cyclic control group, within 42-72 h from the time of PGF2α injection. The conception rates obtained at induced estrus in cows under these three protocols were 60.00%, 50.00%, and 50.00%, respectively, with corresponding overall pregnancy rates over three cycles of 80.00%, 80.00%, and 70.00%. Results were better with the CIDR and Ovsynch protocols as compared to the Norgestomet ear implant (Table-1). In the untreated anestrus control group, only 3 out of 10 cows exhibited spontaneous estrus within 90 days of follow-up; two conceived at first AI (CR, 66.66%) and the third at the 3rd AI, giving an overall pregnancy rate of only 30.00% (3/10). In the normal cyclic control group (n=10), the conception rates at the first cycle and overall of three cycles were 50.00% and 80.00%, respectively, with a mean service period of 98.77±6.84 days (Table-2). The interval from PGF2α injection to induced estrus observed in the present study agrees with earlier reports in cows [7,13,14] and buffaloes [3,5] using such types of protocols. A comparatively shorter interval was, however, reported by others [6] in heifers and multiparous cows. Chaudhari et al. [2] reported this interval to be much shorter, at 25.41±0.94, 21.95±0.20, and 22.68±1.46 h using Crestar, Crestar + 500 IU PMSG, and Crestar + Receptal in Kankrej heifers. Following removal of the implant, resumption of follicular development and maturation occurs due to the flux of gonadotropins from the pituitary gland, while behavioral estrus in the case of the Norgestomet ear implant was observed because of the direct effect on the hypothalamus of both the exogenously administered estradiol and the high endogenous estradiol [15]. The first service conception rate of 50.00% with the Norgestomet ear implant in the present study is comparable with the earlier result of Nak et al. [6] of 41.40% in anestrus heifers. The findings with the CIDR and Ovsynch protocols are also in line with Patel et al. [7], who obtained 50.00% and 30.00% in anestrus crossbred cows, respectively, and with Bhoraniya et al. [10]. Others [16] reported overall conception rates of 44.00% and 53.85% in Norgestomet and PRID groups, respectively, which are relatively lower than the present findings with the Norgestomet ear implant and CIDR. A relatively inferior pregnancy rate of 33.33% was also reported by Chaudhari et al. [2] with the Norgestomet ear implant. Lower first service conception rates of 40.00% and 30.00% [7], and 36.84% and 29.41% [9], respectively, with the CIDR and Ovsynch protocols are also documented by others. With Ovsynch and the Norgestomet ear implant, Nak et al. [6] reported overall conception rates of 42.18% and 29.60% in non-cycling cows and 44.07% and 41.4% in heifers. Martinez et al. [17] also reported that the addition of progestin to the Cosynch or Ovsynch regimen resulted in significantly improved pregnancy rates in heifers but not in cows. El-Zarkouny et al. [4] reported that anestrus dairy cows treated with Ovsynch plus CIDR had a higher pregnancy rate (64%) than anestrus cows treated with Ovsynch alone (27%). However, cycling cows receiving Ovsynch plus CIDR had a pregnancy rate similar to that of cycling cows receiving Ovsynch alone. Stevenson et al. [18] reported that pregnancy outcomes showed larger increases when cows were treated with Ovsynch plus CIDR than with Ovsynch alone because more anestrus cows conceived. Our results were similar with the CIDR and Ovsynch protocols, but the Norgestomet ear implant group showed lower overall pregnancy outcomes than the other two protocols.
The reduced fertility at norgestomet-induced estrus may be owing to luteal dysfunction [19], which may be due to insufficient luteinizing hormone production following implant withdrawal [20], although better conception rates were obtained by Rentfrow et al. [13] in Synchro-Mate-B-treated Brahman heifers (18.2%) and by Singh et al. [21] in anestrus heifers and cows (40%).
Further, conception rates of around 30% were obtained at the second and third cycles in anestrus cows induced to cycle, which is close to that of normal cycling cows (40% and 33%). This proved that all the protocols induced and synchronized estrus and then established normal cyclicity in the treated animals, resulting in conceptions in subsequent cycles like normal breeding cows. These observations further support the previous observations on the use of similar protocols in anestrus cows and buffaloes by many workers [2,7,9,10,15,22].
Thus, estrus could be induced in true anestrus cows within 2-3 days from the day of PGF2α injection in each protocol, and they could be made pregnant within a period of 10-12 days (95-100 days postpartum), in comparison to the 149.52±5.67 days of service period recorded in the untreated control group, indicating a huge curtailment (around 1.5-2.0 months) in the waiting period for anestrus animals to evince estrus and become pregnant. The pooled conception rate of the three treatment protocols (76.66%) in anestrus cows indicated their positive contributory role in handling the problem of acyclicity in cows, nearly at par with the normal cyclic control cows (80.00%).
Plasma progesterone profile
The mean levels of plasma progesterone recorded on days 0, 7, and 9 (AI) of treatment and on day 21 post-AI in anestrus cows under the CIDR, Ovsynch, and Norgestomet ear implant protocols, and on the day of AI and day 21 post-AI in the normal control group, are presented in Table-2. The data show that the mean plasma progesterone (ng/ml) concentrations were low, toward basal values, on day 0, i.e., the day of initiation of treatment, in all three groups, suggesting that the animals were in an anestrus phase. These levels subsequently rose significantly (p<0.01) to peak values on day 7 (5.58±0.98, 4.10±0.78, and 1.92±0.23 ng/ml), particularly in animals under the CIDR and Ovsynch protocols, i.e., just before the implants were removed and PGF2α was injected. Thereafter, the mean progesterone levels dropped suddenly and significantly, within 48 h of PGF2α injection and/or implant removal, to basal values coinciding with induced estrus, when FTAIs were done. These levels again increased significantly (p<0.01) on day 21 post-AI in all the groups (3.27±0.54, 3.22±0.64, and 3.76±0.65 ng/ml) due to the estruses being ovulatory, with development and maintenance of the corpus luteum (CL) and the establishment of pregnancy in some animals. In the normal cyclic control group also, the mean plasma progesterone concentration was lowest (0.43±0.17 ng/ml) on the day of spontaneous estrus/AI, and it rose significantly (p<0.05) on day 21 post-AI (4.98±0.45 ng/ml) due to the establishment of pregnancy in four cows in that cycle.
The mean plasma progesterone concentrations in the conceived and non-conceived groups under all three treatment protocols and in the normal cyclic control group were found to be statistically similar on days 0, 7, and even 9 (AI), but on day 21 post-AI the conceived cows had significantly higher mean plasma progesterone concentrations as compared to non-conceived ones, only under the CIDR (4.36±0.12 vs. 1.65±0.82 ng/ml) and Ovsynch (4.85±0.62 vs. 1.59±0.34 ng/ml) protocols (Table-2). These findings on the plasma progesterone profile with respect to the effect of the CIDR and Ovsynch protocols and/or in the normal cyclic group closely corroborate earlier observations in anestrus cows [7,9,10] and in anestrus buffaloes [3,5,23] under such protocols. The levels of plasma P4 on the day of beginning of the treatment protocol helped delineate the reproductive and endocrine status of the animals and thereby predict the possible response to the therapy. The higher plasma P4 recorded on day 21 post-AI in conceived cows of all the groups was due to the establishment of pregnancy and maintenance of CL function, while the significantly low yet variable plasma P4 noted on day 21 post-AI in non-conceived cows could be due to their return to the next estrus at varying intervals on account of probable irregular or long cycle length, early embryonic mortality after day 17, or uncoordinated, unexplained hormonal changes in some of them. These findings corroborate the observations of Nakrani et al. [5] using the same three protocols in buffaloes and of Ayad et al. [24] using the Norgestomet ear implant in cattle.
The mean plasma progesterone levels obtained on the day of initiation of the CIDR and Ovsynch treatments in the present study corroborate earlier findings in zebu and crossbred cows [7,9,10,25]. The significant rise observed in the plasma P4 profile on day 7 of treatment in the present study with the CIDR and Ovsynch protocols (4.97±1.68 and 3.75±0.47 ng/ml) over the initial (day 0) values, with a sudden drop to almost basal values at induced estrus within 48-60 h after the PG injection (Table-2), has also been reported earlier in anestrus cows [9,10,22] employing the CIDR and Ovsynch protocols. The apparently higher mean levels of progesterone found on day 21 post-AI in non-conceived cows covered under the Norgestomet ear implant protocol and the normal control group (3.02±1.15 and 3.13±0.37 ng/ml, respectively) are suggestive of the possibility of either prolonged cycles due to an extended luteal phase/delayed luteal regression and/or delayed embryonic death. The significantly higher mean plasma progesterone level (5.58±0.99 ng/ml) recorded on day 7 in the CIDR group might be due to the continuous release of exogenous progesterone from the progesterone-molded silastic coil inserted in the anterior vagina of the cows. In the Ovsynch protocol, the rise in the mean progesterone level (4.10±0.78 ng/ml) noted on day 7 might be due to luteinization of some of the growing follicles and/or ovulation of the dominant follicle and formation of an accessory CL under the influence of GnRH, simulating the diestrum phase, while in the Norgestomet ear implant protocol the mean plasma progesterone level (1.92±0.23 ng/ml) did not show a rise, probably due to the presence of a synthetic progestagen in the implant, which is not detected by the 17α-hydroxyprogesterone RIA.
Biochemical and mineral profile
The results of the biochemical analysis did not reveal significant variations in plasma total cholesterol, total protein, calcium, and inorganic phosphorus profiles between days and periods of the treatment in any of the groups or between conceived and non-conceived cows, although the cholesterol concentration was non-significantly higher, and protein lower, in conceived as compared to non-conceived cows. However, the pooled values of cholesterol and protein were significantly higher in normal cyclic cows than in anestrus cows of the CIDR and Ovsynch groups (Table-3). Similar results for cholesterol and protein were observed in anestrus cows and buffaloes by earlier researchers [7,26]. However, others [27] reported that conceiving cows and buffaloes had significantly higher levels of plasma cholesterol and protein as compared to non-conceiving ones. Earlier, higher mean plasma total cholesterol levels at induced estrus and on the 22nd day post-AI than at the pre-treatment level in GnRH-treated anestrus buffaloes were documented [28], and the high level of cholesterol increased estrogen synthesis, resulting in the manifestation of heat [28]. Higher levels of cholesterol in cyclic as compared to acyclic cows and buffaloes have, however, also been reported by previous workers [7,29,30]. Patel et al. [7] in crossbred cows, Ramakrishnan et al. [25] in Gir cows, and Nakrani et al. [5] and Savalia et al. [26] in buffaloes, however, recorded significantly higher total protein in conceived than non-conceived and in cyclic than anestrus animals. Gentile et al. [31] opined that the serum protein level was not related to fertility in dairy cows. However, as has been noted in the present study, Patel et al. [7] also opined that crossbred cows having a high level of total protein had good reproductive performance.
A non-significant influence of the CIDR, Ovsynch, and/or Norgestomet ear implant protocols on calcium and phosphorus levels in anestrus cows and buffaloes (Table-3), as observed in the present study, has also been recently documented by researchers, including for normal cyclic control groups [5,7,25,27]. Savalia et al. [32] obtained higher mean calcium levels in conceived as compared to non-conceived buffaloes under CIDR, Ovsynch, and even normal cyclic control groups, which is in contrast to the present findings. A marginal deficiency of phosphorus is opined to be enough to cause disturbances in the pituitary-ovarian axis without manifesting specific systemic deficiency symptoms [33]. Savalia et al. [32] did not find appreciable variation in the mean plasma inorganic phosphorus levels on the day of GnRH and/or PG treatment, at induced estrus, and on day 22 post-AI in anestrus or sub-estrus buffaloes. In the present study, the calcium concentration was non-significantly lower and inorganic phosphorus non-significantly higher in conceived than non-conceived cows. The insignificant differences observed in the plasma inorganic phosphorus profile between different phases of the cycle, and even between conceived and non-conceived groups, corroborate with earlier reports in dairy cows [22,34,35].
Conclusion
From the results, it can be inferred that the hormonal protocols used, particularly the Ovsynch and CIDR protocols, improved conception rates in anestrus crossbred cows under field conditions and influenced the plasma progesterone profile significantly, but not the biochemical profile, in a manner similar to normal cyclic animals. They can, therefore, be used by practicing veterinarians in anestrus rural crossbred cows to improve their reproductive efficiency and thereby the farmers' economy.
Authors' Contributions
AJD planned and designed the study. The experiment was conducted by AJD, BBN, KKH and JAP, while laboratory work was done by BBN, KKH and RGS. All authors participated in data analysis and preparation of the draft of the manuscript, and read and approved the same.
|
v3-fos-license
|
2022-05-12T13:22:30.562Z
|
2022-05-11T00:00:00.000
|
248700846
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2022.863594/pdf",
"pdf_hash": "cb0c85f8a4191cd4a2749de93291fd4c5461397b",
"pdf_src": "Frontier",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45649",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "cb0c85f8a4191cd4a2749de93291fd4c5461397b",
"year": 2022
}
|
pes2o/s2orc
|
Uterine Fibroid Patients Reveal Alterations in the Gut Microbiome
The gut microbiota is associated with reproductive disorders in multiple ways. This research investigated possible differences in gut microbiome composition between patients with uterine fibroids (UFs) and healthy control subjects in order to provide new insight into its etiology. Stool samples were collected from 85 participants, including 42 UF patients (case group) and 43 control subjects (control group). The gut microbiota was examined with 16S rRNA quantitative arrays and bioinformatics analysis. The α-diversity in patients with UFs was significantly lower than that of healthy controls and negatively correlated with the number of tumorigeneses. The microbial composition of the UF patients deviated from the cluster of healthy controls. Stool samples from patients with UFs exhibited significant alterations in multiple bacterial phyla, such as Firmicutes, Proteobacteria, Actinobacteria, and Verrucomicrobia. In differential abundance analysis, some bacterial species were shown to be downregulated (e.g., Bifidobacteria scardovii, Ligilactobacillus saerimneri, and Lactococcus raffinolactis) and others upregulated (e.g., Pseudomonas stutzeri and Prevotella amnii). Furthermore, the microbial interactions and networks in UFs exhibited lower connectivity and complexity as well as a higher clustering property compared to the controls. Taken together, gut microbiota dysbiosis may be a risk factor for UFs. This study found that UFs are associated with alterations of gut microbiome diversity and community network connectivity. It provides a new direction to further explore the host-gut microbiota interplay and to develop management and prevention strategies in UF pathogenesis.
INTRODUCTION
Uterine fibroids (also known as leiomyomas or myomas) are the most common benign neoplasms of the uterus. It is estimated that women (in the USA) have an up-to-75% lifetime risk of developing uterine leiomyomas (Commandeur et al., 2015), a pathology characterized by substantial extracellular matrix (Stewart et al., 1994;Baird et al., 2003). Symptoms related to fibroids include abnormal uterine bleeding, pelvic pain, urinary frequency, and constipation, which vary with the size and location of the fibroids (Wallach et al., 1981). Chromosomal damage associated with parity was relatively overrepresented in uterine leiomyomas (Kuisma et al., 2021). In addition, fibroids have been associated with infertility and poor obstetrical outcomes due to the abnormal uterine cavity shape and the compression of the fallopian tube (Bajekal and Li, 2000;Coronado et al., 2000). Therefore, it imposes a considerable burden on women of reproductive age and on society as a whole (Marsh et al., 2018).
Imbalances in the gut microbiota have been widely reported to have complex associations with human health, specifically by immune responses and nutrient metabolism (Behary et al., 2021;Leyrolle et al., 2021). On one hand, the host local immune system as well as the gut barrier function is affected by altered microbial interactions. Alterations contribute to the disruption of the intestinal homeostasis and result in the development of several human diseases, including chronic infectious diseases, gastrointestinal diseases, metabolic diseases, and even malignant tumors (Zhu et al., 2018;Mouries et al., 2019;Sims et al., 2019;Oh et al., 2020). On the other hand, the human gut is colonized with a vast community of indigenous microorganisms that have co-evolved with the host in a symbiotic relationship and also can alter among populations depending on the host's dietary habits, gender, ethnicity, geographical environment, health status, etc. (Ma et al., 2020;Wei et al., 2020;Dwiyanto et al., 2021). For these reasons, the gut microbiota is now considered a potential source of novel therapeutics and interventions to improve the health status. To date, our understanding of the composition and functions of the human gut microbiota and possible pathogenic mechanisms has increased exponentially. Gut dysbiosis plays an important role in multi-system diseases. A growing body of evidence points out a link between diet and common reproductive pathologies (e.g., polycystic ovary syndrome, infertility, endometriosis, and/or deregulated ovarian functions) (Skoracka et al., 2021). On one hand, we hypothesize that unhealthy diets lead to gut dysbiosis, which is related to the development of UFs. On the other hand, the uterine fibroid is a sex hormone-related disease. Moreover, the gut microbiota regulates the levels of sex hormones via interactions among its metabolites, the immune system, and chronic inflammation (He et al., 2021). However, there is still no study that clearly shows any abnormalities of the gut microbiota in UF patients as compared to the control subjects.
In this study, we applied 16S rRNA quantitative microarrays, a novel high-throughput microarray technology, rather than conventional culture-based techniques to compare the gut microbiology differences between healthy individuals and patients with UFs. Furthermore, we explored the potential correlation and deciphered the interplay between the gut microbiome and UFs.
Study Design and Sample Collection
The participants with UFs (n = 42) and the control participants (n = 43) were recruited at The Third Xiangya Hospital of Central South University from December 2020 to May 2021. The UF patients were diagnosed by the Gynecology Department of The Third Xiangya Hospital according to the clinical practices (Stewart, 2015). The exclusion criteria included severe chronic diseases (e.g., metabolic disorders, heart failure, cirrhosis, and gastrointestinal, neurological, and/or autoimmune diseases).
Moreover, individuals with a history of probiotic interventions or diarrhea, or who had taken antibiotics or NSAIDs within 3 months prior to collection, were also excluded. None of the female subjects was premenarchal or postmenopausal.
This study was approved by the Ethics Committee of The Third Xiangya Hospital of Central South University and was conducted under the relevant guidelines and regulations (IRB number 22003). Written informed consent was obtained from the participants before the research, and all samples and questionnaires were voluntary. All UF subjects provided stool samples at the time of diagnosis of the disease, before initiating any treatment. Moreover, all fecal samples were collected after menstruation. Fresh stool samples were collected in sampling tubes with preservative solution and stored at -80°C until further processing.
DNA Extraction and Labeling
Bacterial DNA was extracted from stool samples using the Stool DNA Extraction Kit (Halgen, Ltd.) according to the procedures described in the manufacturer's instructions. Primers F44 (RG TTYGATYMTGGCTCAG) and R1543 (GGNTACCTTKTTA CGACTT) were used to amplify the DNA of the V1-V9 regions of the 16S rRNA gene. Approximately 20-30 ng of the extracted DNA was used in a 50-µl PCR reaction under the following cycling conditions: an initial denaturing step at 94°C for 3 min; followed by 30 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 60 s; and a final extension step at 72°C for 3 min. Agarose gel electrophoresis was then applied to check whether the PCR amplification was successful. Finally, the PCR products were directly labeled using a DNA labeling kit (Halgen Ltd., Zhong Shan, China) and further processed for microarray hybridization.
Microarray Hybridization
The human gut bacterial microarrays used were designed and manufactured by Halgen Ltd. The arrays use its proprietary oligo-array technology and cover more than 95% of the culturable gut microbial species found in different populations. Probes were selected from all the variable regions of bacterial 16S rRNA. The length of each probe was designed to be approximately 40 bp. The hybridization mixture consisted of 500 ng of Cy5-labeled test sample DNA and 50 ng of Cy3-labeled reference pool. Then, the hybridization buffer and the Cy3-and Cy5-labeled samples were added to a final volume of 150 µl, heated to 100°C for 5 min, and cooled on ice for 5 min. All hybridization mixtures were placed in a hybridization box and then hybridized at 37°C for 3.5 h in a hybridization oven. Finally, the slides were washed in 2× saline sodium citrate, 0.25% Triton X-100, 0.25% sodium dodecyl sulfate, and 1X Dye Protector for 15 min at 63°C. Then, the slides were rinsed in 1X Dye Protector until they were clear of water droplets after immediate withdrawal from the solution. The slides were immediately scanned using a dual-channel scanner.
Data Analysis
The Cy5/Cy3 ratio measured by the respective channels was used to calculate the percentage of each microbial species, which is presented as the relative abundance value. R (v.4.1.2) was used in this study. Alpha diversity (α-diversity) and beta diversity (β-diversity) indices were analyzed to characterize species diversity within and among habitats, respectively, and to evaluate the overall diversity in an integrated manner. α-Diversity includes richness, Shannon-Wiener diversity, Gini-Simpson diversity (obtained by subtracting the value of the classical Simpson index from 1), and Pielou's evenness. They were measured using the function diversity in the package "vegan" based on the unrarefied OTU table; the β-diversity indices of the microbial communities were calculated using the function vegdist after the data were rarefied. ANOSIM was chosen to test for significance between groups (Wang et al., 2008). For non-metric multidimensional scaling (NMDS), a stress value of less than 0.2 indicates that the results of the NMDS analysis are scientifically credible (Legendre and Legendre, 1998). The species composition at the phylum level between the two groups was visualized using the ggplot2 package. The DESeq2 package was used to analyze species differences and marker species (Topper et al., 2017). The differential expression matrix and the P-value matrix of species composition were obtained through the function DESeqDataSetFromMatrix. The significance level was P < 0.05 with an absolute fold change greater than 2; the volcano map was drawn using the ggplot2 package. The co-occurrence networks of the two groups were established based on the Spearman correlation matrix and corrected by the P-value matrix using the igraph package; the Benjamini and Hochberg false discovery rate was used to correct the P-values. Modules were divided according to high intra-module connectivity and low inter-module connectivity; the Spearman correlation coefficient and corrected P-value cutoffs were 0.4 and 0.05, respectively (Yuan et al., 2021). The bacterial co-occurrence networks were then constructed in Gephi software (https://gephi.org/). A random network with the same number of nodes and edges as the real network was constructed by using the erdos.renyi.game function (Erdos and Renyi, 1960). The classified information of species in the network modules was presented using ggplot2. The redundancy analysis (RDA) of the effect of age, body mass index (BMI), and other body indices on the distribution of samples and species was performed with the vegan package and visualized with ggplot2.
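The α-diversity indices named above have standard definitions. As an illustration only (the study computed them in R with the vegan package), the following Python sketch evaluates them for a toy abundance vector:

```python
# Standard alpha-diversity indices for one sample of OTU counts (toy data).
import math

counts = [40, 25, 15, 10, 5, 3, 1, 1]           # toy OTU counts for one sample
N = sum(counts)
p = [c / N for c in counts if c > 0]             # relative abundances

richness = len(p)                                # number of observed taxa
shannon = -sum(pi * math.log(pi) for pi in p)    # Shannon-Wiener H'
gini_simpson = 1 - sum(pi * pi for pi in p)      # 1 - classical Simpson index
pielou = shannon / math.log(richness)            # evenness J' = H' / ln(S)

print(richness, round(shannon, 3), round(gini_simpson, 3), round(pielou, 3))
```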
SPSS (version 26.0) was also used in this study. Continuous data were reported as median with range (minimum-maximum) or mean ± standard deviation (SD) and were appropriately analyzed with the Wilcoxon test or t-test. Categorical data were described with numbers and percentages and were analyzed with the χ² test or Fisher's exact test, as appropriate. P < 0.05 was considered to be statistically significant.
Clinical Characteristics of the Study Subjects
The demographic characteristics of the UF group (case group) and the non-fibroid group (control group) are shown in Table 1 and Supplementary Figure S1. There was no difference between the case and control groups in terms of BMI, menstrual history, previous history, or childbearing history, while a difference in average age was detected between the two groups (P = 0.02).
The Diversity of the Gut Microbiota
All samples were sequenced to sufficient depth, and rarefaction (dilution) curves were calculated and recorded after 5 replicate random subsamplings (Figure 1A). The control group had higher values than the case group for all α-diversity indices: richness (Figure 1B), Shannon-Wiener (Figure 1C), Gini-Simpson (Figure 1D), and Pielou (Figure 1E). β-Diversity was compared using both the NMDS algorithm and principal coordinate analysis (PCoA), which demonstrated significant differences between the two groups. The PCoA results showed that the distribution of cases and controls was scattered between groups and clustered within groups, with a smaller area for the case group, and the ANOSIM test (P = 0.001) indicated that the difference between groups was greater than that within groups, implying a significant difference in diversity between the case and control groups (Figure 2A). The NMDS analysis results were similar to those of PCoA, with stress = 0.159 (<0.2) (Figure 2B). The α-diversity index of gut microbes was further investigated in patients with different numbers and locations of tumorigenesis. Interestingly, our results revealed that the gut microbial diversity of patients decreased with an increasing number of tumors (P < 0.01). Some differences were also observed in gut microbial α-diversity indices depending on the location of tumorigenesis (Figures 1F, G and Supplementary Figure S2).
The Composition and Biomarkers of the Gut Microbiota
There were commonalities and differences in species composition between the case and the control groups. At the phylum level, 16 phyla were detected in the case group and 17 in the control group (Figure 2C). At the genus level, 337 genera were detected in the case group and 370 in the control group, with 321 common to both groups (Figure 2D). At the species level, 866 species were detected in the case group and 959 in the control group; among them, 764 were common, while 195 were unique to the control group and 102 to the case group (Figure 2E). In order to demonstrate the differences in taxonomic composition between the case and the control groups, we compared the differentially expressed taxa between the two groups at the phylum and the species level, respectively. The results suggest that the composition of the fractions was similar, but the abundance percentages of the components varied at the phylum level (Figures 3A, B). Specifically, the relative abundances of Firmicutes, Proteobacteria, Actinobacteria, Cyanobacteria, Dictyoglomi, and Spirochaetes were significantly lower in the case group than in the control group (P < 0.05) (Figure 3C). Besides this, among all components, only Verrucomicrobia showed the opposite trend (Figure 3C). Furthermore, we analyzed the differential abundance of species based on statistical differences (Metastats), and the multiplicity of differences characterized the biomarkers (DESeq2). For the Metastats analysis, we present the top 20 differentially expressed species in relative abundance (P < 0.05) (Figure 4A). In total, 17 biomarkers were found, 3 of which (marked in red) were upregulated, while 14 (marked in blue) were downregulated in the case group relative to the controls (Figure 4B).
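As an illustration of the biomarker filter described above (significance P < 0.05 together with a greater-than-two-fold change, read here as |log2FC| > 1; if the authors meant |log2FC| > 2, only the constant changes), a DESeq2-style results table can be screened as follows. The column names are assumptions for the sketch, not taken from the study's actual output:

```python
# Toy screening of a DESeq2-style results table for biomarker species.
import pandas as pd

res = pd.DataFrame({
    "species": ["Bifidobacteria scardovii", "Pseudomonas stutzeri",
                "Lactococcus raffinolactis", "Prevotella amnii"],
    "log2FoldChange": [-2.1, 1.8, -1.6, 2.4],   # case vs. control (toy values)
    "pvalue": [0.003, 0.020, 0.010, 0.004],
})

# Keep species with P < 0.05 and more than a two-fold change (|log2FC| > 1).
biomarkers = res[(res["pvalue"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
biomarkers = biomarkers.assign(
    direction=biomarkers["log2FoldChange"].map(lambda x: "up" if x > 0 else "down")
)
print(biomarkers)
```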
The Microbial Interactions and Networks Between Gut Microbiotas
We performed a network co-occurrence analysis to unravel the relationships among microorganisms. The resulting case network consisted of 863 nodes linked by 17,786 edges, with a much higher number of strong positive correlations (17,311, 97.33%) than negative ones (475, 2.67%). The control network consisted of 958 nodes linked by 23,105 edges, also with a much higher number of strong positive correlations (21,665, 93.77%) than negative ones (1,440, 6.23%) (Supplementary Table S1). The results suggest that the microbial networks were made up of closely connected nodes and formed a kind of "small world" topology (Supplementary Table S1). Compared with the topological properties of random networks with the same numbers of nodes and edges (Supplementary Figure S3 and Supplementary Table S2), the network of the case group exhibited a scale-free characteristic (P < 0.001, Supplementary Figure S4), and so did that of the control group (P < 0.001, Supplementary Figure S5), indicating that the network structure was nonrandom. Both the gut microbial interaction network of the case group and that of the control group were divided into seven modules (Figures 5A, B). The average degree of the case group is 41.219, which is lower than that of the control group (48.236, P < 0.001, Figure 5C), and the number of edges forming triangles is also lower (P < 0.05, Figure 5D). This suggests that the total connectivity and complexity between gut microbes were higher in the control group than in the case group. The average path length is 2.978 in the case group and 2.798 in the control group (Supplementary Table S1); the average clustering coefficient of the case group is 0.593, which is significantly higher than that of the control group (0.525, P < 0.001, Figure 5E). These results indicate that the average "clustering property" of the whole network of gut microorganisms was higher in the case group than in the control group. We drew a doughnut chart to show the taxonomic composition of each module of the case and control group networks at the phylum level (Figure 5F). The results show that differences in the components and their abundance percentages exist in each module. Given the above findings, it can be concluded that there are differences in the gut microbial interaction networks between the case group and the control group. Compared with the control group, the case group network has lower connectivity and complexity and a higher clustering property.
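The topological quantities reported above (average degree, clustering coefficient, average path length, and the comparison against an Erdős–Rényi random graph with matched nodes and edges) can be computed as in the following Python sketch with networkx; the toy graph merely stands in for the study's Spearman-correlation networks, which were built in igraph and Gephi:

```python
# Sketch of the network-topology comparison on a toy graph.
import networkx as nx

G = nx.karate_club_graph()                        # stand-in co-occurrence network
rand = nx.gnm_random_graph(G.number_of_nodes(),   # Erdos-Renyi null model with the
                           G.number_of_edges(),   # same numbers of nodes and edges
                           seed=0)

for name, g in [("observed", G), ("random", rand)]:
    degree = sum(d for _, d in g.degree()) / g.number_of_nodes()
    clustering = nx.average_clustering(g)
    # Average path length is defined on the largest connected component.
    comp = g.subgraph(max(nx.connected_components(g), key=len))
    path_len = nx.average_shortest_path_length(comp)
    print(f"{name}: <k>={degree:.2f}, C={clustering:.3f}, L={path_len:.3f}")
# A clustering coefficient well above the random graph's, with a comparable
# path length, is the "small world" signature mentioned in the text.
```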
DISCUSSION
It was found that the gut microbiome in UF patients was altered in composition, ecological network, and functionality compared with healthy women. We identified the differences in the gut microbiota of the UF group, explored the potential correlation, and deciphered the interplay between the gut microbiome and UFs. The α-diversity in patients with UFs was significantly lower than that of healthy controls and negatively correlated with the number of tumorigeneses. The microbial composition of the UF patients deviated from the cluster of healthy controls. Stool samples from patients with UFs exhibited significant alterations in multiple bacterial phyla, such as Firmicutes, Proteobacteria, Actinobacteria, and Verrucomicrobia. In the differential abundance analysis, some bacterial species were shown to be downregulated (e.g., Bifidobacteria scardovii, Ligilactobacillus saerimneri, and Lactococcus raffinolactis) and others upregulated (e.g., Pseudomonas stutzeri and Prevotella amnii). Furthermore, the microbial interactions and networks in UFs exhibited lower connectivity and complexity as well as a higher clustering property compared to the controls.
Imbalance in gut microbiota composition is associated with a series of non-communicable diseases, including gastrointestinal disorders (inflammatory bowel diseases, liver cancer, colorectal cancer), metabolic diseases (type 2 diabetes, obesity, malnutrition, atherosclerosis, metabolic liver disease), and neurodegenerative diseases (Alzheimer's disease, Parkinson's disease) (Addolorato et al., 2020; Kim et al., 2020), all of which are characterized by decreased microbial diversity. In our study, the α-diversity of the gut microbiota in the control group was significantly higher than that of the case group (P < 0.01). In addition, the PCoA of microbiota composition indicated a distinct clustering pattern between samples from UF individuals and healthy controls. These results are in line with previous research on reproductive endocrine and metabolic disorders, which found that α-diversity in polycystic ovary syndrome was lower than that in healthy people (Qi et al., 2019; Jobira et al., 2020; Jiang et al., 2021). Interestingly, in our study the α-diversity of the microbiota was negatively correlated with the number of tumorigeneses; further experiments are needed to verify this finding and explore the possible mechanisms in benign UFs. Taken together, these observations may indicate that a low level of richness and evenness leads to gut flora dysbiosis, which is associated with an increased risk of UFs in women. However, some microbiome studies on endometriosis, a sex hormone-related disease, showed different alterations (Yuan et al., 2018; Ata et al., 2019). By analyzing differential abundance, we observed that the upregulated species were Prevotella amnii and Pseudomonas stutzeri, while the downregulated species were Lactobacillus saerimneri and Lactococcus raffinolactis among UF patients. Prevotella amnii has been reported to be enriched in patients with breast cancer, as it is involved in regulating or responding to host immunity and metabolic balance (Zhu et al., 2018). Pseudomonas stutzeri is widely distributed in natural environments and can be considered an opportunistic pathogen; it is more abundant in bone and urinary tract infections, especially in patients with acquired immune deficiency syndrome (Lalucat et al., 2006). Moreover, Lactococcus raffinolactis is associated with aldehyde dehydrogenase, an alcohol metabolism-related enzyme, and has the potential to be a promising probiotic dietary supplement (Konkit et al., 2016). Our finding is in line with previous research suggesting that Lactobacillus saerimneri has a higher relative abundance in healthy and younger populations and is associated with potent tumor necrosis factor-inhibitory activity (Ma et al., 2021). In other words, our study showed a significant decrease in probiotic species and an increase in pathogenic bacterial species among UF subjects, indicating a reduced ability to maintain homeostasis and an increased risk of disease.
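The diversity index and software behind the α-diversity comparison are not specified here; as a hedged illustration, the following sketch computes the Shannon index per sample from a taxon-count table and compares the groups nonparametrically. All counts below are simulated, not study data; a real analysis would use the OTU/ASV table from the sequencing pipeline.

```python
import numpy as np
from scipy import stats

def shannon(counts):
    """Shannon alpha-diversity H = -sum(p * ln p) for one sample's taxon counts."""
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) is treated as 0
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(7)
# Simulated taxon-count matrices (samples x taxa).
case_comp = rng.dirichlet(np.ones(200) * 0.3)   # more uneven composition -> lower diversity
ctrl_comp = rng.dirichlet(np.ones(200) * 0.8)   # more even composition -> higher diversity
cases = rng.multinomial(10_000, case_comp, size=30)
controls = rng.multinomial(10_000, ctrl_comp, size=30)

h_case = [shannon(s) for s in cases]
h_ctrl = [shannon(s) for s in controls]
print(stats.mannwhitneyu(h_case, h_ctrl, alternative="two-sided"))
```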
The ecological network of gut microbiota is considered critical to host health because it indicates that beneficial symbionts and their associated functions are maintained over time (Lozupone et al., 2012; Relman, 2012). Dysbiosis of the intestinal microbiota is reflected not only in changes in the abundance of community members but also in altered microbial interactions (Chen et al., 2020). Our network analysis demonstrated lower connectivity and complexity and higher clustering in the case group network compared with the control group. Microbial communities showing high cooperation are regarded as less stable than competitive communities (Coyte and Rakoff-Nahoum, 2019). Overall, the gut microbiome in Chinese women with UFs was altered in composition, ecological network, and functionality compared with healthy women, and associated factors for the prediction of UFs were identified. However, certain limitations of the present study should be considered. Firstly, dietary characteristics, which are potential confounders, were not recorded (Gershuni et al., 2021). Secondly, the present study did not address precise mechanisms, including the host estrogen-gut microbiome axis, immune regulation, and metabolism. Thirdly, although a small age difference between groups cannot be entirely ignored, two observations argue against age as a major confounder: RDA showed that four factors (age, BMI, menses, and menstruation) accounted for less than 4.52% of the differences in community structure (Supplementary Figure S1), and individuals with UFs typically have a long follow-up history before surgical treatment on admission, which makes age unlikely to have confounded this cohort. Further studies should therefore clarify whether the association is causal and whether dysbiosis leads to UFs or the disease leads to gut dysbiosis. Furthermore, fecal microbiota may not fully reflect the whole bowel microbial environment, which is closely related to systemic status, but sampling multiple sites in the human intestine is health-threatening and unethical. Future work should also refine the inclusion and exclusion criteria.
In conclusion, our preliminary study provides distinct evidence of an imbalance of gut microbiota in UF patients. Our results can lay the foundation for subsequent studies on microbiota biomarkers to predict UF risk. Additionally, the observed alterations may be used to guide the development of probiotic supplements that alleviate gut dysbiosis in UFs.

[Figure 5 legend, panels C-F] Topological features of the network: degree (C), triangles (D), and clustering coefficient (E) (Wilcoxon test, *P < 0.05, ***P < 0.001). Node connectivity (degree) shows how many connections (on average) each node has to the other nodes in the network; triangles are the number of vertex triangles in the network diagram, reflecting connectivity; the global clustering coefficient reflects the closeness of nodes in a network, also known as transitivity. (F) The doughnut charts show relative abundance in the seven modules at the phylum level for the two groups.
DATA AVAILABILITY STATEMENT
The data reported in this paper have been deposited in the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) under accession number GSE197904.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Third Xiangya Hospital of Central South University and performed under the relevant guidelines and regulations (IRB number 22003). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
DX and ZY conceived the study. XM and XP performed the experiments and analyzed the data. XP, XM, XZ, and QP wrote and edited the final manuscript. All authors contributed to the article and approved the submitted version.
Immunohistochemical Evaluation of the Pathological Effects of Diabetes Mellitus on the Major Salivary Glands of Albino Rats
Abstract Objectives Diabetes mellitus (DM) is a notorious chronic disease characterized by hyperglycemia. Our study aimed to determine the expression of cytokeratin 17 (CK17) in all major salivary glands of diabetic albino rats to provide more information about the pathological effects of DM on the intracellular structures of the gland parenchyma. Materials and Methods Twenty adult male albino rats were used in the experiment, divided into two equal groups: group 1 (control rats) and group 2 (diabetic rats). The animals were sacrificed 45 days after diabetes induction. The major salivary gland complex of each animal was dissected and prepared for histological evaluation and immunohistochemical detection of CK17. Results Histological examination showed that the salivary gland parenchyma of the diabetic group underwent atrophy, characterized by degenerated acini, a dilated duct system, and duct-like structures, with predominance of the fibrous tissue compartment and discrete fat cells. Immunohistochemical staining of the major salivary glands of the control group revealed negative to diffuse mild CK17 expression in all duct cells and some serous acinar cells, whereas mucous acini were negatively stained. In contrast, the major salivary gland parenchyma of the diabetic group demonstrated mild to strong expression in duct cells, concentrated at their apical part, with moderate to strong diffuse expression in some serous acini, whereas mucous acini of both the submandibular gland and the sublingual gland remained negatively stained. Conclusion The severity and prevalence of CK17 expression in our results point to a pathological influence of DM that interferes with saliva production and/or secretion, leading to dry mouth. The results also showed clear changes in the cytokeratin expression of the diabetic sublingual salivary gland, even though this gland showed little change in routine hematoxylin and eosin histology, confirming that routine studies alone are not sufficient to form a definitive opinion.
Introduction

Salivary Glands
Salivary glands are a group of major and minor exocrine glands that drain saliva into the oral cavity. The salivary glands of rats consist of three pairs of major glands, the parotid gland (PG), submandibular gland (SMG), and sublingual gland (SLG), as well as numerous minor salivary glands. 1 Minor salivary glands are distributed throughout most parts of the oral cavity, and their secretions directly bathe the oral tissues. 2 Saliva, secreted by the acinar cells and modified by the duct cells, plays an important role in maintaining the healthy state of the oral tissues, namely the teeth, gingiva, and oral mucous membrane. 3 Saliva consists mainly of the secretions of the SMG (65%), PG (23%), and SLG (4%), with the remaining 8% provided by the numerous minor glands. 1 Salivary secretion is composed mainly of water, electrolytes, and biologically active proteins, including growth factors and cytokines. 4 The salivary ducts through which saliva pours into the oral cavity were discovered in the seventeenth century by Nils Stensen (1638-1686), Thomas Wharton (1614-1673), and Caspar Bartholin (1655-1738). 5 Saliva plays many diverse roles through its digestive, masticatory, swallowing, antibacterial, buffering, lubricant, and water-balance functions. 6
Diabetes Mellitus
Diabetes mellitus (DM) is a generalized metabolic disease characterized by hyperglycemia resulting from abnormalities in carbohydrate metabolism. 7 The long-term prognosis of diabetics is based on the persistence of fasting plasma glucose levels above 126 mg/dL. 8 According to the World Health Organization, the Kingdom of Saudi Arabia ranks second in the incidence of DM among Middle East countries, with seven million diabetics among its citizens. 9,10 Because of the high incidence of DM in humans, the induction of diabetes in animal models has been performed on a large scale to study its pathological effects on different organ systems. The most common alterations of DM at the oral and dental level include periodontal disease, dental caries, tooth looseness and extraction, poor wound healing, 11 dry socket, 12 candidiasis, tongue disorders, 13 inability to eat, and taste disorders. 14,15 All of the above signs and symptoms are associated with dry mouth or hyposalivation, as diabetes is the most common metabolic disease that damages the salivary glands by altering their tissue structure and/or mechanism of salivary secretion. 16,17 Many authors state that the decreased salivary flow rate in diabetic patients is caused by increased urinary frequency, which markedly depletes the extracellular fluid that the salivary glands need to produce saliva. 18,19 Morphologically, the parotid glands of diabetic animals were decreased in size and characterized by intracellular lipid accumulation in both acini and intralobular ducts. 20 Hand et al (1984) recorded the presence of small lipid droplets in the basal cytoplasm of acinar cells as the first detectable change of induced diabetes on the first day, peaking at 4.5 months, when acinar cells contained large lipid vacuoles. 21 Anderson et al (1994) reported that the diameter and number of granular ducts were reduced in diabetic animals, but acinar cells were only affected 6 months after the induction of diabetes. 22 Histochemical staining of the tissue suggested that the intracellular lipid within the acini was mainly triglyceride, which may accumulate through decreased utilization in the synthesis of secretory granules. 23 Also, Piras et al (2010) concluded that diabetes causes specific changes in secretory protein expression in human salivary glands, which contribute to the altered oral environment. 24

Cytoskeleton

It is known that the cell cytoplasm contains a three-dimensional network of filaments forming the cytoskeleton, which consists of three main types: microtubules, microfilaments, and intermediate filaments.
Immunohistochemistry
Immunohistochemistry is a technique for detecting an intracellular component (e.g., filaments) that acts as an antigen: the component is injected into an animal, which responds by producing antibodies against this specific antigen, forming an antigen-antibody reaction. The injected component (antigen) bears one or more antibody-binding sites, which are highly specific regions called epitopes. The animal mounts a humoral immune response to this specific antigen and produces antibodies specific to these epitopes, termed polyclonal antibodies, which can be isolated from the animal. 28 Monoclonal antibodies, in contrast, are produced in the laboratory by cell culture methods. Cytokeratins constitute an important biomarker because they are stable, relatively resistant to hydrolysis, and well preserved in formalin-fixed, paraffin-embedded tissue; cytokeratin also shows great fidelity in expression and is highly antigenic. 25 The distribution of cytokeratin 17 (CK17) in the normal PG parenchyma is associated with cells of the duct system, whereas serous acinar cells have little or no cytokeratin in their cytoplasm. 22 Our study aimed to determine the expression of CK17 within the parenchymal elements of the major salivary glands of both normal and diabetic albino rats to provide more information about the effects of DM on salivary gland structure that lead to xerostomia.
Materials and Methods
Grouping

This research was conducted on twenty adult male albino rats (Sprague Dawley strain) with body weights ranging between 150 and 175 g. All animals were housed in polycarbonate cages under 8:16-hour dark-light cycles. A mixture of hard and soft food was given with unrestricted access to water. Rats were maintained in an animal health care facility under the supervision of the local ethical committee in a laboratory animal colony, Faculty of Veterinary Medicine, Cairo University, Cairo, Egypt. Rats were divided into two equal groups (group 1, control; group 2, diabetic).
Induction of Diabetes Mellitus
Rats of group 2 (fasted for 12 hours beforehand) were injected intraperitoneally with a single dose of 150 mg/kg body weight of alloxan tetrahydrate (Sigma Chemical Company, St. Louis, Missouri, United States) dissolved in physiological saline (0.9% NaCl). Ten days later, blood glucose concentration was determined using an enzymatic colorimetric test based on the Trinder reaction. Animals presenting a glucose level at or above 200 mg/dL were included in the diabetic group. The diabetic rats were maintained on neither a special diet nor drugs and were fed like the control animals. Control rats were injected with sterile saline to mimic the prick injection given to the diabetic group.
Tissue Preparation
On the 45th day after diabetes induction, rats of both groups were sacrificed under diethyl ether anesthesia. The salivary gland complex of each animal was cut into small portions (4 × 4 × 4 mm) and fixed in Bouin's fixative for 3 days. Fixed tissues were washed, dehydrated in ascending grades of alcohol, cleared in xylol, and infiltrated with molten paraffin wax to build up a block. Serial tissue sections 5 μm thick were mounted on glass slides and stained with hematoxylin and eosin for routine histological examination.
Immunohistochemical Staining
The tissue sections for immunohistochemistry were mounted on special slides coated with poly-L-lysine, as recommended for staining procedures that require a target retrieval solution. Paraffin sections (5 μm thick) were immersed in 0.3% H2O2/methanol for 30 minutes to block endogenous peroxidase activity and rinsed with phosphate-buffered saline. Sections were incubated with anti-CK17 (clone E3) monoclonal antibody using the streptavidin-biotin method, with hematoxylin counterstain. A positive staining reaction appeared as brown staining reflecting the intracellular distribution of CK17 intermediate filaments within the tissue compartments. Tissue sections were evaluated semiquantitatively and graded as negative (0), weak (1), light (2), medium (3), or intense (4) staining.
Statistical Analysis
Data analysis was done using SPSS, version 23 (IBM Inc., Chicago, Illinois, United States). Quantitative data were expressed as means, standard deviations, and ranges once their distribution was found to be parametric by a normality test. The comparison between each pair of independent groups for the same gland (control vs. diabetic) was done using an independent t-test for the equality of means, with Levene's test for equality of variances. P-values were considered significant at the 0.05 level.
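As a hedged illustration of this testing sequence (the scores below are hypothetical, not the study's measurements), Levene's test first checks the equal-variance assumption, and its outcome decides between the pooled (Student) and unpooled (Welch) forms of the independent t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical semiquantitative CK17 scores (0-4) for duct cells of one gland.
control = np.array([1, 1, 2, 1, 0, 2, 1, 1, 2, 1])
diabetic = np.array([3, 4, 3, 2, 4, 3, 3, 2, 4, 3])

# Levene's test for equality of variances.
lev_stat, lev_p = stats.levene(control, diabetic)

# Pool variances only if Levene's test does not reject equality.
t_stat, t_p = stats.ttest_ind(control, diabetic, equal_var=(lev_p > 0.05))
print(f"Levene p = {lev_p:.3f}; t = {t_stat:.2f}, p = {t_p:.4f}")
```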
Histopathological Evaluation
The PG of the control group revealed parenchymal lobular tissue filled with closely packed serous acini and a duct system. These parenchymal elements are supported by a connective tissue stroma that divides the gland into lobes and lobules. The SMG of the control group consists of predominantly serous acini, a smaller number of mucous acini, a normally branching duct system, and granular convoluted tubules. The SLG of the control group consists of predominantly mucous acini, many capped with serous demilunes, a few spherical serous acini, and a normally branching duct system.
At dissection, there was a marked reduction in the size of the salivary gland complex of the diabetic group relative to the control group. The glandular elements of the diabetic group revealed atrophic changes characterized by a decrease in the parenchymal elements of all major glands, accompanied by an increase in the amount of fibrous stroma in both the PG and the SMG (►Figs. 1-3). The parenchymal elements consisted of small serous acini without a defined lumen. Both the SMG and the SLG showed an increase in mucous acini among the persisting serous ones. The duct systems showed enlarged and dilated lumens, with the presence of duct-like structures. Moreover, many acini had been replaced by adipose tissue (►Fig. 1).
Immunohistochemical Evaluation
Examination of the major salivary glands of the control group incubated with anti-cytokeratin E3 antibody against CK17 using the immunoperoxidase technique revealed that the duct cells showed diffuse weak to mild expression of CK17 (►Figs. 4 and 5); in some sections, both intercalated and striated ducts were involved. Examination of the major salivary glands of the diabetic group revealed that both intercalated and striated duct cells displayed mild to strong cytoplasmic expression of CK17; the staining pattern was either strong at the apical part of the cytoplasm with mild staining at the basal part, or diffused throughout the cell cytoplasm. The main excretory ducts, lined by stratified squamous epithelium, demonstrated strong expression at the luminal cell layer with mild to moderate expression in the remaining layers. Granular convoluted tubular cells in the SMG showed a mild to moderate, diffuse staining reaction. Many serous acini revealed mild to moderate diffuse expression of CK17, whereas mucous acini were negatively stained in both the SMG and the SLG (►Figs. 6 and 7).
The statistical analysis indicated statistically significant differences between the groups concerned (►Tables 1-4). The most pronounced differences were in duct cells of the PG (p = 0.004), followed by duct cells of the diabetic SMG (p = 0.008), while the smallest effect was on acinar cells of the diabetic SMG (p = 0.036) (►Figs. 8 and 9).
Discussion
In general, damage to the major salivary glands is a known consequence of DM in both humans and experimental models. Caldeira et al (2005) noted that morphological changes in the salivary glands are detected not only in uncontrolled diabetes but also under glycemic control. 17 Our results were recorded on the forty-fifth day after confirming the occurrence of DM and were evident in both acinar and ductal cells, in contrast to the report of Anderson et al (1994) that the gland acini were not affected until six months after the induction of diabetes. 22 The current study found that DM caused structural changes ranging from a reduction in acinar volume to severe atrophy of the gland parenchyma, which was replaced by either fibrous or fatty tissue with proliferation of duct-like structures; these findings explain the occurrence of dry mouth and the failure of secretory activity. In contrast to the atrophic changes of the parenchymal elements, the fibrous stroma responded with proliferative activity, illustrating the differing tissue reactions of epithelium and connective tissue. Anderson and Suleiman (1989) suggested that the replacement of parenchymal cells with fibrous connective tissue is difficult to reconcile with the normal physiological responsiveness of the gland. 23 The fibrous tissue that replaced the degraded gland components in both the PG and the SMG appeared very extensive, suggesting permanent changes and a gland unable to regenerate later. Contrary to our interpretation, Mata et al reported that persisting acini found in glandular tissues have been suggested to be involved in the gland's ability to regenerate. 16 In several samples of the diabetic group, the presence of several normal and diminished acini indicates that the gland still performs its secretory function, but to a minimal degree. Our study could not distinguish between duct-like structures and the native duct system; this observation is supported by Takahashi et al, who reported that duct-like structures appear to increase owing to the proliferative activity of duct system cells. 31 All major salivary glands of the control group revealed CK17 expression of moderate intensity within the duct cells, while the serous acini showed weak expression and the mucous acini were negative, as reported by Makino et al. 32 These observations may be due to highly differentiated acinar cells with a reduced amount of filamentous structure. Several authors agree that CK17 in salivary gland cells plays an important role in cell structure and that the intensity of expression is closely related to the differentiation status of the parenchymal cells. 10,28 The different patterns of CK17 distribution are thought to be related to the functional activity of the gland: a diffuse staining pattern indicates a nonsecretory state, while a decrease of CK17 in the luminal portion is associated with an active state of secretion, leaving the area free for exocytosis. An expression pattern of CK17 concentrated at the basal cell part may be associated with an increase in the tensile force of acinar cells facing the myoepithelium, resulting in an increased pressure capacity to drive saliva through the lumen into the duct system.
The salivary glands of the diabetic group revealed significant CK17 staining in both acinar and ductal cells with two different patterns, lumen-centered or diffuse, the opposite of the dominant distribution in the control group. Both patterns of CK17 distribution may interfere with the secretory capacity of acinar cells, resulting in xerostomia. Also, the luminal pattern within the duct cells may disturb the modification of the primary saliva. On the other hand, the diffuse pattern of CK17 indicates a deleterious effect throughout acinar or ductal cells, leading to apoptosis.
The burden of drug resistance tuberculosis in Ghana; results of the First National Survey
Resistance to tuberculosis (TB) drugs has become a major threat to the control of TB globally. We conducted the first nation-wide drug resistance survey to investigate the level and pattern of resistance to first-line TB drugs among newly and previously treated sputum smear-positive TB cases. We also evaluated associations between potential risk factors and TB drug resistance. Using the World Health Organization (WHO) guidelines on conducting national TB surveys, we selected study participants from 33 health facilities across the country, which served as the survey clusters. Between April 2016 and June 2017, a total of 927 patients (859 new and 68 previously treated) were enrolled in the survey. Mycobacterium tuberculosis complex (MTBC) isolates were successfully cultured from 598 (65.5%) patient samples and underwent drug susceptibility testing (DST), 550 from newly diagnosed and 48 from previously treated patients. The proportion of patients who showed resistance to any of the TB drugs tested was 25.2% (95% CI: 21.8-28.9). The most frequent resistance was to streptomycin (STR) (12.3%), followed by isoniazid (INH) (10.4%), with rifampicin (RIF) showing the least resistance (2.4%). Resistance to both isoniazid and rifampicin (multidrug resistance) was found in 19 isolates (3.2%; 95% CI: 1.9-4.9). The prevalence of multidrug resistance was 7 (1.3%; 95% CI: 0.5-2.6) among newly diagnosed and 12 (25.0%; 95% CI: 13.6-39.6) among previously treated patients. In both univariate and multivariate analyses, MDR-TB was positively associated with a previous history of TB treatment (OR = 5.09, 95% CI: 1.75-14.75, p = 0.003; OR = 5.41, 95% CI: 1.69-17.30, p = 0.004). The higher levels of MDR-TB and of overall resistance to any TB drug among previously treated patients raise concerns about adherence to treatment. This calls for strengthening existing TB programme measures to ensure a system for adequately testing and monitoring TB drug resistance.
Introduction
Tuberculosis (TB), caused by Mycobacterium tuberculosis complex (MTBC), is a global threat. Worldwide, it remains the number one cause of death from a single infectious agent. In 2018, 10 million people were diagnosed with the disease, resulting in close to 1.5 million deaths [1]. The number of TB cases with resistance to rifampicin (RR-TB), the most effective first-line drug, was estimated at 558,000 people (range, 483,000-639,000). Of this number, nearly 82% had multidrug-resistant TB (MDR-TB), caused by MTBC resistant to both rifampicin (RIF) and isoniazid (INH) [2]. MDR-TB poses several challenges similar to those encountered in the pre-chemotherapy era, including the inability to cure TB, excessive mortality and morbidity, uninterrupted transmission resulting in a threat to health care workers, and unsustainably costly treatment [3].
Similar to other countries in sub-Saharan Africa, TB is a major public health problem in Ghana. The recent TB prevalence survey reported a prevalence of smear-positive TB of 111 (95% CI: 76-145) per 100,000 among the adult population; the prevalence of bacteriologically confirmed TB was 356 (95% CI: 288-425) per 100,000 population [2]. Several studies have reported the emergence of MDR-TB in Ghana [4-10]. In 2018, the first patients with extensively drug-resistant (XDR) TB, defined as MDR-TB with additional resistance to at least one fluoroquinolone and an injectable agent (amikacin, capreomycin, or kanamycin), were identified [6,11]. While these studies emphasize the importance of drug susceptibility testing (DST), they do not provide nationally representative estimates. Such estimates are urgently needed to inform guidelines and policies.
To build evidence or generate data to support decision making, the National TB Programme (NTP) in Ghana performed a National TB Drug Resistance Survey (DRS) following the methodology recommended by the World Health Organization (WHO) to establish nationally representative estimates for drug resistance among newly diagnosed and previously treated TB patients. It was also important to investigate the possible risk factors for TB drug resistance.
Study design and sample size estimation
This first DRS was a nation-wide cross-sectional study using cluster randomised sampling, informed by the WHO guidelines and recommendations for conducting national TB surveys [12]. The sample size was based on the number of patients newly diagnosed with sputum smear-positive pulmonary TB in 2013 (n = 11,793), an assumed rifampicin resistance (RR) prevalence of 1.7% among this group, a design effect of 2 to account for clustering, an inflation of 15% to account for losses of samples for reasons such as insufficient volume or contamination, and a desired absolute precision of 1.2% around the estimate of prevalence. The calculated sample size was, therefore, 1,100 newly diagnosed sputum smear-positive pulmonary TB patients.
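The survey does not print the formula it used, but the standard sample-size calculation for a prevalence survey follows from these inputs; the sketch below reproduces the order of magnitude (exact rounding conventions and additional allowances in the WHO tool can push the figure toward the 1,100 finally used):

```python
from math import ceil
from scipy.stats import norm

p = 0.017    # assumed RR prevalence among new cases
d = 0.012    # desired absolute precision
deff = 2.0   # design effect for cluster sampling
loss = 0.15  # inflation for contaminated or insufficient samples
z = norm.ppf(0.975)  # approx. 1.96 for a 95% confidence level

n_srs = z**2 * p * (1 - p) / d**2   # simple random sampling
n = n_srs * deff * (1 + loss)       # adjust for clustering and losses
print(ceil(n_srs), ceil(n))         # roughly 446 and 1026
```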
Since the number of previously treated patients is relatively small, all previously treated patients presenting to the selected diagnostic sites were enrolled until enrollment of new smear-positive cases (29 cases per cluster) was completed [12].
Cluster/Site selection
The number of clusters included in the survey was set at 33 for logistic reasons. These were selected using a probability-proportional-to-size (PPS) approach as per WHO guidelines [12], based on the number of new sputum smear-positive pulmonary TB cases notified in each diagnostic facility in 2013. Sites that diagnosed fewer than 10 smear-positive cases in 2013 were excluded. The target cluster size was 29 new sputum smear-positive patients. The participating sites for the Ghana national TB DRS are shown in S1 Table. A sketch of PPS selection is given after the following subsection.

Piloting

A pilot study was conducted between October 2015 and March 2016 to assess the workflow and tools used in the study. A total of 90 smear-positive sputum samples collected from seven TB diagnostic sites were included in the pilot. None of the pilot sites were included in the final survey, but they had characteristics comparable to the sites selected for the main study. The piloted workflow included sample transportation and laboratory analysis, and all data capture tools were pretested during this period. A copy of the main questionnaire used for the survey is attached as S1 File.
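PPS selection is commonly implemented as systematic sampling along the cumulative case counts; a minimal sketch (with hypothetical notification counts, since the real 2013 counts per site are not reproduced here) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2013 smear-positive notification counts for eligible sites
# (sites with fewer than 10 cases already excluded).
notifications = rng.integers(10, 400, size=120)
n_clusters = 33

# Systematic PPS: lay a fixed sampling interval over the cumulative counts;
# a site is selected when a sampling point falls inside its interval, so
# larger sites are proportionally more likely to be chosen.
cum = np.cumsum(notifications)
interval = cum[-1] / n_clusters
start = rng.uniform(0, interval)
points = start + interval * np.arange(n_clusters)
selected_sites = np.searchsorted(cum, points)
print(sorted(set(selected_sites)))  # very large sites can be drawn more than once
```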
From April 2016 to June 2017, patients presenting to any of the survey sites/clusters with features of pulmonary TB underwent clinical and sputum examination. At the health facility, two sputum samples (each with a volume ≥ 3 mL), one taken on the spot and the other after one hour, were collected from eligible patients. Smear-positive patients were included in the study, after which a questionnaire designed for such surveys [12] was administered to them. Samples were transported in a cold box using a local transport system and tracked via a dedicated social media platform to the designated National TB DRS laboratory, the Kumasi Centre for Collaborative Research in Tropical Medicine (KCCR) at KNUST, for processing [13].
The samples were accompanied by filled questionnaires, specimen transfer forms containing information about the date of sputum collection, participant number, laboratory serial number, and sputum smear-positive quantified results from the examination at the site laboratory.
Inclusion and exclusion criteria
Adult patients (≥18 years) with signs and symptoms of pulmonary TB and sputum smear positive by microscopy at the designated cluster were deemed eligible for the study. These included both new and previously treated TB smear-positive patients. Conversely, new smear-positive patients who had been on TB medications for more than seven days were excluded, as were children below the age of 18 years and cases of extra-pulmonary TB.
Laboratory analysis
At the KCCR, samples were re-examined by microscopy using Ziehl-Neelsen staining and graded as scanty, 1+, 2+, 3+, or negative [14]. The GeneXpert MTB/RIF assay (Cepheid, CA, US) was also performed on the samples at KCCR [15]. All samples were cultured using the BD BACTEC Mycobacterium Growth Indicator Tube (MGIT) system [16]. Briefly, the samples were decontaminated with 4% N-acetyl-L-cysteine-sodium hydroxide (NALC-NaOH) and neutralized with 1X phosphate-buffered saline (PBS). Following this, 0.5 mL of the pellet was inoculated into BACTEC™ MGIT 960 tubes (BD Diagnostics, Sparks, MD, USA) and incubated at 37°C for a maximum of 42 days. Phenotypic drug susceptibility testing (DST) for rifampicin (RIF), isoniazid (INH), streptomycin (STR), ethambutol (EMB), and pyrazinamide (PZA) was performed with the MGIT SIRE and PZA (BD, USA) method, only on positive tubes with MTBC confirmed by the BACTEC™ MGIT 960 method [16]. Mycobacterium tuberculosis strain H37Rv was used as a susceptible control for susceptibility testing. The flow chart for sample processing and laboratory analysis is shown in Fig 1.
Data management and statistical analysis
Trained health workers used a standardized questionnaire in each of the 33 clusters to collect demographic and clinical information from eligible, consenting patients, including HIV status and previous TB treatment history. These data were manually double-entered into CSPro (U.S. Census Bureau, USA) at KCCR, validated, and verified. Smear microscopy, GeneXpert MTB/RIF, culture, and DST results were compiled in a Microsoft Excel file and exported into STATA (version 12.0; Stata Corp LP, College Station, TX, USA) for further statistical analysis. Chi-square tests were used to test for significant associations between risk variables and drug resistance. Factors associated with drug-resistant TB were investigated using univariate and multivariate logistic regression models. All statistical tests with p-values of 0.05 or less (p ≤ 0.05) were deemed significant, and the analysis was adjusted for clustering. Ethical approval was obtained from the Committee on Human Research, Publication and Ethics of the Kwame Nkrumah University of Science and Technology (KNUST) and the Komfo Anokye Teaching Hospital, Kumasi (CHPRE/AP/328/15). We also obtained approval from the Ethics Review Committee of the African Region of the World Health Organization (AFR/ERC/2016/02.01). Written informed consent was also obtained from each participant at the time of recruitment through signatures and thumbprints.
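A hedged sketch of the regression step, using statsmodels on simulated data (variable names here are illustrative, not the survey's exact codebook), shows how the odds ratios and confidence intervals reported below are derived from a fitted logit model; the survey additionally adjusted standard errors for clustering, which statsmodels supports through cluster-robust covariance options.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 595  # isolates with DST results

# Simulated analysis frame; the survey's actual covariates came from the
# standardized questionnaire (previous treatment, sex, age, HIV status, ...).
df = pd.DataFrame({
    "mdr": rng.binomial(1, 0.03, n),
    "prev_treated": rng.binomial(1, 0.08, n),
    "male": rng.binomial(1, 0.7, n),
    "age": rng.integers(18, 80, n),
})

X = sm.add_constant(df[["prev_treated", "male", "age"]].astype(float))
fit = sm.Logit(df["mdr"], X).fit(disp=0)

# Exponentiate coefficients to report odds ratios with 95% CIs.
ci = fit.conf_int()
table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": fit.pvalues,
})
print(table.round(3))
```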
Socio-demographic characteristics of the study participants
A total of 927 participants with sputum smear-positive pulmonary TB were enrolled at the 33 selected diagnostic sites. Of the 33 sites, eight (8) did not reach the target cluster size (n = 29). The majority of participants were males (645, 69.6%). The median age of the study subjects was 41 years (interquartile range [IQR]: 18,22). Less than half of all cases had HIV test results available (388/927, 41.9%); of those with HIV results, 77 were HIV positive (19.8%).
A total of 860 (92.8%) of the patients enrolled were newly diagnosed cases, while 67 (7.2%) were previously treated cases. Table 1 shows the key socio-demographic characteristics of the patients enrolled in the study; the remaining socio-demographic characteristics are shown in S2 Table.
Patterns of TB drug resistance in Ghana
Due to logistical challenges, a total of 595 MTBC isolates underwent DST. The proportion of patients who showed resistance to any of the TB drugs tested was 25.2% (n = 150/595; 95% CI: 21.8-28.9). The most frequent resistance was to streptomycin (STR) (12.3%; 73/595), followed by isoniazid (INH) (10.4%), with rifampicin (RIF) showing the least resistance (2.4%). With respect to any of the TB drugs tested, there was a statistically significant difference between new and previously treated cases (p = 0.041): while 132 (24.1%; 95% CI: 20.6-27.9) of the new cases were resistant to any of the TB drugs, 18 (37.5%; 95% CI: 24.0-52.6) of those who had been previously treated showed resistance (Table 2).
As rifampicin resistance is considered a surrogate marker for MDR-TB, we further examined the distribution of RIF resistance among the MTBC isolates. All 14 isolates (14/595, 2.4%) that were RR by Xpert were also RIF resistant by culture, and all were MDR-TB. Previously treated and new cases accounted for 12 (25.0%; 95% CI: 13.6-39.6) and 7 (1.3%; 95% CI: 0.5-2.6) of the MDR cases, respectively. Apart from MDR (INH+RIF) plus STR, we did not observe MDR in combination with any other drug among the new cases. Among previously treated cases, however, patients were MDR plus a second drug (either EMB or PZA), MDR plus EMB and a third drug (STR), or MDR plus PZA and STR. We did not observe MDR in combination with both EMB and PZA, or MDR in combination with EMB, PZA, and STR. Nor did we observe cases resistant to all five TB drugs.
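The survey does not name its confidence interval method; an exact (Clopper-Pearson) binomial interval, as sketched below, reproduces the reported 21.8-28.9 for the overall 25.2% estimate, which suggests this or a very similar method was used:

```python
from statsmodels.stats.proportion import proportion_confint

n_tested = 595
for count, label in [(150, "any drug"), (73, "STR"), (19, "MDR")]:
    # method="beta" gives the exact (Clopper-Pearson) binomial interval.
    low, high = proportion_confint(count, n_tested, alpha=0.05, method="beta")
    print(f"{label}: {count / n_tested:.1%} (95% CI {low:.1%}-{high:.1%})")
```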
We found monoresistance highest for STR, with a prevalence of 8.7% (n = 52; 95% CI: 6.6-11.3), and lowest for RIF, at 0.8% (n = 5; 95% CI: 0.3-2.0). Although the difference was not statistically significant, monoresistance to any of the TB drugs was higher in new TB cases than in previously treated cases.
We observed other resistance patterns among the TB patients, mainly the new cases. These included resistance to STR and EMB, STR and PZA, EMB and PZA, and resistance to three TB drugs other than RIF and INH (STR, EMB, and PZA). All drug resistance patterns detected during the national TB drug resistance survey in Ghana are shown in Table 2.
Risk factors for drug resistance TB
We further analysed the influence of various factors on drug resistance; the results are summarised in Table 3.
In both univariate and multivariate analyses, MDR-TB was positively associated with a previous history of TB treatment (OR = 5.09, 95% CI: 1.75-14.75, p = 0.003; OR = 5.41, 95% CI: 1.69-17.30, p = 0.004, respectively).
Discussion
This study is the first nationally representative TB drug resistance survey in Ghana and one of the few studies in sub-Saharan Africa to estimate the burden of resistance to selected TB drugs on a national scale. We estimated an overall prevalence of 25.2% resistance to any of the five TB drugs in Ghanaian patients. We also detected an overall multidrug resistance (MDR) prevalence of 3.2%, with rates of 1.3% and 25.0% among new and previously treated patients, respectively. Globally, 3.5% of new TB cases and 18% of previously treated cases have been notified to have MDR-TB [17]. Given the MDR rate detected during the survey, we are tempted to conclude that MDR-TB prevalence among new TB patients in Ghana is low, because settings with an MDR-TB prevalence of less than 3% among new patients are classified as having a low MDR-TB burden [18]. On the contrary, MDR-TB among previously treated TB cases was relatively high (25.0%). Other nation-wide surveys in the sub-region have observed varying levels. For example, in Uganda, lower rates of 1.4% and 12.1% among new and previously treated patients were recorded during the national TB drug resistance survey [17]. In nearby Burkina Faso, levels of 3.4% in new cases, similar to what we detected, but very high levels in previously treated patients (50.5%), have been reported [19]. Similarly, in Côte d'Ivoire, the proportion of patients with rifampicin resistance was estimated to be 4.6% (95% CI: 2.4-6.7) and 22% (95% CI: 13.7-30.3), respectively, for new and previously treated patients [20]. In Tanzania, the prevalence of any resistance among new and previously treated patients was 8.3% and 20%, respectively [21]. A recent review reported a pooled prevalence of 2.1% MDR-TB in new patients in sub-Saharan Africa [22], with the same level as in this survey observed in Kenya (1.3%) [23], levels as high as 5.2% in Somalia [24], and much higher levels of 17.6% in Nigeria [25].
There seems to be limited information on the prevalence of MDR-TB in new and previously treated TB patients in Ghana. The few studies available have been limited to previously treated patients and have reported pan-resistance levels between 17.9% [6] and 83% [26] among chronic TB patients from a teaching hospital. A large study conducted in two large regions of Ghana between 2000 and 2004 recorded an overall primary drug resistance prevalence of 23.5% [4]. This is very similar to the overall primary drug resistance of 25.2% detected during the national drug resistance survey. Even though both studies used different diagnostic methods, and the previous study was limited in its nation-wide coverage, the similarity in the burden of resistance to any of the TB drugs is quite surprising. On the contrary, a more recent study aimed at establishing the prevalence of human immunodeficiency virus (HIV) and TB in Ghana did not observe any MDR among the TB patients, a finding the authors attributed to the inclusion of mainly new TB patients in their study [27]. Only with a national survey can the true burden of MDR-TB be established, enabling policy makers to chart suitable paths towards the management of drug-resistant TB; the MDR level from this national survey is thus representative of the entire country. To this end, the data from this first Ghana TB DR national survey show that while MDR among newly diagnosed smear-positive TB patients is low, the level among previously treated cases is high (as is usually the case). Apart from an efficient NTP, this low detection of MDR among treatment-naïve patients can potentially be attributed to the Directly Observed Treatment Short-course (DOTS) system in place in Ghana since the 1980s. This system adopts judicious use of rifampicin only during the first 2 months (2EHRZ/6EH) for new TB cases, which are known to contribute over 90% of the disease burden. Despite reports indicating failure of, or non-adherence to, the DOTS strategy [28], low rates of initial drug resistance have been reported in countries where DOTS has been successfully implemented. This hints that adequate use of standardized treatment regimens under DOTS may limit the further emergence of drug resistance; however, whether this will substantially reduce the current degree of resistance observed in Ghana, especially in new TB cases, needs to be evaluated. The relatively high degree of monoresistance to streptomycin and isoniazid and the seemingly low level for rifampicin (RIF) observed in the survey have been reported before in Ghana [4-6,9]. Since RIF is used as a surrogate for MDR-TB on GeneXpert, and this is what drives patient management and treatment regimens, this trend in resistance, especially to streptomycin and isoniazid, calls for attention. There is no gainsaying that with the increasing use of GeneXpert for the simultaneous detection of TB and resistance to RIF, a growing number of RIF-resistant TB cases (without further testing for isoniazid resistance) are being detected and notified. However, it is unclear whether GeneXpert detection of low MDR-TB (RIF resistance, RR) implies low resistance to other TB drugs. This has important implications if GeneXpert is used as a proxy for the detection of MDR-TB: under such circumstances, most INH-monoresistant cases would not be detected and might be treated as susceptible with a first-line regimen containing INH.
This can render the first-line regimen ineffective, especially in previously treated patients. It is, however, gratifying that the WHO recommends a special treatment regimen for such patients. Usually, once RR-TB is detected by GeneXpert, health workers are expected to conduct cultures and DSTs to determine resistance patterns to the other anti-TB drugs. To this end, for all samples, we observed very high concordance between smear microscopy, MTB detection and RR by GeneXpert, and phenotypic DST. Nevertheless, we are careful not to exaggerate the superiority of one diagnostic method over another, bearing in mind that all patients had to be smear-positive to be included in the survey. This notwithstanding, the national TB control programme in Ghana has recently scaled up the use of GeneXpert for the detection of MDR (RR) TB. A major advantage of this move is that it allows rapid initiation of treatment while awaiting culture and DST. While this strategy is recommended by the WHO [29], second-line drugs, culture, and DSTs are not always readily available in Ghana. Only five TB laboratories have been equipped to perform culture and phenotypic DSTs. With such a limited number of laboratories and other challenges, culture and DSTs may not be performed on GeneXpert MTB/RIF-detected RR. Additionally, transportation challenges, poor specimen quality, and specimens that are never collected, or patients who are not tracked, may render culture and phenotypic DSTs ineffective, thereby potentially contributing to the spread of the disease. The allocation of resources to detect and treat MDR-TB in low-resource settings remains controversial [30].
WHO recommends universal DST for at least RR as part of the End TB Strategy [31], but whether this is adhered to is another issue. Whereas some advocate giving priority to the effective treatment of drug-susceptible disease, thus preventing the emergence of drug resistance [32], others argue that drug-resistant cases should be detected and treated, both for the good of the individual and to reduce ongoing transmission of drug-resistant disease [33]. Indeed, control of drug-resistant tuberculosis requires a strong health infrastructure to ensure the testing of samples and the delivery of effective therapy, coupled with surveillance and monitoring activities.
These would, in turn, enable timely intervention to limit transmission and spread of the disease.
The higher rates among previously treated TB patients, as seen in this study, have been attributed to the stepwise selection of mutants carrying drug resistance-conferring genes [34]. Beyond this genetic explanation, such high levels of MDR-TB (25.0%) and resistance to any drug (37.5%) among previously treated patients raise concerns about adherence to treatment. While poor-quality anti-TB drug prescriptions have virtually been eliminated in Ghana, some incidents of poor case management related to adherence may partly explain the high emergence of drug-resistant TB among previously treated patients. In the case of rifampicin, apart from its use by the TB control programme for the management of TB, its use is very restricted in Ghana. On the contrary, there seems to have been a high detection of streptomycin resistance during the survey, as observed in previous studies [4,6]. With such high rates, it is not surprising that the Ghana national Tuberculosis Control Programme removed streptomycin from the list of anti-TB medications several years ago because of ototoxicity.
Our results on possible risk factors for MDR-TB indicated previous treatment as the strongest determinant. The high risk associated with previous treatment implies that the common practice of re-treating TB cases with first-line drugs may generally be ineffective in Ghana; thankfully, the NTP in Ghana halted this practice a few years ago. Several studies have also shown previous drug treatment to be the strongest determinant of MDR-TB [35-37]. Depending on the country, the prevalence of MDR-TB in retreatment cases is known to be between 30% and 80% [38]. Since this is yet to be established for Ghana, a concerted effort is needed to develop a possibly revised treatment regimen for patients with a history of TB treatment. Added to this will be an uninterrupted supply of second-line drugs and a robust system that ensures rapid testing for drug resistance for all patients with TB. This may warrant further studies of other aspects of treatment, such as the drugs used and the length of treatment, as these may contribute to improving control programmes.
Limitations
Our survey has some limitations. Firstly, phenotypic DST was not performed for nearly half of the patients, so we are cautious in extrapolating the results to the entire study population. Secondly, while there are several private hospitals in Ghana, this survey represented only patients diagnosed through NTP-supervised TB diagnostic facilities. Private laboratories have referral policies indicating the need to refer TB cases to the public sector, but this system is not well supervised; the survey therefore does not account for drug resistance patterns among TB patients without access to these health systems. Thirdly, although the survey was conducted using the most recent WHO guidance [12], smear-negative patients were not included; however, there is no evidence of different drug resistance patterns among smear-negative TB patients. Further, the sampling frame for this survey was based on TB case notifications in Ghana in 2013. A number of changes in the healthcare delivery system, such as the deployment of several GeneXpert machines and the establishment of new regions, districts, and health facilities, were not part of the sampling frame but shared patients with the included facilities. Despite these limitations, our results highlight the urgent need for efforts to address drug-resistant TB, and in particular the use of anti-TB drugs such as streptomycin, in Ghana.
Conclusions and lessons learnt
This first Ghana nation-wide TB drug resistance survey has provided compelling evidence that the prevalence of RR-TB in Ghana is relatively low. However, we estimated a relatively high burden of MDR-TB among previously treated patients. This will require, among other things, improvements in both overall detection and coverage of diagnostic DST. This means that Ghana needs to establish a continuous surveillance system based on universal DST for at least RIF. Further, strengthening laboratory capacity and wider introduction and uptake of new rapid diagnostics such as Line Probe Assays and testing for second-line drugs need to be incorporated into existing TB diagnostic systems. Moreover, patient adherence to first-line drug treatment may need strengthening. Active and frequent monitoring of TB drug resistance is necessary throughout the country, including the non-NTP regulated sectors, using routine surveillance.
This survey has improved the national laboratories' proficiency in undertaking culture and DST (first-line). This is expected to increase patient coverage of DSTs in Ghana, including previously treated TB patients who, as in many other countries, harbour a substantial part of the MDR-TB caseload. Further, the current ongoing expansion in the use of GeneXpert MTB/RIF nation-wide in Ghana is expected to improve access to patient testing. We
Real Estate Markets and Lending: Does Local Growth Fuel Risk?
Real estate price growth affects credit risk for several reasons: it provides input for economic forecasts, as it is closely tied to economic growth; when real estate is used as collateral by banks, rising prices may decrease both expected and actual losses; and banks may become less risk-averse in their lending practices in the presence of rising property prices. We therefore analyze these effects on the estimated and realized risks of loan portfolios at the local level. Using data on 390 German savings banks, however, we find that real estate prices have little or no impact on savings banks' credit portfolio risk or risk provisioning.
Introduction
Real estate markets and investments in real estate have gained increased attention in the aftermath of the 2007-2008 financial crisis. Among other reasons, the steady lowering of interest rates has made real estate investments increasingly attractive, as they are not only highly leveraged but also frequently considered safe. This has led to increased real estate prices in Germany, particularly in a few cities (Siemsen and Vilsmeier 2017). Due to the high price and ubiquity of real estate, as well as its economic relevance, property markets cannot work without proper loan markets. In the worst case, this relationship can lead to an increasing number of loans being assigned to riskier borrowers who collateralize their debt with real estate.
In order to deter borrowers from defaulting, banks demand collateral (Stiglitz and Weiss 1981), with real estate being the most commonly used collateral device in lending (Niinimäki 2009). Pledging more collateral may be used as a signal of lower borrower risk (Agarwal et al. 2015), while demanding collateral may be an indicator of lazy banks in the spirit of Manove et al. (2001). Collateral thus has a high potential for inducing banks to issue loans to risky borrowers. Banks not only consider collateral-pledging borrowers to be less risky per se, but may also perform less monitoring when loans are backed by properties. This tends to act as an accelerant on the relationship between property and loan markets: when loans are collateralized with real estate, banks can avoid losses when property prices rise; but if prices drop, loan losses are more severe, as the market values of recoveries fall below the exposure at default (EAD) (Niinimäki 2009). Lower capital reserves held by banks with large real estate portfolios (Blasko and Sinkey 2006) could exacerbate the problem. Furthermore, borrowers whose loans were overcollateralized at the outset may have an incentive to default if the price of the pledged real estate drops below the outstanding amount of credit (Herring and Wachter 1999).
Banks can also be tempted to use current or past real estate prices as indicators of current and future economic development or of future real estate prices. Banks expecting high growth of real estate prices in the near future might be willing to accept more high-risk borrowers whose loans are collateralized by real estate.
Taking both of these arguments into consideration, banks anticipate rising real estate values by observing current prices, and may therefore be willing to lend to risky borrowers today in the expectation that the same borrowers will be wealthier in the future, decreasing their default risk via incentives and their expected loss given default (see Landvoigt 2017).
These issues could be even more pronounced for banks that are regionally constrained and depend on the economic well-being of their surrounding business area. If banks additionally face limitations on their investment policies, they may develop an even stronger dependency on real estate price development. This is, indeed, the case for German savings banks; because they are heavily engaged in real estate-related lending (see Fig. 1), they are particularly vulnerable to taking on additional risks when local real estate prices are high. As can be seen from Fig. 1, savings banks have been originating one third of housing loans for over 40 years, which underpins their high relevance for the German real estate market. Including regionally based credit cooperatives as well, about half of German housing loans are originated by local banks. The issue is therefore closely linked to locally based banks and their connection to local lending markets, which has been a competitive advantage for many years. Yet, with the number of branches shrinking and the reduction of personal contact fueled by the COVID-19 pandemic, this advantage could perish. Additionally, real estate prices have soared lately (see Fig. 2), which offers such banks additional opportunities in lending. The combination of a loss of informational advantage and seemingly increasing profits and lower risk in real estate lending could therefore induce regional banks to switch their lending strategies towards more transaction-based lending. Consequently, real estate price growth, indicating collateral value growth, could strongly affect loan portfolio risk. Thus far, however, micro-evidence on real estate's impact on risk taking in lending has been scarce.
Therefore, we focus on whether local real estate price growth affects savings banks' loan portfolio risk. We suggest that strong local real estate price growth could induce banks to be over-optimistic and hence underestimate loan risks. To analyze this, we use micro-level data on 390 German savings banks from 2011 to 2018, estimated with Blundell-Bond (system GMM) estimators. However, we find no robust evidence that savings banks' loan portfolio risk is driven mainly by real estate price growth or expectations on real estate prices. Results suggest, rather, that savings banks' loan portfolio risk is affected by bank-specific variables and the overall regional and national economic environment. That is, there is a direct link between the loan portfolio and local economic conditions, and only an indirect channel via housing prices.
The paper proceeds as follows: In section two, we review the literature on the topic and present the hypotheses that will be tested. Here, we differentiate between the potential effects of real estate price growth on lending and risk-taking behavior of banks. Section three presents the data and discusses the characteristics of German savings banks. Section four presents the results of the empirical investigation, which comprises an analysis of the effects of real estate and loan growth and a second part analyzing the impact of real estate on the risk of banks' loan portfolio with micro data from German savings banks using dynamic panel data methods. Section five presents conclusions and implications of the study.
Literature review
The causal relationships between real estate price growth, lending and risk are complex. Our aim is to disentangle these interrelations by identifying four mechanisms that explain how the growth in property prices affects lending behavior. First, higher house prices not only require higher nominal loan amounts, but owners' property values also increase. This enables borrowers with real estate collateral to obtain higher loan amounts. If banks believe that this growth is sustainable, they will increase their lending. If they do not, price growth has no effect on lending volumes, which slows down real estate price growth. Research on this topic has already been conducted with data on national levels (e.g. Gerlach and Peng 2005), while analyses on local levels (e.g. Favara and Imbs 2015; Defusco 2018) that take spatial heterogeneities within countries into account when it comes to lending have been scarce. Yet, due to the heterogeneity of real estate and differences in banks' lending behaviors (national vs. international vs. local lending), analyses on a large-scale geographic area could lead to misleading results when it comes to understanding the lending practices of regionally based banks. As the business areas of those banks are geographically limited, they cannot smooth negative real estate price developments by enlarging their business area to include regions with positive real estate price growth. To the best of our knowledge, this issue has been neglected so far when it comes to analyzing regional banks.
The consequences of lending with overvalued collateral have been discussed by e.g. Herring and Wachter (1999) and Siemsen and Vilsmeier (2017). Additional empirical evidence has been provided by Koetter and Poghosyan (2010), who analyze the impact of deviations of real estate prices from fundamental values on banking stability.
Second, high real estate prices might additionally reduce banks' monitoring efforts and the perceived riskiness of a loan. This can happen either when real estate price growth is used as a predictor of future economic performance or when properties are used as collateral and expected price growth counteracts expected losses. Bester (1985) argues that borrowers with high risk prefer loans with low collateral requirements and are willing to accept higher loan rates. Thus, collateral can offer insights into borrowers' own assessment of their risk. Real estate collateral, which per se reduces risk compared to uncollateralized loans, can act as a signal of high-quality borrowers. To the best of our knowledge, empirical evidence on the impact of local real estate prices on lending risk is scarce, yet theoretical analyses have been published, e.g. by Niinimäki (2009) and Bian and Liu (2018).
Third, higher collateral values reduce losses given default (LGD) ex post, which is directly linked to ex post risk. Anticipating lower LGDs, lenders may be induced to lend to riskier borrowers, which in turn leads to an ex ante constant risk but an ex post higher risk, i.e. more realized losses. Prior empirical research mainly has focused on other effects on lending, e.g. GDP (Salas and Saurina 2002), unemployment rates (Balasubramanyan et al. 2017) or interest rates (Delis and Kouretas 2011) or issues that are directly attributable to a single loan such as collateral (e.g. Berger and Udell 1990).
Fourth, expected losses are based not only on current information, but also on expectations regarding real estate price growth. Real estate prices are publicly observable, which is not the case for other local indicators of economic performance such as GDP or figures on unemployment. Therefore, banks might base their expectations of future economic and real estate price growth on current and past property prices. Suspecting economic growth, banks might be willing to assign risky loans as they expect borrowers' solvency to increase on average. If banks expect continued real estate price growth, their expected losses from collateralized loans will decrease. Risky lending, therefore, might increase, potentially creating a large gap between ex post and ex ante risk measures.

Real estate prices and loan volumes

As Gerlach and Peng (2005) point out, the relationship between loans and real estate prices is evident in several aspects. With real estate frequently serving as collateral, higher housing prices enable borrowers to apply for higher credit amounts. A number of studies have found that financially constrained firms increase their borrowing if the value of their collateral increases (e.g. Agarwal et al. 2015; Cvijanovic 2014; Dougal et al. 2015). Landvoigt (2017) found that households will increase their leverage if real estate prices have increased. Similarly, Defusco (2018) suggested that households will try to smooth their consumption if the values of their homes increase, allowing them to post higher collateral (Koetter and Poghosyan 2010), and thereby reducing LGDs. Furthermore, increases of real estate prices generate profits for borrowers that improve their ability to repay (Zhang et al. 2018).
Additionally, banks' own real estate assets increase in value and charge-offs of loans decrease with increasing property prices (Herring and Wachter 1999). Leaving other aspects constant, this increase in a bank's wealth and the lower expected losses strengthen banks' capacity to extend credit. On the demand side, lower credit constraints, due to a possible substitution of monitoring with collateral, could fuel demand for mortgage or other real estate related loans.
Empirical evidence on the two-way causality between property prices and lending volumes is mixed. According to Gerlach and Peng (2005), banks increased their mortgage lending in Hong Kong after increased competition due to deregulation of the banking industry. The authors further found evidence that extended lending did not have an impact on property prices, but that the causality ran in the other direction. In contrast, Favara and Imbs (2015) found that banking branch deregulation led to a greater volume and higher values of loans, which caused real estate prices to rise.
Although competition and low interest rates fuel extension of credit and could cause higher real estate prices, real estate price growth per se allows for higher collateral amounts. Therefore, we suggest that:
H1: Savings banks' loan volumes grow with local real estate prices.
Existing studies for the most part have analyzed data on single entities (countries) using time-series techniques. Yet, real estate prices are highly heterogeneous between regions, hence aggregating loan and real estate data on a national level could miss a number of insights. Investigating data from a cross-section of spatial entities could provide additional understanding of banks' market power, bank-specific loan growth in preceding periods, and dependency on local economic development alongside house price growth. Stable property price growth has special relevance, as non-sustainable price increases due to deviations from fundamental values can threaten banks' solvency if their estimation of expected losses is based on current market prices that could be corrected in the future when loans are due. As a consequence, otherwise risky loans have expected payments similar to those of safe loans, such that loan loss provisions can hardly be correctly determined.
The impact of a strong correction of house prices can be significant for the German financial sector. According to Siemsen and Vilsmeier (2017), a drop in housing prices can lead to losses of several billion Euros, considering only less significant institutions (LSIs). Likewise, Koetter and Poghosyan (2010) found that banks located in areas with high deviations of house prices from their fundamental value have a higher probability of being distressed. Yet, a quick adjustment to fundamental values is no simple matter in real estate markets, given that investors generally do not have the possibility of going short. For this reason, real estate markets are considered to be driven by optimists (Herring and Wachter 1999).
To detect some exuberance of real estate prices that might threaten financial stability, the deviation from real estate's fundamental value is often considered an appropriate measure, as opposed to observed prices (Koetter and Poghosyan 2010). Deviations from fundamental values are more easily noted in smaller entities where this information is readily observable. Because savings banks are very familiar with local markets, they are in a better position to recognize exaggerated prices and thus reject loan applications with offers of over-priced collateral. Instead, they might increase efforts to monitor local markets, with the latter decreasing overall loan volume due to fixed input factors in the short term. Thus:

H2: If local house price increases are not fundamentally driven, savings banks will decrease their lending.
Ex ante risk: economic expectations, collateral, and monitoring
If exaggerated property prices do not result in an extension of credit volumes, additional loans might only be granted when an increase of real estate prices goes along with a simultaneous reduction of risks, i.e. decreasing LGDs (Koetter and Poghosyan 2010). Due to low LGDs, banks' willingness to lend collateralized real estate loans is high (Zhang et al. 2018). This is a consequence of the ability to separate the borrower's risk from the loan's risk: a risky borrower could obtain credit by pledging collateral whose recourse value exceeds the loan amount (Berger and Udell 1990). Yet, because risky borrowers understand their own lending quality, they will tend to avoid pledging collateral. From a lender's point of view, this collateral is the most valuable. In turn, borrowers with low default risk will prefer higher collateral over higher interest rates (Besanko and Thakor 1987). Yet, as observably risky borrowers face stronger demands to provide collateral for a loan, the higher demand for collateral suggests that the probability of default increases with collateralization (Inderst and Mueller 2006). Consistent with the work of Besanko and Thakor (1987) and Niinimäki (2009), Agarwal et al. (2015) found that, when interpreting upfront payments of mortgage loans as collateral, it was younger borrowers in particular who, in spite of having a lower score and lower income, made lower upfront payments on average. As an alternative to demanding collateral, banks could thoroughly screen and monitor their borrowers, even though this is more time-consuming and costly. Manove et al. (2001) suggested that banks acting in perfect competition preferred using collateral in lending to screening because it was less costly. The cost efficiency of substituting screening with collateral is even higher for low quality borrowers (Keys et al. 2010). Positive expectations with respect to local real estate prices make collateral even cheaper compared to screening, resulting in a stronger reliance on collateral in areas with real estate prices forecast to increase.
Screening a borrower using all available information when lending leads to an expected loss, which is the foundation of ex ante risk. Furthermore, write-offs or non-performing loans (NPLs) capture the realized risk of a loan portfolio, i.e. ex post risk (Berger and Udell 1990). We argue that real estate prices have an impact on ex ante risk via expectations on a future willingness to pay and values of collateral. If banks observe current real estate price growth, they may extrapolate rising prices into the future. Hott (2011), for example, argued that banks might prefer to stick to momentum forecasts rather than fundamental real estate values, as they are well diversified, having only minimal risk exposure toward fundamental factors. He also found that banks' myopic strategies did not take real estate cycles into account. Therefore, banks could consider loans less risky in the future as their collateral values increase, accepting higher risks at present. This effect is even more pronounced on a local level, as real estate prices vary across regions. Additionally, as a result of an increase of property prices, the wealth of risky borrowers increases and former collateral barriers are no longer considered a deterrent. We expect these effects to be stronger for locally-based savings banks. Thus, the ex ante risk of loans will, on average, decrease.
H3: Real estate price growth reduces ex ante risk.
Expectations regarding the future economic condition of a household represent an important aspect of the estimation of a loan's risk. As real estate price increases are caused by various economic factors (Gerlach and Peng 2005), property prices can serve as indicators for overall local economic growth. Compared to other economic variables (e.g. GDP or unemployment rates), real estate prices are observable and indicate the wealth of a region. Therefore, higher household incomes and the resulting higher capacity to repay loans could show up as growth in local housing prices. According to Hott (2011), banks' optimism concerning the wealth of a household has a significant impact on real estate prices and can lead to circular effects when it comes to lending. If projecting recent price growth of real estate markets to future prices is not sustainable (Herring and Wachter 1999), worrisome overvaluation would be a consequence, and ex ante and/or realized risk will be higher.
Ex post risk: default and realization of real estate collateral
As Hott (2011) found that lending tightens in response to defaults rather than in anticipation of them, ex ante risk measures could be erroneous. Alterations of economic conditions or ratings during the lifetime of a loan are, instead, reflected by ex post risk measures. These include real estate price growth, which could contribute significantly to the performance of loans and especially the ex post risk of a loan portfolio. This could happen either by a collateralization effect, i.e. by a reduction of losses given defaults (similar to Koetter and Poghosyan 2010), or by an incentive effect, i.e. by increasing borrowers' incentives not to default as their losses would increase with real estate prices.
Borrowers' incentives not to default are higher when the value of their collateralized property increases or price increases are expected. Additionally, the possibility of securing collateral reduces agency costs and information asymmetries, and eases the funding of borrowers (Cvijanovic 2014). Here, again, we stress the prevalence of provision of collateral by highly creditworthy borrowers: as collateral commonly has a higher value for borrowers than for banks, and as borrowers may be limited in their use of the pledged asset, pledging collateral can be regarded as costly for the borrower (Agarwal et al. 2015; Coco 2000). Different costs may also arise with the use of collateral, depending on whether outside or inside collateral is in use (for the differentiation, see Niinimäki 2009). Pledging outside collateral is costly for the borrower according to Bester (1985), whereas for Besanko and Thakor (1987), the lender incurs the costs of collateral. Inside collateral is explicitly without costs for the borrower (Niinimäki 2009).
Turning to the effect of collateralization, banks incur lower ex post risk if prices of real estate collateral increase, given that larger fractions of defaulted loans can be covered. This reduction of realized losses is observable in banks' charge-offs of loans. Yet, collateral does not decrease a loan's risk per se. Berger and Udell (1990) found that loans with fixed interest rates and collateral had below average performance, which the authors took as evidence that securing collateral was insufficient to eliminate a loan's risk.
Furthermore, the on average higher risk premia and higher charge-offs for banks holding more real estate collateralized loans suggest that loan risk cannot be fully covered by collateral and that these loans bear some degree of high risk. Similarly, Blasko and Sinkey (2006) found that banks that are highly engaged in real estate lending over several years have lower net loan losses. This is partly confirmed by Zhang et al. (2018), who found a negative relationship between local growth of real estate investments and NPLs, i.e. ex post risk is decreased, possibly due to the higher collateralization of loans. The authors also found that a reduction in the level of real estate market activity renders banks that are strongly engaged in real estate lending unstable. Salas and Saurina (2002) argued that the impact of GDP growth on Spanish savings banks' ratio of problem loans is weaker than that of commercial banks, which have more customers dependent on business cycles. Thus, savings banks' lending success may be attributed to local economic factors that are more independent of national business cycles. As real estate prices are strongly linked with the local economy and economic cycles, our results should be similar to those of Salas and Saurina (2002).
We expect past real estate prices or expectations of real estate price growth to decrease ex post risk via the collateralization and the incentive channel:

H4: Real estate prices decrease ex post risk.
Default forecasts and over-optimistic expectations of real estate prices
Not only are real estate investments commonly regarded as having little risk, but local lending is also perceived as relatively low risk due to spatial proximity, or 'home bias' in lending. Both factors contribute to the potential underestimation of the risk of lending with real estate collateral in a local setting. Therefore, expectations of risk and future property prices as determinants of LGDs could be mediating factors between real estate prices and lending behavior.
Several studies deal with potential links between current real estate prices and banks' expectations of future risks. Hott (2011) argues that positive income shocks may have an impact on price expectations, leading to current price increases. As banks base their expectations on current prices, expected prices then increase and continue the feedback process. According to Zhang et al. (2018), the threat posed to financial systems by falling real estate investments has the potential to be severe if they trigger a correction of the housing market. Recent experiences in house price development influence not only lenders' expectations of future prices, but also those of borrowers, who are therefore able to make higher upfront payments for mortgage loans (Agarwal et al. 2015) or apply for loans even when unable to signal low risk and receive a favorable contract. Thus, there might not only be a higher offering of credit, but also demand from risky borrowers.
Indeed, in an analysis of data on U.S. homeowners, Defusco (2018) found that loans obtained by extracting equity, i.e. using the increase in property value to obtain additional loans, are riskier than comparable loans, with the risk being measured using foreclosures.
In the case of expected increasing real estate prices, banks attempting to maintain a constant aggregate net present value (NPV) for loans are willing to grant loans to borrowers with higher probabilities of default. Banks will continue lending to risky borrowers as long as the annual increase in the overall default probability is less than the expected annual growth of collateral value. This tendency may be exacerbated when relying excessively on collateral as opposed to screening; the deteriorating survival probabilities of loans cannot be observed directly, while properties can be appraised on an ongoing basis. Given the potential for unanticipated losses (see section 2.2), engaging in riskier lending behavior when relying on estimates of future collateral values may have considerable impact on banks.
Therefore, on average, a lower ex post risk than suggested by ex ante risk measures occurs if banks have not anticipated the actual positive real estate price growth. When real estate price growth is weaker than expected, the ex post risk will be higher than suggested by the ex ante risk. In both scenarios, the gap between the two measures widens with recent real estate price growth.
We consider this gap between loan risk provisions and realized loan risks as a potential scenario for German savings banks. Legal limitations on their business activities and their embeddedness in local economies may lead to overconfidence in forecasting local real estate prices. As political influences on property prices and economic growth may reinforce local forecasting (Illueca et al. 2014), we formulate our final hypothesis:

H5: Higher recent real estate price growth weakens banks' risk forecasting ability.
German savings banks
Savings banks are a major constituent of the German banking system and are highly relevant for business financing and retail customers. Their loan volumes to non-monetary financial institutions (non-MFIs) grew by almost 24% from the beginning of 2011 to the end of 2017, reaching a loan volume toward non-MFIs of 951 billion Euros (Data: Deutsche Bundesbank). Loans toward non-MFIs originated by savings banks constituted about 24% of all loans originated in Germany toward non-MFIs by the end of 2018, up from 19% at the onset of the financial crisis of 2007/2008 (Data: Deutsche Bundesbank). This underlines the high relevance of savings banks within the German financial system. Aside from being highly relevant in Germany, savings banks are a commonly used representative for locally based banks: The focus of their business is deposit-lending; investment banking activities play a negligible role for most savings banks. This strong similarity of basic investment policies is an additional advantage for the analysis, as there should not be systematic variations in risk-taking (Conrad et al. 2014). Furthermore, their dense network of branches allows conclusions to be drawn for regional financial institutions where, due to their regional ties, funding and lending practices are highly correlated. Koetter and Poghosyan (2010) found that savings banks have an even lower probability of becoming distressed than small-sized and regionally-based cooperative banks, which may be a partial reflection of their aversion to business risk.
German savings banks are also restricted geographically in terms of their area of operation, and they are present throughout the whole country. These business areas typically coincide with urban or rural districts or cities (similar to Metropolitan Statistical Areas). Because these banks are particularly sensitive to local economic variables (e.g. Reichling and Schulze 2018), they are convenient for analyses that include business areas and local factors (Conrad 2008). Salas and Saurina (2002) found that bank-level characteristics, such as market power and local indebtedness of borrowers, have a high explanatory power for the growth of problem loans in the case of Spanish savings banks. Thus, notable differences between results of analyses on an aggregate and an individual level could be based on the sensitivity of locally operating banks toward their particular economic environment.
Given savings banks' mandate to guarantee a supply of funding within their business areas, they are likely to have a higher exposure to riskier borrowers than more transaction-based lenders. As local lenders, they are likely to demand higher collateral and lower interest rates for loans than transaction-based lenders (Inderst and Mueller 2006). Savings banks are also likely to be more dependent on mortgages due to their limited investment opportunities, rendering them especially vulnerable to changes in local real estate markets.
Data
Using micro-level data of regions with varying real estate growth permits us to analyze whether the relationship between property investment and NPLs is sensitive to property cycles (Zhang et al. 2018). While there are advantages to using land prices to gauge real estate price development (e.g. Cvijanovic 2014), real estate data is available only on a local level through recorded transactions. The data, including number of transactions, size, prices and location of sold properties, may also vary across time and between jurisdictions. County-level real estate data were obtained from empirica ag's quarterly database, which provides hedonically adjusted offer prices and rents for houses and condominiums. Hedonic house price indices use data from actual transactions and offers and therefore include a variety of information sources that cover a large portion of local real estate markets. Their correction for individual property characteristics overcomes bias that arises from low transaction volumes and from differing qualities of transacted real estate. For greater comparability and the ability to calculate price-rent ratios, we use prices and rents for houses and condominiums of all ages.
We collected information on the entities that are included in savings banks' official jurisdictions from their annual accounting reports, the addresses of their branches, their statutes, or other reports published on their homepages. We cross-referenced this information with county data on real estate price growth and reduced it to a single value per bank by calculating averages.
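A minimal sketch of this matching step, with hypothetical frame and column names (bank_counties, county_prices, and houseP_growth are illustrative, not the identifiers in our data):

```python
# Sketch: attach county-level price growth to each bank's business area and
# average across counties to obtain one value per bank and year.
import pandas as pd

# Hypothetical mapping of savings banks to the counties in their business area.
bank_counties = pd.DataFrame({
    "bank_id":   [1, 1, 2],
    "county_id": ["A", "B", "B"],
})

# Hypothetical county-level house price growth per year.
county_prices = pd.DataFrame({
    "county_id":     ["A", "A", "B", "B"],
    "year":          [2011, 2012, 2011, 2012],
    "houseP_growth": [0.02, 0.03, 0.01, 0.04],
})

merged = bank_counties.merge(county_prices, on="county_id")
bank_level = (merged.groupby(["bank_id", "year"], as_index=False)["houseP_growth"]
                    .mean())
print(bank_level)
```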
Unemployment rates within business areas are publicly available from the German Federal Employment Agency.
The core of the empirical investigation uses micro-level data on German savings banks, obtained from Orbis Bank Focus. Bank-level data include data from balance sheets and profit and loss statements, including information on NPLs, loan loss reserves and loan net charge-offs on the portfolio level, which can represent ex ante or ex post risk measures. The unbalanced panel dataset comprises 390 savings banks with observations spanning from 2011 to 2018. Performance differences of loans might occur even when controlling for hard facts, due to variations in the use of soft information (e.g. Keys et al. 2010). Because this type of information is frequently used by small local banks when granting loans, it may also affect savings banks, which exhibit some similarities to privately managed banks.
Empirical investigation
In her empirical investigation, Cvijanovic (2014) assumed that a firm's real estate assets were located in the same MSA as the firm's headquarters. Similarly, we assumed that mortgage lending and collateralization only took place in the savings banks' jurisdictions and in the areas of their supporting agencies. We took into account real estate prices from the counties where the bank has branches or in cities where the banks' supporting agencies operated. As the exact geographic origins of loans and deposits were not documented, we refrained from spatial weighting methods.
We used dynamic panel data estimation, as risk-taking might have some persistence due to long-term relationships with borrowers, competition, and other external circumstances. Delis and Kouretas (2011), using a Blundell-Bond estimator, found that risk-taking is highly persistent at the first lag. Furthermore, savings banks often originate long-term loans, for which risk carries over from the preceding period. Additionally, there could be autocorrelation in NPL ratios, as found by Zhang et al. (2018).
As dynamic panel data estimation implies endogeneity by construction, ordinary least squares procedures would produce biased results. Endogeneity can also potentially be found in several explanatory variables, such as efficiency (Conrad et al. 2014), real estate prices, and others. GMM estimators are a common means of overcoming the short-panel bias that arises when estimating first-differenced dynamic models (Behr 2003; Flannery and Hankins 2013). To model the persistence of ex ante and ex post loan risks and the impact of real estate prices, Arellano-Bond estimators (Salas and Saurina 2002) and Blundell-Bond system GMM are commonly used (Delis and Kouretas 2011; Zhang et al. 2018). The use of the latter is justified by persistence of the explained variable: in such cases, the lagged levels used as instruments for first differences in the Arellano-Bond estimator are rather weak (Baltagi 2008, p. 160f). As first estimations yielded high persistence of our measures of ex ante and ex post risk, we employed the system GMM estimator in what follows. We employed Windmeijer's finite-sample correction (Windmeijer 2005; also used e.g. by Olszak et al. 2018) to improve the robustness of the estimation results.
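To illustrate the logic behind dynamic panel IV/GMM estimation (this is not our actual estimation, which uses the full Blundell-Bond system GMM with Windmeijer-corrected standard errors in specialized software), the following sketch applies the simpler Anderson-Hsiao idea to a simulated AR(1) panel: first-difference the equation to remove bank fixed effects and instrument the differenced lag with the second lag of the level. All data and names here are assumptions of the sketch:

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n_banks, n_years, alpha = 200, 8, 0.6

rows = []
for i in range(n_banks):
    u_i, y = rng.normal(), 0.0
    for t in range(n_years):
        # AR(1) risk process with a bank fixed effect u_i
        y = alpha * y + u_i + rng.normal(scale=0.5)
        rows.append((i, t, y))
df = pd.DataFrame(rows, columns=["bank", "year", "risk"]).sort_values(["bank", "year"])

g = df.groupby("bank")["risk"]
df["d_risk"]    = g.diff()                                   # Delta y_it
df["d_risk_l1"] = g.diff().groupby(df["bank"]).shift(1)      # Delta y_{i,t-1} (endogenous)
df["risk_l2"]   = g.shift(2)                                 # y_{i,t-2} (instrument)
est = df.dropna()

# 2SLS on the differenced equation: fixed effects drop out, and the lagged
# difference is instrumented by the twice-lagged level.
res = IV2SLS(est["d_risk"], None, est["d_risk_l1"], est["risk_l2"]).fit()
print(res.params)  # estimate should be close to alpha = 0.6
```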
Estimation and variables
In this section we briefly present the variables used in the following estimations.
Following Salas and Saurina (2002) and Olszak et al. (2018), we estimated dynamic panel models of the general form sketched below to capture the impact of real estate price growth on loan risk, varying the lags of the explanatory variables to consider different timings of ex ante and ex post risk. For the sake of simplicity, bank-specific and business-area-specific variables are both denoted by i, with real estate price growth as the business-area variable being measured in different ways. Variable definitions and data sources are displayed in Table 1.
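In general form (a sketch in our own notation, with risk standing for either the ex ante or the ex post measure and l denoting the varied lag), the estimations can be written as

$$ \text{risk}_{it} = \alpha\,\text{risk}_{i,t-1} + \beta'\,\text{bank}_{i,t-l} + \gamma\,\widehat{REprice}_{i,t-l} + \delta\,\widehat{unemp}_{i,t-l} + \lambda_t + \eta_i + \varepsilon_{it}, $$

where bank_{i,t-l} collects the bank-specific controls described below, λ_t are year dummies, and η_i is a bank-specific effect.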
We used several bank-specific controls as suggested by Delis and Kouretas (2011). Banks' equity ratios capture the trade-off between taking on additional risk and meeting capital requirements; furthermore, the equity ratio itself affects banks' risk taking. To take this into account, we included banks' capital ratios (TCAR_it), similar to Olszak et al. (2018). As Reichling and Schulze (2018) found German savings banks located in wealthier regions to be more efficient, we included the cost-income ratio (CIR_it) to take this into account. We used net interest margins (NIM_it) to account for banks' ability to generate earnings by assigning new loans and for their general profitability (see Blasko and Sinkey 2006).
As Hott (2011) pointed out, past profits affect banks' optimism regarding future earnings. Hence, lagged profits may impact not only lending volumes, but also the riskiness of loans in several ways. On the one hand, low profits might lead banks to engage in more and/or riskier lending; on the other hand, they could also lead them to reduce risky loans to prevent further losses. Here, profits_it is defined as profits and losses before taxes_it / total assets_it. We included branch growth (b̂ranches_it) as an explanatory variable for loan volume growth (Illueca et al. 2014). As business areas of savings banks are bindingly defined and rather small (commonly equal to one or two counties), we did not include branch growth in the additional risk estimations, as geographical expansions into other market areas are not conducted.
Furthermore, banks with a high portion of real estate lending have higher loan-to-asset ratios, which may also affect their provisions for loan losses (Blasko and Sinkey 2006). Thus, this ratio was also included (Sinkey and Greenawalt 1991; Kick and Prieto 2015).
A further issue worth noting is a potential deterioration of monitoring activity (Manove et al. 2001). Thus, monitoring could be endogenous as well, with real estate price growth having a strong impact on it. Another common explanation of bank risk taking is competition between financial intermediaries (e.g. winner's curse) (Forssbaeck and Shehzad 2015). For locally-based savings banks, measures of competition take into account several dimensions of local lending and borrowing, including local wealth, the share of county deposits, the number of branches within an area, interest income per branch, etc. Yet, most of this information was either not available at all or had low explanatory power due to a variety of factors, such as different hierarchies of branches of commercial banks. Furthermore, classical measures of market concentration, such as the Herfindahl-Hirschman Index (HHI) on deposits at the county level, are often not bank-specific variables, but rather locally dependent, and their effects can be proxied by county dummies. Thus, as a common bank-level index, we employed a Lerner index (Lerner_it) based on the procedure described in Berger et al. (2009) (see Appendix).
Similarly, the growth rate of gross customer loan volume (ĜCL_it) could indicate whether higher loan losses were due to an unequal growth of loan quality and quantity regarding ex post risk. Higher ex ante risk with higher loan growth, however, might indicate a market expansion or confidence in future returns, possibly induced by growing real estate prices.
As savings banks should have similar standards and techniques in lending, loan loss reserves relative to gross customer loans for each bank and year (LLR_it) were used as the measure of ex ante risk, capturing how observed credit risk was priced before actual losses occurred. Ex post risk was gauged using the banks' impaired loans divided by gross customer loans (impaired_it). This wider definition of ex post risk captures most of the credit that belongs to problem loans in the spirit of Salas and Saurina (2002).
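In display form, the two portfolio risk measures are

$$ LLR_{it} = \frac{\text{loan loss reserves}_{it}}{\text{gross customer loans}_{it}}, \qquad \text{impaired}_{it} = \frac{\text{impaired loans}_{it}}{\text{gross customer loans}_{it}}. $$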
We used three different measures for real estate price growth: two of them were growth of house prices within counties, matched with savings banks' business areas (ĥouseP_it), and growth of condo prices (ĉondoP_it). Sinkey and Greenawalt (1991) found regional economic factors, proxied by dummy variables, to explain only a very small fraction of banks' loan loss variation. The authors concluded that loan loss rates were instead driven by managerial abilities. This stresses the relevance of managers' perception of real estate markets and their estimates. To take this into account, we used price-to-rent ratios at the county level, matched with business areas (PRR_it). Note that price-to-rent ratios not only capture potential deviance from fundamental values, but also future expectations regarding real estate prices. Price-to-rent ratios therefore serve as an observation of local market expectations, whereas past price developments are input data for individual expectations. Thus, PRR_it captured the effects of the incentive channel (i.e. a borrower's incentive to repay her loan in order not to lose her collateral when prices are expected to grow).
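For concreteness, the growth and ratio variables take the usual forms (the exact rent annualization underlying the index is an assumption of this sketch; ĉondoP_it is defined analogously to ĥouseP_it):

$$ \widehat{houseP}_{it} = \frac{houseP_{it} - houseP_{i,t-1}}{houseP_{i,t-1}}, \qquad PRR_{it} = \frac{\text{purchase price}_{it}}{\text{annual rent}_{it}}. $$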
Summary statistics for the dependent and real estate variables can be found in Table 2. The data were not trimmed or corrected for outliers, and the means are in line with those in other studies. For example, Balasubramanyan et al. (2017), using US-based data from 1997 to 2011, found that loan loss reserves represented 1% of total assets on average, while in our study the figure was 0.844%.
In order to determine the extent to which real estate prices reflected local economic development, we employed the growth of the unemployment rate in the banks' business areas (ûnemp_it). It should be emphasized that the dynamic panel analysis focused on local real estate price development; overall national real estate price growth or decline was only considered within year dummies, which also controlled for the effects of low interest rates and higher stock market turnover. These parameters have a high stake in determining banks' risk appetite. Delis and Kouretas (2011) analyzed the risk-taking behavior of banks in 16 Euro-zone countries from 2001 to 2008 and found that banks in a setting of low interest rates shifted their business to more risky investments, both in terms of ex ante risk (captured by risk assets / total assets) and ex post risk (NPLs / gross loans). Furthermore, banks redistributed their assets to more risky and non-standard banking assets in the presence of low interest rates.
Loan growth
The first estimation provides additional micro-evidence, complementing previous studies based on aggregate levels, in order to detect regionally-based causal relationships between loan volume and housing price growth. With regard to the following estimations, higher loan volumes or an extension of credit in response to increases in real estate prices could precede higher loan risks if good borrowers have already obtained credit, so that additional lending reaches riskier applicants.
We included current (yearly) real estate price growth in order to identify correlations (Gerlach and Peng 2005) that may be caused by the stated current observability of real estate price growth, as opposed to e.g. GDP growth, and by the time interval in years that allows for an impact of current values. As an additional explanatory variable, we used ex post risk of the two previous periods to check whether past negative experiences with credits had a negative effect on current loan growth. The results are displayed in Table 3.
The results indicate that the major drivers of loan growth were losses and impaired loans of the previous period (i.e. recently made experiences in lending), monitoring efforts, the relevance of lending for the bank's business, and branch growth. None of the real estate price growth variables, nor unemployment growth as a proxy for regional economic development, were statistically significant in any of the estimations.
The results do not support the first hypothesis at the local, short-term (yearly) level. We suspect that savings banks' reactions to real estate prices were not notable at the level of overall loan volume, which could be due to a rather inelastic loan supply. A positive correlation on an aggregate level, as graphically suggested by Fig. 3, therefore cannot be confirmed.
The coefficients on price-to-rent-ratio were not significant and changed signs, indicating that market expectations on real estate price growth did not have an impact on loan volumes. We also checked whether the reverse direction of causality would apply to the data and estimated the equation using loan growth as explanatory and house price growth and price-to-rent-ratio as dependent variables (including their lags as right hand side variables). This neither produced significant coefficients nor superior overall results. As loan volume is only available for savings banks, not for the whole county, results of this estimation could be biased.
For testing hypothesis two, the change of loan growth with deviations of prices from fundamental values, we conducted regressions with the same control variables using price-to-rent-ratio growth, squared price-to-rent ratios, a dummy indicating whether the current and lagged price-to-rent ratios exceed the yearly average ratio by more than 10%, and interaction terms of this lagged dummy variable with house price growth (Table 4). None of these were significant, although this result does not necessarily indicate that savings banks did not react to the exuberance of real estate markets; higher ratios might be justified in certain locations and thus not represent an exaggeration of prices.
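A minimal sketch of how the dummy and interaction terms can be constructed, with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({
    "bank":          [1, 2, 1, 2],
    "year":          [2012, 2012, 2013, 2013],
    "PRR":           [28.0, 22.0, 30.0, 23.0],
    "houseP_growth": [0.05, 0.02, 0.06, 0.01],
})

# Dummy: PRR more than 10% above the yearly cross-sectional average.
yearly_avg = df.groupby("year")["PRR"].transform("mean")
df["PRR_dummy"] = (df["PRR"] > 1.10 * yearly_avg).astype(int)

df = df.sort_values(["bank", "year"])
df["PRR_dummy_l1"] = df.groupby("bank")["PRR_dummy"].shift(1)

# Interaction of the lagged dummy with current house price growth,
# plus the squared ratio used as an additional regressor.
df["interact"] = df["PRR_dummy_l1"] * df["houseP_growth"]
df["PRR_sq"] = df["PRR"] ** 2
print(df)
```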
Ex ante risk
Ex ante risk cannot be determined unambiguously, as higher loan loss provisions can either indicate higher expected loan losses or lower underwriting quality (Dou et al. 2018). Olszak et al. (2018) argue that large banks are more procyclical and more prone to moral hazard due to too-big-to-fail thinking. Hence, we additionally included bank size as an explanatory variable, measured as the natural log of total assets (LTA_it). Yet, as savings banks are supported publicly and are not excessively large, those problems are not expected to be especially relevant for the estimation.
As can be seen from Table 5, savings banks' loan loss reserves decreased with current house price growth and price-to-rent ratios, i.e. future market expectations. As stated initially, house prices, in contrast to GDP, can be observed continuously. Thus, current house price growth includes all observations during the year, technically enabling it to have a simultaneous impact on end-of-year loan loss rates. The effects were not highly significant, which is partly due to the Windmeijer correction. Further lagged house prices did not have a significant impact, partially because the lagged loan loss rates already reflect their explanatory power.
Although the results of estimation (5) indicate that banks reduced their loan loss provisioning (i.e. ex ante risk) in the face of increasing property prices, the effect was not robust. As can be seen from estimation (8), the coefficient of unemployment growth was stronger in terms of statistical significance. Local house price growth may reflect some degree of overall local economic growth effects, which is supported by the insignificance of condo price growth.
The negative coefficient of the price-to-rent ratio, indicating lower ex ante risk, was small in economic significance, and the changing sign of the coefficient for the preceding year indicated weak robustness. We will therefore perform a closer analysis of the effects of deviations of real estate prices from fundamental values in later sections.
Additional non-dynamic system GMM panel estimations using LLR_it as the dependent variable did not produce valuable insights or different results concerning the real estate variables.
Ex post risk
Several ex post risk measures have been used in the literature. Sinkey and Greenawalt (1991) employed net charge-offs_it / (net loans_it + charge-offs_it). Berger and Udell (1990) used loan risk premia as ex ante risk measures, while ex post risk was gauged by others through loan charge-offs, loans overdue 30 days or 30-89 days, and renegotiated loans. Turning to hypothesis four, we instead follow Delis and Kouretas (2011), who used the ratio of risk assets to total assets and the ratio of NPLs to gross loans as proxies to evaluate banks' risk taking. As the authors argued, these measures are better suited to measure banks' risk taking than a z-score (Mohsni and Otchere 2014), which evaluates the probability of bank insolvency rather than risk engagement. Thus, we use impaired loans_it / gross customer loans_it as the variable to describe problem loans. As Balasubramanyan et al. (2017) pointed out, estimating NPLs using loan loss provisions (LLPs) can be biased, as LLPs may be based on expectations for NPLs, and thus LLPs are not independent from future loan performance. Endogeneity, therefore, plays another role in estimating the equation. The results of the estimation can be found in Table 6.
In addition to efficiency, the most obvious finding was the persistence of loan portfolio riskiness, with the coefficient of the lagged dependent variable close to 100%. This variable explained a high share of the variation of the subsequent impaired ratio, leaving at least some of the other lagged variables without individual explanatory power, as their effects are bundled into it.
The second finding is that there was a robust positive impact of loan growth on loan risk. Riskier borrowers could be the consequence of the bank already having saturated high quality borrower markets and being forced to lend to bad borrowers due to competitive pressure or bank strategy (closely related to the winner's curse effect). This result is especially meaningful, as we did not find robust effects of loan growth on loan loss provisions; hence the realized losses associated with previous loan volume growth seem to be unanticipated. This impression was reinforced by the even higher coefficient of the lagged growth variable, suggesting weaker predictability of loan losses due to a longer time horizon between loan approval and default. We expected banking competition to have a positive impact on NPLs, as in Zhang et al. (2018). This is also in line with Herring and Wachter (1999), who argued that disaster myopia could be increased through competition, as non-myopic banks are unable to withstand the pressure that arises if risk premia are too small. This decreases returns and banks increase their leverage. In fact, there is some evidence from our estimations that savings banks with higher market power have lower ex post risk. This result contrasts with the findings of Salas and Saurina (2002), who found that market power increased problem loans for Spanish savings banks. Finally, current and past house price growth, used as a proxy for expectations on future house price growth, did not appear to influence the ex post risk of savings banks' loans. Unreported regressions, using net charge-offs_it / gross customer loans_it as the dependent variable, as in Sinkey and Greenawalt (1991), confirmed that real estate price growth did not have a significant effect on savings banks' loan portfolios' ex post risk. This is in line with the results of Koetter and Poghosyan (2010), who found that savings banks were on average less likely to default as a consequence of deviations of housing prices from fundamental values.

Table 6 Results of the Blundell-Bond estimation of the ex post appropriateness of loan loss reserves, measured by loan loss reserves_it / impaired loans_it. The estimation uses different lag lengths for level and difference instruments and employs Windmeijer's robust standard errors. Growth variables are denoted by a circumflex. Significance levels are indicated by * p < 0.10, ** p < 0.05, *** p < 0.01
Monitoring, ex post loan loss reserves and deviations from fundamental values
A number of factors can explain the results with regard to ex post risks. As we found that loan growth did not affect risk provisions but realized losses, special attention should be paid to hypothesis 5. The term ex post loan loss reserves used in what follows compares loan loss reserves to realized credit risk, i.e. high/low values indicate a poor loan loss reserves policy, which could be due either to speculations on rising values of collateral or to overall economic conditions. We tested how real estate price growth affects savings banks' estimation of risks and their optimism using LLRIMP_it, defined as loan loss reserves_it / impaired loans_it. As loan loss reserves and impaired loans are affected by different lags of the explanatory variables, we included up to three periods of each variable. Again, we estimated a dynamic panel model, as the two components of the dependent variable were endogenous: loan loss reserves and impaired loans were highly persistent. In order to examine the effects of monitoring without including real estate prices, we estimated parsimoniously instrumented models, not considering condo price growth as before, but instead estimating a baseline model without real estate variables.
The results in Table 7 suggest that monitoring had an effect on the ratio with varying signs concerning the lags. Current efficiency and market power seemed to have positive impacts on the reserves/losses ratio, which would contradict a too-big-to-fail moral hazard problem.
Most importantly, the regression shows that local real estate prices did not induce banks to be over-optimistic. Hypothesis 5 is thus rejected. With regard to the previous estimations, this underpins the finding that savings banks' loan portfolio risk was not determined directly by real estate prices, but rather by economic factors, which in turn helped determine real estate growth. In unreported estimations, we found that the effects of population growth were in certain places even stronger than those of unemployment rate growth.
As savings banks are backed by public entities and are exposed to less pressure to achieve high gains in the short run, they might unintentionally be less prone to engage in riskier lending. Furthermore, savings banks have strong links with local real estate markets, providing them with local knowledge and enabling them to observe the risks that stem from real estate related lending.
Table 7 Results of the Blundell-Bond estimation of loan growth with price-to-rent-ratio related explanatory variables. PRR Dummy_it is a dummy variable indicating whether the price-rent ratio in the business area was at least 10% higher than the average over all business areas in the respective year. The estimation uses different lag lengths for level and difference instruments and employs Windmeijer's robust standard errors. Growth variables are denoted by a circumflex. Significance levels are indicated by * p < 0.10, ** p < 0.05, *** p < 0.01

Therefore, besides using price-to-rent ratios as a signal of house price deviations, we calculate fundamental house prices using Pooled-Mean-Group (PMG) estimation, as described by Pesaran et al. (1999) and used e.g. by Kholodilin et al. (2007) and Koetter and Poghosyan (2010), and check whether savings banks increased their ex post loan loss reserves in response to a potential bursting of a housing price bubble. Our model uses population growth, income gauged by GDP per employee, and population density as explanatory variables; the resulting error-correction representation and the deviation measure are sketched below. Due to data availability, the estimation spans from 2005 to 2017. The results are displayed in Table 8. We additionally plotted the results using German fringe counties to highlight the differences between calculated fundamental real estate prices and price-to-rent ratios (see Fig. 4).
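A sketch of the PMG setup, in our own notation and with the lag structure left generic: the long-run relation ties log house prices to the fundamentals X_it (population growth, GDP per employee, population density), and the error-correction representation reads

$$ \Delta \ln P_{it} = \phi_i\left(\ln P_{i,t-1} - \theta' X_{it}\right) + \sum_{j} \lambda_{ij}\,\Delta \ln P_{i,t-j} + \sum_{j} \delta_{ij}'\,\Delta X_{i,t-j} + \mu_i + \varepsilon_{it}, $$

with a common long-run coefficient vector θ and heterogeneous short-run coefficients and adjustment speeds φ_i. Consistent with the caption of Fig. 4, deviations from fundamental house prices are then expressed in percent of current prices,

$$ dev_{it} = \frac{P_{it} - \hat{P}^{*}_{it}}{P_{it}}, $$

where P̂*_it is the fitted fundamental price from the long-run relation.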
We estimated the impact of this alternative specification of real estate price developments on the loan portfolio risk of savings banks, using the model specifications employed previously.
The results are displayed in Table 9. As can be seen, like the previous measures, deviations from fundamental house prices do not seem to explain savings banks' loan portfolio risks. Yet, from the results we learn that local variables have a significant impact on house prices. Therefore, impacts of house price developments on loan portfolio risk could primarily be driven by local economic variables.
Panel vector autoregression
As an additional check of the robustness of the results, we investigate whether the local economy has a direct impact on loan risk or whether it affects house prices, which then pass these effects on to loan portfolios. Loan risk would hence be driven by the local economy rather than by price increases, dismissing explanations via collateral. We estimated a panel vector autoregressive model in which we used per capita GDP growth within the business area, unemployment rates, population density as an indicator of local urbanization, house price growth, deviations from fundamental house prices, and loan growth as endogenous variables, besides the mentioned loan portfolio risk variables and, as an additional ex post risk indicator, net charge-offs to average gross loans (NCO_it). The results can be found in Table 10. As can be seen, regional variables have little explanatory power, except for local GDP growth and the house price variables, which is surprising, as we would have expected house price growth to have less impact than e.g. unemployment rates. Yet, as GDP has proved to be an important determinant of real estate prices (see Table 8), the effect of real estate price growth could be highly impacted by GDP growth.
To analyze this relationship more closely, we additionally calculated orthogonalized impulse response functions (OIRFs), adopting the procedure described by Sigmund and Ferstl (2021). We ordered the local economic variables first, as we supposed that effects of the local economy are more likely to be passed on to house prices, which then affect loan portfolio risk. The results for GDP growth, which proved to have the strongest impact on loan portfolio risk measures, as well as for house price growth and deviations from fundamental values, are shown in Fig. 5.
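Schematically, and under the simplifying assumption of a single lag (the notation is ours), the panel VAR behind the OIRFs can be written as

$$ \mathbf{y}_{it} = A\,\mathbf{y}_{i,t-1} + \mathbf{u}_i + \mathbf{e}_{it}, \qquad \operatorname{Var}(\mathbf{e}_{it}) = \Sigma = PP', $$

where y_it stacks the endogenous variables in the stated order (local economy first, then house prices, then loan variables) and P is the lower-triangular Cholesky factor of Σ. The OIRF at horizon h to a one-standard-deviation shock in variable j is then the j-th column of A^h P; the triangular ordering implies that local economic shocks can move house prices and loan variables contemporaneously, but not vice versa.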
While impulses of house price growth tend to fade over time, the effects of GDP growth are much more persistent, although at an economically low level. Furthermore, it becomes evident that house price and GDP growth do not move in parallel, but in parts even move in opposite directions. The rather pronounced effect of house price growth in the short run is additional evidence that real estate prices could be used as a first indicator when making loan decisions, rather than only reflecting the local economic figures that form the basis of loan decisions. Yet, while the effects of real estate growth decline quickly, GDP as the underlying factor has a steadier effect on loan portfolio risk measures.

Fig. 5 Orthogonalized impulse response functions of selected variables. Besides loan and house price growth, we used loan loss reserves as the ex ante risk measure, and impaired loans to gross customer loans and net charge-offs to average gross loans as ex post variables
Fig. 4 Quintiles of price-to-rent ratios and deviations from fundamental house prices in % of current house prices in 2017. Data grouped by German fringe counties. Own representation based on data from Empirica AG, own calculations, and shape data provided by the German Federal Agency for Cartography
Conclusions
This paper studies the effects of real estate prices on German savings banks' risk taking in lending. It contributes to the existing literature in several ways: It analyzes the impact of real estate price growth on loan risk, uses a micro-level perspective to do so, and uses a forward-looking metric to reflect market expectations. In contrast to Koetter and Poghosyan (2010), which is to the best of our knowledge the most similar study that has been published, we do not focus on the effect of real estate price deviations from fundamental values on banks' default probability, but on the effects of real estate price growth on loan portfolio quality. The consequences of a positive impact of real estate price growth on banks' loan portfolio risk would be severe. On the one hand, local lending would be systematically distorted: while banks lend to risky borrowers in economically prosperous areas with growing real estate prices, banks located in economically weaker areas with stable or decreasing property prices would act more conservatively. Adverse selection problems in lending would thereby obtain a spatial dimension, e.g. when deposits (credits) are shifted to non-risky (risky) banks, impeding capital allocation. On the other hand, lending to risky borrowers inflates real estate prices, which in turn leads to additional and even riskier lending. Extensive real estate lending, which carries a higher loan-to-asset ratio than other lending (see Bian and Liu 2018; Blasko and Sinkey 2006), would aggravate these risks. Bursting bubbles can not only result in borrowers' defaults; contagion effects of bank defaults could aggravate arising problems quickly. As real estate bubbles frequently are distributed unevenly across spatial entities, micro-evidence on loan portfolio risk and real estate price growth is highly relevant. This holds true not only for German housing and lending markets, but for many countries with banks whose business is politically and geographically limited.
Overall, there was no robust evidence that real estate price growth has an impact on savings banks' ex ante or ex post loan portfolio risk. There was only slight evidence that loan loss reserves were affected by past and current price developments of regional real estate markets, but this effect was dominated by overall economic development. This result is in line with the findings of Koetter and Poghosyan (2010), who find savings banks to be on average less prone to default, whereas house price deviations increase banks' probability of default. Additionally, there was a high persistence of loan factors, which has already been noted as justification for the use of system GMM. This underscores the relevance of loan maturities when determining NPLs and the significance of collateral in lending; including those data might offer additional insights into the riskiness of loans collateralized by real estate. Furthermore, as lending practices of savings banks strongly depend on local economic conditions, which determine real estate prices, there could be some indirect link between loan portfolio risk and real estate markets. Analyzing the impacts of various economic indicators, we do not find robust evidence that individual local economic conditions have significant direct impacts on loan portfolio risk either.
This result is subject to some limitations with regard to data availability. The observed time span may have been too short to represent a full real estate cycle. As long-term real estate developments were not investigated, the study's results do not rule out future loan losses or a lack of caution by savings banks. Additionally, there has been a steady real increase in real estate prices in virtually all German regions since the onset of the European sovereign debt crisis. Expected loss and loss given default estimates should therefore still be monitored thoroughly by banking supervisors.
Another issue is that national economic factors, such as overall economic development or interest rates, have greater explanatory power than regional factors. This may be due to their higher relevance for deposit-lending in the case of interest rates, and to the consequent attractiveness of over-regional business fields such as equity investments. As the time dummies were significant in many of the equations, this is a plausible explanation.
The empirical results reject a strong reliance of savings banks on house price growth rates when it comes to lending. This result could be a consequence not only of lending techniques, but also of legal requirements: banks are legally obliged to consider haircuts in their estimations of LGD (European Parliament, 2013, Capital Requirements Regulation (CRR) I, Art. 181 (1) e)), because borrower risk must not materially depend on the development of the value of the collateralized real estate (European Parliament, 2013, CRR I, Art. 125 (2) b) and Art. 126 (2) b)).
Additionally, due to their geographically restricted business areas, savings banks are closely connected with their borrowers, allowing them to use soft information (Berger and Black 2011). While this does not fully replace collateral, local information may help savings banks to correctly forecast borrowers' future economic conditions and to soften their lending techniques by taking into account factors other than collateral in their loan decisions.
Future developments in business administration and management might thus be crucial for local banks and their environment: shifting their techniques from relationship-based to transaction-based lending could not only weaken customer relationships and induce banks to engage in higher risks, but also reduce the availability of funding in regions with weaker economic performance. The economic consequences of the ongoing pandemic-induced de-personalization of interaction and business could thus have long-lasting effects on underperforming areas and widen interregional economic gaps.
Appendix: Lerner index
We briefly present the choice and calculation of the competition measure, the Lerner index. For locally based savings banks, measures of competition that take into account several dimensions of local lending and borrowing include local wealth, the share of county deposits, the number of branches within an area, interest income per branch, etc. Yet most of these data were either not available at all or had low explanatory power due to a variety of factors, such as the differing hierarchies of branches of rival commercial banks. Furthermore, classical measures of market concentration, such as Herfindahl indices of deposits at the county level, are very sensitive to the non-homogeneity of banks (Forssbaeck and Shehzad 2015). Additionally, such variables are commonly not bank-specific but rather locally dependent, and their effects can be gauged by county dummies.
Thus, as a common index on a bank level, we employ a Lerner index based on the procedure described in Berger et al. (2009) and Feldkircher and Sigmund (2017). The Lerner index is defined as

Lerner_it = (P_it − MC_it) / P_it,

where the output price P_it is proxied by total revenue over total assets. Marginal costs are derived from the estimated cost function as

MC_it = (TC_it / TA_it) · (β1 + β2 ln TA_it + Σ_{k=1..3} β_{3+k} ln F_{k,it}),

with costs stemming from the translog function

ln TC_it = β0 + β1 ln TA_it + (β2/2)(ln TA_it)² + Σ_{k=1..3} β_{3+k} ln TA_it ln F_{k,it} + Σ_{k=1..3} γ_k ln F_{k,it} + Σ_{k=1..3} Σ_{l=1..3} (δ_{kl}/2) ln F_{k,it} ln F_{l,it} + ε_it.

Total assets are described by TA_it and input costs by F_k: F_1 is labor costs, described by staff expenses / TA; F_2 is the cost of funds (interest expense / total deposits); and F_3 is the cost of fixed capital (operating expenses / TA). The dependent variable (total costs, TC_it) is calculated as the sum of total operating expenses and total interest expenses.
The results of the estimation of the equation above can be found in Table 11.
As in Berger et al. (2009), we used year fixed effects and robust standard errors (clustered at the bank level). At first glance, two issues might seem to argue against the use of year dummies. First, F-tests suggested that the year dummies were jointly insignificant for some specifications. Second, they capture overall economic conditions, which are supposedly of less relevance for regionally non-systemic banks. They were maintained in all estimations, however, as the effects of (national) interest levels and other economic conditions would otherwise be falsely assigned to local house price growth. Many of the coefficients' values and significances were similar to the results of Feldkircher and Sigmund (2017).
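To make the estimation step concrete, here is a minimal sketch in Python; the column names (lnC, lnTA, lnF1-lnF3, year, bank_id) are hypothetical placeholders, since the supervisory panel used in the paper is not public, and only the translog terms needed for marginal costs are included:

```python
import statsmodels.formula.api as smf

# Translog cost function with year fixed effects and bank-level clustered
# standard errors, in the spirit of Berger et al. (2009).
FORMULA = (
    "lnC ~ lnTA + I(0.5 * lnTA**2)"
    " + lnF1 + lnF2 + lnF3"
    " + lnTA:lnF1 + lnTA:lnF2 + lnTA:lnF3"
    " + C(year)"
)

def fit_translog(df):
    """OLS fit of the translog cost function on a panel data frame."""
    return smf.ols(FORMULA, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["bank_id"]}
    )
```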
The obtained coefficients were then used together with the input data described above to calculate marginal costs, which are in turn used to calculate each bank's Lerner index.
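Given the fitted coefficients, marginal costs follow from the translog derivative MC = (TC/TA) · ∂ln TC/∂ln TA, and the Lerner index from the definition above. A minimal sketch under the same naming assumptions:

```python
import numpy as np

def lerner_index(tc, ta, revenue, b1, b2, phi, ln_f):
    """Per-observation Lerner index from translog coefficients.

    tc, ta, revenue : arrays of total costs, total assets, total revenue
    b1, b2          : coefficients on lnTA and 0.5*(lnTA)**2
    phi             : the three coefficients on the lnTA:lnF_k interactions
    ln_f            : array of shape (n, 3) with the log input prices
    """
    dlnc_dlnta = b1 + b2 * np.log(ta) + ln_f @ np.asarray(phi)
    mc = (tc / ta) * dlnc_dlnta      # marginal cost per unit of assets
    price = revenue / ta             # output price proxy (revenue / assets)
    return (price - mc) / price
```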
|
v3-fos-license
|
2023-12-04T16:46:13.938Z
|
2022-04-02T00:00:00.000
|
265574043
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "https://journal.ami-ri.org/index.php/JTM/article/download/25/25",
"pdf_hash": "be8926ce5a3d9867c00c7e6aa677dcc4aa45c7d3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45662",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "7a6c22925a75727c3093a7026e699e6ff68935c1",
"year": 2022
}
|
pes2o/s2orc
|
Response of growth and salinity tolerance of Nauclea orientalis L. seedlings to arbuscular mycorrhizal fungi
The purpose of this study was to determine the effectiveness of AMF types in increasing the growth of Lonkida (Nauclea orientalis L.) plants under salinity stress conditions. This study was carried out in the plastic house of the Indonesian Mycorrhizal Association, Southeast Sulawesi branch, Kendari City, and in the Forestry Laboratory, for five months, March to July 2019. The study used a factorial completely randomized design consisting of 3 replications and three plant units. The first factor comprised treatment without AMF, Acaulospora sp1. and Claroideoglomus etunicatum. The second factor comprised salinity at 0 mM, 50 mM, 100 mM, 150 mM and 200 mM. The results showed that the interaction of AMF and salinity was not effective in increasing plant growth. Inoculation with the AMF type C. etunicatum increased height, plant dry weight, root-shoot ratio, seed quality index, and root colonization. N. orientalis has a high dependence on arbuscular mycorrhizal fungi. The 0 mM salinity treatment increased height, diameter, number of leaves, plant dry weight, and seed quality index.
INTRODUCTION
Environmental stress is a limiting factor in plant growth. In biological terms, stress means deviations in the physiological processes, development, and function of plants that can be harmful and can cause irreversible damage to plant systems (Sopandie, 2013). One form of environmental stress is salinity. Salinity occurs when the soil experiences excess salt accumulation, especially at the surface. High salt content (saline soil) results from the formation of dissolved salt minerals, salt accumulation from irrigation water that carries salt, and the intrusion of seawater, rivers, or lakes (Mindari, 2009). Salt stress (saline soil) can cause abnormal plant growth by disrupting physiological mechanisms such as photosynthetic efficiency, gas exchange, membrane integrity, water status, and others (Evelin et al., 2009).
An alternative for overcoming salinity stress in plants is the use of microorganisms such as mycorrhizae. Arbuscular mycorrhizal fungi (AMF) are root symbionts of most higher plants and are generally found in terrestrial ecosystems (Smith and Read, 2008). AMF play a role in increasing the ability of plants to cope with the environmental stresses that commonly occur in degraded ecosystems (Giri et al., 2003). AMF can reduce the detrimental effects of salinity (Feng et al., 2002) and increase plant productivity by about 25-50%, encompassing plant health, yield quality, tolerance to water stress, and fertilization efficiency, and can suppress the development of pathogenic microbes in the soil (Ansiga et al., 2017).
Research conducted by Husna et al. (2015) showed that AMF treatment had a very significant effect on height, diameter, number of leaves, and number of plant root nodules. Local AMF inoculation increased the average growth in height and diameter of seedlings by 139% and 37%, respectively, relative to the control. In addition, local AMF inoculation with Glomus sp. and Acaulospora tuberculata is also known to increase plant growth under stress conditions such as serpentine soil media and heavy metal uptake (Tuheteru et al., 2017). According to Sundari et al. (2011), the AMF genus Glomus has the widest distribution area and is the most tolerant of stress conditions such as soil salinity.
Lonkida (Nauclea orientalis L.) is a tropical tree species that generally lives in wetlands and grows naturally in Indonesia. Lonkida thrives on three soil types, namely inceptisols, alfisols, and oxisols (Tuheteru et al., 2014). The selection of this species for the present study was based on its ability to adapt to various conditions, such as drought stress and inundation, so that experiments could be carried out under salinity stress. Many studies on the growth of lonkida under inundation have been carried out, but studies under salinity stress remain limited.
MATERIAL AND METHODS
Location and time of research. This research was carried out at the Mycorrhizal Association Plastic House, Southeast Sulawesi Branch, Kendari City, Southeast Sulawesi Province, and at the Laboratory of the Faculty of Forestry and Environmental Sciences, Halu Oleo University, and lasted for five months, from March to July 2019.
Research design. The research design used in this study was a factorial completely randomized design (CRD) consisting of two factors, namely AMF inoculation (control, Acaulospora sp1, and Claroideoglomus etunicatum) and salinity (0, 50, 100, 150, and 200 mM NaCl). Each treatment consisted of 3 replications, and each replication consisted of 3 plant units, so the number of treatment units was 135 experimental units.
Research procedure
Preparation of growing media for seeds and seedlings. River sand and soil were cleaned and then sterilized by heating the media at a specific temperature in a sterilizer. Meanwhile, rice husks were first burned to produce husk charcoal.
Lonkida seed preparation and germination. The seeds of N. orientalis were obtained directly from beneath the stand. Before planting, the seeds were washed in running water to separate them from the fruit pulp, then air-dried and stored in a cooler to maintain their viability. After that, the seeds were ready to be sown in the germination tub.
Weaning of seedlings and inoculation of mycorrhizal fungi. Weaning was done when the first or second leaves were fully open. The seedlings were weaned into polybags containing soil, sand, and husk charcoal (6:1:3) and inoculated with mycorrhizal fungi. Each polybag was given 5 grams of AMF inoculum, buried in the soil beneath the roots.
Salinity treatment. Seedlings that had been weaned were grown for six weeks and then treated with salinity by watering with NaCl solution at levels of 0, 50, 100, 150, and 200 mM, at 49 mL/polybag. These treatment levels are equivalent to 0, 2.925, 5.85, 8.775, and 11.7 grams of NaCl per liter of water, using the conversion

mass of NaCl (g/L) = concentration (mol/L) × molar mass of NaCl (58.5 g/mol).

Watering with NaCl solution was carried out every day for the first week. In the following weeks, NaCl was given once a week for two months. Before applying salinity each week, regular watering of about 50 mL/polybag was first carried out on all polybags every day to prevent salt build-up beyond the experimental concentration.
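As a quick check of these values, a minimal Python sketch of the conversion (using the approximate molar mass of NaCl, 58.5 g/mol):

```python
# Convert NaCl molarity (mM) to grams of NaCl per liter of water.
NACL_MOLAR_MASS = 58.5  # g/mol, approximate

def nacl_grams_per_liter(millimolar: float) -> float:
    return millimolar / 1000.0 * NACL_MOLAR_MASS

for mm in (0, 50, 100, 150, 200):
    print(mm, "mM ->", nacl_grams_per_liter(mm), "g/L")
# 0.0, 2.925, 5.85, 8.775, 11.7 -- matching the treatment levels above
```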
Data analysis. Observations on each observation unit were first analyzed by analysis of variance (F test). If the test results showed a significant effect, treatment means were compared using the Duncan Multiple Range Test (DMRT) at the 95% significance level.
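A minimal sketch of this analysis pipeline in Python; DMRT is not available in the common scientific libraries, so Tukey's HSD is shown as a stand-in post-hoc test, and the column names are hypothetical:

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(df: pd.DataFrame, response: str, factor: str, alpha: float = 0.05):
    """One-way F test, followed by a post-hoc comparison if significant."""
    groups = [g[response].values for _, g in df.groupby(factor)]
    f_stat, p_value = f_oneway(*groups)
    print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
    if p_value < alpha:
        # Tukey HSD as a stand-in for DMRT, which is not in scipy/statsmodels.
        print(pairwise_tukeyhsd(df[response], df[factor], alpha=alpha))
```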
Research result
The results of the analysis of variance for the effects of the AMF and salinity treatments on the observed variables are presented in Table 1. Table 1 shows that the interaction of AMF inoculation and salinity did not significantly affect any observed variable except root colonization and the mycorrhizae inoculation effect (MIE). AMF inoculation had a very significant effect on all observed variables. Meanwhile, salinity had a very significant effect on all observed variables except the seed quality index.
Plant growth
Inoculation with Acaulospora sp1 and C. etunicatum AMF significantly increased the diameter and number of leaves of five-month-old lonkida seedlings relative to the control (Table 2). Treatment with 0 mM salinity significantly increased growth in height, diameter and number of leaves and was significantly different from the other treatments, except for 50 mM salinity in the case of stem diameter. For height, diameter and number of leaves, the susceptibility index was negative for all salinity treatments. Note: the same letter in the same column indicates no significant difference according to the DMRT test (α = 0.05); CV = coefficient of variation; values are means ± standard error; mM = millimolar; a (+) sign indicates that the salinity treatment lowered the value of the measured variable compared to the control, and vice versa (−).
Plant Dry Weight
The results showed that C. etunicatum AMF inoculation increased the root, shoot and total dry weight of N. orientalis seedlings compared to the control and was not significantly different from Acaulospora sp1 except for the root dry weight variable (Table 3). The 0 mM salinity treatment increased root dry weight, shoot dry weight and total dry weight and was significantly different from the other treatments, except for the 50 mM salinity treatment in root dry weight. For shoot, root and total dry weight, the susceptibility index was negative for all salinity treatments.

Table 3 (excerpt), effect of salinity on dry weight:
Salinity | Root dry weight (g) | Shoot dry weight (g) | Total dry weight (g)
100 mM | 0.48 ± 0.04 b(−) | 1.66 ± 0.17 bc(−) | 2.14 ± 0.20 b(−)
150 mM | 0.36 ± 0.06 c(−) | 1.29 ± 0.26 d(−) | 1.66 ± 0.32 c(−)
200 mM | 0.43 ± 0.03 bc(−) | 1.31 ± 0.20 cd(−) | 1.75 ± 0.22 bc(−)
CV (%) | 5.28 | 9.95 | 9.83

Note: the same letter in the same column indicates no significant difference according to the DMRT test (α = 0.05); CV = coefficient of variation; values are means ± standard error; mM = millimolar; a (+) sign indicates that the salinity treatment lowered the value of the measured variable compared to the control, and vice versa (−).
Colonization and Mycorrhizae Inoculation Effect (MIE)
The highest AMF colonization was found in the interaction of C. etunicatum AMF and 50 mM salinity, at 59.03%, which was significantly different from the other treatments (Table 4). For the MIE variable, the highest value, 80.98%, was found in the interaction of Acaulospora sp1 AMF and 150 mM salinity and was significantly different from the other treatments except the interaction of C. etunicatum AMF and 150 mM salinity.
Root Shoot Ratio (NPA) and Lonkida Seed Quality Index (IMB)
The AMF inoculum type C. etunicatum increased the root-shoot ratio (NPA) above the control and was not significantly different from Acaulospora sp1 (Table 5). Inoculation with Acaulospora sp1 AMF increased the seed quality index (IMB) the most and differed significantly from all other treatments. The 0 mM salinity treatment increased the root-shoot ratio and seed quality index, although it was not significantly different from the 50 mM and 100 mM salinity treatments for the IMB variable. For the NPA variable, the 0 mM treatment differed significantly from all other treatments. For both NPA and IMB, the susceptibility index was negative for all salinity treatments. Note: the same letter in the same column indicates no significant difference according to the DMRT test (α = 0.05); CV = coefficient of variation; values are means ± standard error; mM = millimolar; a (+) sign indicates that the salinity treatment lowered the value of the measured variable compared to the control, and vice versa (−).
Plant Relative Growth
Inoculation with both types of AMF significantly increased the relative growth of plants compared to the control (Table 6). The 0 mM salinity treatment significantly increased relative growth and differed significantly from the other treatments, except for the 50 mM and 100 mM salinity treatments in the root dry weight variable. For shoot, root and total relative growth, the susceptibility index was negative for all salinity treatments.

Table 6 (excerpt), effect of salinity on relative growth:
Salinity | Shoot RGR | Root RGR | Total RGR
50 mM | … | 0.0020 ab(−) | 0.009 b(−)
100 mM | 0.0068 bc(−) | 0.0020 ab(−) | 0.008 bc(−)
150 mM | 0.0056 c(−) | 0.0018 c(−) | 0.007 c(−)
200 mM | 0.0054 c(−) | 0.0015 c(−) | 0.0074 c(−)
CV | 0.17 | 0.04 | 0.2

Note: the same letter in the same column indicates no significant difference according to the DMRT test (α = 0.05); CV = coefficient of variation; values are means ± standard error; mM = millimolar; a (+) sign indicates that the salinity treatment lowered the value of the measured variable compared to the control, and vice versa (−).
Discussions
The results showed that five-month-old lonkida (Nauclea orientalis L.) seedlings were colonized by arbuscular mycorrhizal fungi (AMF). AMF treatment gave a better effect than treatment without AMF. AMF colonization is characterized by AMF structures in the form of external and internal hyphae in the five-month-old seedlings; this structure results from fungal infection of plant roots (Sastrahidayat, 2011). Based on the observed percentages, AMF colonization increased the growth of five-month-old lonkida seedlings compared to the controls. Colonization across treatments ranged from 0 to 59.03%. The highest colonization percentage was found in the interaction of C. etunicatum AMF and 50 mM salinity, at 59.03%, which belongs to the high category (Rajapakse and Miller, 1992). The high rate of AMF colonization in the 50 mM salinity treatment was thought to be due to higher soil moisture than in the other treatments. This is in accordance with the opinion of Manurung and Kristina (2018) that the percentage of AMF colonization of plant roots generally increases under wet conditions or high humidity. The higher the salinity level, the faster the plant experiences oxidative stress or water loss. Each type of plant responds differently to AMF; the AMF type C. etunicatum contributed significantly to the growth of lonkida plants.
The interaction of AMF treatment and salinity was not effective in increasing the height, diameter, number of leaves, or dry weight of lonkida plants. However, the salinity treatment level did affect the plants' growth and dry weight. Lonkida plants without salinity treatment had higher total relative growth (RGRt) than the other treatments. The average relative growth of total lonkida seedlings in the treatment without salinity showed that lonkida does not have a high tolerance for salinity stress. This differs from the results of Plenchette and Duponnois (2005), who stated that AMF-inoculated plants have better growth and biomass than non-mycorrhizal plants, especially under stress conditions such as salinity (Al-Karaki, 2000; Feng et al., 2002). The discrepancy is presumably because salt-induced oxidative stress reduces root colonization, growth, leaf area, and plant chlorophyll content, effects that AMF can only partly alleviate (Latef and Chaoxing, 2011).
Independently, the AMF type C. etunicatum effectively increased the plant height of N. orientalis compared to Acaulospora sp1 and the control. The increase in plant height was thought to occur because C. etunicatum can spread through plant roots very quickly and form higher root colonization (Ingleby et al., 2007 in Wulandari, 2019). Meanwhile, Acaulospora sp1 AMF increased growth parameters such as diameter and number of leaves (Table 3). This is in line with research by Delvian (2003), who reported that the application of AMF to Leucaena leucocephala seedlings increased height (177.61%), diameter (154.54%), canopy dry weight (174.68%), and plant dry weight (186.59%) compared with plants without AMF. The results showed that 0 mM salinity effectively increased plant height, diameter, number of leaves, and dry weight compared to the 50 mM, 100 mM, 150 mM, and 200 mM salinity treatments. Presumably, higher salinity levels inhibit plant growth and production through a decrease in the water storage capacity of plants, excessive Na and Cl toxicity, imbalances in nutrient absorption, and changes and deviations in leaf shape and anatomy, which can interfere with physiological processes including photosynthesis (Van Hoorn et al., 2001 in Nasim, 2010; Parida and Das, 2005).
The presence of AMF in plant roots positively influences plant physiological aspects, because AMF can assist plants in maximizing nutrient absorption. The results showed that the highest mycorrhizal dependence was found with Acaulospora sp1 AMF (80.98%) and C. etunicatum (78.72%) at a salinity level of 150 mM (Table 5). This indicates that the higher the salinity level, the higher the plants' dependence on AMF. According to Setiadi (1992) in Saputri and Suwirmen (2016), plants with a high dependence on AMF inoculation will usually show a significant growth response to AMF inoculation.
On the other hand, plants cannot grow properly without symbiosis with AMF. AMF inoculation of five-month-old N. orientalis plants effectively increased shoot dry weight. This is in line with the research of Corkidi and Rincon (1997) in Tuheteru et al. (2012), who found that AMF can increase the growth of four types of tropical plants, particularly the dry weight of roots and shoots relative to the control. This is thought to be because AMF can absorb nutrients to be used in plant growth and metabolic processes such as photosynthesis, by increasing the amount of leaf chlorophyll and P uptake (Pradyudyaningsih, 2004 in Wulandari, 2019; Tuheteru et al., 2012). P uptake can support photosynthesis through the supply of energy in the form of ATP and NADPH, CO2 acceptors, RuBP (ribulose bisphosphate), and the rate of sugar biosynthesis (Rychter and Rao, 2005 in Javaid, 2010).
The results showed that the highest shoot-to-root ratio was found with C. etunicatum AMF inoculation, with an average value of 4.32 (Table 6). According to Zahrul (2018), a high NPA value indicates that the nutrients and water absorbed by the plant are translocated to the shoots for the formation of the vegetative parts of the plant. Among the salinity treatments, the highest root-shoot ratio, 4.44, was found at 0 mM salinity. This is presumably because high salinity levels can interfere with plant growth by reducing water availability and nutrient absorption. This is supported by the opinion of Ghafoor et al. (2004) in Sopandie (2006) that salinity disrupts plant growth and development through a decrease in the osmotic potential of the soil solution, so that the availability of water for plants is reduced, spurring imbalances in nutrient metabolism and changes in the physical and chemical structure of the soil. Meanwhile, the seed quality index (IMB) ranged from 0.21 to 0.38, with the highest value found in the Acaulospora sp1 AMF treatment (0.38) (Table 6). For the salinity treatments, the highest IMB value was found in the 0 mM (control) treatment. According to Junaedi et al. (2010) in Farida (2019), seeds with NPA values ranging from 2 to 5 and IMB values > 0.09 meet the criteria for field planting.
CONCLUSION
Based on the research that has been done, it can be concluded that the interaction of AMF and salinity was not effective in increasing the growth of N. orientalis plants, although it was effective in increasing the percentage of colonization and the mycorrhizae inoculation effect. AMF inoculation improved plant growth. Independently, salinity reduced the height, diameter, number of leaves, plant dry weight, and seed quality index.
Table 2 .
Effect of Treatment on Growth of Lonkida (N. orientalis) Seedlings at Five Months of Age.
Table 3 .
Effect of AMF Type and Salinity Treatments on Dry Weight Variables of Lonkida (N. orientalis) Seedlings at Five Months of Age.
Table 4 .
Observations of the Interaction of AMF and Salinity on Colonization Variables and the Mycorrhizae Inoculation Effect of N. orientalis at Five Months of Age. The same letter in the same column shows no significant difference according to the DMRT test (α = 0.05); CV = coefficient of variation; values are means ± standard error; mM = millimolar.
Table 6 .
The Effect of AMF Inoculation and Salinity Treatments on the Relative Growth of Five-Month-Old Lonkida Plants.
|
v3-fos-license
|
2019-04-25T13:03:23.322Z
|
2019-04-14T00:00:00.000
|
129942972
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8247/12/2/57/pdf",
"pdf_hash": "13e985901a0509958bf56f63b60b9194e0b96764",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45663",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "13e985901a0509958bf56f63b60b9194e0b96764",
"year": 2019
}
|
pes2o/s2orc
|
Relevance of In Vitro Metabolism Models to PET Radiotracer Development: Prediction of In Vivo Clearance in Rats from Microsomal Stability Data
The prediction of in vivo clearance from in vitro metabolism models such as liver microsomes is an established procedure in drug discovery. The potential and limitations of this approach have been extensively evaluated in the pharmaceutical sector; however, this is not the case for the field of positron emission tomography (PET) radiotracer development. The application of PET radiotracers and classical drugs differs greatly with regard to the amount of substance administered. In typical PET imaging sessions, subnanomolar quantities of the radiotracer are injected, resulting in body concentrations that cannot be readily simulated in analytical assays. This raises concerns regarding the predictability of radiotracer clearance from in vitro data. We assessed the accuracy of clearance prediction for three prototypical PET radiotracers developed for imaging the A1 adenosine receptor (A1AR). Using the half-life (t1/2) approach and physiologically based scaling, in vivo clearance in the rat model was predicted from microsomal stability data. Actual clearance could be accurately predicted with an average fold error (AFE) of 0.78 and a root mean square error (RMSE) of 1.6. The observed slight underprediction (1.3-fold) is in accordance with the prediction accuracy reported for classical drugs. This result indicates that the prediction of radiotracer clearance is possible despite concentration differences of more than three orders of magnitude between in vitro and in vivo conditions. Consequently, in vitro metabolism models represent a valuable tool for PET radiotracer development.
Introduction
The application of PET as a tool for molecular neuroimaging is limited by the availability of suitable radiotracers. In radiotracer development, the in vivo performance of a novel compound is determined by numerous physicochemical and pharmacological factors, of which metabolism represents a particularly important one [1]. The metabolic lability of a candidate radiotracer may lead to a rapid decrease of radiotracer plasma concentration, resulting in insufficient brain exposure. This is particularly problematic if longer scan durations are required to properly image the molecular target. Additionally, excessive radiotracer metabolism increases the risk that brain-penetrant radiolabeled metabolites are generated in sufficient amounts to compromise the PET signal. However, metabolic degradation also supports the fast clearance of radioactivity from the blood pool which, on the one hand, improves the target-to-background ratio obtainable during the PET scan and thus the image contrast, and, on the other hand, allows for shorter scan duration [2,3]. These aspects illustrate the importance of a precise adjustment of the metabolic properties of lead compounds during the radiotracer development process to produce promising imaging agents for in vivo application. Various in vitro techniques are available to evaluate the metabolic stability of novel compounds during the preclinical stage. The potential and limitations of these methods have been extensively evaluated in the field of drug discovery and development [4][5][6][7][8]; however, with regard to the development of radiotracers, studies on the physiological relevance of in vitro metabolism models are rare. The in vivo application of PET radiotracers differs greatly from the application of classical drugs, especially in terms of the amount of substance administered. In a typical PET study, the average body concentration of a radiotracer is in the subnanomolar range. Detection of such low concentrations is usually not feasible with the classical analytical techniques employed in metabolic stability assays, especially if structure determination of metabolites is required in addition. Consequently, in vitro radiotracer metabolism studies typically involve substrate concentrations that do not reflect the in vivo scenario, which raises questions about the physiological relevance and predictive power of the generated data that go beyond the fundamental concerns on in vitro system performance arising from classical drug evaluation studies.

In this study, we compared preclinical in vitro and in vivo clearance data of three xanthine-based radioligands for the A1AR. Structural analogs of the methylxanthine caffeine are an important class of A1AR antagonists [9] which, when labeled with a positron-emitting radionuclide such as 11C or 18F, enable the in vivo visualization of the A1AR with PET. To date, [18F]CPFPX (Figure 1) [10,11], which was the first radiolabeled A1AR ligand used in human PET studies [12], is still considered the gold standard for in vivo imaging of the A1AR. Numerous human and animal imaging studies have been successfully conducted using [18F]CPFPX [13][14][15][16]; however, since this radiotracer undergoes rapid metabolic degradation [17,18], continuous efforts have been made to develop metabolically stable analogs that may provide higher image quality during PET scans [19]. Recognizing the C8-substituent at the xanthine core as a main target of metabolic enzymes [17], the development process concentrated predominantly on the synthesis of C8-substituted analogs of [18F]CPFPX. In the present preclinical study, the predictability of radiotracer in vivo clearance from microsomal stability data was evaluated using [18F]CBX, [18F]MCBX and [18F]CPFPX.
Stability in Liver Microsomes
Depletion of CBX, MCBX and CPFPX was evaluated in rat liver microsomes (RLM) at a concentration of 8 µM. Time-courses of substrate disappearance exhibited monoexponential characteristics, as shown in Figure 2 for typical microsomal assays. In vitro t1/2 and intrinsic clearance (CLint) values derived from the monoexponential fits differed substantially between the three analogous compounds (Table 1), with CBX being the most stable (t1/2 = 35.1 min) and CPFPX the least stable analog (t1/2 = 14.0 min).
In Vivo Pharmacokinetic (PK) Studies

The PK profiles of [18F]CBX, [18F]MCBX and [18F]CPFPX are shown in Figure 3. The examination of the semi-logarithmic standardized uptake value (SUV) versus time plots (not shown) revealed three distinctive kinetic phases associated with the decline of the radiotracer concentration in plasma. Consequently, a triexponential model was chosen for curve fitting. The plasma clearance, volume of distribution (Vd) and terminal half-life (t1/2,term) were estimated from the fitted parameters.

The binding of [18F]CBX, [18F]MCBX and [18F]CPFPX to rat plasma proteins was determined via ultrafiltration of the spiked plasma samples. The spiked radiotracer concentrations ranged from approximately 0.4-0.6 nM, resembling in vivo concentrations. All three compounds exhibited high plasma protein binding, with resulting free fractions of less than 5% (Figure 5).
Prediction of In Vivo Clearance from In Vitro Data
The predicted in vivo plasma clearance (CLp) values of CBX, MCBX and CPFPX were calculated from microsomal stability data according to Equations (2)-(4). Corrections were applied for microsomal and plasma protein binding. As can be seen from Table 2, the actual in vivo CLp of the three compounds in rat was accurately predicted by the calculated CLp values, with an average fold error (AFE) of 0.78 (corresponding to an average fold underprediction of 1.3) and a root mean square error (RMSE) of 1.6. All predictions fell within 1-fold of the observed value. Underprediction was largest for CPFPX (fold error of 0.66) and smallest for CBX (fold error of 0.84).
Discussion
The prediction of in vivo metabolic stability from hepatic cellular and subcellular systems is an integral part of drug discovery. It is widely acknowledged that the reliability and accuracy of in vivo clearance predictions from hepatocyte or microsomal data can be affected by the in vitro assay concentration of the drug. Concentrations around or above the KM typically result in saturation of enzyme active sites and thus in enzyme kinetics that do not reflect the in vivo situation. In the field of radiotracer development, the discrepancy between standard assay concentrations (usually in the lower micromolar range) and in vivo radioligand concentrations (subnanomolar range) is particularly pronounced, which leads to further uncertainty regarding in vitro-in vivo extrapolation. In addition, the in vivo pharmacokinetics of tracer amounts of substance can deviate substantially from that of macro doses due to the existence of saturable enzyme and transporter systems as well as high affinity, low capacity binding sites [20,21]. Although PK dose-linearity has been successfully demonstrated for various pharmaceutical compounds in microdosing studies [22][23][24], the extremely high target affinities (usually nanomolar Kd) exhibited by radiotracers developed for molecular brain imaging could potentially lead to deviations in pharmacokinetics between tracer and macro doses as a result of the long retention of the substance in the brain compartment, which in turn reduces its hepatic exposure. The present study evaluates the quantitative prediction of in vivo clearance from microsomal stability data in the rat preclinical model. The examined xanthine A1AR ligands represent ideal model compounds for in vitro-in vivo extrapolation approaches. As small (MW < 400 Da), neutral compounds of medium lipophilicity (log P: 2.2-2.9), CBX, MCBX and CPFPX can be classified as Class 2 drugs according to the extended clearance classification system (ECCS), for which metabolism is the predominant clearance mechanism [25].
The results from in vivo PK studies showed that, following i.v. administration, the three radiotracers were rapidly distributed to extravascular tissues with volumes of distribution that resembled total body water (approximately 600-700 mL/kg in male rats [26,27]). This indicates that the compounds are mainly subject to hepatic metabolism and that plasma clearance can thus be assumed to be equal to hepatic clearance. Although the detailed physiological description of radiotracer disposition in the body is beyond the scope of this study, the existence of three distinct kinetic phases suggests radiotracer distribution between three compartments. The xanthine-based radiotracers can be assumed to cross biological membranes readily, which is confirmed by their relatively high Vd values; therefore, radiotracer distribution between a central plasma compartment and two tissue compartments with individual transport and equilibration characteristics appears to be a reasonable explanatory hypothesis.
The aggregation of individual plasma data into a mean data set enabled a more precise and robust estimation of PK parameters from triexponential fits, since the influence of inherent noise present in the data was substantially reduced. This became particularly evident when calculating Vd and t1/2,term, which are derived from only one microconstant (λ3). The estimation of these parameters from fits of individual plasma curves repeatedly resulted in values which did not fall within physiologically reasonable ranges. The comparison between CL values derived from mean curves and individual curves (deviation < 4%) clearly indicates that data aggregation is a valid approach in the context of this study.
Using the substrate depletion approach [28] and physiologically mechanistic scaling, in vivo clearance was predicted from in vitro stability data. The correlation between predicted and observed clearance was excellent for all three compounds, with only a slight underprediction of 1.3-fold. For comparison, a recent study which examined a large number of published datasets reporting in vitro CLint and actual in vivo CL of classical pharmaceutical compounds reported an average underprediction of drug in vivo CL in RLM of 2.3-fold [29]. Additionally, when taking plasma protein binding into account, the rank order of in vivo metabolic stability could be accurately predicted from microsomal stability assays. In RLM, the rank order of metabolic stability (expressed by t1/2) was CBX > MCBX > CPFPX. When scaled to predicted CLp, the rank order changed to CBX < CPFPX < MCBX, with MCBX exhibiting higher clearance than CPFPX. This is in accordance with the actual in vivo observations, suggesting a substantial impact of plasma protein binding on the clearance of the model compounds. There is considerable controversy in literature on whether the extent of plasma protein binding correlates with clearance prediction accuracy. While several authors demonstrated a clear trend towards underprediction with highly bound drugs [30][31][32], others reported a lack of correlation between free fraction and prediction bias [29,33] or mixed effects depending on the physicochemical characteristics of the drug (acidic, basic or neutral) [28]. However, for the xanthine derivatives used in the present study, correction for plasma protein binding substantially improved the prediction of both clearance value and rank order. This can be explained by the specific physicochemical and pharmacological properties of these compounds. The combination of high plasma protein binding, moderate lipophilicity (which suggests medium membrane permeability) and relatively low intrinsic clearance (< Q) typically limits the hepatic extraction of a compound, which in turn affects its hepatic clearance [34][35][36].
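To make the scaling chain concrete, here is a minimal Python sketch of the substrate-depletion approach with the scaling factors and binding corrections described in the Data Analysis section; the Austin-type microsomal binding term and all input values are illustrative assumptions, not the study's data:

```python
import numpy as np

Q_H = 55.0            # rat hepatic blood flow, mL/min/kg
MIC_PER_LIVER = 60.0  # mg microsomal protein per g liver
LIVER_PER_BW = 40.0   # g liver per kg body weight

def cl_int(t_half_min, protein_mg_per_ml):
    """Intrinsic clearance (mL/min/kg) from the in vitro half-life."""
    return (np.log(2) / t_half_min) / protein_mg_per_ml * MIC_PER_LIVER * LIVER_PER_BW

def fu_mic(log_p, protein_mg_per_ml):
    """Unbound fraction in microsomes (Austin-type relationship, assumed)."""
    return 1.0 / (1.0 + protein_mg_per_ml * 10 ** (0.56 * log_p - 1.41))

def cl_plasma(clint, fumic, f_p):
    """Well-stirred liver model with microsomal and plasma binding corrections."""
    clint_u = f_p * clint / fumic
    return Q_H * clint_u / (Q_H + clint_u)

def afe(predicted, observed):
    """Average fold error: 10**(mean of the log10 fold errors)."""
    return 10 ** np.mean(np.log10(np.asarray(predicted) / np.asarray(observed)))

# Illustrative only: CPFPX-like inputs (t1/2 = 14 min at 0.5 mg/mL protein)
clint_val = cl_int(14.0, 0.5)
print(cl_plasma(clint_val, fu_mic(2.6, 0.5), f_p=0.03))
```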
The PK profiles of the novel cyclobutyl-substituted A1AR ligands differed distinctively from that of [18F]CPFPX. In the second and third phases of the curve (10-180 min p.i.), the plasma level of [18F]CBX was approximately twice as high as that of [18F]CPFPX, which corresponds to the considerably longer terminal half-life. In terms of imaging performance, this could potentially result in enhanced radiotracer delivery to the brain, since passive diffusion across the blood-brain barrier is driven by concentration. By contrast, [18F]MCBX showed a faster decline in plasma concentration in the first and second phases of the curve (0-40 min p.i.) than [18F]CPFPX. Although this could possibly lead to reduced brain exposure (depending on the extraction ratio of the radiotracer at the blood-brain barrier), reduced plasma radioactivity also diminishes background noise during the measurement, which improves the quality of the PET image. In view of these results, further evaluation studies should be conducted to assess the brain imaging performance of the novel A1AR radiotracers.
In conclusion, the present study underlines the value of in vitro metabolism models for radiotracer development. The data provide unequivocal evidence that accurate in vitro prediction of in vivo clearance is feasible despite concentration differences of more than three orders of magnitude between the in vitro and in vivo situation. This result encourages the implementation of in vitro stability studies as an integral part of the preclinical evaluation of novel PET radiotracers and suggests additional studies on the ability of human liver microsomes to a priori predict human radiotracer metabolism. Moreover, the novel cyclobutyl-substituted [18F]xanthines warrant further evaluation as candidate A1AR imaging agents.
Animals
All animal experiments were conducted in accordance with the German Animal Welfare Act and approved by the governmental authorities (AZ: 84-02.04.2014.A496). Male Sprague Dawley rats (mean body weight at testing: 503 ± 44 g) were obtained from Charles River Laboratories (Sulzfeld, Germany). They were housed two to three per cage under standard conditions (12-h light/12-h dark cycle, 22 °C) with access to food and water ad libitum.
Data Analysis
Substrate depletion was calculated from the area ratios of the analyte peak, using the value at t = 0 min as 100%. Depletion data were fitted to the monoexponential decay model (Equation (1)) to derive in vitro t1/2:

C(t) = C0 · e^(−k·t)    (1)

where C0 is the substrate concentration at time t = 0 and k is the first-order depletion rate constant, with t1/2 = ln 2 / k.
Intrinsic clearance was calculated from in vitro t1/2 using the equation [38]:

CLint = (ln 2 / t1/2) × (mL incubation / mg microsomal protein) × (60 mg microsomal protein / g liver) × (40 g liver / kg body weight)    (2)

where scaling factors of 60 mg of microsomal protein per gram of liver [39] and 40 g of liver tissue per kilogram of body weight [40] were applied.

The unbound fraction in microsomes (fu,mic) was estimated using the following lipophilicity relationship algorithm [41]:

fu,mic = 1 / (1 + P · 10^(0.56·logP − 1.41))    (3)

where P is the microsomal protein concentration (in mg/mL) and logP the octanol-water partition coefficient. The blood/plasma concentration ratio was assumed to be equal to 1 for the neutral xanthine compounds. In vivo clearance in plasma was predicted using the well-stirred liver model [42,43]:

CLp = Q · fp · (CLint / fu,mic) / (Q + fp · (CLint / fu,mic))    (4)

where fp is the fraction unbound in plasma and Q is hepatic blood flow with a given value of 55 mL/min/kg for rat [40]. The individual prediction accuracy was assessed by calculation of fold error (ratio predicted/observed). AFE (Equation (5)) and RMSE (Equation (6)) were used as measures for overall bias and precision. Underprediction was also expressed as fold underprediction, which is the inverse of AFE:

AFE = 10^((1/n) Σ log(predicted/observed))    (5)

RMSE = √((1/n) Σ (predicted − observed)²)    (6)

with n, number of predictions.

Blood samples (approximately 200 µL) were collected at regular time intervals throughout the 180-min experiment. The total blood sampling volume was kept below 10% of the circulating blood volume of the animal. Plasma was separated by centrifugation (3,000 rcf, 3 min, 21 °C), weighed and measured in a γ-counter (ISOMED 2100, MED Nuklear-Medizintechnik Dresden GmbH, Dresden, Germany) to calculate the plasma radioactivity concentration. Fractions of unchanged radiotracer (parent fraction) and radiolabeled metabolites in plasma were assessed by radio-thin layer chromatography (TLC) analysis. Aliquots (45 µL) of plasma were mixed with 3 volumes of methanol/acetonitrile (50:50, v/v, 4 °C), vortexed (1 min, 21 °C) and centrifuged (20,000 rcf, 5 min, 21 °C) to sediment precipitated protein. Aliquots (5 µL) of the supernatants were spotted on a TLC plate (SIL G-25, 10 × 20 cm, Macherey-Nagel, Düren, Germany). The TLC plate was developed with ethyl acetate/hexane, 75:25 (v/v), dried and subsequently imaged for 50 min with an electronic autoradiography system (InstantImager, Canberra-Packard, Rüsselsheim, Germany).
Pharmacokinetic Analysis
PK analysis was performed on decay- and metabolite-corrected plasma radioactivity data of 8 ([18F]CBX, [18F]CPFPX) or 9 ([18F]MCBX) individual animals. Data of 2 animals could not be used for PK analysis due to paravenous radiotracer injection. Since interindividual variations in plasma kinetics within the test groups were relatively small, individual plasma data were combined into mean data sets for analysis. Assuming a specific density of 1 g/mL for plasma, the radioactivity concentration was calculated and plotted against time. For data visualization, the plasma radioactivity concentration was normalized to body weight and amount of injected radioactivity, yielding SUV. PK parameters were derived from the radioactivity concentration-time data via nonlinear regression analysis applying a triexponential model:

Cp(t) = A1·e^(−λ1·t) + A2·e^(−λ2·t) + A3·e^(−λ3·t)    (7)

where Cp is the plasma radioactivity concentration, t is time, A1, A2, and A3 represent the y-intercepts of the distribution/elimination phases of the plasma concentration-time curve and λ1, λ2, and λ3 represent the first-order rate constants of the phases. Plasma clearance, volume of distribution and terminal half-life were calculated from the model parameters A and λ according to the following equations [44]:

CLp = D / (A1/λ1 + A2/λ2 + A3/λ3)    (8)

Vd = CLp / λ3    (9)

t1/2,term = ln 2 / λ3    (10)

where D is the injected radioactivity and λ3 is the terminal rate constant.
To validate the results obtained from fitting mean data sets, CLp was also calculated from individual PK profiles.
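As an illustration of this fitting step, a minimal sketch using scipy; the initial guesses and data layout are placeholders rather than the study's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def triexp(t, a1, l1, a2, l2, a3, l3):
    """Triexponential plasma model, Eq. (7)."""
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t) + a3 * np.exp(-l3 * t)

def pk_parameters(t, conc, dose, p0=(10, 1.0, 5, 0.1, 1, 0.01)):
    """Fit Eq. (7) and derive CLp, Vd and terminal half-life, Eqs. (8)-(10)."""
    (a1, l1, a2, l2, a3, l3), _ = curve_fit(triexp, t, conc, p0=p0, maxfev=10000)
    auc = a1 / l1 + a2 / l2 + a3 / l3
    cl_p = dose / auc
    lam3 = min(l1, l2, l3)            # terminal (slowest) rate constant
    return cl_p, cl_p / lam3, np.log(2) / lam3
```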
Plasma Protein Binding
The binding of the radiotracer to plasma proteins was assessed via ultrafiltration, using Microcon-30 kDa centrifugal filter units (Merck Millipore, Darmstadt, Germany). Prior to radiotracer administration, blood plasma (200-300 µL) was sampled from the animal, spiked with 5-6 kBq of the radiotracer solution and incubated for 1 h at 37 °C. Subsequently, 100 µL of the spiked plasma was loaded onto the filter units, which were then centrifuged for 20 min at 14,000 rcf and 37 °C. Radioactivity in equal volumes (50 µL) of spiked plasma and filtrate was measured in a γ-counter to calculate free fractions. Significant differences between plasma free fractions were assessed by one-way analysis of variance (ANOVA) followed by a post-hoc Tukey test. The significance level was set to 0.05. Normal distribution of the data and homogeneity of variances were assumed.
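The free-fraction computation and the described statistics can be sketched as follows (illustrative values only; scipy's tukey_hsd stands in for the post-hoc test):

```python
from scipy.stats import f_oneway, tukey_hsd

def free_fraction(filtrate_counts, plasma_counts):
    """Unbound fraction from equal-volume gamma counts (ultrafiltration)."""
    return filtrate_counts / plasma_counts

# Hypothetical free fractions for three tracers (n = 3 animals each)
fu_a = [0.020, 0.030, 0.025]
fu_b = [0.040, 0.035, 0.045]
fu_c = [0.010, 0.015, 0.012]

print(f_oneway(fu_a, fu_b, fu_c))   # one-way ANOVA
print(tukey_hsd(fu_a, fu_b, fu_c))  # pairwise post-hoc comparisons
```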
|
v3-fos-license
|
2022-03-16T01:15:56.011Z
|
2022-03-15T00:00:00.000
|
247450555
|
{
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ems.press/content/serial-article-files/27834",
"pdf_hash": "250e6da508eeb612fdaad5d67851f4dd41a47ede",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45664",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "250e6da508eeb612fdaad5d67851f4dd41a47ede",
"year": 2022
}
|
pes2o/s2orc
|
Discrete approximations to Dirac operators and norm resolvent convergence
We consider continuous Dirac operators defined on $\mathbf{R}^d$, $d\in\{1,2,3\}$, together with various discrete versions of them. Both forward-backward and symmetric finite differences are used as approximations to partial derivatives. We also allow a bounded, H\"older continuous, and self-adjoint matrix-valued potential, which in the discrete setting is evaluated on the mesh. Our main goal is to investigate whether the proposed discrete models converge in norm resolvent sense to their continuous counterparts, as the mesh size tends to zero and up to a natural embedding of the discrete space into the continuous one. In dimension one we show that forward-backward differences lead to norm resolvent convergence, while in dimension two and three they do not. The same negative result holds in all dimensions when symmetric differences are used. On the other hand, strong resolvent convergence holds in all these cases. Nevertheless, and quite remarkably, a rather simple but non-standard modification to the discrete models, involving the mass term, ensures norm resolvent convergence in general.
Introduction
We study in detail in what sense continuous Dirac operators can be approximated by a family of discrete operators indexed by the mesh size. To investigate spectral properties based on the discrete models, it is essential to know whether we can obtain norm resolvent convergence or only strong resolvent convergence of the discrete models (suitably embedded into the continuum) to the continuous Dirac operators.
In this paper we present a remarkable new phenomenon. In dimensions two and three we cannot obtain norm resolvent convergence of the discrete operators (embedded into the continuum) as the mesh size tends to zero, if we use the natural discretizations based on either symmetric first order differences or a pair of forward-backward first order differences. The models require a simple modification to obtain norm resolvent convergence. In dimension one the discretization using a pair of forward-backward first order differences does lead to norm resolvent convergence, whereas the model based on symmetric first order differences does not.
For mesh size h > 0 the corresponding discrete spaces are denoted by H^d_h = ℓ²(hZ^d) ⊗ C^{ν(d)}, d = 1, 2, 3. The norm on H^d_h is given by

‖u_h‖²_{H^d_h} = h^d ∑_{k∈Z^d} |u_h(k)|².

Here |·| denotes the Euclidean norm on C^{ν(d)}. We index u_h by k ∈ Z^d; the h dependence is in the subscript of u_h.

The embedding and discretization operators between H^d = L²(R^d) ⊗ C^{ν(d)} and H^d_h are constructed as in [2, section 2]. We describe the construction briefly, with further details and assumptions given in section 2. Let φ₀, ψ₀ ∈ L²(R^d) and assume that {φ₀(· − k)}_{k∈Z^d} and {ψ₀(· − k)}_{k∈Z^d} are a pair of biorthogonal Riesz sequences in L²(R^d). Define φ_{h,k}(x) = φ₀((x − hk)/h) and ψ_{h,k}(x) = ψ₀((x − hk)/h), x ∈ R^d, k ∈ Z^d, h > 0. The embedding operator J_h is then defined as

J_h u_h = ∑_{k∈Z^d} u_h(k) φ_{h,k}.

Note that here φ_{h,k}(x) is a scalar multiplying a vector u_h(k) ∈ C^{ν(d)}. To construct the discretization operator, let J̃_h be defined as J_h with φ₀ replaced by ψ₀. The discretization operator is then defined as K_h = (J̃_h)*; it can be written explicitly as

(K_h f)(k) = h^{−d} ⟨ψ_{h,k}, f⟩, k ∈ Z^d,

acting componentwise on H^d. The question of interest is in what sense J_h(H_{0,h} − zI_h)^{−1}K_h will converge to (H_0 − zI)^{−1} as h → 0. We now summarize the results obtained. First we briefly define the operators considered. Let σ_j, j = 1, 2, 3, denote the Pauli matrices

σ₁ = [[0, 1], [1, 0]], σ₂ = [[0, −i], [i, 0]], σ₃ = [[1, 0], [0, −1]].

Let m ≥ 0 denote the mass. To simplify we do not indicate dependence on the mass in the notation for operators. In dimension d = 1 the free Dirac operator is given by the operator matrix

H_0 = −iσ₁ d/dx + mσ₃ = [[m, −i d/dx], [−i d/dx, −m]]

on H^1. We consider two discrete approximations based on replacing −i d/dx by finite difference operators. Let I_h denote the identity operator on ℓ²(hZ). We define

H^{fb}_{0,h} = [[mI_h, −iD⁺_h], [−iD⁻_h, −mI_h]] and H^s_{0,h} = [[mI_h, −iD^s_h], [−iD^s_h, −mI_h]], where D^s_h = ½(D⁺_h + D⁻_h).

Here the forward and backward finite difference operators are defined as

(D⁺_h u_h)(k) = (u_h(k+1) − u_h(k))/h and (D⁻_h u_h)(k) = (u_h(k) − u_h(k−1))/h.

In dimension d = 2 the free Dirac operator is defined as

H_0 = −iσ₁∂₁ − iσ₂∂₂ + mσ₃ = [[m, −i∂₁ − ∂₂], [−i∂₁ + ∂₂, −m]]

on H^2. As in the d = 1 case, there are two natural discrete models given by

H^{fb}_{0,h} = [[mI_h, −iD⁺_{h;1} − D⁺_{h;2}], [−iD⁻_{h;1} + D⁻_{h;2}, −mI_h]] and H^s_{0,h} = [[mI_h, −iD^s_{h;1} − D^s_{h;2}], [−iD^s_{h;1} + D^s_{h;2}, −mI_h]].

Here D^±_{h;j} and D^s_{h;j} are the corresponding finite differences in the j'th coordinate. It turns out that these two discrete models do not lead to norm resolvent convergence, so we also define two modified versions. Let −∆_h denote the discrete Laplacian; see (2.4). The modified operators H̃^{fb}_{0,h} and H̃^s_{0,h} are obtained from H^{fb}_{0,h} and H^s_{0,h} by adding to the mass term a correction proportional to h(−∆_h) (a Wilson-type modification); the precise definitions are given in section 4. The details on the discretizations in dimension d = 3 can be found in section 5. Let K_1 and K_2 be two Hilbert spaces. The space of bounded operators from K_1 to K_2 is denoted by B(K_1, K_2). If K_1 = K_2 = K we write B(K) = B(K, K). In the following theorem we collect the positive results obtained on norm resolvent convergence in B(H^d). We use the convention (−0, 0) = ∅ in the statements of results.
Theorem 1.1. Let H_{0,h} denote either the operator H^fb_{0,h} in dimension d = 1, or one of the modified operators H̃^s_{0,h} (d = 1, 2, 3) and H̃^fb_{0,h} (d = 2, 3). Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H^d)} ≤ Ch (1.4)

for all z ∈ K and h ∈ (0, 1].

Theorem 1.1 can be generalized to also include a potential, by following the approach in [2]. Let V : R^d → B(C^{ν(d)}) be bounded and Hölder continuous. Assume V(x) is self-adjoint for each x ∈ R^d. Define the discretization as V_h(k) = V(hk) for k ∈ Z^d. Then we can define self-adjoint operators H = H_0 + V on H^d and H_h = H_{0,h} + V_h on H^d_h for all the discrete models. The results in Theorem 1.1 then generalize to H and H_h, with an estimate Ch^{θ′}, where 0 < θ′ < 1 depends on the Hölder exponent for V; see section 7.
In the next theorem we summarize some negative results with non-convergence in the B(H^d)-operator norm in part (i), and in part (ii) a result using the Sobolev spaces H^1(R^d) ⊗ C^{ν(d)}.

Theorem 1.2. Let H_{0,h} denote one of the operators H^s_{0,h} (d = 1, 2, 3) or H^fb_{0,h} (d = 2, 3), and let K ⊂ C \ R be compact. Then:
(i) J_h(H_{0,h} − zI_h)^{−1}K_h does not converge to (H_0 − zI)^{−1} in the B(H^d)-operator norm as h → 0.
(ii) There exists C > 0 such that

‖J_h(H_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H^1(R^d)⊗C^{ν(d)}, H^d)} ≤ Ch (1.5)

for all z ∈ K and h ∈ (0, 1].

The estimate (1.4) implies results on the spectra of the operators H_{0,h} and H_0 and their relation, see [2, section 5]. Such results are not obtainable from the strong convergence implied by the estimate (1.5). Thus we are in the remarkable situation that in dimensions d = 2, 3 we need to modify the natural discretizations in order to obtain spectral information. Furthermore, in dimension d = 1 to obtain spectral information we must use either the forward-backward discretizations or the modified symmetric discretizations. Moreover, this is relevant for resolving the unwanted fermion doubling phenomenon that is present in some discretizations of Dirac operators [1].
Results of the type (1.4) were first obtained by Nakamura and Tadano [5] for H = −∆ + V on L 2 (R d ) and H h = −∆ h + V h on ℓ 2 (hZ d ) for a large class of real potentials V , including unbounded V . They used special cases of the J h and K h as defined here, i.e. the pair of biorthogonal Riesz sequences is replaced by a single orthonormal sequence. Recently their results have been applied to quantum graph Hamiltonians [3]. In [4] the continuum limit is studied for a number of different problems. Here strong resolvent convergence is proved up to the spectrum and scattering results are derived.
In [2] the authors proved results of the type (1.4) for a class of Fourier multipliers H 0 and their discretizations H 0,h , and obtained results of the type (1.4) for perturbations H = H 0 + V and H h = H 0,h + V h with a bounded, real-valued, and Hölder continuous potential. Note that the results in [2] do not directly apply to Dirac operators, since the free Dirac operators do not satisfy an essential symmetry condition [2, Assumption 3.1(4)]. In [7] Schmidt and Umeda proved strong resolvent convergence for Dirac operators in dimension d = 2 using the discretization H fb 0,h . They allow a class of bounded non-selfadjoint potentials and also state corresponding results for dimensions d = 1, 3.
The remainder of this paper is organized as follows. Section 2 introduces additional notation and operators used in the paper. Sections 3, 4, and 5 prove Theorem 1.1 and Theorem 1.2(i) in the one-, two-, and three-dimensional cases, respectively. Since some of the arguments are very similar in the different dimensions, we give the full details in dimension two, and omit those parts of the proofs in dimensions one and three that are essentially verbatim repetitions. Theorem 1.2(ii) is proved in section 6. Finally we show how a potential V can be added to our results in section 7.
Preliminaries
In this section we collect a number of definitions and results used in the sequel.
Notation for identity operators
We use the following notation for identity operators on various spaces: I on L²(R^d), I_h on ℓ²(hZ^d), 1 on C², and 1 on C⁴. In section 5, in the definitions of the operator matrices for the free Dirac operator and its discretizations, 1 denotes the identity on L²(R³) ⊗ C² and 1_h denotes the identity on ℓ²(hZ³) ⊗ C².
Finite differences
The forward, backward, and symmetric difference operators on H^1_h are defined in (1.2) and (1.3). Let {e_1, e_2, e_3} be the canonical basis in Z³. The forward partial difference operators for mesh size h are defined by

(D⁺_{h;j} u_h)(k) = (1/(ih))(u_h(k + e_j) − u_h(k)), (2.1)

and the backward partial difference operators by

(D⁻_{h;j} u_h)(k) = (1/(ih))(u_h(k) − u_h(k − e_j)). (2.2)

The symmetric difference operators are given by

D^s_{h;j} = ½(D⁺_{h;j} + D⁻_{h;j}). (2.3)

Note that (D⁺_{h;j})* = D⁻_{h;j} and (D^s_{h;j})* = D^s_{h;j}. The discrete Laplacian acting on ℓ²(hZ^d) is given by

(−Δ_h u_h)(k) = h^{−2} ∑_{j=1}^d (2u_h(k) − u_h(k + e_j) − u_h(k − e_j)). (2.4)

We denote by F : H^d → H^d the Fourier transform, acting componentwise, with adjoint F* : H^d → H^d. We suppress their dependence on d in the notation, as it will be obvious in which dimension they are used.
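These definitions are easy to sanity-check numerically. The following Python sketch uses a periodic truncation of the lattice (an artifact of the illustration, chosen so that the adjoint relations hold exactly as matrix identities) and verifies (D⁺_h)* = D⁻_h, the self-adjointness of D^s_h, and the factorization −Δ_h = (D⁺_h)* D⁺_h in dimension one.

```python
import numpy as np

def shift(N, s):
    """Matrix of the periodic shift (S_s u)(k) = u(k + s) on C^N."""
    return np.roll(np.eye(N), -s, axis=0)

N, h = 64, 0.1
I = np.eye(N)

# (D+ u)(k) = (u(k+1) - u(k))/(ih),  (D- u)(k) = (u(k) - u(k-1))/(ih)
Dp = (shift(N, 1) - I) / (1j * h)
Dm = (I - shift(N, -1)) / (1j * h)
Ds = 0.5 * (Dp + Dm)

# Adjoints w.r.t. the inner product h * sum_k conj(u(k)) v(k); the common
# factor h cancels, so plain matrix adjoints suffice here.
print(np.allclose(Dp.conj().T, Dm))   # (D+)* = D-
print(np.allclose(Ds.conj().T, Ds))   # (D^s)* = D^s

# (-Delta_h u)(k) = (2u(k) - u(k+1) - u(k-1))/h^2 = (D+)* D+  >= 0
L = (2 * I - shift(N, 1) - shift(N, -1)) / h**2
print(np.allclose(L, Dm @ Dp))
print(np.linalg.eigvalsh(L).min() >= -1e-10)
```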
Embedding and discretization operators
We describe in some detail how the embedding and discretization operators in [2, section 2] are adapted to the Dirac case.
Let K be a Hilbert space. Let {u_k}_{k∈Z^d} and {v_k}_{k∈Z^d} be two sequences in K. They are said to be biorthogonal if

⟨u_k, v_l⟩ = δ_{k,l} for all k, l ∈ Z^d.

Assumption 2.1. Assume that {ϕ_{1,k}}_{k∈Z^d} and {ψ_{1,k}}_{k∈Z^d} are biorthogonal Riesz sequences in L²(R^d).
To simplify, we omit the dependence on d in the notation for embedding and discretization operators. The embedding operators J_h : H^d_h → H^d are defined as

J_h u_h = ∑_{k∈Z^d} u_h(k) ϕ_{h,k}. (2.5)

For d = 1, 2, writing u_h = (u¹_h, u²_h), the notation above means

J_h u_h = (∑_{k∈Z^d} u¹_h(k) ϕ_{h,k}, ∑_{k∈Z^d} u²_h(k) ϕ_{h,k}),

with an obvious modification in case d = 3. As a consequence of the Riesz sequence assumption we get a uniform bound

‖J_h‖_{B(H^d_h, H^d)} ≤ C, h ∈ (0, 1].

The operators J̃_h are defined as above by replacing ϕ_{h,k} by ψ_{h,k} in (2.5). Then the discretization operators are defined as K_h = (J̃_h)*. Explicitly, for d = 1, 2,

(K_h f)(k) = h^{−d} (⟨ψ_{h,k}, f¹⟩, ⟨ψ_{h,k}, f²⟩), f = (f¹, f²) ∈ H^d,

with an obvious modification for d = 3. We have the uniform bound

‖K_h‖_{B(H^d, H^d_h)} ≤ C, h ∈ (0, 1].
Biorthogonality implies that K_h J_h = I_h, the identity operator on H^d_h, for all h > 0.
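As a concrete illustration of J_h, K_h, and the identity K_h J_h = I_h, the following minimal Python sketch uses the simplest admissible choice ϕ_0 = ψ_0 = χ_{[0,1)} in d = 1 (an orthonormal, hence biorthogonal Riesz, sequence of integer translates). Note that this particular ϕ_0 does not satisfy the Fourier support condition in Assumption 2.2 below, so the sketch only illustrates the algebraic identities and norm conventions, not the convergence theorems; the truncation to finitely many lattice sites and all grid sizes are likewise artifacts of the illustration.

```python
import numpy as np

def embed(u_h, h, x):
    """J_h u_h: piecewise-constant function sum_k u_h(k) phi_0((x - h k)/h),
    with phi_0 the indicator of [0, 1); u_h is indexed by k = 0..N-1."""
    k = np.floor(x / h).astype(int)            # which cell x lies in
    inside = (k >= 0) & (k < len(u_h))
    out = np.zeros_like(x, dtype=complex)
    out[inside] = u_h[k[inside]]
    return out

def discretize(f, h, N, pts_per_cell=200):
    """K_h f: (K_h f)(k) = h^{-1} <psi_{h,k}, f> = cell average over [hk, h(k+1))."""
    u = np.zeros(N, dtype=complex)
    for k in range(N):
        x = np.linspace(k * h, (k + 1) * h, pts_per_cell, endpoint=False)
        u[k] = f(x).mean()                     # h^{-1} * integral over the cell
    return u

h, N = 0.1, 50
rng = np.random.default_rng(0)
u_h = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# K_h J_h = I_h (biorthogonality):
v_h = discretize(lambda y: embed(u_h, h, y), h, N)
print(np.max(np.abs(v_h - u_h)))               # ~ 0

# The discrete norm h * sum |u_h(k)|^2 approximately matches ||J_h u_h||^2
# (up to quadrature error of the trapezoidal rule):
x = np.linspace(0, N * h, 20000, endpoint=False)
print(h * np.sum(np.abs(u_h)**2), np.trapz(np.abs(embed(u_h, h, x))**2, x))
```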
A further assumption on the functions ϕ 0 and ψ 0 is needed.
Assumption 2.2. Let ϕ_0, ψ_0 ∈ L²(R^d) be essentially bounded and satisfy Assumption 2.1. Assume further that the Fourier transforms ϕ̂_0 and ψ̂_0 are essentially bounded, with

supp(ϕ̂_0) ⊆ [−3π/2, 3π/2]^d and supp(ψ̂_0) ⊆ [−3π/2, 3π/2]^d.
Two lemmas
We often use the following elementary result, where the identity matrix is denoted by I.
Lemma 2.3. Let A ∈ B(C^n) be self-adjoint and assume A² = aI for some a ≥ 0. Then

‖(A − iI)^{−1}‖_{B(C^n)} = (1 + a)^{−1/2} (2.6)

and

‖A(A − iI)^{−1}‖_{B(C^n)} ≤ 1. (2.7)

Proof. It suffices to prove (2.6), since (2.7) then follows from A² = aI. We use the C*-identity in B(C^n) to get

‖(A − iI)^{−1}‖² = ‖(A − iI)^{−1}((A − iI)^{−1})*‖ = ‖(A² + I)^{−1}‖ = (1 + a)^{−1}.

The following lemma will be used in the proofs related to the non-convergence results; see e.g. [6, Theorem XIII.83].

Lemma 2.4. Let A(·) be a measurable function on R^d with values in the self-adjoint n × n matrices, and let A denote the corresponding maximal multiplication operator on L²(R^d) ⊗ C^n. Then

‖A‖ = ess sup_{ξ∈R^d} ‖A(ξ)‖_{B(C^n)}.
The 1D free Dirac operator
We state and prove results for the 1D Dirac operator. On H^1 the one-dimensional free Dirac operator with mass m ≥ 0 is given by the operator matrix

H_0 = [m I, −i d/dx; −i d/dx, −m I],

where I denotes the identity operator on L²(R).
The 1D forward-backward difference model
The discrete model based on the pair of forward-backward difference operators (1.2) is

H^fb_{0,h} = [m I_h, D⁺_h; D⁻_h, −m I_h],

where I_h denotes the identity operator on ℓ²(hZ). The operators H_0 and H^fb_{0,h} are given as multipliers in Fourier space by the functions G_0 and G^fb_{0,h}, respectively, where

G_0(ξ) = [m, ξ; ξ, −m], ξ ∈ R, (3.1)

and

G^fb_{0,h}(ξ) = [m, (e^{ihξ} − 1)/(ih); −(e^{−ihξ} − 1)/(ih), −m]. (3.2)

Define

g_0(ξ) = ξ² + m² (3.3)

and

g^fb_{0,h}(ξ) = (4/h²) sin²(hξ/2) + m². (3.4)

A computation shows that

G_0(ξ)² = g_0(ξ)1 and G^fb_{0,h}(ξ)² = g^fb_{0,h}(ξ)1. (3.5)

Lemma 3.1. There exists C > 0 such that

‖(G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ C(1 + |ξ|)^{−1}, ξ ∈ R, (3.6)

and

‖(G^fb_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≤ C(1 + |ξ|)^{−1}, hξ ∈ [−3π/2, 3π/2], h ∈ (0, 1]. (3.7)

Proof. Using Lemma 2.3 together with (3.3) and (3.5) we get

‖(G_0(ξ) − i1)^{−1}‖_{B(C²)} = (1 + ξ² + m²)^{−1/2},

proving (3.6).
To prove (3.7) we use Lemma 2.3, (3.4), and (3.5) to get

‖(G^fb_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} = (1 + g^fb_{0,h}(ξ))^{−1/2}.

There exists c > 0 such that for |θ| ≤ 3π/2

(4/θ²) sin²(θ/2) ≥ c,

so that g^fb_{0,h}(ξ) ≥ cξ² + m² for hξ ∈ [−3π/2, 3π/2], and (3.7) follows.

Lemma 3.2. There exists C > 0 such that

‖(G^fb_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ Ch, hξ ∈ [−3π/2, 3π/2], h ∈ (0, 1].

Proof. By the resolvent identity it suffices to bound the entries of G^fb_{0,h}(ξ) − G_0(ξ). To estimate the 12 and 21 entries in G^fb_{0,h}(ξ) − G_0(ξ) we use Taylor's formula:

(e^{±ihξ} − 1)/(±ih) − ξ = (e^{±ihξ} − 1 ∓ ihξ)/(±ih), |e^{iθ} − 1 − iθ| ≤ θ²/2.

It follows that the 12 and 21 entries are estimated by Ch|ξ|². Using Lemma 3.1 the result follows.
Using Lemmas 3.1 and 3.2 we can adapt the arguments in [2] to obtain the following result. We omit the details here, and refer the reader to the proof of Theorem 4.4 where details of the adaptation are given.

Theorem 3.3. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H^fb_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H¹)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].
The 1D symmetric difference model
The discrete model based on the symmetric difference operator (1.3) is

H^s_{0,h} = [m I_h, D^s_h; D^s_h, −m I_h]. (3.8)

In Fourier space it is a multiplier with symbol

G^s_{0,h}(ξ) = [m, sin(hξ)/h; sin(hξ)/h, −m]. (3.9)

We have G^s_{0,h}(ξ)² = g^s_{0,h}(ξ)1 with

g^s_{0,h}(ξ) = sin²(hξ)/h² + m². (3.10)

Since sin(hξ)/h vanishes at hξ = ±π, the symbol G^s_{0,h} takes the value mσ_3 at these points, which leads to the following lower bound.

Lemma 3.4. There exists c > 0 such that

sup_{ξ∈R} ‖(G^s_{0,h}(ξ) − i1)^{−1} − (G^fb_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≥ c

for all h ∈ (0, 1].
Using Lemmas 2.4 and 3.4 together with Theorem 3.3 and properties of J_h and K_h, we get the following result.

Theorem 3.5. Let z ∈ C \ R. Then J_h(H^s_{0,h} − zI_h)^{−1}K_h does not converge to (H_0 − zI)^{−1} in the B(H¹)-operator norm as h → 0.
We can introduce a modified operator H̃^s_{0,h} given by

H̃^s_{0,h} = H^s_{0,h} + (h/2)(−Δ_h)σ_3,

where −Δ_h is the 1D discrete Laplacian; see (2.4). We obtain norm resolvent convergence for the modified symmetric difference model, similar to the results in dimensions two and three; see Theorems 4.4 and 5.1. The proof is omitted as it is nearly identical to the proof of Theorem 4.4.
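The mechanism behind these statements can be checked numerically at the level of symbols. The short Python sketch below evaluates sup_ξ ‖(G_h(ξ) − i)^{−1} − (G_0(ξ) − i)^{−1}‖ over the window hξ ∈ [−π, π] for the three 1D models. The supremum decays like O(h) for the forward-backward and modified symmetric symbols, while for the unmodified symmetric symbol it stays bounded below because of the spurious zero of sin(hξ)/h at hξ = ±π. The grid sizes and the choice m = 1 are illustration choices only.

```python
import numpy as np

m = 1.0
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def G0(xi):
    return xi * s1 + m * s3

def Gfb(xi, h):
    b = (np.exp(1j * h * xi) - 1) / (1j * h)    # symbol of D+
    return np.array([[m, b], [np.conj(b), -m]])

def Gs(xi, h, modified=False):
    G = (np.sin(h * xi) / h) * s1 + m * s3
    if modified:
        # (h/2) * symbol of -Delta_h = (2/h) sin^2(h xi / 2)
        G = G + (2 / h) * np.sin(h * xi / 2) ** 2 * s3
    return G

def resdiff(Gh, h):
    """sup over h*xi in [-pi, pi] of ||(G_h - i)^{-1} - (G_0 - i)^{-1}||."""
    xis = np.linspace(-np.pi / h, np.pi / h, 4001)
    return max(np.linalg.norm(np.linalg.inv(Gh(x) - 1j * I2)
                              - np.linalg.inv(G0(x) - 1j * I2), 2)
               for x in xis)

for h in [0.4, 0.2, 0.1, 0.05]:
    print(h,
          resdiff(lambda x: Gfb(x, h), h),       # decays ~ C*h
          resdiff(lambda x: Gs(x, h, True), h),  # decays ~ C*h
          resdiff(lambda x: Gs(x, h, False), h)) # stays ~ (1+m^2)^{-1/2}
```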
The 2D free Dirac operator
In two dimensions the free Dirac operator on H² with mass m ≥ 0 is given by

H_0 = −i σ_1 ∂/∂x_1 − i σ_2 ∂/∂x_2 + m σ_3, (4.1)

where the Pauli matrices are given in (1.1). In H² it is a Fourier multiplier with symbol

G_0(ξ) = σ_1 ξ_1 + σ_2 ξ_2 + m σ_3 = [m, ξ_1 − iξ_2; ξ_1 + iξ_2, −m]. (4.2)

The corresponding discrete Dirac operator can be obtained by replacing the derivatives in (4.1) by finite differences.
The 2D symmetric difference model
We first consider the model obtained by using the symmetric difference operators; see (2.3) for the definition.
The discretized operator is

H^s_{0,h} = σ_1 D^s_{h;1} + σ_2 D^s_{h;2} + m σ_3.

In H²_h it acts as a Fourier multiplier with symbol

G^s_{0,h}(ξ) = σ_1 sin(hξ_1)/h + σ_2 sin(hξ_2)/h + m σ_3.

The 2D discrete Laplacian is defined in (2.4). We introduce the modified symmetric difference model as

H̃^s_{0,h} = H^s_{0,h} + (h/2)(−Δ_h)σ_3.

In H²_h the operator H̃^s_{0,h} acts as a Fourier multiplier with symbol

G̃^s_{0,h}(ξ) = G^s_{0,h}(ξ) + f_h(ξ)σ_3. (4.5)

Related to the symbols G_0, G^s_{0,h}, and G̃^s_{0,h}, we define

f_h(ξ) = (2/h)(sin²(hξ_1/2) + sin²(hξ_2/2)), (4.6)
g_0(ξ) = |ξ|² + m², (4.7)
g^s_{0,h}(ξ) = (sin²(hξ_1) + sin²(hξ_2))/h² + m², (4.8)
g̃^s_{0,h}(ξ) = (sin²(hξ_1) + sin²(hξ_2))/h² + (m + f_h(ξ))². (4.9)

We have G_0(ξ)² = g_0(ξ)1, G^s_{0,h}(ξ)² = g^s_{0,h}(ξ)1, and G̃^s_{0,h}(ξ)² = g̃^s_{0,h}(ξ)1. (4.10)
Lemma 4.1. There exists C > 0 such that

‖(G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ C(1 + |ξ|)^{−1}, ξ ∈ R², (4.11)

and

‖(G̃^s_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≤ C(1 + |ξ|)^{−1}, hξ ∈ [−3π/2, 3π/2]², h ∈ (0, 1]. (4.12)

Proof. The estimate (4.11) follows from Lemma 2.3 and (4.10). To prove (4.12) we first use Lemma 2.3 and (4.10) to get

‖(G̃^s_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} = (1 + g̃^s_{0,h}(ξ))^{−1/2}.

Then note that there exists c > 0 such that

g̃^s_{0,h}(ξ) ≥ c|ξ|² for hξ ∈ [−3π/2, 3π/2]².

Combining these estimates we get

(1 + g̃^s_{0,h}(ξ))^{−1/2} ≤ C(1 + |ξ|)^{−1} for hξ ∈ [−3π/2, 3π/2]².

The estimate (4.12) follows.
Lemma 4.2. There exists C > 0 such that

‖(G̃^s_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ Ch, hξ ∈ [−3π/2, 3π/2]², h ∈ (0, 1].

Proof. By the resolvent identity it suffices to bound the entries of G̃^s_{0,h}(ξ) − G_0(ξ) by Ch|ξ|² and then apply Lemma 4.1. The 11 entry in G̃^s_{0,h}(ξ) − G_0(ξ) equals f_h(ξ) and is estimated using |sin(θ)| ≤ |θ|. We get

|f_h(ξ)| ≤ (h/2)|ξ|². (4.13)

This result implies the estimates |cos(θ) − 1| ≤ θ²/2 and |sin(θ) − θ| ≤ θ²/2, that are used to estimate the 12 and 21 entries in G̃^s_{0,h}(ξ) − G_0(ξ):

|sin(hξ_j)/h − ξ_j| ≤ (h/2)|ξ_j|², j = 1, 2. (4.14)

Combining these results with the estimates from Lemma 4.1 we get the stated bound.

We now state [2, Lemma 3.3] in a form adapted to the Dirac operators and outline its proof.
Lemma 4.3. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_hK_h(H_0 − zI)^{−1} − (H_0 − zI)^{−1}‖_{B(H²)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].

Proof. We assume d = 2. It suffices to consider K = {i}, since (H_0 − iI)(H_0 − zI)^{−1} is bounded uniformly in norm for z ∈ K. Let u ∈ S(R²) ⊗ C², the Schwartz space. Going through the computations in [2, section 2] using that ϕ_0 and ψ_0 are scalar functions, we obtain a representation of F(J_hK_h − I)(H_0 − iI)^{−1}u as a sum over j ∈ Z² of terms containing the factors ϕ̂_0(hξ) and the conjugate of ψ̂_0(hξ + 2πj), applied to (G_0(ξ + (2π/h)j) − i1)^{−1}û(ξ + (2π/h)j), minus the term (G_0(ξ) − i1)^{−1}û(ξ). Here G_0 is given by (4.2). If hξ ∈ [−π/2, π/2]² then the j = 0 term is the only non-zero term in the sum. Using [2, Lemma 2.7] we conclude that this term and the last term cancel. For hξ ∉ [−π/2, π/2]² we use Lemma 4.1 to get ‖(G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ Ch, 0 < h ≤ 1. Since ϕ̂_0 and ψ̂_0 are assumed essentially bounded, we conclude that the j = 0 term in the sum and the last term are bounded by Ch‖u‖_{H²}.
Due to the support assumptions on ϕ̂_0 and ψ̂_0, only the terms in the sum with |j| ≤ 1 contribute. Assume |j| = 1 and hξ ∈ supp(ϕ̂_0) ∩ supp(ψ̂_0(· + 2πj)). Then for some c_0 > 0 we have |ξ + (2π/h)j| ≥ c_0/h, which by Lemma 4.1 implies

‖(G_0(ξ + (2π/h)j) − i1)^{−1}‖_{B(C²)} ≤ Ch.

Again using the boundedness of ϕ̂_0 and ψ̂_0 we conclude that the terms with |j| = 1 are pointwise bounded by Ch|û(ξ + (2π/h)j)|. Squaring and integrating the result gives an estimate of the form Ch‖u‖_{H²}. By density, adding up the finite number of terms corresponding to |j| ≤ 1 gives the final result.
We have now established the estimates necessary to repeat the arguments from [2]. Using the embedding operators J h and discretization operators K h defined in section 2, we state the result and then show in some detail how the arguments in [2] are adapted to the Dirac case.
Theorem 4.4. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H̃^s_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H²)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].

Proof. We start by proving the result for K = {i}. We have

J_h(H̃^s_{0,h} − iI_h)^{−1}K_h − (H_0 − iI)^{−1}
= (J_h(H̃^s_{0,h} − iI_h)^{−1}K_h − J_hK_h(H_0 − iI)^{−1}) + (J_hK_h(H_0 − iI)^{−1} − (H_0 − iI)^{−1}).

The last term is estimated using Lemma 4.3.
To estimate the remaining terms we go to Fourier space. Let u ∈ S(R²) ⊗ C². We now use a modified version of the computation leading to [2, equation (2.11)]. For the first term we get an explicit expression (4.15) for FJ_h(H̃^s_{0,h} − iI_h)^{−1}K_hu, and for the second term a corresponding expression (4.16) for FJ_hK_h(H_0 − iI)^{−1}u; both are sums over j ∈ Z² of terms containing the factors ϕ̂_0(hξ) and the conjugate of ψ̂_0(hξ + 2πj). We need to rewrite (4.16). First we note that the resolvent identity yields a relation (4.17) between the resolvents of the symbols G̃^s_{0,h} and G_0. Next we can rewrite part of (4.16), since ψ̂_0 is a scalar-valued function. We now insert (4.17) and the rewritten (4.16) into (4.15), so that the difference of the two terms is expressed through differences of resolvents of the symbols. Due to the support conditions on ϕ̂_0 and ψ̂_0 in Assumption 2.2, only terms with |j| ≤ 1 contribute.

First consider j = 0. We have assumed supp(ϕ̂_0), supp(ψ̂_0) ⊆ [−3π/2, 3π/2]². Using Lemma 4.2 and Assumption 2.2 we get that the j = 0 term is bounded by Ch‖u‖_{H²}. From the supports of ϕ̂_0 and ψ̂_0 we have, for |j| = 1, that the corresponding terms are supported in the set M_j = supp(ϕ̂_0) ∩ supp(ψ̂_0(· + 2πj)). Assume hξ ∈ M_j; then Lemma 4.1 implies that the resolvents occurring in these terms are bounded in norm by Ch. Since we have a finite number of j with |j| ≤ 1 and since u is in a dense set, the estimate in Theorem 4.4 follows in the K = {i} case. For the general case we use the estimate

‖(H_0 − iI)(H_0 − zI)^{−1}‖_{B(H²)} ≤ C, z ∈ K,

together with its discrete counterpart. This is the crucial estimate used above. Further details are omitted.
Next we show that, without modification to the symmetric difference model, the norm convergence stated in the theorem fails.
Lemma 4.5. There exists c > 0 such that, at the point with hξ_1 = hξ_2 = π,

‖(G̃^s_{0,h}(ξ) − i1)^{−1} − (G^s_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≥ c

for all h ∈ (0, 1].

Proof. Using the notation from (4.5) we have G̃^s_{0,h}(ξ) − G^s_{0,h}(ξ) = f_h(ξ)σ_3. From the same reasoning as in the proof of Lemma 3.4, we obtain

‖(G̃^s_{0,h}(ξ) − i1)^{−1} − (G^s_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≥ f_h(ξ) / ((1 + g^s_{0,h}(ξ))^{1/2}(1 + g̃^s_{0,h}(ξ))^{1/2}). (4.19)

Here g^s_{0,h}(ξ) is given by (4.8), g̃^s_{0,h}(ξ) by (4.9), and f_h(ξ) by (4.6). Take hξ_1 = π and hξ_2 = π, and insert them in the last term in (4.19). We get

(4/h) / ((1 + m²)^{1/2}(1 + (m + 4/h)²)^{1/2}) → (1 + m²)^{−1/2} > 0 as h → 0.

The result follows.

Combining Lemma 4.5 with Lemma 2.4, Theorem 4.4, and the mapping properties of J_h and K_h, we obtain the negative result for the unmodified symmetric model.

Theorem 4.6. Let z ∈ C \ R. Then J_h(H^s_{0,h} − zI_h)^{−1}K_h does not converge to (H_0 − zI)^{−1} in the B(H²)-operator norm as h → 0.
The 2D forward-backward difference model
We now consider the model for the discrete Dirac operator obtained by using the forward and backward difference operators; see (2.1) and (2.2) for definitions. The discretized operator is given by

H^fb_{0,h} = [m I_h, D⁺_{h;1} − i D⁺_{h;2}; D⁻_{h;1} + i D⁻_{h;2}, −m I_h]. (4.20)

In H²_h it is a Fourier multiplier with the symbol

G^fb_{0,h}(ξ) = [m, b_h(ξ); b_h(ξ)*, −m], b_h(ξ) = (1/(ih))((e^{ihξ_1} − 1) − i(e^{ihξ_2} − 1)),

where b_h(ξ)* denotes the complex conjugate of b_h(ξ). We also consider the modified model, where the modification is the same as in the symmetric case, i.e.

H̃^fb_{0,h} = H^fb_{0,h} + (h/2)(−Δ_h)σ_3.
The corresponding Fourier multiplier is

G̃^fb_{0,h}(ξ) = G^fb_{0,h}(ξ) + f_h(ξ)σ_3,

where f_h(ξ) is given by (4.6). We recall the expression

|b_h(ξ)|² = (1/h²)|(e^{ihξ_1} − 1) − i(e^{ihξ_2} − 1)|².

Straightforward computations show that

G^fb_{0,h}(ξ)² = g^fb_{0,h}(ξ)1 with g^fb_{0,h}(ξ) = |b_h(ξ)|² + m², (4.21)

and

G̃^fb_{0,h}(ξ)² = g̃^fb_{0,h}(ξ)1 with g̃^fb_{0,h}(ξ) = |b_h(ξ)|² + (m + f_h(ξ))². (4.22)

We now prove the analogue of (4.12) for G̃^fb_{0,h}(ξ).

Lemma 4.7. There exists C > 0 such that

‖(G̃^fb_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≤ C(1 + |ξ|)^{−1}, hξ ∈ [−3π/2, 3π/2]², h ∈ (0, 1].

The proof is analogous to the proof of Lemma 4.1, using (4.22) in place of (4.10).
Lemma 4.8. There exists C > 0 such that

‖(G̃^fb_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ Ch, hξ ∈ [−3π/2, 3π/2]², h ∈ (0, 1].

Proof. We have

(G̃^fb_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1} = (G̃^fb_{0,h}(ξ) − i1)^{−1}(G_0(ξ) − G̃^fb_{0,h}(ξ))(G_0(ξ) − i1)^{−1}.

The 11 and 22 entries in G̃^fb_{0,h}(ξ) − G_0(ξ) are estimated by Ch|ξ|²; see (4.13). To estimate the 12 and 21 entries we use Taylor's formula:

(1/(ih))(e^{ihξ_j} − 1) − ξ_j = (e^{ihξ_j} − 1 − ihξ_j)/(ih), |e^{iθ} − 1 − iθ| ≤ θ²/2. (4.25)

It follows that the 12 and 21 entries also are estimated by Ch|ξ|². Using Lemmas 4.1 and 4.7 the result follows.
We can now state the analogue of Theorem 4.4.

Theorem 4.9. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H̃^fb_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H²)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].

The proof is omitted, since it is almost identical to the proof of Theorem 4.4; indeed the key ingredients are the estimates in Lemmas 4.7 and 4.8, which correspond to the results from Lemmas 4.1 and 4.2 for the modified symmetric difference model.
The negative result in Theorem 4.6 for the symmetric model holds also in the forward-backward case.

Theorem 4.10. Let z ∈ C \ R. Then J_h(H^fb_{0,h} − zI_h)^{−1}K_h does not converge to (H_0 − zI)^{−1} in the B(H²)-operator norm as h → 0.
Proof. As in the proof of Lemma 4.5 we get the lower bound

f_h(ξ) / ((1 + g^fb_{0,h}(ξ))^{1/2}(1 + g̃^fb_{0,h}(ξ))^{1/2})

for the norm of the difference of the resolvents of the symbols. Using (4.6), (4.21), and (4.22), we evaluate at the point with hξ_1 = −π/2 and hξ_2 = π/2, where b_h(ξ) = 0, f_h(ξ) = 2/h, g^fb_{0,h}(ξ) = m², and g̃^fb_{0,h}(ξ) = (m + 2/h)². It follows that we have a lower bound

(2/h) / ((1 + m²)^{1/2}(1 + (m + 2/h)²)^{1/2}) → (1 + m²)^{−1/2} as h → 0.

This result implies that the strong convergence result in [7] cannot be improved to a norm convergence result, without modifying the discretization.
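The spurious zero of the forward-backward symbol used in this proof is easy to locate numerically. The sketch below assumes the corner convention for H^fb_{0,h} written above (other sign conventions move the zero to a reflected point); it scans the Brillouin zone for near-zeros of |b_h(ξ)| and confirms that the modification term f_h does not vanish at the doubler.

```python
import numpy as np

h = 0.1
t = np.linspace(-np.pi, np.pi, 1201)           # theta_j = h * xi_j over the Brillouin zone
T1, T2 = np.meshgrid(t, t, indexing="ij")

# Off-diagonal symbol b_h(xi) and Wilson-type modification f_h(xi).
b = ((np.exp(1j * T1) - 1) - 1j * (np.exp(1j * T2) - 1)) / (1j * h)
f = (2 / h) * (np.sin(T1 / 2) ** 2 + np.sin(T2 / 2) ** 2)

near_zero = np.abs(b) * h < 1e-2               # grid points where b_h almost vanishes
pts = np.unique(np.round(np.c_[T1[near_zero], T2[near_zero]], 1), axis=0)
print(pts)   # theta = (0, 0) and theta ~ (-pi/2, pi/2): the latter is the doubler

doubler = near_zero & (np.hypot(T1, T2) > 1)   # exclude the physical zero at the origin
print(h * f[doubler].min())                    # ~ 2, i.e. f_h ~ 2/h > 0 at the doubler
```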
The 3D free Dirac operator

For U, W ∈ C³ there is the following identity related to the Pauli matrices, where the "dot" does not involve complex conjugation:

(σ · U)(σ · W) = (U · W)1 + iσ · (U × W). (5.1)

The Dirac matrices α = (α_1, α_2, α_3) and β satisfy

α_jα_k + α_kα_j = 2δ_{jk}1, α_jβ + βα_j = 0, β² = 1.

We can choose

α_j = [0, σ_j; σ_j, 0], j = 1, 2, 3, and β = [1, 0; 0, −1].

The free Dirac operator with mass m ≥ 0 in H³ is given by

H_0 = −i ∑_{j=1}^3 α_j ∂/∂x_j + mβ = [m1, −iσ · ∇; −iσ · ∇, −m1], (5.2)

where 1 in the context of (5.2) denotes the identity operator on L²(R³) ⊗ C². In Fourier space H³ it is a multiplier with symbol

G_0(ξ) = [m1, σ · ξ; σ · ξ, −m1]. (5.3)

If we define

g_0(ξ) = |ξ|² + m², (5.4)

then G_0(ξ)² = g_0(ξ)1. (5.5)

As in dimension two there are two natural discretizations of (5.2), using either the pair of forward-backward partial difference operators or the symmetric partial difference operators.
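These algebraic identities are straightforward to verify numerically; the following minimal sketch checks (5.1) for random complex vectors and the anticommutation relations for the chosen α_j and β.

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

def sdot(U):
    """sigma . U, with no complex conjugation."""
    return sum(u * sj for u, sj in zip(U, s))

rng = np.random.default_rng(1)
U = rng.standard_normal(3) + 1j * rng.standard_normal(3)
W = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# (sigma.U)(sigma.W) = (U.W) 1 + i sigma.(U x W)   -- identity (5.1)
lhs = sdot(U) @ sdot(W)
rhs = np.dot(U, W) * np.eye(2) + 1j * sdot(np.cross(U, W))
print(np.allclose(lhs, rhs))

# alpha_j = [[0, s_j], [s_j, 0]], beta = diag(1, 1, -1, -1)
alpha = [np.block([[Z2, sj], [sj, Z2]]) for sj in s]
beta = np.block([[I2, Z2], [Z2, -I2]])
for j in range(3):
    for k in range(3):
        assert np.allclose(alpha[j] @ alpha[k] + alpha[k] @ alpha[j],
                           2 * (j == k) * np.eye(4))
    assert np.allclose(alpha[j] @ beta + beta @ alpha[j], np.zeros((4, 4)))
assert np.allclose(beta @ beta, np.eye(4))
print("Dirac algebra verified")
```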
The 3D symmetric difference model
The symmetric partial difference operators are defined in (2.3). We use the notation

∇^s_h = (D^s_{h;1}, D^s_{h;2}, D^s_{h;3})

for the discrete symmetric gradient. The symmetric discretization of the 3D Dirac operator is defined as

H^s_{0,h} = [m1_h, σ · ∇^s_h; σ · ∇^s_h, −m1_h],

where 1_h is the identity operator on ℓ²(hZ³) ⊗ C². In Fourier space this operator is a multiplier with symbol

G^s_{0,h}(ξ) = [m1, σ · s_h(ξ); σ · s_h(ξ), −m1], s_h(ξ) = (sin(hξ_1)/h, sin(hξ_2)/h, sin(hξ_3)/h).

As in the two-dimensional case we also define a modified discretization. Let −Δ_h denote the 3D discrete Laplacian; see (2.4). Let −Δ_h1 denote the 2 × 2 diagonal operator matrix with the discrete Laplacian on the diagonal elements. Then define

H̃^s_{0,h} = H^s_{0,h} + (h/2)[−Δ_h1, 0; 0, Δ_h1].

Its symbol is

G̃^s_{0,h}(ξ) = G^s_{0,h}(ξ) + f_h(ξ)β, f_h(ξ) = (2/h)(sin²(hξ_1/2) + sin²(hξ_2/2) + sin²(hξ_3/2)).
The 3D forward-backward difference model
Using the definitions (2.1) and (2.2) we introduce the discrete forward and backward gradients as

∇^±_h = (D^±_{h;1}, D^±_{h;2}, D^±_{h;3}).

The forward-backward difference model is then given by

H^fb_{0,h} = [m1_h, σ · ∇⁺_h; σ · ∇⁻_h, −m1_h],

and the modified version by H̃^fb_{0,h} = H^fb_{0,h} + (h/2)(−Δ_h)β. The symbols of D^±_{h;j} in Fourier space are ±(1/(ih))(e^{±ihξ_j} − 1), j = 1, 2, 3.
The arguments for norm resolvent convergence of the 3D modified forward-backward difference model do not follow as straightforwardly as in the symmetric difference case, since in particular G̃^fb_{0,h}(ξ)² is not a diagonal matrix. A computation using the identity (5.1) reveals that

G̃^fb_{0,h}(ξ)² = g̃^fb_{0,h}(ξ)1 + R_h(ξ), (5.11)

where g̃^fb_{0,h}(ξ) is a scalar function and the remainder R_h(ξ) arises from the cross product term iσ · (U × W) in (5.1), with U and W the (mutually conjugate) symbols of the forward and backward gradients; R_h(ξ) is non-zero for h > 0 since these symbols are not real-valued. We proceed to show the required estimates related to G̃^fb_{0,h}(ξ) in detail.
Lemma 5.3. There exists C > 0 such that

‖(G̃^fb_{0,h}(ξ) − i1)^{−1}‖_{B(C⁴)} ≤ C(1 + |ξ|)^{−1}, hξ ∈ [−3π/2, 3π/2]³, h ∈ (0, 1].

Lemma 5.4. There exists C > 0 such that

‖(G̃^fb_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1}‖_{B(C⁴)} ≤ Ch, hξ ∈ [−3π/2, 3π/2]³, h ∈ (0, 1].

Proof. We estimate the entries in G̃^fb_{0,h}(ξ) − G_0(ξ) as in the proof of Lemma 4.8. Thus the entries are estimated by Ch|ξ|². Using Lemma 5.3 the result follows.
Using Lemmas 5.3 and 5.4 we can adapt the arguments in [2] to obtain the following result. We omit the details here, and refer the reader to the proof of Theorem 4.4 where details of the adaptation are given.

Theorem 5.5. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H̃^fb_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H³)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].
As in dimension two, the unmodified forward-backward difference model does not lead to norm resolvent convergence.
Sobolev space estimates and strong convergence
In sections 3-5 we have shown that J_h(H_{0,h} − zI_h)^{−1}K_h converges in the B(H^d)-operator norm to (H_0 − zI)^{−1} for several choices of discrete model H_{0,h}, and we have also shown that in other cases this norm convergence does not hold. This section is dedicated to the cases where J_h(H_{0,h} − zI_h)^{−1}K_h does not converge to (H_0 − zI)^{−1} in the B(H^d)-operator norm; instead we will prove that convergence holds in the B(H^1(R^d) ⊗ C^{ν(d)}, H^d)-operator norm. These latter results obviously imply strong convergence. In particular, we recover the result in [7] for d = 2 with the discretization H^fb_{0,h}.
The 1D model
The 1D symmetric model H^s_{0,h} is defined in (3.8) and its symbol G^s_{0,h}(ξ) in (3.9). The symbol for the continuous Dirac operator G_0(ξ) is defined in (3.1).

Lemma 6.1. There exists C > 0 such that

‖(G^s_{0,h}(ξ) − i1)^{−1} − (G_0(ξ) − i1)^{−1}‖_{B(C²)} ≤ Ch(1 + |ξ|), hξ ∈ [−3π/2, 3π/2], h ∈ (0, 1].

Proof. Note that Lemma 2.3 and (3.10) imply the estimate

‖(G^s_{0,h}(ξ) − i1)^{−1}‖_{B(C²)} ≤ (1 + m²)^{−1/2},

and that this estimate cannot be improved for hξ ∈ [−3π/2, 3π/2]. We have

|sin(hξ)/h − ξ| ≤ (h/2)|ξ|²,

so the entries of G^s_{0,h}(ξ) − G_0(ξ) are bounded by Ch|ξ|². Combining these bounds with (3.6) and the resolvent identity gives the factor Ch|ξ|²(1 + |ξ|)^{−1} ≤ Ch(1 + |ξ|), proving the lemma.

Theorem 6.2. Let K ⊂ C \ R be compact. Then there exists C > 0 such that

‖J_h(H^s_{0,h} − zI_h)^{−1}K_h − (H_0 − zI)^{−1}‖_{B(H^1(R)⊗C², H¹)} ≤ Ch

for all z ∈ K and h ∈ (0, 1].

Proof. The result follows if we prove the estimate for z = i. The proof is very similar to the proof of Theorem 4.4. In the arguments one replaces F*u by F*(H_0 − iI)^{−1}u and uses Lemma 6.1. Further details are omitted.
Perturbed Dirac operators
In this section we state results on perturbed Dirac operators and their discretizations, with respect to norm resolvent convergence. We use the following condition on the perturbation.

Assumption 7.1. Let V : R^d → B(C^{ν(d)}) be bounded and Hölder continuous of order θ ∈ (0, 1], i.e. there exists C > 0 such that

‖V(x) − V(y)‖_{B(C^{ν(d)})} ≤ C|x − y|^θ, x, y ∈ R^d.

Assume that V(x) is self-adjoint for each x ∈ R^d.
We require another assumption on ψ 0 in addition to Assumption 2.2. We emphasize that concrete examples of ψ 0 satisfying these assumptions are given in [2, subsection 2.1].
Assumption 7.2. Assume there exists τ > d such that

|ψ_0(x)| ≤ C(1 + |x|)^{−τ} for almost every x ∈ R^d.

Define a discretization of V by V_h(k) = V(hk), k ∈ Z^d.
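The pointwise sampling V_h(k) = V(hk) inherits the Hölder modulus of V. The sketch below illustrates this for a scalar Hölder-continuous example V(x) = |x|^θ (a function chosen for illustration only, not taken from the paper): the error committed by replacing V by its value at the nearest lattice site decays like h^θ.

```python
import numpy as np

theta = 0.5
V = lambda x: np.abs(x) ** theta        # Hoelder continuous of order theta on R

for h in [0.1, 0.01, 0.001]:
    x = np.linspace(-5, 5, 200001)      # fine evaluation grid
    k = np.round(x / h)                 # nearest lattice site hk
    err = np.max(np.abs(V(x) - V(h * k)))
    # |V(x) - V(hk)| <= C |x - hk|^theta <= C (h/2)^theta, so the ratio is ~ 1
    print(h, err, err / (h / 2) ** theta)
```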
Lemma 7.3. Suppose Assumptions 2.1, 2.2, 7.1, and 7.2 hold. Then there exists C > 0 such that

‖K_hV − V_hK_h‖_{B(H^d, H^d_h)} ≤ Ch^{θ′}, h ∈ (0, 1],

where 0 < θ′ < 1 depends on θ and τ.

Proof. The proof in [2] can be directly adapted to the current framework. We omit the details. Note that ψ_0(x) is a scalar, so that we have ψ_0(x)V(x)f(x) = V(x)ψ_0(x)f(x), f ∈ H^d.
We can then state our main result on the perturbed Dirac operators, which follows from Lemma 7.3 and a direct adaptation of the corresponding proof in [2]. Define H = H_0 + V on H^d and H_h = H_{0,h} + V_h on H^d_h, where H_0 is the free Dirac operator in the relevant dimension and H_{0,h} is one of the discretizations in Theorem 1.1. Assume V ≢ 0 and let θ′ be given by (7.2). Then the following result holds.
Theorem 7.4. Let K ⊂ C \ R be compact. Then there exist C > 0 and h_0 > 0 such that

‖J_h(H_h − zI_h)^{−1}K_h − (H − zI)^{−1}‖_{B(H^d)} ≤ Ch^{θ′}

for all z ∈ K and h ∈ (0, h_0].
Development of an In-Patient Satisfaction Questionnaire for the Chinese Population
Background Patients’ satisfaction has been considered as a crucial measurement of health care quality. Our objective was to develop a reliable and practical questionnaire for the assessment of in-patients’ satisfaction in Chinese people, and report the current situation of in-patients’ satisfaction in the central south area of China through a large-scale cross-sectional study. Design In order to generate the questionnaire, we reviewed previous studies, interviewed related people, held discussions, refined questionnaire items after the pilot study, and finally conducted a large cross-sectional survey to test the questionnaire. Setting This study was conducted in three A-level hospitals in the Hunan province, China. Results There were 6640 patients in this large-scale survey (another 695 patients in the pilot study). A factor analysis on the data from the pilot study generated four dimensions, namely, doctors’ care quality, nurses’ care quality, quality of the environment and facilities, and comprehensive quality. The Cronbach’s alpha coefficients for each dimension were above 0.7 and the inter-subscale correlation was between 0.72 and 0.83. The overall in-patient satisfaction rate was 89.6%. Conclusion The in-patient satisfaction questionnaire was proved to have optimal internal consistency, reliability, and validity.
Introduction
Patients' satisfaction is considered to be a measure of health care, and hospitals worldwide use it to improve the quality of health care. [1,2] In the past, the quality of medical services was evaluated through the objective outcomes of patients' physical condition. Recently, however, researchers have begun to pay close attention to patients' satisfaction as a yardstick for assessing the effectiveness and quality of medical care. [3] Although the quality of medical services can be evaluated from multiple perspectives, such as those of doctors, patients, or insurers, patients should still be considered the most important assessors of the quality of care. [4] Patients' opinions and satisfaction status may affect their future behaviors related to treatment outcomes. [4,5] Analysis of patients' subjective feedback can provide a full understanding of the areas that need improvement, which can raise the quality of medical care. [6][7][8] As a result of the increasing value placed on patients' satisfaction, various kinds of measurement tools are being developed and tested. Suggestion boxes, formal complaints, qualitative methods, audits, and satisfaction questionnaires are being used to assess the level of patients' satisfaction, the satisfaction questionnaire being the most effective and widely used method. [9] In the last decade, a large number of questionnaires targeting all kinds of patients and different areas of medical care have been developed, especially in well-developed countries. [10][11][12][13][14] However, some of them have been criticized for their poor validity and reliability. [6,15] Furthermore, the definition of patients' satisfaction has sometimes been misunderstood, leading to exceedingly high ratings of patients' satisfaction. [16,17] More importantly, no large-sample research has fully tested an in-patient satisfaction questionnaire in the Chinese population. Unlike most developed countries, China has a special medical environment (scarce or unbalanced medical resources) and a large population. Therefore, questionnaires developed in these countries may not be suitable for use in China or in other developing countries.
In an earlier study, we developed the Chinese outpatient satisfaction questionnaire (Ch-OPSQ). [18] The objective of the present study was to develop an in-patient satisfaction questionnaire for the Chinese population, and to test the reliability, validity, and acceptability of the self-administered questionnaire on a large cross-sectional sample.
Study Population
First, a pilot study involving 695 patients was performed in one teaching hospital using a 41-item questionnaire. Then, on the basis of the results of the pilot survey, a subsequent cross-sectional study was performed in three A-level hospitals in the Hunan province of China, including 6640 patients and using the final 28-item version of the questionnaire.
The draft questionnaire
Two members from our team reviewed previously published studies. The Medline and Embase databases were searched using the following key words: "patients," "hospital," "satisfaction," and "questionnaire." We screened all the relevant studies and extracted useful information, mainly the dimensions and items evaluated in existing in-patient satisfaction questionnaires developed in other countries. The research group then discussed which items to choose for developing our questionnaire. Items related to the treatment process, the medical providers (doctors and nurses), and the hospitalization environment were considered for inclusion in the questionnaire. Items that were not suitable for the Chinese medical context, such as those concerning the reservation process, cultural differences, and the medical insurance system, were excluded from the questionnaire.
A modified version of the questionnaire and pilot survey
The research group interviewed five patients, five administrators of different hospitals, and five officers from the health department of the government about the draft questionnaire. A simple random sampling method based on patients' admission numbers and staff job numbers was applied to select these interviewees. Consultations with these participants were held primarily face-to-face or via email. All the interviewees were asked to rate the importance of each item, provide their opinions and suggestions about the items in the item pool, and comment on the relevance of the issues covered and the comprehensibility of the questionnaire and response options. The research group reviewed the suggestions from the interviews, refined the wording and content of the questions in the draft questionnaire, and built consensus on the items and response options according to this feedback. A modified version of the questionnaire was then created for a pilot study.
The pilot survey was conducted in one teaching hospital using the modified version of the questionnaire, which contained 41 items, including patients' basic information and ratings of their feelings about each statement on a 5-point Likert scale: very satisfied (= 5), relatively satisfied (= 4), fairly satisfied (= 3), relatively dissatisfied (= 2), and very dissatisfied (= 1) (Table A in S1 File). We handed out the questionnaires to patients in their sickrooms and collected them after completion. After analyzing the collected data, a discussion was held and the research group further discussed the selection of the items. Some items were excluded because of high non-response rates (such as "security's service"; "food from hospital cafeteria"; "the opportunity of asking for the medication condition of yourself"; "the right to know your medication decision"; "introduction of the ward environment and points for attention"). Other items were amended and merged with others because of their similarity to other items and the poor variability of their responses (such as "polite language usage by doctors"; "the initiative of explanation of the medication by nurses"; "explanation of the side effect of the medication"; "how well nurses cared about your pain and uncomfortable feelings"; "how well nurses responded to your complains"). We also excluded items based on the principal component exploratory factor analysis: items with poor factor loadings were considered for exclusion (such as "daily medical cost"; "disease improvement"). Thus, the final version of the questionnaire, including 28 items, was generated (Table B in S1 File).
The cross-sectional study on a large sample
We sent the final version of the questionnaire to three A-level hospitals in the Hunan province of China for further evaluation. Additionally, the current situation of in-patients' satisfaction in the central south area of China was evaluated. Conscious patients who had stayed in the hospital for over three days were randomly selected for this satisfaction survey. The research group trained twenty investigators and sent them to the chosen hospitals. The investigators were all from a third party institution that was not related to these hospitals. All patients independently completed the questionnaires in their sickroom. The research was approved by the Ethics Committee of Central South University. Written informed consent was obtained from all the subjects in this study. A total of 6640 patients were included in the cross-sectional study, only 4618 patients' basic characteristics were recorded. The data of 1822 patients were missing because these were pediatric patients, whose satisfaction level was rated by their parents.
Statistical analysis
The number and frequency for categorical variables, and the mean and standard deviation for the continuous variables, were calculated as descriptive statistics. Construct validity refers to the extent to which the new questionnaire conforms to existing ideas or hypotheses concerning the concepts that are being measured. [19,20] A principal component exploratory factor analysis by varimax rotation was used to establish the structure and test the construct validity of the questionnaire. Factors with an eigenvalue greater than one or a cumulative contribution rate of above 70% were extracted. Items were included in the dimensions only if they revealed loadings greater than 0.4 after rotation. Items with poor factor loadings were considered for removal from the final questionnaire. Further, if items showed multiple loadings above 0.4, they were included in the factor with which they had a better conceptual relationship. [21] The hypothesis was that it is possible to obtain meaningful, independent, and efficient dimensions to evaluate patient satisfaction.
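The extraction and rotation procedure described above can be sketched in Python as follows (a minimal illustration with placeholder data, assuming item responses are already coded 1-5 in a data matrix; the plain PCA-plus-varimax pipeline and all variable names are our own simplification rather than the exact SPSS routine used in the study).

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix (items x factors)."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        B = L @ R
        U, s, Vt = np.linalg.svd(
            L.T @ (B**3 - B @ np.diag(np.sum(B**2, axis=0)) / p))
        R = U @ Vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

# X: respondents x 28 items, Likert-coded 1-5 (random placeholder data here)
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(695, 28)).astype(float)

Rcorr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(Rcorr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1))                # Kaiser criterion: eigenvalue > 1
print("explained:", eigvals[:n_factors].sum() / 28)  # cumulative contribution rate
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
rotated = varimax(loadings)

# Keep items on the factor(s) where the rotated loading exceeds 0.4
for item, row in enumerate(rotated):
    hits = np.where(np.abs(row) > 0.4)[0]
    print(f"item {item + 1}: factors {hits + 1 if hits.size else 'none'}")
```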
Reliability indicates the consistency of performance on the questionnaire. Good reliability produces similar results under consistent conditions. [19] The internal consistency and reliability of each dimension was examined by the Cronbach's alpha and inter-subscale correlations. The Cronbach's alpha assesses the overall correlation between items within a scale. An alpha value of 0.7 or higher is recommended as an indicator of sufficient reliability of a scale. Additionally, in order to prove the independence of each dimension, the inter-subscale correlation should be lower than the corresponding Cronbach's alpha. [22] The feasibility and acceptability of the tool refer to the ease of use of the questionnaire. [23] They were examined by the percentage of missing item responses, interviewer-reported acceptability, and the time and ease of administration. Finally, the score and satisfaction level were reported. The satisfaction rate was calculated in accordance with the following formula from previous studies [18,24]:

Satisfaction rate = (mean total score) / (5 × number of items) × 100% (for the dimensions and the overall satisfaction rate).

A multiple logistic regression analysis was conducted to identify whether potential determinants such as patients' age, sex, occupation, education background, and medical insurance type were significantly associated with the overall satisfaction as the dependent variable.
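The two reliability and scoring computations defined above reduce to a few lines of Python (again a sketch with placeholder data; in practice `items` would hold the columns belonging to one dimension):

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x k matrix of item scores for one dimension."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def satisfaction_rate(items):
    """Mean total score divided by the maximum possible score (5 per item)."""
    return items.sum(axis=1).mean() / (5 * items.shape[1]) * 100

rng = np.random.default_rng(1)
items = rng.integers(3, 6, size=(6640, 9)).astype(float)  # e.g. a 9-item dimension
print(cronbach_alpha(items), satisfaction_rate(items))
```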
Patients' responses were entered into EpiData 3.1, and the data analysis was subsequently performed using SPSS 17.0. A P value of less than 0.05 was considered to be statistically significant.
Results
The pilot study included 695 patients. The data from the pilot study were gathered for assessing the quality of the questionnaire. The 28 items that were related to the quality of medical care were included in the factor analysis. The results indicated that four factors with eigenvalues greater than one explained 73.7% of the variance. These were doctors' care quality (9 items), nurses' care quality (12 items), quality of the environment and facilities (5 items), and overall medical quality (2 items). The final items are listed in Table 1.
The large-scale cross-sectional study on a sample of 6640 in-patients was conducted from July 2012 to July 2013. The demographic characteristics of 4618 patients were recorded. Among the 4618 patients, 2256 were men (48.9%), and the mean age of the sample was 45.1 years. Further, 52.5% of them lived in rural areas, 33.7% of them were farmers, and 30.3% of them had only studied up to middle school. Nearly half of them (48%) were covered under the rural cooperative medical insurance (Table 2).
We further confirmed the reliability of the final version of the satisfaction questionnaire. In each dimension, the Cronbach's alpha was above 0.8 for all the items, and the inter-subscale correlation was between 0.722 and 0.841 (Table 3). According to the results (Table 4), patients were the most satisfied with bed making (satisfaction rate of 92%) and the least satisfied with the restfulness of the hospital and the cleanliness of the toilets and showers (satisfaction rate of 82%). Comparing the four dimensions, the highest satisfaction rate was observed for doctors' care quality (87.6%), while the lowest was for the quality of the environment and facilities (83.2%). The overall satisfaction, evaluated by all 28 items, was 89.6%. The results of the multiple logistic regression suggested that age, occupation, educational background, and medical insurance were determinants of the patients' overall satisfaction rate (Table 5). Specifically, the satisfaction rate was higher in patients who were older (OR = 1.12), who were covered under the urban workers' medical insurance (as compared to those covered under the rural cooperative medical insurance) (OR = 1.21), and who had higher education (OR = 1.2), and lower in farmers (as compared with workers) (OR = 0.76).
Discussion
Our research group developed a Chinese self-administered questionnaire on in-patients' satisfaction. Patients rated their satisfaction level according to their experience regarding several important aspects of medical treatment. The results from the pilot survey and large-scale survey indicated that the in-patients' satisfaction questionnaire had optimal quality. The questionnaire was subject to a series of testing processes to assess its reliability and validity. Four main dimensions of the questionnaire were similar to the tools used in previous studies. [16,[25][26][27][28][29] Moreover, the interpretation of the dimensions was verified in another study.
[30] The Cronbach's alpha coefficients of the four dimensions were all above the recommended minimum of 0.7, [31] and the inter-subscale correlations were lower than the internal consistency for each scale. These outcomes indicated that the reliability of the tool, as indicated by the internal consistency, was excellent, and these findings were consistent with those of other studies. [19,27,28,[31][32][33]] The questionnaire also had good acceptability. The core items related to medical care quality had a high response rate (higher than 99.7%), and the patients could complete the questionnaire within ten to fifty minutes, which showed good acceptability and feasibility.
According to the large-scale survey, the overall satisfaction rate was 89.6%. Our study showed that the satisfaction related to doctors' care quality was the highest, while that related to the quality of the environment and facilities was the lowest. These results concurred with those presented in other studies. [32,34,35] Results of a logistic regression identified that age, occupation, education background, and type of medical insurance of the patients could be determinants of their overall satisfaction. Several studies [26,32,34] concluded that younger patients had a lower satisfaction rate. Hali, J. et al [26] conducted a meta-analysis on the determinants of patients' satisfaction, which revealed that education and social status could predict patients' satisfaction. Kats M. et al [36] suggested that patients' satisfaction was associated with their medical insurance type among HIV-infected men. These were consistent with our results too. Compared with most standard instruments developed in North America and the UK, where the surveys were conducted after hospitalization, we collected our data while the patients were in the hospital. Recently, the Hong Kong (HK) government conducted a thematic household survey using the Picker patient experience questionnaire-15 (PPE-15) to measure in-patients' satisfaction. [37] The survey revealed an overall satisfaction rate of 77.9%, [38] which was lower than our findings. The HK Hospital Authority (HA), an independent public sector organization, developed a patient experience tool named the HK Inpatient Experience Questionnaire (HKIEQ) in 2009. [39] However, the medical care system of HK is very different from that of mainland China. In addition, our survey employed a larger sample than previous studies. We also developed the Ch-OPSQ in a previous study. [18] Both the Ch-OPSQ and the in-patient satisfaction questionnaire went through a strict development process and were shown to have good reliability and validity. The main differences between the Ch-OPSQ and the in-patient satisfaction questionnaire were the structure of the questionnaire and the satisfaction outcomes. There were 6 dimensions (waiting time, service attitude, medical care quality, special service quality, environment quality, global assessment) in the Ch-OPSQ, owing to the complicated organization of outpatient services, compared with 4 dimensions in the in-patient satisfaction questionnaire. In addition, regarding satisfaction outcomes, waiting time appears to be a major issue for outpatients, whereas in the present study the quality of the environment and facilities received the lowest satisfaction among in-patients. The first limitation of our study is that we did not evaluate the test-retest reliability of the tool. However, this was not feasible, as most of the participants lived in rural areas and the communication methods were limited. Future research should focus on testing the test-retest reliability of our instrument using accepted techniques. Another possible limitation is that we gathered background information only from 4618 patients. The data of 1822 patients were missing because these were pediatric patients, whose satisfaction level was rated by their parents. It was almost impossible to collect complete information about the demographic characteristics of these patients. For example, they did not have a job and some of them were too young to be educated. Therefore, we chose not to record their basic information.
In conclusion, the in-patients' satisfaction questionnaire developed in this study had optimal validity, reliability, and acceptability. Additionally, the in-patient satisfaction in the central south area of China was relatively high in terms of the medical processes and relatively low in terms of the hospital environment and comfort. Finally, age, occupation, educational background, and type of medical insurance of the patients were the determinants of patients' overall satisfaction rate.
Supporting Information S1 File. Inpatients satisfaction questionnaire for pilot study (41 items) ( Table A). Final version of the inpatients satisfaction questionnaire (28 items) ( Table B). (DOCX)
What choanoflagellates can teach us about symbiosis
Environmental bacteria influence many facets of choanoflagellate biology, yet surprisingly few examples of symbioses exist. We need to find out why, as choanoflagellates can help us to understand how symbiosis may have shaped the early evolution of animals.
Fig 1. Choanoflagellate-bacteria interactions. (A) Choanoflagellates survive by eating bacteria. Schematic of a feeding choanoflagellate cell highlighting their "collar complex": the apical flagellum surrounded by an actin-filled microvilli collar (A). Flagellar beating draws bacterial prey (blue) into the collar, where they become trapped and phagocytosed at the collar membrane. Bacteria are later digested in food vacuoles. n = nucleus. DIC image of Salpingoeca rosetta consuming environmental bacteria (A′). Scale bar = 5 μm. (B) Bacterial cues regulate S. rosetta developmental transitions. Specific cues produced by environmental bacteria regulate rosette development and sexual reproduction. Lipid cofactors produced by Algoriphagus machipongonensis act synergistically to regulate multicellular rosette development in unicellular swimmer cells. A chondroitin lyase produced by Allivibrio fischeri induces swimmer cells to mate and undergo sexual reproduction. Other S. rosetta developmental transitions are influenced by nutrient availability (*), and we hypothesize that these might also be regulated by bacteria. (C) Common bacterially produced metabolites induce collective cell contractions in Choanoeca flexa. Cell contractions that result in colony inversion can be triggered either by exogenous nitric oxide (NO) or by light-to-dark transitions in the presence of retinal produced by environmental bacteria. (D) Barroeca monosierra forms stable, physical associations with bacteria (D). Maximum intensity projection of an immunostained B. monosierra colony shows that the hollow center is filled with bacterial DNA, revealed by Hoechst staining (D′). Apical flagella are highlighted in white, microvilli are highlighted in red, and nuclei are highlighted in cyan. Thin section through a B. monosierra colony, imaged by transmission electron microscopy, reveals the presence of bacteria in the central cavity (D″). Figure adapted from [11]. Scale bars = 5 μm. (E) Choanoeca sp. produce tubes of extracellular matrix that are stably colonized by bacteria (E). DIC imaging of a single Choanoeca sp. colony at 2 different Z positions shows bacteria colonizing the interior (E′) and surface (E″) of tubed projections. Scale bars = 20 μm. https://doi.org/10.1371/journal.pbio.3002561.g001

Much of what we know about how bacteria shape choanoflagellate biology comes from the model choanoflagellate Salpingoeca rosetta. As is true for many choanoflagellate species, S. rosetta can develop from a single cell into a multicellular colony through serial rounds of oriented cell divisions [3]. Although S. rosetta was initially isolated from the environment as a multicellular "rosette" colony (Fig 1B), rosettes quickly transitioned to single-celled states when cultured in the lab. A series of serendipitous experiments revealed that the co-isolated environmental bacterium, Algoriphagus machipongonensis, induces single cells to develop into rosette colonies [4]. Later experiments led to the unexpected finding that bacteria regulate another, very different developmental decision in S. rosetta: some species of Allivibrio and Vibrio bacteria, including Allivibrio fischeri, induce the switch to sexual reproduction (Fig 1B) [5].
Bacteria are reliable proxies for environmental conditions, and S. rosetta is among a contingent of diverse eukaryotes that make important decisions in response to environmental bacteria; for example, external bacteria also stimulate algal differentiation and zoospore settlement, and bacteria-induced metamorphosis is widespread among animals [6]. Despite sharing the feature of transience, relationships between eukaryotes and their environmental bacteria can have vastly different evolutionary histories and specificities. The choanoflagellate Choanoeca flexa can use nitric oxide and retinal, both metabolites produced by diverse bacteria, to initiate collective cell contractions [7,8] (Fig 1C). Cell contractions in C. flexa toggle colonies between morphologies that favor either feeding or swimming; because cells have evolved responses to such common bacterial metabolites, C. flexa is able to use bacteria to navigate diverse environments.
By contrast, the interaction between S. rosetta and rosette-inducing Algoriphagus bacteria is remarkably specific. Algoriphagus produces distinct classes of lipid co-factors that act synergistically to regulate rosette development [4,9]. The molecular stringency required for rosette development (which is warranted, seeing as the transition to multicellularity is a serious commitment) raises the possibility that populations of S. rosetta and Algoriphagus have lived in close association over time. Could this be a form of symbiosis? Much of what we know about symbiosis comes from obligately multicellular animal hosts. Yet, the life histories of animals and choanoflagellates are different: choanoflagellates have short generation times and unicellular life-stages. Perhaps some choanoflagellate, and possibly other microbial, symbioses have evolved to manifest at the population level (across environmental space) rather than the individual level. This way, choanoflagellates can rely on specific relationships with bacteria while maintaining the ability to nimbly respond to environmental fluctuations.
Exploring the idea that transient associations with bacteria can verge on symbiotic requires us to study choanoflagellates both in the lab and in the wild. While laboratory studies provide critical information about the molecular underpinnings of choanoflagellate-bacteria interactions (which can help us hypothesize about coevolution), they cannot offer much ecological context. Because choanoflagellates are small and live in fluctuating environments, consistently capturing choanoflagellate-bacteria interactions using classic microscopy-based isolation techniques has proven challenging. Thus, we need to incorporate new approaches to understand how prevalent or stable specific associations are in nature. For instance, metagenomic sequencing and cell sorting methods that enrich for choanoflagellates will be key for sampling numerous microenvironments to track associations over space and time. Similar methods may also prove valuable for identifying new symbioses between choanoflagellates and bacteria. Single-cell sorting and sequencing choanoflagellates in the field has already revealed a co-association between the uncultivated choanoflagellate Bicosta minor and a previously uncharacterized bacterium [10]. Interestingly, the co-isolated bacterium has a reduced genome that is suggestive of a host-dependent lifestyle. Yet, because this association is based solely on genomic data and has not been visualized, the details of this interaction remain ambiguous. Nonetheless, similar culture-independent approaches have enormous potential to help us uncover ecologically relevant choanoflagellate-bacteria interactions and symbioses.
If forming associations with environmental bacteria enables choanoflagellates to navigate diverse environmental contexts, what might prompt choanoflagellates to establish stable, physical symbioses with bacteria? And are these symbioses restricted to specific life-history stages? The species Barroeca monosierra provides a visually striking example of choanoflagellate symbiosis, forming large and spherical colonies that stably associate with bacteria [11] (Fig 1D). As B. monosierra colonies grow, environmental bacteria can colonize their hollow centers to establish a microbial community comprised of several coexisting species. The drivers and functions of this symbiosis are still unknown, but it does not hurt to speculate. Choanoflagellates both acquire essential nutrients from eating bacteria and harbor many amino acid biosynthesis pathways that were lost in animals, so it is unlikely that B. monosierra depends on its microbiome solely for nutritional supplementation. Yet, because the natural habitat of B. monosierra is a hypersaline, alkaline lake, it seems plausible that these interactions are based on metabolism or detoxification. In turn, these associations may be driven by bacteria, and the extracellular matrix of large B. monosierra colonies may serve as a nutrient-rich niche for environmental bacteria to exploit. Our recent observation that bacteria also stably colonize the extracellular matrix of a newly identified Choanoeca sp. suggests that similar choanoflagellate-bacteria symbioses may be more prevalent than we realize. Cells within a Choanoeca sp. colony are connected by branched tubes of extracellular matrix, resulting in a tree-like appearance (Fig 1E; AW, unpublished results). Symbiotic bacteria colonize discrete patches on the surface and within the center of these hollow tubes, although it remains unclear if the bacteria belong to one or more species. The natural habitat of Choanoeca sp. (isolated from a tropical tide pool) differs from that of B. monosierra, yet it is possible that forming ectosymbioses enables both species to withstand environmental stresses. As we begin to explore how and why choanoflagellates establish physical symbioses, it will be important to study these interactions in the lab under a range of conditions, and ideally, in nature.
Although interactions with environmental bacteria influence most facets of choanoflagellate biology, we have currently gained but a glimpse into choanoflagellate-bacteria symbioses. Nearly every lineage of eukaryotes, from multicellular animals to unicellular protists, forms different symbiotic associations with bacteria. So why have choanoflagellate symbioses remained so elusive? This can be explained in part by cultivation bias, and in part by human bias (we were not looking). In addition, we have likely failed to recognize known interactions as symbiotic because they have characteristics that are tricky to categorize. While some associations with bacteria clearly resemble animal symbioses, others blur the line between transient and symbiotic depending on context. Nonetheless, pursuing studies of choanoflagellate symbiosis is well worth the challenge: these unique organisms have the potential to enrich our understanding of microbial symbioses while providing exceptional insights into the fundamental mechanisms and evolutionary history of animal-bacteria associations.
The Relationship among COVID-19 Information Seeking, News Media Use, and Emotional Distress at the Onset of the Pandemic
Although several theories posit that information seeking is related to better psychological health, this logic may not apply to a pandemic like COVID-19. Given uncertainty inherent to the novel virus, we expect that information seeking about COVID-19 will be positively associated with emotional distress. Additionally, we consider the type of news media from which individuals receive information—television, newspapers, and social media—when examining relationships with emotional distress. Using a U.S. national survey, we examine: (1) the link between information seeking about COVID-19 and emotional distress, (2) the relationship between reliance on television, newspapers, and social media as sources for news and emotional distress, and (3) the interaction between information seeking and use of these news media sources on emotional distress. Our findings show that seeking information about COVID-19 was significantly related to emotional distress. Moreover, even after accounting for COVID-19 information seeking, consuming news via television and social media was tied to increased distress, whereas consuming newspapers was not significantly related to greater distress. Emotional distress was most pronounced among individuals high in information seeking and television news use, whereas the association between information seeking and emotional distress was not moderated by newspapers or social media news use.
Introduction
The COVID-19 pandemic has not only disrupted basic everyday activities, but also fostered emotional distress [1][2][3]. After isolated cases and clusters started appearing in the early months of 2020, by March the U.S. saw rapidly increasing case counts indicating community transmission [4]. With COVID-19 declared a pandemic by the World Health Organization on 11 March and a national emergency by the Trump administration on 13 March, states implemented shelter-in-place or stay at home orders [5], potentially contributing to unease and mental distress. Research documenting the extent of emotional distress during the COVID-19 pandemic is rapidly emerging (e.g., [1,2,6,7]). This research builds on work showing that there is a significant relationship between the occurrence of infectious disease outbreaks and negative psychological consequences. For example, people are likely to develop greater incidence of depression [8], psychological distress [8,9], and anxiety [10] during pandemics.
Since the COVID-19 outbreak, individuals have sought to understand basic information related to the virus such as its impact, effective treatment, and vaccine development [11]. The lack of predictability, the rising number of confirmed cases and deaths, and changing health guidelines led wide swaths of the public to seek information about the pandemic [12]. In fact, according to a report from the Pew Research Center, 70% of U.S. citizens searched online for information about the coronavirus in the early months of the pandemic [13].
Several theories and empirical findings suggest a positive relationship between information seeking and emotional distress, especially during crises. In fact, information seeking about negative events such as natural disasters [14,15], terrorism [16,17], and pandemics [18] is linked to emotional distress. Moreover, given people's reliance on heuristics under uncertainty [19,20], an unprecedented amount of information may cause emotional distress. So, when confronted with intense media coverage about COVID-19, people may perceive higher levels of threat, which, in turn, may trigger higher stress. Finally, people might be incapable of avoiding information seeking because of the need to know basic information, such as the symptoms of infection.
Information seeking, as a proxy for attention paid to COVID-19 news, may interact with the news source through which information is consumed. Specific combinations of attention and exposure may also be related to emotional distress, with certain types of news sources more likely to spur strong emotions (e.g., [21,22]). Particularly for television, attention must be considered alongside exposure [23,24], especially considering the unique capabilities of video for conveying emotions [17]. This is because news on television features vivid images, motion and sound, whereas newspapers emphasize text and limited use of visuals. Taking into account the medium through which people find news during the COVID-19 pandemic may explain distress mechanisms. Furthermore, the types of media through which individuals find news may moderate the relationship between information seeking and emotional distress. For example, if an individual tends to rely on television as a source for news and is seeking information about COVID-19, the modality of this medium may amplify the association between information seeking and emotional distress beyond the direct relationship of each factor.
Using a U.S. national survey, we examine: (1) the link between information seeking concerning the COVID-19 pandemic and individuals' emotional distress, (2) the relationship between reliance on television, newspapers, and social media as sources for news exposure on emotional distress during the pandemic after accounting for COVID-19 information seeking, and (3) the interaction between information seeking about COVID-19 and use of these news media sources on emotional distress. In doing so, our study attempts to understand the psychological toll of information seeking and news media use during an ongoing pandemic. Understanding these relationships is critical because seeking information via news media has been especially important during the COVID-19 pandemic. However, at the same time, the contentiousness of partisan news and the presentational styles of some media forms about the pandemic could lead to emotional distress. In this study, we attempt to unpack these relationships.
Information Seeking and Emotional Distress
Information seeking is the process by which individuals "purposefully make an effort to change their state of knowledge" ( [25], p. 549; [26]). Both individuals' motivation to seek information and media coverage on the specific topic tend to increase during crises [11,27]. Due to the novel nature of COVID-19 especially, information about COVID-19 has been placed at the forefront of much of the media [28]. The pandemic dominated news content during the first half of 2020 [27,28]. Given its prevalence and potential impact, theories and studies suggest a positive relationship between information seeking and emotional distress during a major pandemic like the one caused by COVID-19.
First, information seeking about certain events using media might be related to negative emotions [14][15][16]18]. This is particularly evident in studies on information seeking about traumatic events, such as disasters [14,15], terrorism [16,17], and pandemics [18]. When a traumatic event occurs, individuals often attempt to reduce uncertainty about the event by engaging in information seeking. However, efforts to learn more about the traumatic event may be linked with negative emotional reactions to said event [16]. In the case of September 11, people sought to alleviate uncertainty by seeking information about the event, and this behavior was related to a variety of negative emotions [16], due partially to underlying uncertainty about the event [16] and the ways in which media covered it. This same logic can be applied to the global COVID-19 pandemic, as the uncertainty and unpredictability of COVID-19 poses risks to individuals' mental health (e.g., [1,2,6,7]).
Second, the reliance on heuristics under uncertainty [19,20] also helps explain why individuals are stressed with COVID-19 information seeking. Uncertain people tend to refer to heuristics, or mental shortcuts. According to the availability heuristic [19,20], there are situations in which people assess the likelihood of an event by how readily examples come to mind [20]. People may perceive higher levels of threat when the events are salient and memorable, with vivid evidence [20]. Media coverage is one way to make the event available in people's minds, ensuring that people are easily able to retrieve information concerning that event. In the case of COVID-19, there has been a remarkable amount of media coverage, making it available to most people who seek information about the pandemic. This higher availability of information about the global pandemic may cause higher levels of stress.
Finally, under certain circumstances, individuals might choose to avoid information seeking when they perceive that more knowledge might lead to distress [29][30][31]. However, avoiding information seeking might not always be an option. In the case of the COVID-19 pandemic, an already unprecedented amount of uncertainty has been increased by the spread of conspiracy theories and misinformation [12]. Even if people know consuming information leads to stress, they might not have a choice to avoid it, due to the need to find basic answers like safe ways to get groceries or symptoms of COVID-19 infection. The evolving nature of the pandemic meant critical information frequently changed, requiring active information seeking to keep up with changing facts and guidelines, despite the potential distress.
Since the onset of the COVID-19 pandemic, there has been a growing body of literature dealing with information seeking and emotional distress (e.g., [32][33][34]). Previous findings, however, are somewhat inconsistent. While some studies showed that information seeking is significantly related to anxiety [33] or information overload [34], other studies indicated that high levels of information seeking are associated with higher levels of well-being and risk perception [32]. To address the inconsistency in the literature, we examine the relationship between COVID-19 information seeking and emotional distress using a large U.S. national sample. Despite the mixed findings, based on the aforementioned discussion, we propose our first hypothesis as follows: Hypothesis 1. A higher level of COVID-19 information seeking is positively related to emotional distress during the COVID-19 pandemic.
Information Seeking, General News Media Use, and Emotional Distress
The association between news media use and individuals' emotional distress concerning COVID-19 may depend on the modality of the news medium from which individuals get information. This idea is associated with Marshall McLuhan's [35] early work, which emphasizes the differences in media modalities. Studies in the McLuhan tradition focus on "the differences in the physical modalities of video versus print and offer evidence to show that video is the most effective medium for communicating information" ( [36], p. 79). Indeed, audiovisual media such as television have been found to have a greater impact on information recall and counterarguing compared to print media [37,38]. Audiovisual media attract attention and stimulate involvement [39]. By contrast, the presentation of information in print modalities seems to reduce the ability to foster emotional arousal [17]. In line with this research, we consider how consuming news via television, newspapers, and social media may be related to emotional distress beyond information seeking concerning COVID-19. Furthermore, the link between COVID-19 information seeking and emotional distress may not be the same for all news consumers. Instead, the type of media through which individuals find general news may interact with information seeking about COVID-19 to explain emotional distress.
Television News
Because television news, as an audiovisual medium, may require fewer cognitive skills than print media, it is more likely to capture the attention of people who possess fewer cognitive skills [36]. Its combination of audio and visual tracks, repeated usage of strong imagery, and news anchors' visible displays of emotion may elicit emotional responses in news viewers [40,41]. Indeed, television news is more emotionally arousing than newspaper stories [17]. Previous studies show the strong association between television news consumption and viewers' negative emotional outcomes (e.g., [22,[42][43][44][45][46]). However, this association may be due to the kind of thinking television viewers have to do to make sense of a cultural experience [47]. An experimental study showed that exposure to a random newscast triggered increased negative emotions, and manifested in heightened anxiety, total mood disturbance, and decreased positive affect [45]. The emotional distress may be more intense after exposure to televised reports of exceptionally negative events [46]. In addition, a systematic review of literature on disaster news viewing and psychological outcomes linked consumption of televised news with a range of negative emotions [22]. Specifically, television viewing in the context of terrorism was associated with posttraumatic stress (PTS; [43]), stress reactions [44], and negative emotional responses [17]. Given that the technical features of television are particularly appropriate for evoking emotional responses, we propose the following hypothesis: Hypothesis 2. Accounting for information seeking about COVID-19, consuming news via television will be related to increased emotional distress.
Hypothesis 3. The association between COVID-19 information seeking and emotional distress will be moderated by television news use, with the association between information seeking and emotional distress stronger for individuals with higher television news use.
Newspapers
In contrast to television news, newspapers and other print media's lack of visual, motion, and audio cues reduces a reader's sense of presence. Moreover, newspapers and newsmagazines provide in-depth, thematic, and analytic coverage on issues and matters of public interest, with less emotion-laden language compared to television news, which tends to combine an emphasis on emotional content with episodic coverage [17]. These characteristics position newspapers as a less emotionally arousing medium.
Research shows that newspapers evoke weaker emotions in readers when compared with the effect of television news on viewers (e.g., [36]). For example, while people who watched television news experienced stronger emotions related to terrorist attacks, newspaper usage was not a significant factor in explaining individuals' emotional responses [17]. Similarly, according to a systematic review of literature on various forms of disaster media and psychological outcomes [22], none of the reviewed studies showed significant associations between newspaper use and psychological outcomes such as depression, stress, and anxiety. Given that newspaper stories feature fewer emotion-laden visuals, we propose the following hypothesis: Hypothesis 4. Accounting for information seeking about COVID-19, consuming news via newspapers will be related to decreased emotional distress.
Hypothesis 5. The association between COVID-19 information seeking and emotional distress will be moderated by newspaper use, with the association between information seeking and emotional distress weaker for individuals with higher newspaper use.
Social Media News
Finally, with the rise of mobile technology, accessing news and information on social media has become commonplace and frequent [48]. In 2019, 53% of U.S. adults received news from social media, up from 47% in 2018 [48]. While social media share traditional media's ability to provide news to users [49], social media have unique characteristics that are markedly different from traditional forms of media. First, while traditional media are defined as either textual media (e.g., newspapers) or audiovisual media (e.g., television news), social media provide a combination of modality (i.e., both textual and audiovisual mode). Social media users can share dramatic multimedia clips about apparent health risks using video sharing sites such as YouTube [21], many of which are unverified. Second, social media are highly personalized platforms, connecting users with similar interests, often with personal or professional relationships [50]. Social media can reflect a social endorsement from 'people like me' via established social contacts (e.g., Facebook) or through like-minded individuals (e.g., Twitter). This aspect of social media allows for the rapid spread of misinformation [51] because users rely on social endorsement [52] rather than verified information. According to a report from the Pew Research Center, those who get most of their news from social media reported seeing at least some misinformation about the COVID-19 outbreak [53]. These same news consumers said media have exaggerated the threat posed by COVID-19.
All of these features of social media may have caused the discourse on social media concerning COVID-19 to be emotionally arousing and stressful. Prior research shows higher levels of emotional distress among social media news users than other media users. One study showed that individuals who consumed news solely from news feeds, or news feeds plus online news websites, had higher rates of neuroticism (feeling anxious or depressed/worried) compared to participants consuming news exclusively offline [54]. Another study compared post-traumatic stress one month after Hurricane Sandy among those who learned about the disaster through traditional media (television, newspapers, and radio) versus those who learned about it through social media (Facebook, YouTube, and Twitter; [21]). The researchers found that posttraumatic stress was higher in those using social media relative to those using only traditional media. This could be because social media exert direct and personal impact, owing to the type of content being shared, compared to traditional media that provide more 'objective' information.
The modality of social media (i.e., combination of audiovisual and textual information), its endorsement functions (i.e., likes, shares), and the lack of gatekeeping of information sources circulated on social media may strengthen emotional responses in those who rely on this as a source for news. Accordingly, we predict the following hypotheses: Hypothesis 6. Accounting for information seeking about COVID-19, consuming news via social media will be related to increased emotional distress.
Hypothesis 7. The association between COVID-19 information seeking and emotional distress will be moderated by social media news use, with the association between information seeking and emotional distress stronger for individuals with higher social media news use.
Data
Responding to widespread "community transmission" within the U.S. (the virus being transmitted by individuals with no travel history) in mid-March 2020, a survey was rapidly assembled and collected by a cross-disciplinary team of researchers at a large Midwestern university. Data were collected from 26 March to 1 April 2020 using a Qualtrics panel, a representative sample of U.S. residents based on a pre-recruited pool of panelists (n = 2251). This sample also contained a probability sub-sample of residents of the Midwestern state in which the sponsoring university is located. Participants had a mean age of 46.6 (SD = 17.0), 58.2% were female, and 68.9% were white. In terms of education, 22.4% had some high school education or a high school diploma, 21.4% had some college education but no degree, 35.8% had an associate's or bachelor's degree, and 20.4% had an advanced degree.
Measures
Emotional distress. Participants indicated the extent to which they experienced the following feelings since they became aware of the COVID-19 outbreak: (1) "Overwhelmed," (2) "Anxious," and (3) "Afraid about what might happen" (the three items referenced again in the Limitations section).

General news media usage. General news media usage, separated by media type, was assessed by the question "How often do you get news from the following sources?" rated on a 5-point scale from 1 = never to 5 = every day. Television news media usage was measured with the item, "National network news, such as ABC, NBC, CBS" (M = 3.62, SD = 1.38). Newspaper news media usage was measured with the item, "newspaper and news magazines" (M = 3.00, SD = 1.45). Finally, social media news media usage was assessed with the item, "social media platforms such as Facebook, Twitter, and YouTube" (M = 3.11, SD = 1.53).
Control variables. Demographic characteristics were also incorporated into the analysis, including age, gender, ethnicity, and education level. We also included additional variables that may be related to emotional distress during the pandemic, such as (a) the likelihood of getting infected with COVID-19 as measured on a 5-point scale from 1 = very unlikely to 5 = very likely (M = 2.60, SD = 1.11), (b) whether participants knew someone likely to suffer serious negative consequences if infected with COVID-19 (yes = 1275, 58.1%; no = 921, 41.9%), and (c) whether they knew someone who has tested positive for COVID-19 (yes = 326, 14.8%; no = 1870, 85.2%). In addition, a measure of political ideology, measured on a 5-point scale from 1 = liberal to 5 = conservative (M = 3.06, SD = 1.08), was included in the analysis. Table 1 presents descriptive statistics and Pearson correlation coefficients among the variables.
Analytic Strategy
Hierarchical linear regression analysis was performed to examine the proposed hypotheses. The analysis was conducted in four steps. Emotional distress was entered as a continuous dependent variable; control variables including demographics, likelihood of getting infected, whether participants knew someone likely to suffer serious negative consequences or who has tested positive for the COVID-19 coronavirus, and political ideology were entered in Step 1. Information seeking about COVID-19 was entered in Step 2. The three news media use variables for television, newspapers, and social media were entered in Step 3 (to address possible multicollinearity between our multiple news media use terms, we also tested versions of the same model where we added each news media use variable and each interaction term separately. We confirmed that the results held). Finally, the interactions between information seeking about COVID-19 and the news media use measures were entered in Step 4. All predictors were mean-centered before they were entered in the moderated regression model. The analysis was conducted using SPSS version 26 (SPSS Inc., Armonk, NY, USA).
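The paper reports running this analysis in SPSS 26. For readers who want to reproduce the stepwise structure, a minimal sketch in Python/statsmodels follows; all column names (distress, info_seek, tv, news, sm, and the controls) are hypothetical placeholders, not the authors' variable names.

```python
# Minimal sketch of the four-step hierarchical regression described above.
# The original analysis was run in SPSS 26; this Python/statsmodels version
# is illustrative only, and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid_survey.csv")  # hypothetical data file

# Mean-center predictors before forming interaction terms, as in the paper.
for col in ["info_seek", "tv", "news", "sm"]:
    df[col + "_c"] = df[col] - df[col].mean()

controls = ("age + gender + ethnicity + education + infect_likelihood"
            " + know_highrisk + know_positive + ideology")

steps = [
    controls,                                              # Step 1: controls
    controls + " + info_seek_c",                           # Step 2: information seeking
    controls + " + info_seek_c + tv_c + news_c + sm_c",    # Step 3: news media use
    controls + " + info_seek_c * (tv_c + news_c + sm_c)",  # Step 4: interactions
]

prev_r2 = 0.0
for i, rhs in enumerate(steps, start=1):
    fit = smf.ols(f"distress ~ {rhs}", data=df).fit()
    print(f"Step {i}: R2 = {fit.rsquared:.3f}, dR2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared
```

In patsy formula syntax, `a * (b + c)` expands to the main effects plus the `a:b` and `a:c` interaction terms, which mirrors the Step 4 specification described above.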
Results
Among the control variables, age and gender were significant predictors of emotional distress. Younger (β = −0.145, p < 0.001) females (β = 0.130, p < 0.001) were more likely to be emotionally distressed. Higher levels of distress were reported when people perceived higher likelihood of getting infected by COVID-19 (β = 0.178, p < 0.001) and if they knew someone who was high risk (β = 0.054, p < 0.01). Moreover, people with conservative ideology were less likely to be distressed (β = −0.068, p < 0.01).
Regarding H1, results revealed that, while accounting for a variety of control variables, the more COVID-19 information individuals sought, the more likely they were to be emotionally distressed (β = 0.255, p < 0.001; see Table 2). Thus, H1 was supported.

Note. All of the coefficients are standardized. Predictors (information seeking and news media usage) are mean-centered. ∆R², the R-square change, shows the improvement in R-square when the next group of predictors is added. * p < 0.05, ** p < 0.01, *** p < 0.001.
With respect to H3, H5, and H7, findings indicated that emotional distress was significantly higher among those high in COVID-19 information seeking and television news use (β = 0.046, p = 0.033). There was no significant interaction between information seeking about COVID-19 and either newspaper use or social media news use (β = −0.002, p = 0.917 and β = 0.017, p = 0.393, respectively). This result provides support for H3 but not H5 or H7.
To understand the nature of this interaction, we plotted the interactive relationship between COVID-19 information seeking and television news use. As presented in Figure 1, the emotional distress experienced by those seeking COVID-19 information was further amplified among television news consumers, supporting H3.
Figure 1. Interaction between information seeking and television news usage on emotional distress.
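A figure like this can be drawn as a simple-slopes plot. The sketch below refits a reduced version of the Step 4 model (hypothetical column names, numeric-only controls for brevity) and plots predicted distress at plus/minus one SD of centered television news use; it is illustrative, not the authors' code.

```python
# Simple-slopes sketch for the information seeking x TV news interaction.
# Hypothetical column names; a reduced, numeric-only control set is used
# so the prediction grid stays simple. Illustrative, not the authors' code.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

df = pd.read_csv("covid_survey.csv")  # hypothetical data file
for col in ["info_seek", "tv", "news", "sm"]:
    df[col + "_c"] = df[col] - df[col].mean()

fit = smf.ols(
    "distress ~ age + infect_likelihood + ideology"
    " + info_seek_c * (tv_c + news_c + sm_c)",
    data=df,
).fit()

# Hold other predictors at their means; vary information seeking across its
# observed range at one SD below/above the mean of centered TV news use.
base = df.mean(numeric_only=True).to_frame().T
x = np.linspace(df["info_seek_c"].min(), df["info_seek_c"].max(), 50)
for label, tv in [("-1 SD TV news", -df["tv_c"].std()),
                  ("+1 SD TV news", df["tv_c"].std())]:
    grid = pd.concat([base] * len(x), ignore_index=True)
    grid["info_seek_c"] = x
    grid["tv_c"] = tv
    plt.plot(x, fit.predict(grid), label=label)

plt.xlabel("COVID-19 information seeking (centered)")
plt.ylabel("Predicted emotional distress")
plt.legend()
plt.show()
```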
Discussion
The rapid emergence of COVID-19 has caused considerable psychological stress in the global population [2,6,7]. People seek information about the pandemic and follow the news to keep updated. We set out to understand the relationships among information seeking concerning COVID-19, general news media use, and emotional distress during the early stages of the pandemic, with a focus on media modality.
Our primary findings reveal that the more individuals sought COVID-19 information, the more likely they were to be emotionally distressed. Moreover, after accounting for COVID-19 information seeking, consuming news via television and social media was related to increased distress, while consuming newspapers was unrelated to distress. Our moderation analysis revealed that active COVID-19 information seekers who relied on television news were more likely to be emotionally distressed, but the association between COVID-19 information seeking and emotional distress was not amplified by newspaper or social media news use.
These findings contribute to the literature on several fronts. First and foremost, we advanced research on information seeking and emotional response by focusing on information seeking about a novel virus, which has resulted in an unprecedented global burden. The positive association between information seeking and emotional distress during the COVID-19 pandemic is reflective of this unique situation. It is notable that the positive association between information seeking and emotional distress remained significant when the three news sources were added to the model. There are several possible reasons for these findings. One is that while information seeking normally reduces uncertainty [55,56], COVID-19 information seeking likely increased uncertainty and anxiety because answers to basic questions, such as when the pandemic would end, how the virus was transmitted, and its specific short-term and long-term impact, remained unavailable. Although "ignorance may be bliss" from an emotional standpoint, the emotional distress concerning COVID-19 may be adaptive, possibly increasing protective health measures. In late March, the COVID-19 information available was quite limited, and centered on hand washing and social distancing recommendations, the lack of personal protective equipment and other medical equipment, and the increasing number of hospitalizations and deaths.
Next, our findings indicated that consuming news via television was related to increased emotional distress. Moreover, our moderation analysis revealed that people who sought COVID-19 information and viewed more television news tended to be even more emotionally distressed. Television's vivid imagery and sound make it an emotionally arousing medium, so television news users may have a higher likelihood of experiencing distress when seeking COVID-19 information. These findings are consistent with previous research showing a strong association between television news and negative emotions during times of crisis, such as September 11 (e.g., [17]) and natural disasters (e.g., [57]). Our results suggest that the effect of television news on negative emotions can be applied to COVID-19.
In addition, our findings indicate that the more people consumed news from social media, the more likely they were to be emotionally distressed. This again could be due to the modality of social media, given it often combines text, audio, and video. The heightened distress among social media news users could also be due to misinformation and exaggeration of risks [53] and unverified contending opinions about an issue, which may heighten uncertainty [58][59][60]. The political nature of COVID-19 [61] means there is an immense amount of disagreement on social media platforms, extending to the very existence of the virus [62]. In addition, the fact that we found no interaction effects between information seeking and social media use on emotional distress could imply that the distress caused by social media may not be driven by information seeking but by other types of social media uses such as social interactions.
Finally, while we expected that consuming news via newspapers would be related to lower distress, given the less emotionally arousing modality and lesser partisan reporting style, our results revealed no significant association between newspapers and distress. This result could reflect that news users' heightened stress during this pandemic was not accentuated by print media. Taken together, these results suggest that people who relied on television-and to a lesser extent social media-for news were more likely to experience emotional distress concerning COVID-19.
In sum, our findings show that people should be careful about their information gathering habits. We would recommend moderating media exposure because repeated media usage, especially via television news [22,[43][44][45][46], may lead to heightened stress. Individuals should also take caution while gathering pandemic news from social media. Of course, the pandemic necessitates that we stay updated with the news for our own safety and the safety of those around us, but thoughtful information gathering and news consumption habits will perhaps facilitate better emotional health.
Limitations and Future Directions
As with all research, our study comes with caveats. Due to the cross-sectional nature of the study, we cannot draw conclusions concerning causal relationships. It is also possible that those with more emotional distress are more likely to seek COVID-19 information. Moreover, although we attribute the positive association between information seeking and emotional distress to unique features of COVID-19 information, such as persistent uncertainty, ubiquitous news coverage, and topic unavoidability, it is possible that information seeking could cause higher emotional distress only immediately; in the long term, the emotional distress could weaken, possibly because people might gain a sense of control. However, prior research shows that in times of crises, information seeking can lead to emotional distress (e.g., [17,21,22,[43][44][45][46]). Our findings support this phenomenon. Despite our justification, future studies should use longitudinal data to confirm causal relationships.
Related to this, it would be important to statistically control for media use level before the pandemic, since some people might have increased their media use at the onset of the pandemic while others' media use remained static. Similarly, it would be ideal to measure the extent to which emotional distress changed due to the emergence of the pandemic. Due to the lack of those pre-COVID measures in our dataset, however, we were not able to add those control variables to our model. Future studies should measure pre-pandemic values for primary behavioral variables to understand the dynamics of behaviors caused by the pandemic.
Additionally, our measurement of emotional distress only tracked those feeling overwhelmed, anxious, and afraid about what might happen. Given that emotional distress can also be linked to feeling depressed, worried, and sad, future studies should encompass more specific emotions with valid measurement. Moreover, we measured COVID-19 information seeking with a single item. Although our item clearly captured the extent of information seeking with regard to COVID-19, future studies should check the validity of the variable using a multi-measure approach that attends to exposure and attention in addition to information seeking. Similarly, while newspapers and news magazines may feature different characteristics, we measured them within a single item, without differentiating the two. Additionally, although television news includes a variety of cable channels, including highly partisan outlets, we measured television news with national news networks. Future studies should define television news more broadly with more robust measurement.
Conclusions
Since the pandemic began, COVID-19 has dominated the news cycle [27,63]. Moreover, along with the pandemic, there has been another attack on the public, termed the "infodemic" [64] as people have been exposed to an abundance of false information. People are maneuvering this media environment to get information and manage the emotional stress they are feeling. Our study takes a preliminary step toward examining the association between information seeking, use of various types of news media, and emotional health during the early days of the COVID-19 pandemic. Examining emotional health is crucial in this situation, when people were primarily inside their homes and away from friends and family for months on end. The toll of this pandemic will not only be measured in terms of the loss of life, the long-term medical consequences, or the economic impact, but in terms of the emotional toll on the public.
Noninvasive Diagnosis of Hepatic Fibrosis in Hemodialysis Patients with Hepatitis C Virus Infection
Hepatitis C virus (HCV) is a major health problem in hemodialysis patients, which leads to significant morbidity and mortality through progressive hepatic fibrosis or cirrhosis. Percutaneous liver biopsy is the gold standard to stage hepatic fibrosis. However, it is an invasive procedure with postbiopsy complications. Because uremia may significantly increase the risk of fatal and nonfatal bleeding events, the use of noninvasive means to assess the severity of hepatic fibrosis is particularly appealing to hemodialysis patients. To date, researchers have evaluated the performance of various biochemical, serological, and radiological indices for hepatic fibrosis in hemodialysis patients with HCV infection. In this review, we will summarize the progress of noninvasive indices for assessing hepatic fibrosis and propose a pragmatic recommendation to diagnose the stage of hepatic fibrosis with a noninvasive index, in hemodialysis patients with HCV infection.
Introduction
Hepatitis C virus (HCV) infection, which may result in fibrosis, cirrhosis, hepatic decompensation, and hepatocellular carcinoma (HCC), is a leading cause of chronic liver disease in patients receiving hemodialysis [1][2][3][4]. In addition to a solid link to liver-related morbidities, HCV infection is associated with a high risk of cardiovascular and infectious-related hospitalization and mortality in hemodialysis patients [5]. In contrast, the health-related outcomes are significantly improved once HCV is eradicated with effective antiviral treatment [6][7][8][9][10]. Because nearly all patients can successfully clear HCV infection with a short course of potent and safe direct-acting antivirals (DAAs), they are particularly relevant to practitioners and hemodialysis patients in moving toward HCV microelimination by 2030 [11][12][13][14][15][16][17][18].
Although the introduction of DAAs has tremendously advanced HCV care, accurate staging of hepatic fibrosis remains essential for therapeutic and prognostic purposes. The presence of cirrhosis may affect the treatment duration, the need for ribavirin (RBV) coadministration, and the sustained virologic response (SVR) rates in certain groups of patients [19][20][21]. Furthermore, information about the severity of hepatic fibrosis can efficiently help clinicians determine the surveillance strategies for portal hypertension and HCC before and after viral cure [22][23][24][25]. Currently, percutaneous liver biopsy is the gold standard to stage hepatic fibrosis. However, it is an invasive procedure with poor patient acceptance. Because platelet dysfunction significantly affects hemostasis in kidney failure, hemodialysis patients with HCV infection have a risk of bleeding complications ranging from 1.3% to 5.9%, which is much higher than the risk of nonfatal bleeding of 0.16% in nonuremic patients [26][27][28][29]. In addition, the biopsy specimens are prone to sampling and interpretation variability [30]. The use of noninvasive means to assess hepatic fibrosis in hemodialysis patients with HCV infection is appealing to healthcare providers, particularly in monitoring disease evolution over time. In this review, we will summarize the clinical performance of noninvasive indices to predict the stage of hepatic fibrosis in hemodialysis patients with HCV infection, and propose a pragmatic recommendation regarding the care for this special population based on current evidence.
Aspartate Transaminase (AST) to Alanine Transaminase (ALT) Ratio (AAR)
An elevated AAR has been known to suggest cirrhosis in nonuremic HCV patients, with a positive predictive value (PPV) and specificity of 100% when the cut-off value is ≥1 [31]. Ustündag et al. assessed the AAR in 49 hemodialysis patients with HCV infection who underwent liver biopsy. They found that the AAR increased with more severe hepatic fibrosis (0.36 ± 0.17, 0.67 ± 0.17, and 0.86 ± 0.07 in patients with no fibrosis, mild fibrosis, and moderate fibrosis) [32]. Although the AAR can be of value in predicting the severity of hepatic fibrosis in hemodialysis patients with HCV infection, no patients in this study had cirrhosis, so the utility of the ≥1 cut-off for diagnosing cirrhosis remains unresolved in this special population.
Schmoyer et al. assessed the diagnostic power of the AAR in predicting significant hepatic fibrosis (≥F2), according to the METAVIR scores in hemodialysis patients with HCV infection, which revealed that the area under the receiver operating characteristic (AUROC) was only 0.59. The PPV and negative predictive value (NPV) were 27.0% and 92.3% at a cut-off value of 0.70 [33]. Because the AAR is designed to predict cirrhosis, which is seldom seen in hemodialysis patients, applying the AAR in predicting ≥F2 is of limited clinical utility (Table 1).
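Because the sections below (and Table 1) repeatedly judge indices by their PPV and NPV, it is worth recalling that both depend on the prevalence of the target fibrosis stage, not only on sensitivity and specificity. A minimal sketch with illustrative numbers (not taken from any cited study) shows why NPVs run high and PPVs run low when few patients have advanced fibrosis:

```python
# Predictive values depend on disease prevalence, not only on sensitivity
# and specificity -- one reason NPVs run high and PPVs run low when few
# hemodialysis patients have advanced fibrosis. Numbers are illustrative.
def ppv_npv(sens: float, spec: float, prev: float) -> tuple[float, float]:
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# A hypothetical index with sens = 0.80 and spec = 0.70, applied where only
# 20% of patients have >=F2 fibrosis, yields PPV ~0.40 but NPV ~0.93.
print(ppv_npv(sens=0.80, spec=0.70, prev=0.20))
```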
AST-to-Platelet Ratio Index (APRI)
Wai et al. correlated various biochemical parameters with the stage of hepatic fibrosis in 270 nonuremic patients with HCV infection. They found that the levels of platelet count, AST, ALT, and alkaline phosphatase (ALP) were highly associated with patients with ≥F2 and cirrhosis (F4). A novel biochemical index, APRI, was developed by amplifying the different effects of the platelet count and AST level on hepatic fibrosis stage [39]. The AUROCs were 0.88 and 0.94 in predicting HCV patients with a fibrosis stage of ≥F2 and F4, respectively. The sensitivity, specificity, PPV, and NPV for ≥F2 were 91%, 47%, 61%, and 86% with a cut-off value of 0.5, and 41%, 95%, 88%, and 64% with a cut-off value of 1.5. In addition, the sensitivity, specificity, PPV, and NPV for F4 were 89%, 75%, 38%, and 98% with a cut-off value of 1.0, and 57%, 93%, 57%, and 98% with a cut-off value of 2.0. Using these cut-off values, the clinicians can correctly diagnose 51% and 81% of patients with a fibrosis stage of ≥F2 and F4 by the APRI without requiring liver biopsy. The APRI has been widely applied in clinical practice because it is simple, readily available, and validated in meta-analyses [40]. Schiavon and Liu et al. independently assessed the diagnostic accuracy of the APRI in 203 and 209 hemodialysis patients with HCV infection, who received percutaneous liver biopsy [34,35]. The AUROCs to predict a fibrosis stage of ≥F2 were 0.80 and 0.83. The PPVs were 37% and 49%, and the NPVs were 93% and 85% at a cut-off value of 0.40. The PPVs were 66% and 82%, and the NPVs were 84% and 71% at a cut-off value of 0.95. The AUROC was 0.84 to predict a fibrosis stage of ≥F3 [34]. When the cut-off values were 0.40 and 0.95, the PPVs were 28% and 46%, and the NPVs were 87% and 83% [36]. If the cut-off values were 0.55 and 1.00, the PPVs were 24% and 29%, and the NPVs were 99% and 94% [34] (Table 1). Schmoyer et al. validated the performance of the APRI in 139 hemodialysis patients with HCV infection and found that the AUROC to predict a fibrosis stage of ≥F2 was 0.68, which was lower than Schiavon's and Liu's reports. The AUROC in patients with elevated ALT levels was higher than that in patients with normal ALT levels (0.74 versus 0.42), if the clinicians defined the normal limits of ALT as 35 U/L for men and 25 U/L for women [33]. This finding was consistent with Liu's observation that the diagnostic accuracy of the APRI at off-therapy follow-up tended to decrease in patients who achieved sustained virologic response (SVR) compared to those who did not, probably due to the rapid normalization of AST and ALT levels in SVR patients [35,41] (Table 1).
Based on the APRI results, the fibrosis stage can be correctly diagnosed without requiring liver biopsy in around 50% of hemodialysis patients with HCV infection [34][35][36]. Because the concentration of pyridoxal-5'-phosphate, a cofactor required for the full catalytic activity of AST and ALT, is significantly reduced in hemodialysis patients, the APRI cut-off values to stage hepatic fibrosis are lower in hemodialysis patients than in nonuremic patients [42].
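For reference, the APRI is computed from routine laboratory values using the standard published formula; the sketch below applies the hemodialysis-specific thresholds quoted above, and the example values are hypothetical.

```python
# Standard APRI formula: (AST / upper limit of normal) * 100 / platelets.
# Example values are hypothetical; thresholds are those reported above for
# hemodialysis patients, not the conventional nonuremic cut-offs.
def apri(ast_u_l: float, ast_uln_u_l: float, platelets_10e9_l: float) -> float:
    return (ast_u_l / ast_uln_u_l) * 100 / platelets_10e9_l

score = apri(ast_u_l=38, ast_uln_u_l=40, platelets_10e9_l=180)  # ~0.53
# Per the studies above: <0.40 argues against >=F2 (NPV 85-93%),
# >=0.95 argues for >=F2 (PPV 66-82%), and <0.55 largely excludes >=F3.
print(round(score, 2))
```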
Pestana and Lee et al. independently evaluated the clinical utility of APRI in 70 and 116 hemodialysis patients with HCV infection, taking transient elastography (TE) as the reference standard. They used the same cut-off values of liver stiffness as those in nonuremic patients to stage hepatic fibrosis [37,38] (Table 1). The AUROCs of the APRI in predicting patients with a fibrosis stage of ≥F2, ≥F3, and F4 ranged from 0.70 to 0.80, which were lower than the AUROCs in studies that used liver biopsy as the reference standard. The selected cut-off values of the APRI were lower than in Schiavon's and Liu's reports, implying that the correlation between the APRI and TE was inferior to that between the APRI and liver histology. The clinicians were unable to identify the severity of hepatic fibrosis in hemodialysis patients with HCV infection if they determined the fibrosis stage by the APRI with the cut-off values for nonuremic patients (0.50 and 1.50 for ≥F2; 1.00 and 2.00 for F4) [38,39].
Fibrosis Index Based on Four Parameters (FIB-4)
FIB-4, an index that combines four biochemical parameters including age, AST, ALT, and platelet count, was initially developed to predict the severity of hepatic fibrosis in 832 patients with HCV and human immunodeficiency virus (HIV) coinfection [43]. The AUROC in predicting a fibrosis stage of ≥F3 was 0.765. The NPV and PPV were 90% and 65% at cut-off values of <1.45 and >3.25, respectively. FIB-4 has since been evaluated in hemodialysis patients with HCV infection [37,38]. However, wide AUROC and cut-off value variations existed, making the clinical utility of FIB-4 to diagnose the severity of hepatic fibrosis in hemodialysis patients with HCV infection elusive (Table 1). Because the FIB-4 index in hemodialysis patients with HCV infection would be expected to be lower than in nonuremic HCV patients, as is observed in the APRI, it is not practical to apply the conventional cut-off values of 1.45 and 3.25 in hemodialysis patients with HCV infection [33,37,38].
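The FIB-4 index likewise has a standard formula; a minimal sketch follows (example values hypothetical). As discussed above, the conventional 1.45/3.25 thresholds should not be applied to hemodialysis patients.

```python
# Standard FIB-4 formula: (age * AST) / (platelets * sqrt(ALT)).
# Example values are hypothetical; note that the conventional 1.45/3.25
# thresholds should not be applied to hemodialysis patients (see text).
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

print(round(fib4(age_years=60, ast_u_l=30, alt_u_l=25, platelets_10e9_l=200), 2))  # 1.8
```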
King's Score and Fibrosis Index
In addition to the AAR, APRI, and FIB-4 indices, Schmoyer et al. applied King's score and the Fibrosis index, which have been tested in nonuremic patients with HCV, to diagnose the severity of hepatic fibrosis in hemodialysis patients with HCV infection [45,46]. The AUROC of King's score in predicting a fibrosis stage of ≥F2 was 0.69, which was inferior to the AUROC of 0.79 to predict a similar stage of hepatic fibrosis in nonuremic patients. A King's score cut-off value of 6.9 had an NPV of 87.3% for a fibrosis stage of ≥F2, compared to a cut-off of 12.3 with an NPV of 77% in nonuremic patients [33,45]. Similarly, the AUROC of the Fibrosis index in predicting a fibrosis stage of ≥F2 was 0.59, and was also lower than the AUROC of 0.85 in nonuremic patients. There was a great disparity in the selected cut-off values for the Fibrosis index, with 10.39 in hemodialysis patients versus 2.1 in nonuremic patients reaching NPVs of 84.7% and 78.8%, respectively [33,46] (Table 1).
FibroTest
In early 2000, the MULTIVIRC group developed a novel index, named FibroTest, to grade the severity of hepatic fibrosis in patients with HCV infection. They combined α2 macroglobulin, haptoglobin, apolipoprotein A1, γ-glutamyl transpeptidase, total bilirubin, age, and sex into a regression model to reach a final score ranging from 0.00 to 1.00 [47]. The same group further confirmed that only 87 of 537 (16.2%) patients included in another prospective study had discordant results for fibrosis stage between FibroTest and liver biopsy. Furthermore, kidney failure did not significantly contribute to inconsistent fibrosis results [48]. Although the diagnostic accuracy of FibroTest showed promise for assessing hepatic fibrosis in HCV infection, a small-scale study conducted by the same group, which enrolled 50 hemodialysis patients with HCV infection, showed that the AUROCs of FibroTest were only 0.47 and 0.66 in predicting patients with a fibrosis stage of ≥F2 and ≥F3, respectively [49]. An independent study that recruited 33 hemodialysis patients with HCV infection also showed an AUROC of only 0.45 in predicting a fibrosis stage of ≥F2 [50] (Table 2).
Hyaluronic Acid (HA)
HA is a chief extracellular matrix (ECM) component and continues to deposit in the liver in response to hepatic inflammation, leading to hepatic fibrosis or cirrhosis [55]. Serum levels of HA correlate with the severity of hepatic fibrosis in nonuremic patients with HCV infection, particularly in those with advanced liver diseases [56][57][58][59]. A cut-off value of 60 ng/mL had NPVs of 93% and 99% for patients with ≥F3 and F4, respectively, while a cut-off value of 72 ng/mL had a PPV of 100% for those with F4 [58,59].
Schiavon et al. assessed the utility of HA in 185 hemodialysis patients with HCV infection, which revealed a modest AUROC of 0.65 in predicting a fibrosis stage of ≥F2. Although the serum HA level of 64 ng/mL had an NPV of 86%, the PPV was only 42% at a cut-off value of 205 ng/mL [51]. Avila et al. conducted a small-scale study that recruited 23 hemodialysis patients with HCV infection. In contrast to Schiavon's finding, the AUROC was 0.81 in predicting a fibrosis stage of ≥F2. However, the PPV was only 79%, even though they set the cut-off value of HA to 984.8 ng/mL [52]. The significant discrepancy between both studies may be attributed to the limited number and heterogeneity of the patients in Avila's study, some of whom also had hepatitis B virus (HBV) coinfection, drug-induced liver injury (DILI), or autoimmune hepatitis (AIH) (Table 2). Orăşan et al. reported the diagnostic value of HA in 38 hemodialysis patients with HCV infection, taking TE as the reference standard. Although the NPV was 80% when the cut-off value of HA was 39.72 ng/mL for a fibrosis stage of ≥F2, the PPV was only 80% when the cut-off value of HA was 88.56 ng/mL for a fibrosis stage of ≥F3 or F4 [53].
Since HA is not specific to liver fibrosis, the serum levels of HA in hemodialysis patients are expected to be higher than in nonuremic patients because of the coexistence of systemic inflammatory/fibrotic reactions. Based on the published data, HA is of little clinical utility in predicting the severity of hepatic fibrosis in hemodialysis patients with HCV infection.
Tyrosine-Lysine-Leucine 40 Kilodalton (YKL-40)
YKL-40, also known as chitinase-3-like protein 1 (CHI3L1), is a glycoprotein expressed and secreted by various cells, including macrophages, chondrocytes, fibroblast-like synovial cells, hepatic stellate cells, and vascular smooth muscle cells. The serum levels of YKL-40 correlate with the severity of hepatic fibrosis of various etiologies [60]. In nonuremic patients with HCV infection, Saitou et al. demonstrated that the AUROCs of YKL-40 were 0.809 and 0.795 in predicting a fibrotic stage of ≥F2 and F4 [61]. The PPV and NPV were 80% and 79% in predicting patients with ≥F2 hepatic fibrosis when the cut-off level of YKL-40 was 186.4 ng/mL, and were 73% and 78% in predicting patients with cirrhosis when the cut-off level was 284.8 ng/mL.
In contrast to the high AUROC of YKL-40 in assessing the stage of hepatic fibrosis in HCV patients without kidney failure, Schiavon et al. showed that the AUROC of YKL-40 in predicting a fibrosis stage of ≥F2 in hemodialysis patients with HCV infection was only 0.607 [51]. Despite the NPV being 84% at a cut-off value of 290 ng/mL, the PPV remained only 35% at a cut-off value of 520 ng/mL (Table 2). Furthermore, Tatar et al. also confirmed a poor correlation of YKL-40 with hepatic fibrosis in these patients [54]. As with HA, systemic tissue inflammation/fibrosis other than that originating from HCV infection is commonly seen in hemodialysis patients, making YKL-40 of limited usefulness in diagnosing hepatic fibrosis [62].
Radiological Index
Transient Elastography (TE, FibroScan)
TE is a noninvasive tool that assesses hepatic fibrosis by measuring liver stiffness [63]. To date, TE has been extensively validated in nonuremic patients with HCV infection, with a diagnostic power at least equivalent to various biochemical and serological indices. The AUROCs in predicting a fibrosis stage of ≥F2, ≥F3, and F4 are 0.83, 0.90, and 0.95, respectively [64]. The PPV and NPV are 95% and 48% at a cut-off value of 7.1 kilopascals (kPa), 87% and 81% at a cut-off value of 9.5 kPa, and 77% and 95% at a cut-off value of 12.5 kPa to predict patients with ≥F2, ≥F3, and F4. About 5% of patients, most of whom are obese, may fail to yield reliable results. Applying the XL probe may improve the diagnostic yield of TE in obese patients.
Liu et al. prospectively assessed hepatic fibrosis with TE in 284 hemodialysis patients with HCV infection who received liver biopsy. The AUROCs of TE in predicting a fibrosis stage of ≥F2, ≥F3, and F4 were 0.96, 0.98, and 0.99, which were significantly higher than the AUROCs of the APRI [65]. The PPVs reached 89%, 95%, and 100% when the cut-off values were 7.1 kPa, 9.5 kPa, and 12.5 kPa, and approximately 90% of patients did not require liver biopsy (Table 3). A large-scale study confirmed the excellent performance of TE, which showed a very similar distribution of fibrosis stage in 659 hemodialysis patients, based on the liver stiffness measurement with TE [66]. Although no studies compare the diagnostic accuracy of TE between hemodialysis and nonuremic patients with HCV infection, the AUROCs of TE tend to increase by 0.05 in hemodialysis patients with HCV infection, compared to those in nonuremic patients [64,65]. Prior studies have shown that increased ALT levels, a surrogate marker of liver inflammation, may lead to overestimation of liver stiffness by increasing the spleno-portal flow [67][68][69]. The lower ALT levels in hemodialysis patients with HCV infection than in nonuremic patients may contribute to the better performance of TE in diagnosing the severity of hepatic fibrosis [70]. Because the portal flow increases in hemodialysis patients with food intake and excess fluid accumulation, clinicians should perform TE in hemodialysis patients who are fasting and have completed a session of hemodialysis, to avoid overestimating the severity of hepatic fibrosis [65,[71][72][73][74].
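As a rough illustration, the cut-off values quoted above (7.1, 9.5, and 12.5 kPa) can be turned into a simple triage function. This is a sketch for orientation only, not a validated clinical rule.

```python
# Rough triage of a TE liver-stiffness measurement using the cut-offs
# quoted above (7.1 / 9.5 / 12.5 kPa). Orientation only, not a validated
# clinical rule; measure fasting and post-dialysis to avoid overestimation.
def te_stage(stiffness_kpa: float) -> str:
    if stiffness_kpa >= 12.5:
        return "F4 (cirrhosis)"
    if stiffness_kpa >= 9.5:
        return ">=F3"
    if stiffness_kpa >= 7.1:
        return ">=F2"
    return "<F2"

for kpa in (5.8, 8.0, 10.2, 14.9):
    print(kpa, "kPa ->", te_stage(kpa))
```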
Clinical Application of Noninvasive Indices for Hepatic Fibrosis in Hemodialysis Patients with HCV Infection
A pragmatic recommendation for applying noninvasive indices, based on their diagnostic accuracy in predicting the severity of hepatic fibrosis, is depicted in Table 4 to optimize clinical practice for hemodialysis patients with HCV infection.
In medical institutions where TE is readily available and accessible, directly measuring liver stiffness by this sonography-based technique can offer an excellent diagnostic yield, predicting the severity of hepatic fibrosis and avoiding invasive liver biopsy in up to 90% of hemodialysis patients with HCV infection [65]. Because liver stiffness measurement is not synonymous with liver fibrosis, patient-related confounding factors that may alter liver stiffness, such as heart failure-induced hepatic congestion, hepatic necroinflammatory reaction, and digestion state, should be taken into consideration with TE [75]. Therefore, patients are recommended to receive TE in a fasting state and after hemodialysis. Magnetic resonance (MR) elastography, an advanced technique that can comprehensively assess the stiffness of the whole liver, has been shown to yield higher diagnostic accuracy than TE in nonuremic patients [76]. Although TE has been shown to perform better than biochemical or serological indices in hemodialysis patients with HCV infection, studies examining the clinical performance of MR elastography are awaited in this special population. For medical institutions where TE is unavailable, the APRI can be the choice to stage hepatic fibrosis because of its ease of use and access. Three independent studies, which adopted liver biopsy as the reference standard, have shown that the AUROCs of the APRI were 0.80 to 0.84 in predicting a fibrosis stage of ≥F2 and ≥F3. APRI cut-off values of 0.40 and 0.95 yield NPVs of 85% to 93% and PPVs of 66% to 82% for predicting ≥F2 [34][35][36]. Because the percentage of hemodialysis patients with a fibrosis stage of ≥F3 is limited, the clinical value of the APRI would be more focused on the high NPV for ≥F3. Current data indicate that an APRI of <0.55 can exclude a fibrosis stage of ≥F3 in 99% of hemodialysis patients with HCV infection [34].
Although the FIB-4 index has superior diagnostic accuracy to the APRI to predict the severity of hepatic fibrosis in nonuremic HCV patients, the performance of FIB-4 in hemodialysis patients with HCV infection is not ideal or stable [33,37,42]. Furthermore, we do not recommend King's score or the Fibrosis index in these patients, based on the poor diagnostic performance [33].
Because most hemodialysis patients present with systemic inflammation/fibrosis from nonhepatic origins, all serological indices targeting ECM dynamics, including FibroTest, hyaluronic acid, and YKL-40, are not recommended in clinical practice to predict the stage of hepatic fibrosis in hemodialysis patients with HCV infection.
To date, data regarding the application of these noninvasive indices to monitor the evolution of fibrotic changes following antiviral treatment are scarce. In nonuremic patients with HCV who achieve SVR with antiviral therapy, studies have shown that the APRI, FIB-4, and TE had low diagnostic accuracies in assessing the evolution of hepatic fibrosis [77,78]. Current evidence does not favor the use of the APRI to monitor the evolution of hepatic fibrosis because the diagnostic accuracy of the APRI in hemodialysis patients with HCV infection seemed to decrease once SVR was achieved with antiviral therapy, compared to the pretreatment status [35]. The clinical performance of TE or MR elastography to follow the evolution of hepatic fibrosis needs further investigation.
Conclusions
HCV infection remains prevalent in patients receiving hemodialysis. The introduction of DAAs has revolutionized the care of HCV in this special clinical setting, based on the excellent viral clearance rates and tolerability. Assessing hepatic fibrosis before and after antiviral treatment is essential for therapeutic and prognostic implications. Although percutaneous liver biopsy is the gold standard for assessing liver histology in patients with chronic liver diseases, the postprocedural complications, as well as the sampling and interpretation variability, limit the widespread use of this invasive technique. Because a noninvasive index of hepatic fibrosis offers a clear safety advantage and the potential to monitor disease evolution through repeated measurements, it would be of great help in diagnosing the stage of hepatic fibrosis in hemodialysis patients with HCV infection.
Current evidence suggests that TE is the preferred tool to stage hepatic fibrosis in hemodialysis patients with HCV infection. Although the diagnostic accuracy of TE is higher than that of other noninvasive indices, only one study has been published to date. Therefore, independent research to validate the performance of TE is still needed. The APRI may be feasible in these patients when TE is not available or accessible. We do not recommend the AAR, FIB-4, King's score, Fibrosis index, FibroTest, hyaluronic acid, or YKL-40 for assessing the severity of hepatic fibrosis in these patients because of low diagnostic yields. However, the roles of King's score and the Fibrosis index remain uncertain because the number of studies is limited. More work is needed to confirm the feasibility of applying noninvasive indices to monitor hepatic fibrosis evolution in hemodialysis patients with HCV infection.

Funding: There is no funding support for the study.
Targeting cellular mitophagy as a strategy for human cancers
Mitophagy is the cellular process to selectively eliminate dysfunctional mitochondria, governing the number and quality of mitochondria. Dysregulation of mitophagy may lead to the accumulation of damaged mitochondria, which plays an important role in the initiation and development of tumors. Mitophagy includes ubiquitin-dependent pathways mediated by PINK1/Parkin and non-ubiquitin-dependent pathways mediated by mitochondrial autophagic receptors including NIX, BNIP3, and FUNDC1. Cellular mitophagy widely participates in multiple cellular processes including metabolic reprogramming, anti-tumor immunity, and ferroptosis, as well as the interaction between tumor cells and the tumor microenvironment. Cellular mitophagy also regulates tumor proliferation and metastasis, stemness, chemoresistance, and resistance to targeted therapy and radiotherapy. In this review, we summarized the underlying molecular mechanisms of mitophagy and discussed the complex role of mitophagy in diverse contexts of tumors, indicating it as a promising target in mitophagy-related anti-tumor therapy.
Introduction
Mitochondria are highly complex and dynamic organelles that regulate cellular metabolism in biosynthesis, bioenergetics, redox homeostasis and signaling functions (Zong et al., 2016). Notably, mitochondrial biogenesis is commonly upregulated in tumors, and mitochondria widely participate in different stages of tumorigenesis (Wallace, 2012). The maintenance of mitochondrial integrity and functional networks is critical for tumors to survive and adapt to the hypoxic and nutrient-limited tumor microenvironment. Mitophagy is a specific type of autophagy that eliminates damaged and dysfunctional mitochondria, determining the number and quality of mitochondria (Lu et al., 2023). Under hypoxic and stressful conditions, mitochondria undergo depolarization and damaged mitochondria are removed through the autophagic mechanism, which plays a key role in regulating the malignant biological behaviors of tumor cells (Ferro et al., 2020; Panigrahi et al., 2020). Typical mitophagy pathways include ubiquitin-dependent pathways mediated by PTEN-induced kinase 1 (PINK1)-Parkin and non-ubiquitin-dependent pathways mediated by mitochondrial autophagic receptors such as NIX, BNIP3, and FUNDC1 (Wang et al., 2023a) (Figure 1). In this review, we discussed the underlying molecular mechanisms of mitophagy and highlighted the complex role of mitophagy in diverse contexts of tumors, indicating it as a promising target in mitophagy-related anti-tumor therapy (Table 1).
Mitophagy: elimination of target mitochondria through selective autophagy
It has been well-established that the recognition of targeted mitochondria by the autophagosome occurs mainly through LC3 adapters, either via ubiquitin-dependent pathways mediated by PINK1/Parkin or via non-ubiquitin-dependent pathways mediated by mitochondrial autophagic receptors.
Ubiquitin-dependent pathways mediated by PINK1/Parkin
PINK1 is a serine/threonine kinase that functions as a sensor of mitochondrial damage and cooperates with Parkin, a cytosolic E3 ubiquitin ligase, to induce mitophagy by targeting damaged mitochondria for lysosomal degradation (Han et al., 2023). When mitochondria are undamaged, PINK1 is imported to the inner mitochondrial membrane (IMM) via the translocase complexes. Once located at the IMM, PINK1 is cleaved for degradation. When mitochondria are impaired (indicated by the accumulation of unfolded mitochondrial proteins or altered mitochondrial membrane potential), full-length PINK1 accumulates and is stabilized because its import to the IMM is impaired (Nguyen et al., 2016). PRKN is then phosphorylated and activated by PINK1 at Ser65 (Kane et al., 2014). Upon being phosphorylated, PRKN further modifies diverse proteins with K11- and K63-linked UB chains to recruit autophagy receptors and remove damaged mitochondria to mediate mitophagy. PINK1 was strongly correlated with a poor prognosis in cancer patients (Zheng et al., 2023a).

Figure 1. Major molecular mechanisms of mitophagy.
Non-ubiquitin dependent pathways mediated by mitochondrial autophagic receptors
BNIP3 and its homolog BNIP3L/NIX are outer mitochondrial membrane (OMM) proteins and function as mitophagy receptors that mediate mitophagy under stresses, particularly hypoxia. Under hypoxia, BNIP3 and BNIP3L are activated and localized to the OMM via the carboxy-terminal transmembrane domain. The transmembrane domain contains a glycine zipper that is essential for the homo-dimerization of BNIP3, which is important for its interplay with LC3 for mitophagy (Zhang and Ney, 2009). Targeting BNIP3-mediated mitophagy in combination with anti-CD30 antibody has been found to improve the prognosis of CD30+ EBV+ diffuse large B-cell lymphoma patients (Wang et al., 2024). Collectively, BNIP3 has emerged as a promising therapeutic and diagnostic target in multiple cancers. Like BNIP3 and BNIP3L/NIX, FUNDC1 promotes hypoxia-induced mitophagy. FUNDC1 integrates into the OMM and its LC3-interacting region motif projects into the cytosol to interact with LC3 (Poole and Macleod, 2021). Targeting these mitochondrial autophagic receptors may provide a novel and promising anti-tumor strategy.
Functional network of mitophagy and mitochondrial dynamics
Mitochondria are highly dynamic organelles that constantly undergo fusion and fission. Mitochondrial fusion integrates two mitochondria at the outer and inner membrane interfaces, primarily via Mitofusin 1 (MFN1), Mitofusin 2 (MFN2), and Optic atrophy protein 1 (OPA1). Mitochondrial fission is the process whereby a mitochondrion divides into two mitochondria, which is mediated by Dynamin-related protein 1 (DRP1). Mitochondrial fusion and fission have been found to be critical for removing damaged mitochondria by mitophagy. MFN2 overexpression induces mitochondrial fusion and leads to increased mitophagy in pancreatic adenocarcinoma cells (Yu et al., 2019). Jiang et al. found that blocking mitochondrial recruitment of MFN2 reduces formation of the PINK1/MFN2/Parkin complex required for initiation of mitophagy (Jiang et al., 2022a). Hence, mitochondrial fusion and fission play a vital role in cellular mitophagy.
Role of cellular mitophagy in human cancers
Mitophagy plays a role in both cell death and survival. Excessive mitophagy depletes functional mitochondria, leading to insufficient energy supply and cell death. On the other hand, mitophagy promotes cell survival by eliminating damaged mitochondria so that cells can adapt to their environment. In malignant tumors, mitophagy is involved in the abnormal activation and proliferation of cancer cells, and it can have both tumorigenic and tumor-suppressive effects (Chang et al., 2017). The balance between these two effects determines tumor development or apoptosis (illustrated in Figure 2). From this perspective, a novel anti-tumor therapy could not only inhibit mitophagy in cancer cells for an anti-tumor effect, but also enhance mitophagy in normal cells to remove damaged mitochondria and maintain the stability and function of the mitochondrial genome. Therefore, a deeper understanding of the molecular mechanisms of the mitophagy signaling pathway is expected to provide new ideas for the design of clinical anti-tumor therapeutic strategies.
Role of mitophagy in tumor proliferation and metastasis
In some cancers, mitophagy has been found to inhibit tumor development. G protein-coupled receptor 176 (GPR176) restrains mitophagy via the cAMP/PKA/BNIP3L axis, leading to initiation and progression of colorectal cancer; mechanistically, intracellular recruitment of the G protein GNAS is essential for the transduction of GPR176-mediated signals (Tang et al., 2023). In gastric cancer, gamma-glutamyltransferase 7 (GGT7) is a tumor-suppressive regulator that interacts with RAB7 and relocates it to the cytoplasm, leading to enhanced mitophagy and reduced ROS production (Wang et al., 2022). Unc-51 like kinase 1 (ULK1) deficiency has been found to enhance the invasive potential and osteolytic bone metastasis of breast tumors by attenuating mitophagy. Mechanistically, ULK1 inhibition suppresses mitophagy under hypoxia, leading to accumulation of damaged mitochondria and NLRP3 inflammasome activation, which ultimately alters cytokine secretion to drive osteoclast differentiation and bone metastasis (Deng et al., 2021).
On the contrary, multiple studies have proposed that enhanced mitophagy promotes tumor proliferation and metastasis. BCL2 like 13 (BCL2L13) targets DNM1L at the Ser616 site to promote mitochondrial fission and mitophagy, which ultimately enhances the proliferation and invasion of glioblastoma cells (Wang et al., 2023b). In triple-negative breast cancer (TNBC), divalent metal transporter 1 (DMT1) induces mitochondrial iron translocation via endosome-mitochondria interactions. DMT1 knockdown elevates labile iron pool levels and activates PINK1/Parkin-dependent mitophagy to promote the outgrowth of lung metastatic nodules. These findings reveal a DMT1-dependent pathway connecting endosome-mitochondria interactions to mitochondrial iron translocation and the metastatic fitness of breast cancer cells (Barra et al., 2024).
Role of mitophagy in tumor stemness
Mitochondria play a key role in the stemness maintenance and differentiation of cancer stem cells (CSCs) (Zheng et al., 2023b), and mitophagy has been proposed to be highly active in CSCs. Deregulation of ADAR1 is closely correlated with the self-renewal of liver CSCs (Jiang et al., 2022b), and enhanced mitophagy has been observed in ADAR1-enriched liver CSCs. In addition, GLI1 editing promotes a metabolic shift to oxidative phosphorylation that sustains stemness through PINK1/Parkin-mediated mitophagy in hepatocellular carcinoma (HCC), thereby enhancing the metastatic potential and sorafenib resistance of HCC. Highly active mitophagy has also been identified as a key feature of lung CSCs, driving metabolic reprogramming via the Notch1/AMPK axis to induce lung CSC expansion (Liu et al., 2023a). Hyperactivated mitophagy in lung CSCs increases mitochondrial DNA (mtDNA) content in the lysosome, and the mtDNA in lysosomal fractions from CSCs is highly oxidized, significantly more so than that from non-CSC cells. Lysosomal mtDNA serves as an endogenous ligand for Toll-like receptor 9 (TLR9), enhancing the interaction between Notch1 and AMPK to promote lysosomal AMPK activation; this lysosomal mtDNA-dependent TLR9 signaling induces Notch1/AMPK activation to promote mitochondrial metabolism in CSCs. Targeting the TLR9/Notch1/AMPK pathway in high-mitophagy lung tumors reduces the CSC pool and blocks tumor growth in non-small cell lung cancer treated with chemotherapy (Liu et al., 2023b). In glioblastoma stem cells (GSCs), platelet-derived growth factor (PDGF) promotes m6A accumulation to regulate mitophagy: PDGF ligands induce EGR1 transcription to upregulate methyltransferase-like 3 (METTL3) and sustain GSC self-renewal, and targeting the PDGF/METTL3 axis impairs GSC mitophagy in an OPTN-dependent manner (Lv et al., 2022). Clusterin (CLU) exerts a mitophagy-specific role in oral CSCs. CLU regulates mitochondrial fission by activating the serine/threonine kinase AKT, which triggers phosphorylation of DRP1 at the serine 616 residue and thus initiates mitochondrial fission. CLU-induced mitophagy enhances the self-renewal capability of oral CSCs through mitophagic degradation of MSH homeobox 2 (MSX2), preventing its nuclear translocation and its inhibition of SOX2 activity (Praharaj et al., 2023). Interferon-stimulated gene 15 (ISG15) and protein ISGylation are upregulated in pancreatic CSCs and maintain their metabolic plasticity (Alcalá et al., 2020). ISG15 abrogation inhibits ISGylation, oxidative phosphorylation, and mitophagy, impairing the self-renewal and tumorigenic ability of pancreatic CSCs. Thus, ISGylation is critical for mitophagy to clear dysfunctional mitochondria and maintain pancreatic CSCs (Alcalá et al., 2020).
Role of mitophagy in tumor chemoresistance
Mitophagy plays a multifaceted role in tumor chemoresistance. Several studies have shown that enhanced mitophagy promotes chemoresistance in specific cancer types. In small cell lung cancer (SCLC), METTL3 confers resistance to chemotherapy by upregulating mitophagy: METTL3 induces m6A methylation of DCP2, which triggers PINK1/Parkin-mediated mitophagy and promotes chemotherapy resistance, and the METTL3 inhibitor STM2457 can reverse the chemoresistance of SCLC (Sun et al., 2023). In contrast, other studies indicate that mitophagy may counteract chemoresistance in some cancers. Stomatin-like protein 2 (STOML2), which is located in the IMM and highly expressed in cancer cells, stabilizes PARL and prevents gemcitabine-induced PINK1-dependent mitophagy, thereby contributing to the chemoresistance of pancreatic cancer and making STOML2-targeted therapy a potential strategy for gemcitabine sensitization (Qin et al., 2023). Similarly, CRL4CUL4A/DDB1, a well-defined E3 ubiquitin ligase, is significantly upregulated in cisplatin-resistant ovarian cancer cells, where it confers resistance by inhibiting mitophagy. Downregulation of CRL4CUL4A/DDB1 promotes mitophagy by regulating the PINK1/Parkin axis, DRP1 dephosphorylation at Ser637, and the interplay between DRP1 and voltage-dependent anion channel 1 (VDAC1), ultimately driving mitochondrial fission and mitophagy in chemotherapy-resistant ovarian tumor cells (Meng et al., 2022).
Role of mitophagy in resistance to targeted therapy
Drug-tolerant persister (DTP) tumor cells lead to tumor relapse (Dhanyamraju et al., 2022), and the efficacy of EGFR-TKIs is limited by drug resistance. Combining circular RNA IGF1R (cIGF1R) with EGFR-TKIs can synergistically block tumor regrowth after drug withdrawal: cIGF1R encodes a peptide that reduces Parkin-induced ubiquitination of VDAC1 to block mitophagy, acting as a molecular switch that shifts DTP cells toward apoptosis (Wang et al., 2023c). BH3 mimetic antagonists of BCL-2 and MCL-1 have been considered as an anti-tumor strategy to induce cell death in acute myeloid leukemia (AML), and resistance to BH3 mimetics has been identified as a critical clinical problem (Bhatt et al., 2020). Resistance of AML cells to BH3 mimetics correlates with a high flux of mitophagy, and pharmacologic inhibition of autophagy can sensitize AML cells to BH3 mimetics. MFN2 has been identified as a regulator of mitophagy that functions as a receptor recruiting Parkin onto damaged mitochondria, which leads to resistance to BH3 mimetics in AML (Glytsou et al., 2023); targeting MFN2 can synergize with BH3 mimetics by blocking mitophagy and inducing apoptosis in AML. Lenvatinib is a standard therapy option for advanced HCC. In HCC, LINC01607 induces protective mitophagy by upregulating P62, which reduces ROS levels and induces drug resistance; LINC01607 knockdown in combination with lenvatinib can reverse this resistance in vivo (Zhang et al., 2023).
Role of mitophagy in resistance to radiotherapy
Enhanced DNA damage repair is essential for radiation resistance in tumor cells, and mitophagy functions as a critical upstream signal regulating DNA damage repair after irradiation. SIRT3 is upregulated in colorectal tumor cells and drives PINK1/Parkin-mediated mitophagy; this hyperactivated mitophagy promotes DNA damage repair and thereby induces radiation resistance. Mechanistically, mitophagy leads to RING1b downregulation and impaired ubiquitination of histone H2A, which enhances DNA damage repair (Wei et al., 2023). In melanoma, by contrast, hyperactivated mitophagy in combination with radiation can augment DNA damage and inhibit tumor progression (Ren et al., 2023). Mitophagy is also essential for ferroptosis under radiation: radiation leads to lysosomal degradation of peri-droplet mitochondria, releasing free fatty acids and increasing lipid peroxidation for ferroptosis (Yang et al., 2023).
Role of mitophagy in anti-tumor immunity
Mitophagy plays a key role in maintaining mitochondrial function, ensuring the effective participation of specific immune cells and the realization of cell-specific immunomodulatory functions. In addition, mitophagy can further regulate immune function by limiting the release of mitochondrial components that modulate the immune response (Song et al., 2020).
During immune responses, T cells depend heavily on mitochondria to support their evolving metabolic requirements. Maintaining mitochondrial health requires removal of damaged mitochondria through mitophagy via PINK1/Parkin- or BNIP3L/NIX-mediated pathways. Franco et al. explored the function of mitochondrial quality control in memory T cell responses and found that the mitophagy machinery orchestrates the survival and metabolic dynamics required for memory T cell formation (Franco et al., 2023). Urolithin A, generated by the gut microbiome from dietary precursors, has been found to improve mitochondrial health and can enhance anti-tumor CD8+ T cell immunity in vivo. Urolithin A-induced formation of T memory stem cells depends on PINK1-mediated mitophagy, which triggers release of PGAM5 into the cytoplasm; cytosolic PGAM5 dephosphorylates β-catenin to activate Wnt signaling and mitochondrial biogenesis (Denk et al., 2022). Gupta et al. found that NIX-mediated mitophagy is essential for effector memory formation in T cells: deficiency in NIX-dependent mitophagy results in HIF1α accumulation and metabolic alterations that impair ATP production during effector memory formation (Gupta et al., 2019).
Therapeutic response to immunochemotherapy is closely correlated with the subcellular redistribution of PD-L1. A recent study elucidated that the distribution pattern of PD-L1 is determined by ATAD3A/PINK1-mediated mitophagy: PINK1 recruits PD-L1 to mitochondria for degradation, while paclitaxel upregulates ATAD3A, which impairs PD-L1 proteostasis by blocking PINK1-mediated mitophagy. ATAD3A/PINK1-mediated mitophagy thus determines the efficacy of immunochemotherapy through PD-L1 relocalization and is a promising target for improving therapeutic responses to immunochemotherapy (Xie et al., 2023).
Role of mitophagy in metabolic reprogramming
Cellular mitophagy is essential for maintaining functional mitochondria, a prerequisite for tumor cells to shift their metabolism from glycolysis toward oxidative phosphorylation. In lung adenocarcinoma cells and organoids, PINK1 is upregulated to sustain mitochondrial homeostasis during DTP generation, and PINK1-induced mitophagy drives DTP production upon MAPK inhibition. PINK1-induced mitophagy promotes DTP cell survival, while MAPK inhibition leads to MYC-regulated upregulation of PINK1, thereby activating mitophagy in DTP cells. Mitophagy inhibition via chloroquine or PINK1 abrogation can enhance the therapeutic response to MAPK inhibitors (Li et al., 2023). Targeting iron metabolism in tumor cells is an emerging opportunity for anti-tumor therapeutics, as iron is an essential component of the electron transport chain within mitochondria. Sandoval-Acuña et al. found that targeting mitochondrial iron metabolism with deferoxamine inhibits tumor progression by inducing mitochondrial dysfunction and mitophagy (Sandoval-Acuña et al., 2021).
Role of mitophagy in cancer-associated fibroblasts
Among the stromal cells in the tumor microenvironment (TME), cancer-associated fibroblasts (CAFs) are the most abundant and are actively involved in tumor progression through versatile interplay with other cell types in the TME. Blocking mitophagy by targeting Parkin in CAFs impairs tumor growth in vivo, and autophagy deficiency in CAFs also enhances proline biosynthesis through mitophagy-dependent regulation of NAD kinase 2 (Bai et al., 2023). TNBC cells overexpressing integrin beta 4 (ITGB4) deliver ITGB4 protein to CAFs via exosomes, inducing BNIP3L-mediated mitophagy in CAFs. Co-culture experiments revealed that ITGB4-mediated mitophagy in CAFs is impaired when ITGB4 is inhibited in MDA-MB-231 cells, and conditioned medium from ITGB4-positive CAFs promotes the malignant behaviors of TNBC cells (Sung et al., 2020). Thus, targeting the mitophagy of CAFs may be a promising strategy for CAF-targeted anti-tumor intervention.
Role of mitophagy in hypoxic tumor microenvironment
A hypoxic microenvironment is a common feature of solid tumors, and hypoxia exerts a profound effect on the malignant behavior of tumor cells (Chen et al., 2023). Upon hypoxia, LYPLA1-mediated depalmitoylation of glycerophosphocholine phosphodiesterase 1 (GPCPD1) induces GPCPD1 translocation from the cytoplasm to mitochondria. Notably, mitochondrial GPCPD1 binds VDAC1 and impairs its oligomerization; monomeric VDAC1 then undergoes Parkin-mediated poly-ubiquitination, which induces mitophagy and promotes the progression of TNBC (Liu et al., 2023a). Under hypoxic conditions, the mitophagy receptor FUNDC1 accumulates at mitochondria-associated membranes to stabilize the FUNDC1/ULK1 complex, supporting cell survival and tumor development (Ponneri Babuharisankar et al., 2023).
Role of mitophagy in ferroptosis
Ferroptosis is an iron-dependent type of programmed cell death closely correlated with lipid peroxidation (Jiang et al., 2021), and recent studies have revealed that the interplay between mitochondrial integrity and ferroptosis determines cell survival. Myoferlin is an oncoprotein that is upregulated in pancreatic ductal adenocarcinoma and participates in the regulation of cell membrane biology. Pharmacological inhibition of myoferlin with WJ460 induces mitophagy and ROS accumulation, culminating in lipid peroxidation and apoptosis-independent cell death. WJ460 reduces the abundance of the core ferroptosis regulators, the xc- cystine/glutamate transporter and GPX-4, whereas the mitophagy inhibitor Mdivi-1 and iron chelators suppress myoferlin-related ROS production and restore cell growth; a synergistic effect was also observed between WJ460 and the ferroptosis inducers erastin and RSL3 (Rademaker et al., 2022). Oroxylin A (OA), a novel CDK9 inhibitor, shows strong therapeutic potential against HCC and a striking capacity to overcome drug resistance by downregulating PINK1/PRKN-mediated mitophagy. CDK9 inhibitors promote dephosphorylation of SIRT1 and degradation of the FOXO3 protein, a process regulated by FOXO3 acetylation, leading to transcriptional repression of FOXO3-driven BNIP3 and impairing BNIP3-mediated stabilization of the PINK1 protein (Yao et al., 2022).
Future directions for mitophagy-based antitumor strategy
One challenge in mitophagy-based drug development is specificity, which is required for optimizing drug efficacy and reducing adverse events; however, current mitophagy-based drugs are non-selective and do not meet this criterion. Chloroquine and hydroxychloroquine are basic amphiphiles that accumulate in the lysosome, and impairment of lysosomal function is their main mechanism of action. High-dose hydroxychloroquine given for cancer therapy can induce the irreversible side effect of retinal toxicity (Leung et al., 2015). Chloroquine and hydroxychloroquine have similar pharmacokinetic properties, with high volumes of distribution and prolonged plasma half-lives (Schrezenmeier and Dörner, 2020). It has therefore been proposed that reformulation of chloroquine and hydroxychloroquine is required to improve their pharmacokinetics and safety and to support their use in the treatment of cancer.
Nanoparticle administration is useful for overcoming poor pharmacokinetics and toxicity, as well as for promoting site-specific delivery, by improving the solubility of hydrophobic drugs, protecting drugs from degradation, and altering tissue distribution (Amreddy et al., 2018). In cancer treatment, a variety of nanomedicines have been developed and clinically approved, greatly enhancing the safety and effectiveness of anti-tumor drugs. Nanoparticle reformulation of mitophagy-based drugs such as chloroquine and hydroxychloroquine could therefore increase exposure of target tissues relative to off-target tissues and reduce off-target toxicity (Stevens et al., 2020).
Conclusion
It is well established that the regulation of mitophagy may be a new direction for the treatment of tumors, and in-depth analysis of its underlying molecular mechanisms may provide a theoretical basis for further research on novel anti-tumor therapies. Mitophagy plays a key role in regulating intracellular homeostasis and clearing damaged mitochondria, thereby controlling mitochondrial function and oxidative stress, and its importance in tumorigenesis and tumor development is now firmly established. Accordingly, certain mitophagy inhibitors or activators may hold great potential as anti-tumor strategies, and exploring the influence of mitophagy on tumorigenesis and development is of great significance. In future studies, proteomics, transcriptomics, metabolomics, and single-cell sequencing technologies can be used to further dissect the molecular mechanisms regulating mitophagy, helping to identify pharmacological small molecules targeting mitophagy and thus enabling more effective anti-tumor treatment.
FIGURE 2 Role of mitophagy in human cancers.
Fund performance-flow relationship and the role of institutional reform
Extant literature shows the positive impact of institutional development on investor rationality and market efficiency. The authors extend this evidence by investigating the performance-flow relationship in the Chinese mutual fund market before and after the enforcement of the revised Law of the People's Republic of China on Securities Investment Fund. Empirical evidence reveals that Chinese investors irrationally chase past star performers before institutional reform, but gradually become rational and less obsessed with star-chasing behaviors after reform. Moving one percentile upward in the relative performance among the star funds is associated with money inflows by 0.532% after reform, much lower than 1.433% before reform. The findings confirm the positive influence of institutional development on investor rationality and market efficiency. The successful experience can be borrowed by other emerging markets with less developed institutions.
Jinyu Feng (China), Wenzhao Wang (UK). Received on: 3rd of February, 2018. Accepted on: 7th of March, 2018.
The Chinese mutual fund market is one of the most important emerging markets in the world. In particular, due to its less developed and younger fund industry, the Chinese mutual fund market has an elevated level of participation of unsophisticated retail investors 1. These investors succumb to miscalibration, judgmental biases, and heuristics (see Dhar & Zhu, 2006; Khorana et al., 2005; Kruger & Dunning, 1999, 2002; Lichtenstein & Fischhoff, 1977), and are less capable of information search, collection, and analysis, which are critical in making investment decisions (Huang et al., 2007; Sirri & Tufano, 1998). Hence, a unique pattern of the performance-flow relationship would be expected.
Our attention to the Chinese mutual fund market is particularly motivated by the fact that it experiences an important institutional reform in June 2013, which offers a natural experiment for assessing the influence of institutional development on investor rationality and market efficiency (see Chui et al., 2010; La Porta et al., 1998; Schmeling, 2009). Specifically, June 1st, 2013 witnesses the enforcement of the revised Law of the People's Republic of China on Securities Investment Fund (LPRCSIF), aiming at improving market efficiency and protecting investors' legal rights. Three main modifications in the fund industry come with the new LPRCSIF.
First, the revised LPRCSIF simplifies fund issuing procedures and provides more investment opportunities for investors. The pattern of fund issuance transitions from the authorized system to the registered one, and more financial institutions are permitted to issue mutual funds. According to the China Securities Regulatory Commission (CSRC), only 81 fund management companies are permitted to issue mutual funds before the launch of the new LPRCSIF. After its enforcement, however, more financial institutions such as private placements, insurance companies, and commercial banks are granted permission to issue mutual funds. By April 2016, there are 113 financial institutions holding the issuance license.
Second, sales and distribution of mutual funds are virtually monopolized by commercial banks before June 2013, which means that investors have to purchase (redeem) funds from banks.After the launch of the new LPRCSIF, more third-party fund sales and distribution agencies appear.These agencies provide comprehensive services, such as operating Internet-based information portals and sales platforms, offering market news and fund research, and allowing registered members to purchase (redeem) funds with lower transaction fees 2 .In this sense, information costs and transaction costs are much lower for investors.
Third, information disclosure is more strictly required to alleviate information asymmetry. Existing research based on earlier datasets, collected before the enforcement of the revised LPRCSIF, reports that Chinese mutual funds lack performance persistence and that investors tend to make less than optimal decisions in fund investment. Therefore, the smart money effect revealed in the US market is not observed in China (Feng et al., 2014; Gruber, 1996; Jun et al., 2014; Zheng, 1999). This can be ascribed to investors' unfamiliarity with the funds that they are going to trade. As the new LPRCSIF stresses information disclosure, Chinese investors now have more information available for fund investment.
Beyond the aforementioned three aspects, the new LPRCSIF improves the Chinese mutual fund market institution in a more general sense, such as standardizing fund-raising procedures, protecting investors' legal rights, and promoting the stable and healthy fund market, etc.As a result, it is reasonable to anticipate different patterns of the performance-flow relationship before and after the enforcement of the new LPRCSIF and using the unique dataset from the Chinese mutual fund market allows us to directly survey the role of institutional reform.More notably, if institutional reform proves successful, the experience can be borrowed by other emerging markets with less developed institutional arrangements.
The dataset in our paper spans from January 2005 to June 2017. We allocate all funds into top, medium, and bottom performance regions based on a linear piecewise regression that allows the sensitivities of fund flows to past performance to vary across the distinct groups. In addition, we separately assess investors' purchase and redemption behaviors in an attempt to identify the driving forces of the performance-flow relationship. Findings based on the entire sample reveal differences in the sensitivities of fund flows to past performance. In particular, there is a positive performance-flow relationship among the top and bottom performers, driven by investors' intense purchases of star performers and redemptions of the poorest performers, respectively. In the medium performance region, however, fund flows are negatively related to past performance because of strong adverse purchases of poorer performers. We also confirm that the performance-flow relationship in the Chinese mutual fund market is asymmetric: investors chase star performers more intensely than they punish the poorest ones.

1 According to the CSMAR, the share of retail investors in the Chinese mutual fund market was 72.29% in 2014.
2 For example, the Shanghai Tiantian Fund Distribution Co., Ltd. is one of the first independent fund sales and distribution agencies approved by the CSRC. It offers a wide range of services mentioned in our paper.
To check the influence of institutional reform on the fund performance-flow relationship, we split the entire sample period into two subperiods: before the launch of the revised LPRCSIF, from January 2005 to March 2013 (the pre-reform period), and after the launch of the revised LPRCSIF, from April 2013 to June 2017 (the post-reform period) 3. In the pre-reform period, investors exhibit evident star-chasing behaviors. Meanwhile, they adversely purchase worse performers in the medium group and punish the poorest performers in the bottom group with heavy redemptions; however, the performance-flow relationship in these two groups is insignificant. In the post-reform period, investors' star-chasing becomes less pronounced and there is a significantly negative performance-flow relationship in the medium group. These findings show that the launch of the new LPRCSIF makes more information on fund performance persistence readily available; after learning of the limited persistence of star performance, investors show less interest, confirming the positive influence of institutional improvement on investor rationality. Our results are robust to different risk-adjusted performance measures and to the use of tradable shares in computing pricing factors.
This study makes the following contributions. First, we provide additional empirical evidence on the performance-flow relationship from the Chinese mutual fund market, one of the largest and most important emerging markets across the globe. Second, we take both purchases and redemptions into account to reveal the driving forces of the performance-flow relationship. Third, we conduct comparative analyses of the fund performance-flow relationship under different market institutions, complementing the argument that an advanced system of market institutions generates a more efficient market. Fourth, building on the third contribution, our study proposes the policy suggestion that the successful experience of institutional reform in the Chinese mutual fund market can be borrowed by other emerging markets that have relatively less developed market institutions and a large fraction of irrational investors.
The remainder of this paper proceeds in the following manner. Section 1 presents data, methodology, and descriptive statistics. Section 2 illustrates the main empirical results and a series of robustness tests. The last section concludes.

3 We separate our sample in this way because our data are at the quarterly interval. However, it does not affect our results if we exclude the second quarter of 2013, that is, if the first subperiod runs from January 2005 to March 2013 and the second subperiod from July 2013 to June 2017.
4 Our dataset consists of actively managed equity-leaning mutual funds, including equity mutual funds and hybrid mutual funds.
Data and specifications
We collect all data from the CSMAR Database compiled by GTA Data Services from January 2005 to June 2017 4. We estimate the following equation at the quarterly interval:

Flow_{i,t} = α + β_1 High_{i,[t-4,t-1]} + β_2 Mid_{i,[t-4,t-1]} + β_3 Low_{i,[t-4,t-1]} + γ′ Controls_{i,t} + ε_{i,t},   (1)

where Flow_{i,t} denotes the fund flows of fund i at quarter t, and High_{i,[t-4,t-1]}, Mid_{i,[t-4,t-1]}, and Low_{i,[t-4,t-1]} are piecewise segments of Rank_{i,[t-4,t-1]}, the return ranking of fund i's relative performance, defined as the raw performance of fund i relative to other funds over the past year and ranging from 0 (worst) to 1 (best). The raw performance is measured as the risk-adjusted abnormal return from the three-factor alpha of Fama and French (1993). To start with, we estimate the following specification at the monthly interval over the past 24 months:

R_{i,m} − R_{f,m} = α_i + b_i (R_{M,m} − R_{f,m}) + s_i SMB_m + h_i HML_m + ε_{i,m},   (3)

where R_{i,m} is the return of fund i in month m, R_{f,m} is the risk-free rate in month m, R_{M,m} is the market return, and SMB_m and HML_m are the size and book-to-market factor premia; the fund's raw performance is then measured as the risk-adjusted abnormal return based on the estimated factor loadings. The control variables included in Equation (1) are described in Table 1.

Table 1. Descriptions of control variables

R_{i,[t-4,t-1]}: Realized performance of fund i over the past year, i.e., from quarter (t-4) to (t-1)
Std_{i,[t-4,t-1]}: Annualized standard deviation of monthly fund returns of fund i over the past year, i.e., from quarter (t-4) to (t-1)
Ln(TNA_{i,t-1}): The natural logarithm of the total net assets of fund i over the past quarter (t-1)
Div_{i,t}: The dividend amount of fund i in the current quarter t
Div_Times_{i,t}: The dividend distribution times of fund i in the current quarter t
Ln(TNA_{i→company,t-1}): The natural logarithm of the total net assets of all funds in fund i's company over the past quarter (t-1)
Num_{i→company,t-1}: The number of all funds in fund i's company over the past quarter (t-1)
Ln(Age_{i→company,t-1}): The natural logarithm of the age of fund i's company at the past quarter (t-1)
Expense_{i→company,t-1}: The expense ratio of fund i's company over the past quarter (t-1)
Market volatility: The annualized standard deviation of monthly equity market returns over the past year, i.e., from quarter (t-4) to (t-1)

Note: This table reports the descriptions of all control variables adopted in Equation (1). We include this set of variables to control for their impact on fund flows.
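To make the piecewise specification concrete, the following sketch shows one way the rank segments could be constructed. It is an illustrative reconstruction rather than the authors' code: the breakpoints at the 20th and 80th percentiles are an assumption consistent with a bottom quintile, three medium quintiles, and a top quintile, and the column names are hypothetical.

```python
import pandas as pd

def piecewise_ranks(df: pd.DataFrame) -> pd.DataFrame:
    """Add fractional performance ranks and piecewise segments.

    Assumes `df` holds one row per fund-quarter with columns
    'quarter' and 'alpha' (risk-adjusted performance over the past
    year); the 0.2/0.8 breakpoints are assumptions, not from the paper.
    """
    df = df.copy()
    # Fractional rank within each quarter: close to 0 = worst, 1 = best
    df["rank"] = df.groupby("quarter")["alpha"].rank(pct=True)
    # Piecewise decomposition so that rank = low + mid + high
    df["low"] = df["rank"].clip(upper=0.2)            # bottom quintile
    df["mid"] = (df["rank"] - 0.2).clip(0.0, 0.6)     # middle three quintiles
    df["high"] = (df["rank"] - 0.8).clip(lower=0.0)   # top quintile
    return df
```

In Equation (1), the coefficients on the three segments then measure the flow sensitivity within each performance region, which is what allows the relationship to differ across the top, medium, and bottom groups.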
As we relate quarterly fund flows to performance obtained over the preceding 36 months 5, the cross-sectional relationships estimated in each quarter are autocorrelated, causing underestimation of standard errors and overestimation of t-statistics. To address this issue, we estimate each quarter's observations individually and store the time series of coefficient estimates. We report the means and the t-statistics on the means following Fama and MacBeth (1973), which generates more conservative significance levels. We use the preceding 24 months to compute fund raw performance from the three-factor model in Equation (3), and we then employ the relative return ranking over the past 12 months in Equation (1).
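A minimal sketch of this Fama-MacBeth procedure follows, assuming a long-format panel with one row per fund-quarter; the function and column names are illustrative, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def fama_macbeth(df, y_col, x_cols):
    """Quarter-by-quarter cross-sectional OLS: store each quarter's
    coefficients, then report their time-series means and the
    Fama-MacBeth t-statistics on those means."""
    quarterly = []
    for _, cross_section in df.groupby("quarter"):
        X = sm.add_constant(cross_section[x_cols])
        res = sm.OLS(cross_section[y_col], X, missing="drop").fit()
        quarterly.append(res.params.values)
    quarterly = np.array(quarterly)                  # shape (T, k + 1)
    means = quarterly.mean(axis=0)
    # t-statistic of the mean across the T quarterly estimates
    se = quarterly.std(axis=0, ddof=1) / np.sqrt(len(quarterly))
    return means, means / se
```

Averaging the quarterly estimates in this way sidesteps the cross-quarter autocorrelation that would otherwise deflate standard errors in a single pooled regression.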
Descriptive statistics
billion in 2017); this is partially ascribed to the increasing number of funds provided by fund companies, which offers more investment options and diversified profits for Chinese investors, so that in the more competitive fund market, investment money flows into different funds. As a matter of fact, we observe from Table 1 that company scales increase rapidly in the post-reform period, as demonstrated by the number of funds per company and company total net assets. This explicitly reflects the impact of the new LPRCSIF in simplifying new fund issuing procedures and increasing trading channels.
RESULTS AND DISCUSSION
This section presents results on the performance-flow relationship in the Chinese mutual fund market. Subsection 2.1 discusses results based on the entire period. Subsection 2.2 conducts a comparative study by separating the entire sample into pre- and post-reform periods according to the launch of the revised LPRCSIF. Subsection 2.3 reports robustness test results.

Note: This table presents descriptive statistics of sample Chinese mutual funds and fund companies from January 2004 to June 2017. In particular, we report fund flows, purchase rates, redemption rates, realized performance, fund volatility, fund total net assets (in billion RMB), dividend amount, dividend distribution times, company total net assets (in billion RMB), the number of funds per company, the age of fund companies, and company expense ratio. Fund and fund company data are reported as the annual cross-sectional average. All data are collected from the CSMAR Database compiled by GTA Data Service.
The entire period
Results from the entire period appear in Table 3. The top category presents a positive performance-flow relationship (High_{i,[t-4,t-1]} = 1.092, t-statistics = 2.879), meaning that Chinese investors chase past star performers: moving 1 percentile upward in the relative performance among the top group is associated with significantly greater money inflows of 1.092%. While investors demonstrate the disposition effect, the tendency to redeem star performers but retain worse performers (Kahneman & Tversky, 1979; Shefrin & Statman, 1985), given the positive relationship between fund performance and redemptions (High_{i,[t-4,t-1]} = 0.657, t-statistics = 3.831), the stronger positive-feedback purchase of star performers (High_{i,[t-4,t-1]} = 2.111, t-statistics = 3.468) significantly dominates and hence drives the positive performance-flow relationship. Different from the extant literature, the medium three performance quintiles show a negative performance-flow relationship (Mid_{i,[t-4,t-1]} = -0.156, t-statistics = -2.653). Investors are likely to redeem worse funds (Mid_{i,[t-4,t-1]} = -0.160, t-statistics = -6.193); however, they exhibit intense adverse purchase behaviors, that is, the purchase of worse performers, as implied by the negative estimate for purchases (Mid_{i,[t-4,t-1]} = -0.345, t-statistics = -3.770). The more pronounced adverse purchase leads to the mildly negative relationship in this performance group. For the bottom region, there is no significant relationship between purchases and performance (Low_{i,[t-4,t-1]} = -0.131, t-statistics = -0.705), but investors show a strong willingness to dispose of the poorest funds through greater redemptions (Low_{i,[t-4,t-1]} = -0.392, t-statistics = -5.121). Hence, there is a positive performance-flow relationship in this group (Low_{i,[t-4,t-1]} = 0.250, t-statistics = 1.798), albeit with marginal significance.
To check the convexity of the performance-flow relationship, we test the difference in the sensitivities between the top and bottom groups, which is 0.842% (F-statistics = 4.199, not reported), signaling that Chinese investors chase star funds more strongly than they punish the poorest ones, in line with existing findings from developed markets.
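In the Fama-MacBeth setting above, this convexity test can be carried out on the stored quarterly coefficient estimates; a sketch under that assumption follows. It is illustrative rather than the authors' code, and note that the paper reports an F-statistic: with a single restriction, the squared t-statistic plays the same role.

```python
import numpy as np
from scipy import stats

def top_bottom_difference(coef_high, coef_low):
    """Test whether flows respond more strongly to rank in the top
    group than in the bottom group, given two equal-length arrays of
    quarterly coefficient estimates."""
    diff = np.asarray(coef_high) - np.asarray(coef_low)
    t_stat, p_value = stats.ttest_1samp(diff, 0.0)  # H0: equal sensitivities
    return diff.mean(), t_stat, p_value
```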
Estimates of the control variables also provide some interesting insights. While neither fund flows nor purchases are sensitive to past realized performance, a 1% increase (decrease) in fund realized returns would cause a 0.242% (t-statistics = 3.120) increase (decrease) in redemptions. Given the results from the relative performance ranking, this indicates that the disposition effect depends more on realized than on relative performance, because the disposition effect is not detected in the medium or bottom regions. Both purchases and redemptions are positively influenced by fund volatility. Investors redeem funds with higher volatility (Std_{i,[t-4,t-1]} = 0.111, t-statistics = 2.862); however, they are also more willing to take on higher risk (Std_{i,[t-4,t-1]} = 0.286, t-statistics = 1.897), implying that Chinese fund investors are risk-seeking. Consistent with extant evidence that larger funds grow more slowly than smaller ones (see Sirri & Tufano, 1998), there is a negative relationship between fund scale and growth (Ln(TNA_{i,t-1}) = -0.087, t-statistics = -4.004), driven by investors' purchases of small funds (Ln(TNA_{i,t-1}) = -0.165, t-statistics = -5.173). Finally, we note that instead of the amount of dividends (Div_{i,t} = 0.142, t-statistics = 0.740), Chinese investors care more about the frequency of dividend distribution (Div_Times_{i,t} = 0.188, t-statistics = 6.308): they purchase funds with more frequent dividend distributions.
Does institutional reform influence the performance-flow relationship?
This subsection examines the influence of institutional reform in the Chinese mutual fund market on the performance-flow relationship. The revised LPRCSIF can be regarded as an institutional advancement, since it deregulates fund issuance and distribution procedures and tightens supervision of fund information disclosure. We split the entire sample period into two subperiods, from January 2005 to March 2013 (the pre-reform period) and from April 2013 to June 2017 (the post-reform period), replicate the procedures in Subsection 2.1, and report the results in Table 4.
We see from the top region that in the pre-reform period, there is a strong positive performance-flow relationship: moving one percentile upward in the relative performance among the star funds is associated with money inflows of 1.433%, compared with only 0.532% in the post-reform period. As Table 4 shows, investors display a much stronger willingness to purchase star funds in the pre-reform period than in the post-reform period, indicating that investors gradually become more rational, as they no longer chase star performers as intensely as they did before the reform. This can be ascribed to institutional improvement in the Chinese mutual fund market, i.e., the enforcement of the new LPRCSIF. Different from publicly available "hard information", the collection and analysis of "soft information", as Huang et al. (2007) suggest, is more about the familiarity of potential investors with specific funds and can help them to "narrow the variance of their expectation of future fund returns" (1270). In the Chinese mutual fund market, for one thing, fund information is less disclosed in the pre-reform period; for another, unsophisticated investors are unable to analyze information in an optimal way. Theoretically, the participation effect, the individual winner-picking effect, and the no-trading effect lead investors to investigate top performers; however, with high participation costs, Chinese investors can hardly access information on fund performance persistence, and thus they follow momentum trading and show a strong willingness to purchase star funds 6.
From June 1, 2013, fund information disclosure is more strictly required, and a growing number of Internet platforms embark on providing fund research that is readily available to investors. Wider trading channels also make it more convenient for investors to adjust positions when needed. Realizing that star performance is not persistent, Chinese investors become more rational and invest less in past top performers. Although a positive relationship between past performance and flows remains, the fulfillment of the new LPRCSIF and the maturing of investors is a dynamic process that may require a long time to complete; thus, expecting a total reversal of the positive relationship in the short run appears unrealistic. A similar argument applies to the medium group, as investors purchase fewer better performers in the post-reform period.
Unlike potential investors, who must screen numerous choices from the market, existing investors can focus on the portfolios that they already hold. In a high participation cost setting, existing investors can identify the poorest funds due to the no-trading effect and punish them by redemptions, and this remains valid in a relatively low participation cost setting. This explains why we do not observe evident changes in the performance-flow relationship for the bottom performers between the pre- and post-reform periods.
The empirical evidence that the performance-flow relationship varies between the pre- and post-reform periods confirms the positive impact of institutional advancement on market efficiency. It is also instructive for other emerging markets with less developed market institutions. Specifically, it is difficult for investors to make optimal choices in a less transparent market, which undermines investor protection and market fairness and efficiency. Policy makers in these markets can thus borrow the successful experience from the Chinese mutual fund market, such as simplifying new fund issuing procedures, increasing trading channels, and requiring information disclosure.

6 Huang et al. (2007) specify three effects derived from participation costs: the participation effect, the individual winner-picking effect, and the no-trading effect. The participation effect shows that higher past performance makes investors with higher costs realize the utility gain from surveying and investing in the fund. The individual winner-picking effect shows that investors tend to concentrate investment on top performers, since their high participation costs limit the number of funds that they investigate. Finally, the no-trading effect suggests that due to transaction costs, investors prefer not to trade unless past performance is sufficiently good (bad).
Robustness test
In the main test, we compute fund raw performance from the three-factor model of Fama and French (1993); here we consider another two approaches, Jensen's alpha (Jensen, 1968) and the four-factor alpha of Carhart (1997). Both approaches follow methods similar to the three-factor model employed in Subsection 2.1, i.e., Equations (3) and (4). The results from the entire sample and the two subperiods are presented in Tables 5 and 6, respectively.
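The alternative alphas can be obtained from the same rolling time-series regression, simply varying the factor set. The sketch below assumes monthly excess fund returns and factor premia are available as pandas objects (the paper sources these from the CSMAR database) and is illustrative rather than the authors' code.

```python
import statsmodels.api as sm

def rolling_alpha(excess_ret, factors, window=24):
    """Estimate a fund's alpha each month by regressing its excess
    returns on factor premia over the preceding `window` months.
    `factors` holds one column per factor: ['MKT'] gives Jensen's
    (CAPM) alpha, ['MKT', 'SMB', 'HML'] the three-factor alpha, and
    adding 'UMD' the Carhart four-factor alpha."""
    alphas = {}
    for end in range(window, len(excess_ret) + 1):
        y = excess_ret.iloc[end - window:end]
        X = sm.add_constant(factors.iloc[end - window:end])
        alphas[excess_ret.index[end - 1]] = sm.OLS(y, X).fit().params["const"]
    return alphas
```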
The presented fund performance-flow relationship is not distorted in any performance group. The CAPM and four-factor models also reveal a positive relationship in both the top and bottom groups, but a negative relationship in the medium one. The positive relationship in the top and bottom regions is due to investors' intense purchases of star funds and their punishment of the worst performers, respectively. The adverse purchase of poorer performers in the medium region drives the negative performance-flow relationship.
The impact of the launch of the revised LPRCSIF also supports the results reported in Table 4. As shown by the four-factor model in column (II) of Table 6, for example, investors' star-chasing is largely weakened in the post-reform period, from 1.269 (t-statistics = 2.549) to 0.514 (t-statistics = 1.709). The adverse purchase in the medium group becomes more evident, from -0.193 (t-statistics = -1.769) to -0.529 (t-statistics = -4.346). Both findings confirm the positive impact of institutional advancement on investors' trading behaviors. Likewise, the bottom performance region exhibits little change, as investors consistently punish the worst performers by redemptions, from -0.420 (t-statistics = -4.523) to -0.422 (t-statistics = -3.059).
There are two types of stocks in the Chinese stock market, tradable and non-tradable. We combine both in computing the pricing factors, including the premium of the market, the book-to-market, and the size factors in the main empirical analyses, and the premium of the momentum factor in the robustness test above. In this robustness check, we use tradable shares only.
Table 7 shows consistent results, with trivial exceptions. Across all three models, the positive performance-flow relationship in the top group and the negative one in the medium group remain unchanged. We notice that the positive relationship in the bottom region becomes insignificant in the three-factor (Low_{i,[t-4,t-1]} = 0.198, t-statistics = 1.443) and CAPM (Low_{i,[t-4,t-1]} = 0.212, t-statistics = 1.611) models. However, this inconsistency does not weaken our argument that investors redeem the poorest performers. As presented in Table 7, a 1 percentile downward movement in the bottom performance group is expected to suffer greater money outflows of 0.395% (t-statistics = -4.997) and 0.326% (t-statistics = -4.371) in the three-factor and CAPM models, respectively.
CONCLUSION
A large body of extant literature examines the fund performance-flow relationship in developed mutual fund markets and concurs that this relationship is asymmetric: investors chase star funds more intensely than they punish the poorest funds. However, less evidence is currently available on the performance-flow relationship in an emerging market context. To fill this gap, we base our analysis on the Chinese mutual fund market, considering the unsophistication of Chinese investors and its less developed market mechanisms. More importantly, in June 2013 the Chinese mutual fund market witnesses an institutional reform, the enforcement of the revised LPRCSIF, designed to improve investor rationality and market efficiency, which provides an opportunity to conduct a natural experiment on the influence of institutional reform on investor rationality and market efficiency.
Empirical analyses start from the investigation of the entire period. Findings reveal that while both the top and bottom groups exhibit a positive performance-flow relationship, the driving forces differ: for the top group, the positive relationship is due to investors' irrational star-chasing behaviors, and for the bottom region, it reflects their willingness to dispose of the poorest performers. However, different from extant findings, we document a negative performance-flow relationship in the medium group, triggered by investors' adverse purchases of poorer performers.
More notably, our paper finds that institutional reform -the launch of the new LPRCSIF -is crucial in influencing the performance-flow relationship.In the pre-reform period, Chinese investors irrationally chase past star performers to a very large extent; however, in the post-reform period, Chinese investors gradually become rational and show less interest in past top performers, which suggests that institutional reform is successful in improving investor rationality and market efficiency.All presented results are robust to different approaches to measure risk-adjusted performance and the use of the tradable shares in obtaining pricing factors.
Our paper makes contributions to both theoretical and practical domains. It offers additional evidence on the fund performance-flow relationship in the Chinese mutual fund market, one of the most important emerging markets in the world. In addition, by separating investors' fund trading behaviors into purchases and redemptions, we explore the driving forces of the presented performance-flow relationship. Beyond these contributions to the literature, our paper presents further insights into the positive influence of institutional reform on investor rationality and market efficiency. This evidence suggests that policy makers, especially those in relatively less developed emerging markets, should enact or revise related laws and regulations, such as simplifying new fund issuing procedures, widening trading channels, and requiring stricter information disclosure, to make investors better off and improve market efficiency.
Table 1. Descriptions of control variables.

Table 3. Regression results. Note: This table reports the regression results from Equation (1). The dependent variable is Flow_{i,t}, the fund flows of fund i at quarter t. The main explanatory variables are High_{i,[t-4,t-1]}, Mid_{i,[t-4,t-1]}, and Low_{i,[t-4,t-1]}, representing the return ranking in the top, medium, and bottom performance regions, respectively, obtained from the raw performance computed with the Fama and French (1993) three-factor model. A series of control variables (Table 1) is included to remove other potential effects on fund flows. We also report the average R-square (Avg. R2) and the number of observations in each regression (Obs.). Additionally, we replace Flow_{i,t} with Pur_{i,t} and Red_{i,t} to reveal the relationships between fund performance and purchase rates and between fund performance and redemption rates, respectively. The t-statistics are in brackets; a, b, and c represent statistical significance at the 1%, 5%, and 10% levels, respectively.

Table 4. Two subperiods: pre- and post-reform periods.

Table 5. Robustness test: the adoption of CAPM alpha and four-factor alpha. Note: Same specification as Table 3, with return rankings obtained from the raw performance computed with the CAPM model (column I) and the Carhart (1997) four-factor model (column II).

Table 6. Robustness test on two subperiods: CAPM alpha and four-factor alpha. Note: This table reports the regression results from Equation (1) for the two subperiods, before the revised LPRCSIF (January 2005 to March 2013) and after the revised LPRCSIF (April 2013 to June 2017), in columns I and II, respectively.

Table 7. Robustness test: the adoption of tradable shares in computing pricing factors. Note: Same specification as Table 3, with return rankings obtained from the raw performance computed with the Fama and French (1993) three-factor model (column I), the CAPM model (column II), and the Carhart (1997) four-factor model (column III); tradable shares are employed in computing the pricing factors.

Table 8. Robustness test on two subperiods: the adoption of tradable shares. Note: This table reports the regression results from Equation (1) for the pre-reform (January 2005 to March 2013) and post-reform (April 2013 to June 2017) periods, in columns I and II, respectively, based on the raw performance computed with the Fama and French (1993) three-factor model (Panel A), the CAPM model (Panel B), and the Carhart (1997) four-factor model (Panel C).
|
v3-fos-license
|
2017-08-02T22:26:30.148Z
|
2014-07-03T00:00:00.000
|
12768628
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10654-014-9917-0.pdf",
"pdf_hash": "324b0f2638260a66574f010b2e36ca95ea594400",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45675",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "324b0f2638260a66574f010b2e36ca95ea594400",
"year": 2014
}
|
pes2o/s2orc
|
A tradition and an epidemic: determinants of the campylobacteriosis winter peak in Switzerland
Campylobacteriosis is the most frequently reported food borne infection in Switzerland. We investigated determinants of infections and illness experience in wintertime. A case–control study was conducted in Switzerland between December 2012 and February 2013. Cases were recruited among laboratory-confirmed campylobacteriosis patients. Population-based controls were matched according to age group, sex and canton of residence. We determined risk factors associated with campylobacteriosis, and help seeking behaviour and illness perception. The multivariable analysis identified two factors associated with an increased risk for campylobacteriosis: consumption of meat fondue (matched odds ratio [mOR] 4.0, 95 % confidence interval [CI] 2.3–7.1) and travelling abroad (mOR 2.7, 95 % CI 1.1–6.4). Univariable analysis among meat fondue consumers revealed chicken as the type of meat with the highest risk of disease (mOR 3.8, 95 % CI 1.1–13.5). Most frequently reported signs and symptoms among patients were diarrhoea (98 %), abdominal pain (81 %), fever (66 %), nausea (44 %) and vomiting (34 %). The median perceived disease severity was 8 on a 1-to-10 rating scale. Patients reported a median duration of illness of 7 days and 14 % were hospitalised. Meat fondues, mostly “Fondue chinoise”, traditionally consumed during the festive season in Switzerland, are the major driver of the epidemic campylobacteriosis peak in wintertime. At these meals, individual handling and consumption of chicken meat may play an important role in disease transmission. Laboratory-confirmed patients are severely ill and hospitalisation rate is considerable. Public health measures such as decontamination of chicken meat and improved food handling behaviour at the individual level are urgently needed.
Introduction
In recent years, campylobacteriosis emerged as the most commonly reported zoonosis in Europe, including Switzerland [1,2]. In 2012, the notification rate was 106 cases per 100,000 population, corresponding to 8,567 laboratory-confirmed cases [3], the highest rate since campylobacteriosis became a notifiable disease in 1988 [1]. Because only laboratory-confirmed cases are registered, substantial underreporting is very likely.
Human Campylobacter infections generally lead to self-limiting, acute gastroenteritis with diarrhoea, abdominal pain, fever, vomiting and bloody stool as commonly reported symptoms [4]. Patients suffering from a severe infection and pregnant or immunocompromised patients require antibiotic treatment [5]. Rare but serious sequelae of Campylobacter infections include reactive arthritis, febrile convulsions and Guillain-Barré syndrome [4] and contribute considerably to the morbidity and economic costs of campylobacteriosis [6,7]. Varying case definitions, targeted age groups and co-morbidities, methodologies, and follow-up periods result in a broad range of reported case-fatality rates. Risk factors for sporadic and outbreak-related Campylobacter infections have been extensively studied [8,9]. Some 50-80 % of sporadic human Campylobacter infections are attributable to chicken as a reservoir, either through transmission via handling and consumption of poultry, eating undercooked poultry, or via contact with live poultry [10][11][12][13][14]. Recent case-control studies identified chicken consumption as the source of infection for 24-29 % of all cases [14]. Similarly, consuming chicken is an attributable risk exposure for 27 % of campylobacteriosis cases in Switzerland [15]. Indirect evidence for an association between chicken consumption and human campylobacteriosis is provided by: (1) a significant reduction of campylobacteriosis case notifications after large-scale market withdrawals of chicken due to dioxin-contaminated feed components [16] or an avian influenza outbreak [17] and (2) congruent seasonality patterns of the incidence of campylobacteriosis in humans and Campylobacter colonisation of broiler flocks [18]. Other reported exposure risks originate from drinking unsafe water, consuming raw milk and unpasteurised dairy products, eating barbecued meat, travelling abroad and from contact with farm animals and pets [2,8,9]. Campylobacteriosis outbreaks in Europe are rare, accounting for only about 2 % of campylobacteriosis cases [14,19]. They are mostly associated with consumption of contaminated drinking water, raw milk and chicken products [9,19,20].
In temperate regions, seasonal patterns of human campylobacteriosis exist with an increased incidence during summer months [21,22]. In Switzerland and Germany, seasonal patterns exhibit two distinct peaks: one in summer and one in winter [1,23]. Reasons for this remain speculative: in Switzerland, suspected causes for both peaks include the handling of raw and consumption of undercooked meat from barbecuing and from preparing a traditional meat fondue, a festive Christmas and New Year's dish, which involves the handling of raw meat by the consumer at the table [1]. The objectives for this study were to investigate determinants of the campylobacteriosis winter peak in Switzerland and to elucidate illness perception, symptomatology, and help seeking patterns of campylobacteriosis patients.
Methods
A case-control study prospectively recruiting laboratory-confirmed campylobacteriosis cases and population-based controls was conducted between December 2012 and February 2013.
The National Notification System for Infectious Diseases (NNSID) of the Swiss Federal Office of Public Health (SFOPH) covers all of Switzerland. Diagnostic laboratories are required to report Campylobacter infections. Four private laboratories, together covering all of Switzerland and diagnosing about one-third of all notified cases, participated in case recruitment from 21st December 2012 until 24th January 2013.
Considering the seasonal nature of Campylobacter infections, the study commenced once the SFOPH had ruled that the mandatory notifications of the participating laboratories had to include person-identifiable data, as stipulated by the Swiss Epidemics Act.
Cases
All cases reported by the four laboratories to the NNSID were screened for eligibility. Eligibility criteria for cases were age ≥5 years and Swiss residency. Cases were excluded if they reported antibiotic treatment in the 4 weeks prior to disease onset or if they did not speak German, French or Italian.
Controls
Controls were selected from a random sample of the Swiss population obtained from the Federal Statistical Office. They were matched for sex, age group and canton of residence. Controls were excluded if they reported a diarrhoeal illness in the 4 weeks prior to the corresponding case's disease onset. In addition, the same exclusion criteria as for cases were applied.
Sample size
The study was designed to detect an effect size [odds ratio (OR)] of 2.5, with a power of 80 % at a two-sided significance level of 0.05 assuming a case-to-control ratio of 1:1. Rejection rates were estimated at 50 % for cases and 75 % for controls. To achieve a sample size of 100 cases and 100 controls and to account for refusals and for exclusions after enrolment, sampling foresaw contacting 300 cases and 600 controls. All eligible controls were included, resulting in a case-to-control ratio ranging from 1:1 to 1:4.
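A minimal sketch of the standard two-proportion, normal-approximation calculation behind this kind of design is given below; note that the 20 % exposure prevalence among controls is a hypothetical input for illustration, not a figure taken from the study.

import math
from scipy.stats import norm

def cases_needed(p0, odds_ratio, alpha=0.05, power=0.80):
    """Approximate cases per group for a 1:1 design, two-sided test.

    p0: assumed exposure prevalence among controls (hypothetical here);
    the implied prevalence among cases follows from the target odds ratio.
    """
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    pbar = (p0 + p1) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# With a 20 % control exposure prevalence, OR 2.5, alpha 0.05, power 0.80:
print(cases_needed(0.20, 2.5))  # ~95 cases per group, close to the targeted 100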
Recruitment process
Within 24 h of receiving a positive laboratory report, we sent an information letter together with a photo-illustrated questionnaire to the case by priority mail. The same package was mailed to four matched controls within 24 h after completion of the case interview. Following the written notice, cases and controls were contacted by telephone and, after giving verbal consent to participate, either interviewed immediately or a suitable appointment for the interview was fixed. If controls refused participation, additional controls were selected until at least one per case could be interviewed. Cases and controls were excluded after 15 unsuccessful call attempts, if no telephone number was available in the telephone directory, or upon request via postal mail. For participants <15 years, letters were sent to their parents and either parent was interviewed as a surrogate.
Questionnaire
The questionnaire comprised a section on food and non-food exposures and, for cases, a part on illness experience. It contained questions regarding food consumption, origin of meat, eating and hygiene behaviour, contacts with animals and humans, knowledge about food-borne pathogens, recent travel history, occupational exposure and co-morbidity. For both cases and matched controls, exposure information was collected for the 7 days preceding the onset of the case's disease, except for travel history (preceding 2 weeks). For case interviews, the questionnaire addressed morbidity, health seeking behaviour and treatment. Computer-assisted telephone interviews using LimeSurvey software were performed. In parallel, participants were encouraged to follow the interview questions in the photo-illustrated questionnaire.
Statistical analyses
Collected data were exported to Stata 10.1 (Stata Corporation). Pair-matched analyses were performed where applicable and matched odds ratios (mOR) are presented. Univariable conditional logistic regressions were performed. Variables with cells containing zero values in contingency tables were analysed using exact logistic regression.
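As an aside on the pair-matched analyses: for 1:1-matched strata with a binary exposure, the conditional maximum-likelihood estimate of the mOR reduces to the classic ratio of discordant pairs. A minimal sketch is shown below; the pair counts are illustrative and are not the study's data.

def matched_or(pairs):
    """mOR for 1:1 matched case-control pairs with a binary exposure.

    pairs: iterable of (case_exposed, control_exposed) booleans.
    Concordant pairs carry no information; mOR = b / c, where b counts
    pairs with an exposed case and unexposed control, and c the reverse.
    """
    b = sum(1 for case, ctrl in pairs if case and not ctrl)
    c = sum(1 for case, ctrl in pairs if ctrl and not case)
    return b / c if c else float("inf")

# Illustrative pairs only (not the study's counts):
pairs = [(True, False)] * 24 + [(False, True)] * 6 + [(True, True)] * 30
print(matched_or(pairs))  # 4.0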
For the multivariable conditional logistic regression we considered variables with p ≤ 0.2 in the univariable analysis. In the case of correlated predictor variables, only the biologically more plausible one was kept in the model. In addition, we performed a subgroup analysis investigating risk factors among persons who reported fondue consumption.
The population attributable fraction (PAF) was calculated for each statistically significant risk factor of the multivariable model as difference of nationwide observed cases and expected cases in absence of the risk factor. Expected cases were calculated using the multivariable mOR, frequency of exposure among cases and controls and the sex-, age-and canton-specific prevalence of Campylobacter notifications during the study period.
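The same quantity can be approximated with Miettinen's case-based formula, PAF = p_c (OR - 1)/OR, where p_c is the exposure prevalence among cases. The sketch below uses hypothetical counts chosen so that the formula reproduces the 51.9 % reported later for meat fondue; the actual exposure counts are not given in the text, and the study additionally standardised by sex, age and canton.

def paf(exposed_cases, total_cases, m_or):
    """Population attributable fraction, Miettinen's case-based formula.

    PAF = p_c * (OR - 1) / OR, with p_c the exposure prevalence among
    cases and the mOR taken as the relative-risk estimate.
    """
    p_c = exposed_cases / total_cases
    return p_c * (m_or - 1.0) / m_or

# Hypothetical counts: 110 of 159 cases exposed to meat fondue, mOR 4.0
print(round(paf(110, 159, 4.0), 3))  # 0.519, i.e. ~52 %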
Subsequent exploratory data analysis including additional subgroup and stratified analyses was conducted in order to assist in the interpretation and to generate new hypotheses. When conditional analysis was not possible the results are presented descriptively.
Results

Response rate and basic characteristics of study participants
A total of 303 campylobacteriosis case notifications were received by the study team. After excluding cases aged <5 years and cases without Swiss residency, 289 cases and 898 controls were invited to participate in the study (Fig. 1). We enrolled 180 (62 %) cases and 324 (36 %) controls, of which 159 (55 %) cases and 280 (31 %) controls were included in the analysis. Case-to-control matching ratios were 1:1 for 72, 1:2 for 57, 1:3 for 26 and 1:4 for 4 cases, respectively. Participating cases represented 15 % of all registered laboratory-confirmed campylobacteriosis cases during the study period.
The median number of call attempts was 2 for cases and 3 for controls. The median time period for cases between disease onset and interview was 15 days (range 5-63 days). Median age of participants was 38 years and the sex ratio was close to unity. Both study groups were consistent with regard to most socio-demographic characteristics (Table 1). An imbalance was observed in nationality as only 8 (5.0 %) cases compared to 40 (14.3 %) controls were not Swiss nationals.
Univariable conditional logistic regression analysis
Among foods consumed during the week prior to disease onset, meat consumption was identified as a significant risk factor (mOR 5.2, 95 % confidence interval [CI] 1.2-23.3), but the only type of meat significantly associated with an increased risk was chicken (mOR 2.5, 95 % CI 1.5-4.1) (Fig. 2). Eating raw or undercooked meat was associated with an increased risk of disease (mOR 1.6, 95 % CI 1.0-2.6); however, the effect was not statistically significant. Conversely, the consumption of raw vegetables was significantly associated with a decreased risk (mOR 0.4, 95 % CI 0.2-0.7). In addition, the consumption of dried and smoked meat (mOR 0.6, 95 % CI 0.4-0.9) and the consumption of ham (mOR 0.6, 95 % CI 0.4-1.0) were associated with a decreased risk.
The univariable analysis showed no significant association between travelling abroad (mOR 1.7, 95 % CI 0.8-3.4) and campylobacteriosis. Having contact with children <5 years was significantly associated with a decreased risk of illness (mOR 0.5, 95 % CI 0.3-0.8). No significant association of the disease with occupational contacts involving ill persons, animals and children, or raw and cooked foods was found. The same observation was made for non-occupational contacts with animals. Swiss nationality was associated with a significantly increased risk of disease (mOR 3.1, 95 % CI 1.4-6.7). People with high education were less likely to suffer from the disease (mOR 0.7, 95 % CI 0.4-1.1).
Among the fondue consumers, chicken again showed the strongest effect (mOR 3.8, 95 % CI 1.1-13.5) of all meat types (Fig. 3). There was no noteworthy difference between fondue meals consumed at home, outside the home at friends', or at restaurants. Five out of six participants who reported fondue consumption at other locations (e.g. at holiday or alpine huts) were cases. The consumption of previously frozen meat at a meat fondue was significantly associated with a decreased risk of disease (mOR 0.1, 95 % CI 0.0-0.6). The type of plate used for raw and cooked meat at a meat fondue was significantly associated with campylobacteriosis: both using one plate with compartments and using two separate plates were associated with a decreased risk of disease (plate with compartments: mOR 0.4, 95 % CI 0.1-1.1; two plates: mOR 0.2, 95 % CI 0.1-0.6).
Multivariable conditional logistic regression analysis
While the mOR for meat fondue remained unchanged, the effect was lower for chicken consumption in general (mOR 1.4 vs. 2.5) and for Swiss nationality (mOR 2.1 vs. 3.1) (Fig. 4). In contrast, the observed association with travelling abroad was stronger (mOR 2.7 vs. 1.7). The estimated PAFs for the significant risk factors of the multivariable model were 51.9 % (95 % CI 31.4-68.5 %) for meat fondue and 13.5 % (95 % CI 1.1-33.5 %) for travelling abroad.
Campylobacteriosis case characterisation
Most frequently reported disease onset dates were December 27th/28th and January 2nd/3rd (Fig. 5). Median duration of illness was 7 days (range 2.5-33). Only half of all patients (48 %) reported full recovery. Most commonly reported signs and symptoms were diarrhoea, abdominal pain, fever, nausea, vomiting and headache (Table 2). Other reported symptoms included limb pain, shivering, fatigue, loss of appetite and vertigo. Irrespective of their sex, more than half of the patients rated the severity of their illness as 'severe', denoted by a median severity score of eight on a one-to-ten scale.
First health care seeking
Pharmacies and medical hotlines were consulted by 20 and 5 % of the patients, respectively, before seeing a physician. One-third (33 %) of all patients had approached a physician directly. More than half (54 %) visited a physician within 3 days after symptom onset. Most patients (63 %) visited a general practitioner (Fig. 5; Table 2). Emergency facilities were visited by 26 % of patients.
Hospitalisation
The hospitalisation rate was 14 %; it did not differ between sexes but was increased among patients aged ≥60 years (33 %). Half of the hospitalisations lasted at least 3 nights.
Pharmacotherapy
With one exception, all patients reported drug treatment; about two-thirds received antibiotics. Other medications were applied for symptomatic treatment. Among the 24 % of all patients who received an infusion for rehydration or intravenous drug application, 42 % were in outpatient treatment.
Discussion
We assessed determinants of Campylobacter infections in wintertime in Switzerland with a case-control study design among laboratory-confirmed campylobacteriosis patients. A traditional meal (meat fondue), typically consumed at festive occasions in wintertime, was identified as the most important risk factor, especially if chicken meat was served. Furthermore, our findings suggest that the campylobacteriosis cases registered in the national disease registry are severely ill. The last investigation of determinants of campylobacteriosis in Switzerland dates back more than two decades and did not include the winter festive season [24]. Meat fondues are traditionally consumed at Christmas and New Year. In our study, disease onset dates peaked 2-3 days after those events. This is in line with the incubation period of 2-5 days [4]. More than 50 % of Campylobacter-related gastroenteritis during the study period can be attributed to the consumption of meat fondue. The "Fondue chinoise" comprises sliced raw meat being individually handled and boiled in a family-shared broth hotpot. In contrast to chicken, none of the other meat types consumed during fondue dishes were associated with Campylobacter infections. This is coherent with other studies identifying chicken as a risk exposure [11,[24][25][26][27][28][29][30]. This includes two outbreaks of Campylobacter infections in which meat fondue including chicken meat was the suspected source of infection [31]. Since Germans consume meat fondue with increased popularity on New Year's Eve rather than at Christmas [32][33][34], Campylobacter-contaminated chicken could also be the cause of the peak of infections observed by Schielke et al. [23] in early January. Further, we observed that meat fondue eaters who put their raw and cooked meat on the same plate were more likely to suffer from campylobacteriosis. Conversely, the use of a compartmented plate or of two separate plates appeared to be protective in our study and has been previously recommended [35]. Campylobacter spp. are quickly inactivated after dipping the sliced chicken meat into the boiling broth. Therefore, on-the-plate cross-contamination of boiled meat from raw chicken meat juice is the most probable transmission route, especially considering the low infectious dose of Campylobacter spp. [36]. We found women to have significantly higher odds than men of acquiring a Campylobacter infection after consumption of chicken meat or meat fondue. Among our study participants, women consumed chicken at meat fondues more often than men, which, however, does not explain the elevated risk.
The consumption of undercooked meat as a risk factor for campylobacteriosis is well known [11,13,27,28,37]. In our study, the consumption of raw or undercooked meat was associated with campylobacteriosis, especially in people not consuming meat fondue. We hypothesise that the strong effect of meat fondue consumption outweighs the known effect of raw or undercooked meat consumption and that the latter, therefore, is only statistically significant in the subgroup of people not consuming meat fondue. Travelling abroad was the only behavioural factor in the multivariable analysis significantly associated with increased odds for Campylobacter infections. This risk factor has been described previously for Switzerland [24] and other countries [11,25,26,28,30]. Further, almost all acute gastroenteritis patients with a travel history are tested for gastrointestinal pathogens and are therefore more likely to be diagnosed (personal communication).
One can argue that meat fondue represents an intermediate variable on the pathway from chicken consumption to Campylobacter spp. infection. Intermediate variables, if included in the multivariable analysis, might bias the estimates, usually towards the null. Therefore, we re-ran the regression models omitting meat fondue consumption: as expected, chicken consumption showed a higher odds ratio (2.3) compared to the full model. The point estimates for all other variables remained similar, with the exception of travelling abroad, which was associated with a smaller effect.
Factors associated with reduced risk of Campylobacter infections
The finding that a reduced risk of disease is associated with having contact with children <5 years is difficult to interpret, especially because a high incidence is observed for this age class in the NNSID [1]. Persons having contact with young children may differ in general and food hygiene as well as dietary habits [38]. High education was associated with a reduced risk of disease. Its association with gastrointestinal diseases in high-income countries is discussed controversially [38][39][40][41]. Another factor associated with a decreased risk was the consumption of raw vegetables. Similar findings are described from several European countries and elsewhere [13,25,27,28,42], linking the protective effects of the consumption of raw vegetables to high amounts of antioxidants and carotenoids, which act as bacterial growth inhibitors and generally increase immunity to infection. Several reports underscore that people who eat raw vegetables differ from others concerning cooking and eating preferences and behaviour [13,25,27,28,42]. The consumption of raw vegetables, especially during winter time, may reflect a generally healthy lifestyle [25,27,28,42].
An exploratory subgroup analysis among meat fondue consumers indicates that consuming previously frozen meat is associated with a decreased risk of campylobacteriosis. Similar observations were made in Iceland, where the number of campylobacteriosis cases declined after freezing of meat originating from Campylobacter-infected broiler flocks [43]. In Switzerland, Baumgartner et al. [44] showed that chicken products were less contaminated with Campylobacter spp. after freezing, a finding corroborated by studies in Iceland [45] and Norway [46].
In summary, risk and preventive factors in this study point to contamination risks upstream on the food-production side and downstream on the retail and consumer sides. Consequently, potential preventive risk reduction measures could be applied upstream and downstream: upstream, through decontamination at slaughter using peracetic acid [47], resulting in a decreased bacterial load at retail level, or through freezing of chicken meat before it reaches retail [43,45,46]. Downstream risk prevention measures could include improving consumer awareness in handling raw chicken meat in addition to the current hygiene notice on Swiss chicken meat packages.
Illness perception and treatment of acute campylobacteriosis
Patients suffering from Campylobacter infection reported typical symptoms of an acute gastroenteritis and a high perceived severity of illness. Comparable studies for Switzerland are lacking; however, the pattern is coherent with experiences from other countries [13,[48][49][50][51]. The reported severity of illness appears to be slightly higher compared to others [48]. Compared to other countries, the proportion of hospitalised patients (14 %) was higher [13,48] or slightly lower [52]. This variability could be due to differences in health systems, including differing notification criteria, case definitions and health care provider structures.
Although antibiotics are not generally recommended for the treatment of campylobacteriosis, more than 60 % of our study patients received antibiotic treatment. In the absence of information on the individual patient's medical history, we cannot judge whether antibiotic use was medically indicated.
Generally, case-fatality rates in high-income countries range from 0.04 to 0.6 % [2,[52][53][54]. We observed no deaths during our study. However, due to the similarity of epidemiological patterns in Europe, Campylobacter-attributable mortality is likely to occur in Switzerland as well [2,54].
Strengths and limitations
We recruited all our cases from laboratory-confirmed campylobacteriosis patients registered in the NNSID. Patients with a mild course of disease are less likely to consult a physician or to be tested for campylobacteriosis and, hence, less likely to be notified. Participating laboratories were from the private sector only; therefore, the hospitalisation rate and the proportion of patients approaching emergency departments and policlinics directly may be underestimated. Similarly, recruiting cases from private laboratories, serving mainly general practitioners, could explain the imbalance in nationalities. Swiss nationals more often consult their general practitioners, while non-Swiss residents are more likely to approach emergency departments. As expected, patients more often volunteered to participate in the study and contacted the study team back after initial contact attempts failed. Cases may remember their exposures more accurately than controls, since they might have been reflecting on what caused their illness. Nevertheless, "don't know" was answered equally often by cases and controls. To address potential recall bias regarding exposure risks, we applied photo-illustrated questionnaires.
Conclusion
The study provides strong evidence that the consumption of a national festive dish ("Fondue chinoise") is a risk factor for human campylobacteriosis in Switzerland. The main risks associated with this dish are probably twofold. Firstly, chicken meat is frequently contaminated with Campylobacter spp. [44]. Secondly, the possibilities of and occasions for cross-contamination and ingestion of bacteria are manifold, and the infection risk is exacerbated through individual food handling at the table. Our findings, therefore, highlight the importance of food hygiene for chicken preparation and consumption at meat fondues. The steadily increasing number of notified campylobacteriosis cases, the high population attributable fraction for meat fondue and the previously unknown severity of illness and hospitalisation rate underline the relative importance for Swiss public health over the festive season and point toward the necessity for public health interventions. Prevention measures could include decontamination of chicken meat at slaughter, resulting in a decreased bacterial load at retail level, freezing of chicken meat before it reaches retail, and improving consumer awareness in handling raw chicken meat.
Acknowledgments This work was supported by the Swiss Federal Office of Public Health (SFOPH). The authors acknowledge Dr.
Daniel Koch (Swiss Federal Office of Public Health) for his support to conduct this study and for reviewing the manuscript and Dr. Christian Schindler (Swiss Tropical and Public Health Institute) for his statistical advice. We thank Dr. Sabine Walser (Swiss Federal Office of Public Health) for her help in setting up and supporting the study, Dr. Marco Jermini (Cantonal Laboratory Ticino) for help in translating the questionnaire, Steven Paul and his team (Swiss Tropical and Public Health Institute) for setting up the IT infrastructure and Mr Andreas Birrer (Swiss Federal Office of Public Health) for his help with access to the notification data. The team of interviewers and all pilot and study participants are gratefully acknowledged. The authors thank the team of the Federal Statistical Office for providing the random sample of the general population.
Conflict of interest This study was supported by the SFOPH with a view to understand campylobacteriosis and how the disease presents in the general population. Marianne Jost and Mirjam Mäusezahl-Feuz are on the staff of the SFOPH and participated in their capacities as public health specialists and their function as scientific collaborators within the organisation. The SFOPH played no part in the study design, data collection, analysis and interpretation of the results. Philipp Bless, Claudia Schmutz, Kathrin Suter, Jan Hattendorf and Daniel Mäusezahl are on the staff of the Swiss Tropical and Public Health Institute and received funding (incl. for a student practical for CS) from the SFOPH.
Ethical statement The study was conducted under the Swiss Epidemics Act (SR 818.101 EpG). All participants were asked for oral informed consent before conducting the interview. The study was conducted in accordance with the Helsinki Declaration.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
|
v3-fos-license
|
2023-11-29T06:17:05.156Z
|
2023-11-27T00:00:00.000
|
265463899
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1073/pnas.2309047120",
"pdf_hash": "6430034a37682f3e460e9b2dfee9082866809ec9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45676",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "05285342cf11d2dc98066a1c8273ad16c444a5d6",
"year": 2023
}
|
pes2o/s2orc
|
PARP7-mediated ADP-ribosylation of FRA1 promotes cancer cell growth by repressing IRF1- and IRF3-dependent apoptosis
Significance

PARP7 inhibition has emerged as a compelling and novel option for cancer therapy in different tumor types. However, the molecular mechanism of how PARP7 inhibitors exert their antitumor effect has not yet been clarified. Here, we report that the transcription factor FRA1 is stabilized by PARP7-mediated ADP-ribosylation, thereby preventing its PSMC3-dependent proteasomal degradation and maintaining the suppression of IRF1- and IRF3-dependent gene expression. PARP7 inhibition consequently destabilizes FRA1 and allows for the expression of inflammatory and proapoptotic genes, culminating in CASP8-mediated apoptosis of cancer cells. This mechanism was verified with multiple lung and breast cancer cell lines, and the study demonstrated that in FRA1-driven cancer cells, PARP7 expression alone is necessary and sufficient to predict PARP7 inhibitor sensitivity.
ADP-ribosyltransferases (ARTs) are important regulators of the cellular immune response (1)(2)(3). The diphtheria toxin-like ARTs (also known as the ARTD subfamily) comprise an enzyme family of 17 members and catalyze the transfer of ADP-ribose moieties from nicotinamide adenine dinucleotide (NAD+) to amino acids on target proteins (mono-ADP-ribosylation). Some ARTDs can extend the modification by adding further ADP-ribose moieties (poly-ADP-ribosylation) (4). Among the ARTDs, PARP7 (also known as TiPARP) has emerged as a critical repressor of the intratumoral immune response (5,6). Initially, PARP7 was reported to be the main target gene of the aryl hydrocarbon receptor (AHR) and to form a negative feedback loop by degrading AHR in an ADP-ribosylation-dependent manner (7)(8)(9). In addition, AHR-dependent expression of PARP7 was discovered to be important for constraining type I interferon (IFN) signaling in response to RNA viruses and nucleic acid (NA) ligands (10). In cancer cells, genomic instability, a characteristic of almost all human cancers, is one of the main sources of cytoplasmic NA (11)(12)(13). The resulting innate immune response can restrain tumor growth; thus, cancer cells are under constant selective pressure to inhibit potentially deleterious NA-induced immune signaling (14). In this context, PARP7 inhibition by RBN-2397 restored cytoplasmic NA-dependent type I IFN signaling and reduced cancer cell growth in a cell-autonomous manner. RBN-2397 also contributed to tumor regression by enhancing cancer cell immune recognition in lung cancer xenografts and patients suffering from advanced solid tumors (5,15).
At the molecular level, it was proposed that PARP7 exerted its repressive function on type I IFN signaling by ADP-ribosylation and inhibition of the TANK binding kinase 1 (TBK1) (10). However, a recent report highlighted that PARP7 regulates type I IFN signaling and tumor growth downstream of TBK1, thereby raising questions about the proposed mode of action of PARP7 inhibitors (6,16). Moreover, the underlying cell death pathway(s) mediating the cell-autonomous effect of PARP7 inhibition on cancer cell survival were not yet defined. Thus, it remains crucial to identify PARP7 targets as potential biomarkers for patient stratification as well as to comprehensively understand how PARP7 inhibition affects tumor growth. Several strategies were recently developed to identify PARP7 substrates and gain insight into the molecular mechanism driving PARP7 dependency in cancer cells. However, rather than identifying the targets of endogenous PARP7, all reported approaches either identified PARP7 targets following the ectopic expression of PARP7 or using an engineered recombinant PARP7, thereby limiting the physiological relevance of the identified targets (17)(18)(19).
FRA1 (FOSL1) belongs to the AP-1 transcription factor family and is frequently overexpressed in tumors (20). The expression of FRA1 is critical for promoting cancer cell proliferation, growth, and invasion (21)(22)(23), and the FRA1 expression profile (i.e., FRA1-dependent genes) is a prognostic marker in multiple cancers (24). Moreover, constitutive mitogen-activated protein kinase (MAPK) signaling promotes the oncogenicity of FRA1 by inducing prolonged FRA1 expression and stabilizing FRA1 protein levels via C-terminal phosphorylation (24,25). Intriguingly, the loss of FRA1 increases the expression of type I IFNs in breast cancer cells (26). Similarly, the downregulation of FRA1 in combination with poly(I:C) treatment further induces type I IFN expression, suggesting that FRA1 is a crucial transcriptional repressor of cytokine expression (26,27). These findings indicate that controlling FRA1 expression may be a promising strategy for treating cancer. However, the pathways and, more importantly, the posttranslational modifications (PTMs) governing FRA1 protein stability are not fully understood (25).
PARP7 Localizes to the Nucleus and Modifies Transcriptional Regulators on Cysteine Residues.

Previous clinical trials have demonstrated that the PARP7 inhibitor RBN-2397 is well tolerated and displays preliminary antitumor activity in patients with advanced solid tumors (15). However, the endogenous targets of PARP7 and the molecular mechanism of PARP7 dependency remain unknown. To understand the function of PARP7 in cancer cells, we selected the lung adenocarcinoma cell line NCI-H1975, which was previously described as sensitive to PARP7 knockout (https://depmap.org/portal/). Indeed, PARP7 inhibition for six days and knockdown for three days strongly reduced the cell viability of NCI-H1975 cells, thereby confirming their PARP7 dependency (Fig. 1A and SI Appendix, Fig. S1 A and B). Since protein localization and function are tightly interconnected, we first aimed to determine the cellular localization of PARP7. Endogenous PARP7 predominantly localized to the nucleus as observed by confocal immunofluorescence (IF) analysis (Fig. 1B), and the nuclear localization of PARP7 and PARP7-mediated ADP-ribosylation was further confirmed by ectopically expressing HA-tagged PARP7 in A549 cells using a Doxycycline (Dox)-inducible construct (SI Appendix, Fig. S1C). Together, these results suggest that PARP7 and PARP7-mediated ADP-ribosylation predominantly localize to the nucleus.
To elucidate how PARP7-mediated ADP-ribosylation contributes to cell viability, we identified endogenous PARP7 target proteins using label-free quantification (LFQ) tandem mass spectrometry (LC-MS/MS) (28). In short, we treated NCI-H1975 cells with RBN-2397 or DMSO for 24 h, followed by the enrichment of ADP-ribosylated peptides for LC-MS/MS analyses. The quantification of ADP-ribosylated peptides revealed that PARP7 inhibition significantly decreased the modification of 85 unique proteins (Fig. 1C and Dataset S1). Surprisingly, RBN-2397 treatment also led to a significant increase in the modification of 19 unique proteins, including the ADP-ribosyltransferase PARP14, suggesting that PARP7 inhibits the activity of other ARTs (Fig. 1C). Considering that proteins exhibiting increased ADP-ribosylation after PARP7 inhibition are unlikely to be direct targets of PARP7, they were not further pursued here. Interestingly, RBN-2397 treatment significantly reduced the modification on cysteine residues and not on the other potential ADP-ribosylation acceptor sites analyzed (Fig. 1D). Consistent with these findings, we observed that overexpressed PARP7 in A549 cells led to the modification of proteins almost exclusively on cysteines, which was abrogated upon PARP7 inhibition by RBN-2397 (SI Appendix, Fig. S1D and Dataset S2).
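The per-site comparison behind such an analysis is conceptually straightforward. A minimal sketch is given below, assuming log2-transformed LFQ intensities arranged as sites x replicates; the layout and the choice of a Welch t-test with Benjamini-Hochberg correction are our assumptions, not a description of the authors' exact pipeline.

import numpy as np
from scipy import stats

def differential_sites(log2_rbn, log2_dmso):
    """Per-site differential ADP-ribosylation from LFQ intensities.

    log2_rbn, log2_dmso: arrays of shape (n_sites, n_replicates) holding
    log2 LFQ intensities per condition (layout is an assumption).
    Returns the log2 fold change (RBN-2397 minus DMSO) and
    Benjamini-Hochberg adjusted p-values.
    """
    log2fc = log2_rbn.mean(axis=1) - log2_dmso.mean(axis=1)
    _, p = stats.ttest_ind(log2_rbn, log2_dmso, axis=1, equal_var=False)
    # Benjamini-Hochberg: sort, scale by n/rank, enforce monotonicity
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    adj = np.empty(n)
    adj[order] = np.minimum.accumulate(scaled[::-1])[::-1]
    return log2fc, np.clip(adj, 0.0, 1.0)

# A site losing its modification after inhibition would show
# log2fc < 0 with an adjusted p-value below the chosen cutoff.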
To gain insight into the cellular functions of the PARP7 target proteins identified here, we performed a STRING network analysis of the PARP7 targets that exhibited a significant loss in ADP-ribosylation after RBN-2397 treatment of NCI-H1975 cells (Fig. 1E). We found that the majority of these PARP7 targets localize to the nucleus and are involved in the regulation of gene expression (SI Appendix, Fig. S1 E and F), corroborating the observed nuclear localization of PARP7 (Fig. 1B). In support of this finding, a significant enrichment of GO terms related to nuclear localization and the regulation of gene expression was observed in PARP7 overexpressing A549 cells (SI Appendix, Fig. S1 G and H). To confirm the regulatory role of PARP7 in gene expression, we analyzed transcriptional changes of four PARP7-dependent proinflammatory genes (5) at different time points after PARP7 inhibition in NCI-H1975 cells. RBN-2397 treatment resulted in a significant and immediate upregulation (Log2(FC) ≥ 2 after 1 h) of IL6 and CXCL8 (Fig. 1F). At the same time, CXCL10 and CCL5 were only up-regulated after prolonged (Log2(FC) ≥ 2 after 8 h) RBN-2397 treatment periods (Fig. 1F). The same time-dependent expression pattern was observed by analyzing the pre-mRNA levels of these genes (Fig. 1G), which indicates two distinct waves of gene expression rather than a difference in pre-mRNA stability. Together, these observations suggest that the upregulation of IL6 and CXCL8 is an immediate response to PARP7 inhibition. In contrast, the late upregulation of CXCL10 and CCL5 pre- and mRNA levels indicates that these genes are not directly transcriptionally regulated by PARP7 but are likely up-regulated through signaling events activated by PARP7 inhibition. In conclusion, these results provide evidence that PARP7 controls transcription both directly and indirectly through the ADP-ribosylation of its nuclear targets.
The Cellular Sensitivity to PARP7 Inhibition Is Dependent on FRA1.

To investigate which of the identified PARP7 targets are involved in the RBN-2397-mediated decrease in cell viability, we performed a siRNA screen to knock down the 45 identified PARP7 targets with the strongest reduction in ADP-ribosylation after RBN-2397 treatment. As expected, PARP7 knockdown resulted in reduced sensitivity to RBN-2397, suggesting that the decrease in cell viability observed in NCI-H1975 cells is a direct consequence of PARP7 inhibition (Fig. 2A). Likewise, the knockdown of AHR, a regulator of PARP7 expression (SI Appendix, Fig. S2A), reduced RBN-2397 sensitivity (Fig. 2A). Among the ADP-ribosylated targets of PARP7, knockdown of AHDC1, FAM222B, BCL9, and FRA1 strongly reduced RBN-2397 sensitivity in NCI-H1975 cells (Fig. 2A). To confirm that AHDC1, FAM222B, BCL9 and FRA1 reduce the cytotoxic effect of RBN-2397, we analyzed cell viability after siRNA-mediated knockdowns of all four candidate genes and following the treatment with RBN-2397 or DMSO (Fig. 2B and SI Appendix, Fig. S2C). As a positive control for cell death, we used a siRNA targeting the common essential gene PLK1, and as a control for the reduced cellular sensitivity to RBN-2397, we again knocked down AHR (SI Appendix, Fig. S2 B and C). Remarkably, we observed that only the knockdown of FRA1 and BCL9 reduced the cellular sensitivity to RBN-2397, while cells retained their sensitivity to PARP7 inhibition following the depletion of FAM222B and AHDC1 (Fig. 2B and SI Appendix, Fig. S2C). This finding suggests that FRA1 and/or BCL9 contribute to the RBN-2397-mediated decrease in cell viability. To further investigate whether FRA1 or BCL9 regulates cell viability downstream of PARP7 activity, we compared the genetic dependencies of PARP7 and its protein targets across different cell lines using ShinyDepMap (29). Genes are characterized as codependent if their effects on cell viability positively correlate. Interestingly, PARP7 clustered only with FRA1, which is ADP-ribosylated by PARP7 on C97, thus indicating that these two proteins regulate the same pathways (Fig. 2C). Since FRA1 knockdown alone decreased cell viability, but additional PARP7 inhibition did not further affect viability, PARP7 likely functions upstream of FRA1 and promotes cell survival through the ADP-ribosylation of FRA1 (Fig. 2B).
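The codependency logic is essentially a correlation of dependency profiles across cell lines. A minimal sketch against a DepMap-style gene-effect matrix is shown below; the file name and column labels are hypothetical placeholders.

import pandas as pd

# Cell lines x genes matrix of CRISPR gene-effect (dependency) scores;
# more negative values indicate stronger dependency. File name hypothetical.
gene_effect = pd.read_csv("crispr_gene_effect.csv", index_col=0)

def codependent_genes(df, gene, top=10):
    """Rank genes by the Pearson correlation of their dependency
    profiles with `gene` across cell lines; a strong positive
    correlation marks the two genes as codependent."""
    r = df.corrwith(df[gene])
    return r.drop(gene).sort_values(ascending=False).head(top)

# Under the codependency reported above, FOSL1 (FRA1) would be expected
# to rank near the top for TIPARP (PARP7):
print(codependent_genes(gene_effect, "TIPARP"))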
PARP7 Inhibition Promotes the Degradation of FRA1.

FRA1 belongs to the AP-1 transcription factor family and is frequently overexpressed in tumors (20). Moreover, the oncogenicity of FRA1 is promoted by PTM-mediated stabilization (30). Interestingly, immunoblot and immunofluorescence analyses demonstrated that PARP7 knockdown and RBN-2397 treatment significantly reduced FRA1 protein levels but did not affect c-Jun protein levels, another AP-1 transcription factor with a similar genetic codependency as FRA1 (Fig. 2 D-F and SI Appendix, Fig. S2D). Moreover, while PARP7 inhibition decreased FRA1 protein levels, FRA1 mRNA levels were not reduced in the same period, suggesting that PARP7-mediated ADP-ribosylation specifically regulates FRA1 protein levels (Fig. 2F). To explore whether PARP7-mediated ADP-ribosylation directly stabilizes FRA1, we measured FRA1 degradation rates after DMSO or RBN-2397 pretreatment for 24 h using the translation inhibitor cycloheximide (CHX). RBN-2397 treatment significantly enhanced FRA1 degradation, suggesting that the enzymatic activity of PARP7 is required for FRA1 stabilization (Fig. 2G). Remarkably, we found that the lower-migrating isoform of FRA1 was preferentially degraded compared to the higher-migrating isoform, which corresponded to the phosphorylated FRA1 isoform (Fig. 2G). These findings suggest that PARP7 activity primarily stabilizes the unphosphorylated isoform of FRA1.
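Degradation rates from such a cycloheximide chase are commonly summarised as first-order half-lives. A minimal sketch of that fit is below, with illustrative band intensities in place of the study's quantifications.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, k):
    # First-order decay, band intensity normalised to the t = 0 value
    return np.exp(-k * t)

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])            # h after CHX addition
dmso = np.array([1.00, 0.90, 0.82, 0.66, 0.45])    # illustrative values only
rbn = np.array([1.00, 0.70, 0.50, 0.26, 0.07])     # illustrative values only

for label, y in (("DMSO", dmso), ("RBN-2397", rbn)):
    (k,), _ = curve_fit(decay, t, y, p0=[0.1])
    print(f"{label}: half-life = {np.log(2) / k:.1f} h")
# A shorter half-life under RBN-2397 would reflect enhanced degradation.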
ADP-Ribosylation of FRA1 Prevents Its PSMC3-Dependent Proteasomal Degradation.

To explore how FRA1 ADP-ribosylation enhances its protein stability, we generated an ADP-ribosylation-deficient FRA1 mutant (C97A). Therefore, we transduced NCI-H1975 cells with lentiviral vectors constitutively expressing FRA1-WT, C97A, or an empty vector (EV) control. After puromycin selection, single clones were picked and expanded. FRA1-C97A protein levels were substantially lower compared to the wild-type (WT) counterpart, confirming that C97 of FRA1 contributes to its stability (Fig. 3A). Importantly, the nuclear localization and chromatin binding capacities of FRA1-WT and C97A were highly similar, suggesting that the C97A mutation did not abrogate the molecular functions of FRA1 (SI Appendix, Fig. S3 A and B). Furthermore, while ADP-ribosylated FRA1-WT was pulled down from whole cell lysates using eAf1521 (28), RBN-2397 treatment, as well as the C97A mutation, prevented the pull-down of FRA1, confirming that C97 serves as the ADP-ribose acceptor site (Fig. 3B). As expected, RBN-2397 treatment did not further increase the degradation of FRA1-C97A. At the same time, FRA1-WT, like endogenous FRA1, was significantly decreased (Fig. 3C).
To gain a deeper mechanistic understanding of how PARP7-mediated modification of FRA1 prevents its degradation, we cotreated NCI-H1975 cells with RBN-2397 and the proteasome inhibitor MG132. Interestingly, MG132 increased baseline FRA1 levels, indicating that FRA1 is degraded by the proteasome (Fig. 3D and SI Appendix, Fig. S3C). In addition, we observed that proteasome inhibition for 4 h did not fully rescue FRA1 levels after prolonged (>24 h) PARP7 inhibition (Fig. 3D and SI Appendix, Fig. S3C), again confirming the RBN-2397-mediated degradation of FRA1. A recent report described that PSMC3 (TBP1), a 19S proteasome subunit, recruits FRA1 to the proteasome in a ubiquitin-independent manner (31). Alternatively, proteasomal degradation of FRA1 can be reversed by the ubiquitin-specific peptidase USP21 (32). To elucidate whether one of the described mechanisms regulates the turnover of FRA1, we knocked down PSMC3 and USP21, respectively, and analyzed FRA1 protein levels by immunoblotting (Fig. 3 E and F). Downregulation of USP21 only marginally affected endogenous FRA1, FRA1-WT, or C97A protein levels (Fig. 3E and SI Appendix, Fig. S3D). In contrast, depletion of PSMC3 substantially increased the protein levels of endogenous FRA1 and ectopically expressed FRA1-WT or mutant (Fig. 3F and SI Appendix, Fig. S3D), suggesting that in NCI-H1975 cells, FRA1 degradation is mediated by PSMC3. Indeed, PSMC3 depletion impaired FRA1 degradation induced by RBN-2397, particularly of the lower-migrating, nonphosphorylated FRA1 variants, suggesting that PARP7-mediated ADP-ribosylation of FRA1 at C97 prevents its PSMC3-dependent degradation (Fig. 3G). Next, we tested whether the downregulation of PSMC3 increased FRA1 protein levels nonspecifically by decreasing general proteasome function. Therefore, we analyzed NRF2 protein levels after MG132 treatment and PSMC3 knockdown by immunoblotting (Fig. 3H). Under physiological conditions, NRF2 is constitutively expressed in cells and rapidly degraded by the proteasome (33). While proteasome inhibition by MG132 drastically increased NRF2 levels, the knockdown of PSMC3 did not stabilize NRF2 (Fig. 3H), suggesting that proteasome function is likely not impaired by the lack of PSMC3. Lastly, we investigated whether PARP7-mediated ADP-ribosylation would inhibit the interaction between FRA1 and the proteasome by coimmunoprecipitating PSMC3 and FRA1 from cells treated with RBN-2397. Indeed, PARP7 inhibition enhanced the complex formation between FRA1 and PSMC3 (Fig. 3I). Similarly, overexpression of FRA1-C97A augmented its complex formation with PSMC3 compared to FRA1-WT, confirming that ADP-ribosylation of FRA1 at C97 prevents its interaction with PSMC3 (SI Appendix, Fig. S3E). In conclusion, our data indicate that PARP7-mediated ADP-ribosylation of FRA1 inhibits the interaction of FRA1 with PSMC3 and, consequently, the degradation of FRA1 by the proteasome.
FRA1 Regulates the Expression of Genes Involved in Apoptosis, Immune Signaling, and Cell Cycle Progression.

To elucidate how FRA1 would functionally contribute to the decrease in cell viability mediated by PARP7 inhibition, we defined the transcriptional changes in NCI-H1975 cells by RNA sequencing after siFRA1 or RBN-2397 treatment (Datasets S3 and S4). Knockdown of FRA1 resulted in the differential expression of 1732 genes (Fig. 4A and Dataset S3), while PARP7 inhibition led to differential expression of 310 genes (Fig. 4B and Dataset S4), which significantly overlapped with the up- or down-regulated genes after FRA1 depletion (Fig. 4C). Next, we compared our data to the published transcriptome of NCI-H1373 cells treated with RBN-2397 (5). Interestingly, we found that up- or down-regulated genes in RBN-2397 treated NCI-H1373 cells significantly overlapped with our differentially expressed genes in both siFRA1 and RBN-2397 treated NCI-H1975 cells (Fig. 4D). In addition, we performed whole proteome LC-MS/MS analysis of RBN-2397 treated NCI-H1975 cells and identified 162 up- or down-regulated proteins (SI Appendix, Fig. S4A and Dataset S5). Comparison of our proteomic data to the transcriptomics data revealed that the altered gene expression for siFRA1 and RBN-2397 treated cells matched the up-regulated proteins identified in the proteomics dataset (SI Appendix, Fig. S4B). To gain additional functional insight, we performed a GSEA-based pathway analysis of genes differentially expressed after FRA1 knockdown and RBN-2397 treatment and observed an enrichment of TNFα signaling, NA-sensing, apoptosis, and cell cycle genes, respectively (Fig. 4 E and F). To confirm that PARP7 activity regulates transcription via FRA1, a selected number of immune signaling, apoptotic, and cell cycle genes were analyzed by RT-qPCR after 6 h or 48 h of RBN-2397 treatment and knockdown of PARP7 or FRA1, respectively (Fig. 4G). As expected, we observed similar transcriptional dynamics for the tested genes. Likewise, we found identical protein expression changes within our proteomics dataset after PARP7 inhibition (Fig. 4G). Taken together, our data suggest that FRA1, in a PARP7-dependent manner, regulates the transcription of genes involved in immune signaling, apoptosis, and the cell cycle.
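The significance of such gene-set overlaps is typically assessed with a one-sided hypergeometric test. A minimal sketch follows; the two set sizes are taken from the text, but the background universe and the overlap count are illustrative placeholders.

from scipy.stats import hypergeom

N = 20000  # background gene universe (assumed)
K = 1732   # differentially expressed genes after FRA1 knockdown (reported)
n = 310    # differentially expressed genes after RBN-2397 (reported)
k = 150    # observed overlap (illustrative; not stated in the text)

# P(overlap >= k) when drawing n genes at random from N, of which K are marked
p = hypergeom.sf(k - 1, N, K, n)
print(f"expected overlap ~ {K * n / N:.1f}; P(overlap >= {k}) = {p:.1e}")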
Next, we determined whether the observed differential expression of cell cycle and apoptosis genes, following FRA1 depletion and RBN-2397 treatment, could induce cell cycle dysregulation and apoptosis. Indeed, using well-accepted cell cycle and senescence markers, we observed a reduction in RB and H3 phosphorylation and an increase in β-galactosidase staining after treating cells with RBN-2397 or following PARP7 and FRA1 knockdown, respectively (SI Appendix, Fig. S4 C and D). Strikingly, prolonged RBN-2397 treatment or knockdown of PARP7 and FRA1 (>48 h), respectively, led to an increased number of early (Annexin-V+/PI−) and late (Annexin-V+/PI+) apoptotic cells and promoted the robust cleavage of CASP8, CASP3, and PARP1 (Fig. 4 H and I and SI Appendix, Fig. S4E). Taken together, we demonstrated that the depletion of FRA1 induced major transcriptional changes, which resulted in reduced cell viability, likely mediated by the activation of CASP8.
FRA1 Suppresses Apoptosis by Inhibiting IRF1- and IRF3-Dependent RIG-I-Like Receptor Signaling.

Given that FRA1 knockdown induces genes associated with immune signaling and apoptosis, we investigated which transcriptional regulators are critical for the immune response and CASP8-dependent apoptosis. The interferon regulatory factor 3 (IRF3) and its target gene IRF1 are critical regulators of immune signaling and apoptosis (34)(35)(36)(37)(38). Moreover, FRA1 was found to directly repress IRF3 activation by translocating to the cytoplasm and inhibiting TBK1 (27). Therefore, we depleted IRF3 or IRF1 and analyzed the transcriptional changes of selected FRA1 target genes by RT-qPCR. Compared to siFRA1 alone, the knockdown of FRA1 and IRF3 or IRF1 reduced both the induction of immune signaling and apoptosis-associated genes (Fig. 5A and SI Appendix, Fig. S5A). As expected, IRF3 was also essential for the transcriptional upregulation of IRF1 following the depletion of FRA1 (Fig. 5A), confirming that IRF1 is a direct target gene of IRF3. Remarkably, the lack of IRF3 or IRF1 also decreased the cleavage of CASP8 and significantly improved cell viability following the knockdown of FRA1 (Fig. 5 B and C). In addition, we also determined whether AHR, an activator of PARP7 expression (SI Appendix, Fig. S2A), contributes to apoptosis in the absence of FRA1. Simultaneous knockdown of FRA1 and AHR only slightly reduced CASP8 cleavage, despite the FRA1-dependent activation of the AHR target genes CYP1A1 and IL1B (SI Appendix, Fig. S5 B-D), suggesting that not AHR but rather IRF3 and IRF1 are necessary for the induction of apoptosis. Next, we investigated which upstream signaling pathways promote the proapoptotic function of IRF3 and IRF1 after FRA1 knockdown. In cancer cells, aberrant cytoplasmic NAs are potent activators of cytosolic NA-sensing pathways and can promote the activation of IRF3 and IRF1 (39,40). The binding of cellular double-stranded DNA to the cytoplasmic guanosine monophosphate-adenosine monophosphate (cGAMP) synthase (cGAS) stimulates the production of cGAMP and activates the stimulator of interferon genes (STING) (41). In addition to cGAS/STING activation, defective DNA damage responses increase aberrant cytoplasmic RNAs that trigger binding of Retinoic acid-inducible gene I (RIG-I)-like receptors (RLRs) to mitochondrial antiviral-signaling protein (MAVS) (13). Constitutive activation of STING and MAVS promotes the TBK1-dependent phosphorylation and nuclear translocation of IRF3 (42). Similarly, IRF1 can be activated by cytoplasmic DNA and RNA sensing (43,44). Moreover, previous studies suggest that PARP7 is a critical negative regulator of cGAS/STING and RLR/MAVS signaling (5,6). To investigate the potential proapoptotic role of NA-sensing signaling, we depleted FRA1 and transfected cells with STING and RLR agonists (cGAMP or poly(I:C)), respectively. Interestingly, we observed that RLR but not STING activation synergistically induced FRA1 target genes (i.e., TNF) and CASP8 cleavage (Fig. 5 D and E). Consistently, following FRA1 knockdown, treatment with the cGAS inhibitor G140 only slightly reduced the transcriptional upregulation of FRA1 target genes and did not prevent apoptosis via the cleavage of CASP8 (SI Appendix, Fig. S5 E and F), indicating that RLR signaling contributes most significantly to apoptosis in NCI-H1975 cells. Given the substantial upregulation of TNF after FRA1 depletion, we investigated whether the secretion of TNFα would initiate apoptosis in a paracrine manner by activating complex IIa (45). Although we observed that treatment with exogenous TNFα synergized with the knockdown of FRA1 in inducing CASP8-dependent apoptosis (SI Appendix, Fig. S5 G and H), blocking endogenous TNFα with a neutralizing antibody rescued neither the cleavage of CASP8 nor cell viability after the knockdown of FRA1 (SI Appendix, Fig. S5 G and H). Together, these data suggest that increased TNFα expression following PARP7 inhibition is not the initiator of apoptosis but promotes its amplification and that the activation of CASP8 upon FRA1 loss is TNFα-independent.
Next, we investigated how FRA1 exerts its repressive function toward the RLR-signaling-dependent activation of IRF3/IRF1. In contrast to a previous report (27), neither poly(I:C) treatment nor PARP7 inhibition induced a cytoplasmic translocation of FRA1 (SI Appendix, Fig. S5I), suggesting an alternative mechanism for the FRA1-dependent inhibition of IRF3. Remarkably, we found that both FRA1 knockdown and PARP7 inhibition, comparable to poly(I:C) transfection, promoted the nuclear translocation of IRF3 (Fig. 5F and SI Appendix, Fig. S5J), indicating that nuclear FRA1 indirectly exerts its repressive function toward cytoplasmic IRF3. A previous study suggested that IRF1-dependent upregulation of DDX58 (RIG-I) and IFIH1 (MDA5) leads to the activation of IRF3 and, thus, inflammatory and proapoptotic gene expression (38). Consistent with these observations, we found that upregulation of RIG-I and MDA5 after PARP7 inhibition and FRA1 depletion was dependent on IRF1 (Fig. 5G), suggesting that FRA1 represses IRF3 activation by inhibiting the IRF1-dependent RIG-I and MDA5 upregulation. Indeed, the depletion of IRF1 not only dampened the increase in RIG-I and MDA5 expression but also reduced the nuclear translocation of IRF3 following RBN-2397 and siFRA1 treatment (Fig. 5H) without abrogating the NA-dependent activation of RLR signaling (SI Appendix, Fig. S5K). Notably, in comparison to immediate response genes like IL6, we found that IRF1 was transcriptionally increased only after extended PARP7 inhibition (Fig. 5I). This suggests that, under basal conditions, FRA1 blocks the IRF1-dependent transcription of RIG-I and MDA5 and that IRF1 expression likely increases only after the activation of IRF3 (Fig. 5A). Together, these findings provide strong evidence that FRA1 functions as a negative regulator of IRF1, which suppresses an RLR-signaling and IRF3-dependent feedforward loop that ultimately inhibits immune signaling and apoptosis.
PARP7 Expression Is a Marker for RBN-2397 Sensitivity of FRA1-Driven Cancer Cells.

To investigate whether the identified PARP7/FRA1/IRF1/IRF3 axis is observed in other cancer cell lines, we compared FRA1 and PARP7 expression levels across 1,078 cell lines from various origins using the DepMap project dataset. Higher FRA1 mRNA levels corresponded with higher FRA1 cell line dependency (SI Appendix, Fig. S6A). Of note, a recent report found that PARP7 mRNA levels positively correlated with PARP7 cell dependency (5). Based on these observations, we explored whether high FRA1 expression levels would indicate PARP7 dependency by comparing FRA1 expression levels with the dependency scores of all assessed genes in the DepMap database. Among the top two genes exhibiting a substantial correlation with FRA1 expression, we identified FRA1 and PARP7, suggesting that high FRA1 expression levels correlated with a higher PARP7 dependency (SI Appendix, Fig. S6 B and C). Thus, we determined whether other lung and breast cancer cell lines that express high levels of FRA1 and/or PARP7 also exhibited increased RBN-2397 sensitivity compared to cell lines with lower FRA1 and/or PARP7 expression levels (Fig. 6A and SI Appendix, Fig. S6D). Indeed, RBN-2397 decreased the viability of cell lines expressing higher levels of FRA1 and PARP7 in a dose-dependent manner, with IC50 values comparable to those observed in NCI-H1975 cells (Fig. 6B). In contrast, all other tested cell lines were insensitive to RBN-2397, including MDA-MB-436 cells, which have very little PARP7 but express FRA1 at levels similar to all of the RBN-2397-sensitive cells (Fig. 6 A and B and SI Appendix, Fig. S6D). Together, these findings indicate that PARP7 levels can be used to predict the RBN-2397 sensitivity of FRA1-positive lung and breast cancer cells.
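Dose-response IC50 values like these are usually obtained from a four-parameter logistic (Hill) fit to viability data. A minimal sketch follows, with illustrative doses and viabilities rather than the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    """Four-parameter logistic model of viability versus dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2])     # µM, illustrative
viab = np.array([0.98, 0.95, 0.80, 0.45, 0.20, 0.12])  # fraction of DMSO control

p0 = [1.0, 0.1, 1.0, 1.0]  # rough starting guesses: top, bottom, IC50, hill
(top, bottom, ic50, hill), _ = curve_fit(four_pl, dose, viab, p0=p0)
print(f"IC50 ~ {ic50:.2f} µM (Hill slope {hill:.2f})")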
To confirm that PARP7 inhibition leads to FRA1 degradation in all RBN-2397-sensitive cell lines, we treated cells with the PARP7 inhibitor for 72 h and analyzed FRA1 protein turnover by immunoblotting. All RBN-2397-sensitive cell lines showed a significant decline in FRA1 protein levels (Fig. 6C and SI Appendix, Fig. S6E). In contrast, FRA1 protein levels did not decrease in the insensitive cell lines, suggesting that FRA1 is not controlled by PARP7's enzymatic activity in these cells (Fig. 6C and SI Appendix, Fig. S6E). Next, we verified whether PARP7-mediated ADP-ribosylation of FRA1 would also inhibit its proteasomal degradation in the sensitive cell lines. Indeed, the knockdown of PSMC3 rescued the RBN-2397-dependent degradation of FRA1 in all sensitive cell lines, confirming that in RBN-2397-sensitive cell lines PARP7 stabilizes FRA1 and inhibits its degradation by PSMC3 (Fig. 6D and SI Appendix, Fig. S6F). Similarly, RBN-2397 treatment and PARP7 or FRA1 knockdown increased the expression of inflammatory and apoptotic genes (Fig. 6E and SI Appendix, Fig. S6 G and H). At the same time, we observed a reduction in the proliferative signature of HCC827 and MDA-MB-231 cells (Fig. 6E), further validating our results in NCI-H1975 cells following PARP7 and FRA1 depletion (Fig. 4G). Lastly, we observed that IRF3 and IRF1 knockdown in HCC827 and MDA-MB-231 cells significantly improved cell viability following FRA1 downregulation (Fig. 6F). Collectively, our findings suggest that PARP7 inhibition induces IRF1- and IRF3-dependent apoptosis by promoting the degradation of FRA1 in FRA1-driven lung and breast cancer cell lines (Fig. 6G).
Discussion
In recent years, combination therapies, which harness the synergistic effects of immune checkpoint inhibitors and intratumoral innate immunity, have emerged as promising strategies to control tumor development and progression (46). Considerable attention has been given to PARP7, mainly because its inhibition restores type I IFN signaling in cancer cells and results in durable, complete tumor regression in human cancer xenografts and clinical trials (15). Moreover, the PARP7 inhibitor RBN-2397 was also shown to inhibit cancer cell growth in a cell-autonomous manner by regulating cell death and cell proliferation (5, 47). However, the mode of action of PARP7 inhibitors, as well as the identification and validation of PARP7 targets and, thus, of potential biomarkers that suppress cancer cell immune signaling and promote cancer cell viability, have been missing. To address this issue, we explored PARP7 activity in PARP7 inhibitor-sensitive lung and breast cancer cell lines using an integrated approach that combined MS-based ADP-ribosylome analyses with transcriptomics and proteomics.
Here, we identified endogenous PARP7 targets using an LC-MS/MS-based enrichment strategy for ADP-ribosylated proteins, comparing the modified peptides identified in the presence or absence of PARP7 inhibition (28). We found that endogenous PARP7 predominantly localizes to the nucleus and modifies its targets solely on cysteine residues. Among the PARP7 target proteins, we identified the AP-1 transcription factor FRA1 as essential for cell survival and a crucial regulator of PARP7 inhibitor-mediated cell death. Our data suggest that PARP7-mediated ADP-ribosylation of FRA1 at C97 prevents the binding of FRA1 to PSMC3, a 19S proteasome subunit, and thus its proteasomal degradation. Moreover, comparable to the increase in FRA1 stability mediated by PARP7 ADP-ribosylation, it was previously reported that FRA1 is stabilized by phosphorylation at its C-terminus (20, 25, 48). However, we observed that the stabilization of FRA1 by PARP7 is regulated independently of FRA1's phosphorylation. Hence, we hypothesize that PARP7-mediated ADP-ribosylation specifically regulates FRA1 binding to PSMC3 and, thus, FRA1 degradation.
In contrast to a previous report, AHR was not ADP-ribosylated in NCI-H1975 cells (9). Nevertheless, we confirmed that AHR expression sensitizes cells to PARP7 inhibition (47). Therefore, further investigations are required to understand how AHR sensitizes cells toward PARP7 inhibition independent of AHR ADP-ribosylation. Furthermore, we found that AHR partly controls PARP7 expression and that FRA1 inhibits the upregulation of a subset of AHR target genes (e.g., CYP1A1). These findings point toward a complex interplay between AHR, PARP7, and FRA1, in which PARP7-mediated ADP-ribosylation of FRA1 might impair AHR-dependent transcription.
Consistent with previous reports describing PARP7 as a negative regulator of innate immune signaling, we found that FRA1 represses the expression of genes associated with innate immunity (5, 6). However, we did not detect the previously described PARP7-mediated ADP-ribosylation of TBK1 (5, 49). In addition, our results suggest that PARP7 exclusively localizes to the nucleus, whereas TBK1 localizes to the cytoplasm. Therefore, our findings support the conclusion of a previous study demonstrating that PARP7 inhibition regulates type I IFN signaling downstream of TBK1 (6). Remarkably, in the absence of FRA1, we found that the downstream target of TBK1, IRF3, and its target gene IRF1 were crucial transcription factors for the induction of immune signaling and apoptosis. While IRF3 activation was solely dependent on RLR signaling in NCI-H1975 cells, other studies found the cGAS/STING pathway to induce IRF3-dependent immune signaling across various RBN-2397-sensitive cell lines (5, 6). These findings emphasize the critical and central role of IRF3 activation in inducing apoptosis in cancer cells following PARP7 inhibition, independent of the cell-type-specific upstream signaling (i.e., RLR or cGAS/STING signaling). Moreover, we found that the repressive function of FRA1 toward IRF3 is indirect, since IRF3 was only activated and, in turn, translocated to the nucleus in an IRF1-dependent manner after the loss of FRA1. Consistent with a previous report (38), we found that IRF3 was initially activated through the IRF1-dependent upregulation of RIG-I and MDA5, which suggests a feedforward loop in which IRF1 activates IRF3 and IRF3 in turn transcriptionally up-regulates IRF1. Based on our data, we cannot conclusively exclude that PARP7 inhibition, through an uncharacterized mechanism, also increases aberrant cytoplasmic NA and thereby activates IRF3. In conclusion, our findings highlight how the PARP7-FRA1 axis regulates IRF1- and, consequently, IRF3-mediated cell-intrinsic apoptosis signaling in lung and breast cancer cells.
FRA1 is regarded as a potent oncogene, and its overexpression is associated with more malignant tumors and poor patient outcomes (20). Intriguingly, we could demonstrate that high PARP7 expression levels are critical for both FRA1 protein stability and the RBN-2397 sensitivity of FRA1-positive lung and breast cancer cell lines. Based on this finding, we hypothesize that assessing PARP7 expression levels might be of clinical importance for most FRA1-positive cancers, especially since the PARP7 inhibitor RBN-2397 has entered clinical trials (15).
Materials and Methods
Detailed methods are provided in supporting information. Human cell lines were obtained from ATCC or were a gift from Ursula Klingmüller (DKFZ, Heidelberg, Germany) and were regularly tested for Mycoplasma contamination. NCI-H1975, NCI-H1650, MDA-MB-231, MDA-MB-436, A549, and HEK293T cells were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM); HCC827 cells were cultured in high-glucose RPMI-1640 medium. All media were supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. All cells were grown at 37 °C in a humidified atmosphere with 5% CO2.
Data, Materials, and Software Availability. Proteomic and sequencing data have been deposited in ProteomeXchange (PXD041870) (50) and GEO (GSE229674) (51). Previously published data were used for this work [we used the publicly accessible transcriptomics dataset GSE177494 (RBN-2397 treatment of NCI-H1373 cells for 24 h); the RNA sequencing data of 1,078 cell lines, cell line annotations, and gene dependency scores were downloaded from the Dependency Map (DepMap) project portal (https://depmap.org/portal, release: Public 22Q4)] (52).
Fig. 1. PARP7 controls transcription by ADP-ribosylation of its nuclear protein targets in NCI-H1975 cells. (A) Cell viability of NCI-H1975 cells was measured after six days of treatment with increasing concentrations of RBN-2397. Data are depicted as the mean ± SD of N = 3 biological replicates. Curves were fitted using a four-parametric nonlinear model. (B) IF analysis of endogenous FLAG-tagged PARP7 after the knockdown of PARP7 in NCI-H1975 cells. Representative image from a single experiment of N = 3 biological replicates. The scale bar represents 20 μm. (C) Volcano plot shows changes in ADP-ribosylation in NCI-H1975 cells treated with 100 nM RBN-2397 or DMSO for 24 h, N = 4 technical replicates. Red: significant down; blue: significant up; gray: nonsignificant. Significant changes were defined by FDR < 0.05 and FC ≥ ±1. (D) Bar graphs showing the count of unique ADPr-PSMs, unique ADPr-proteins, and unique ADPr-sites with ≥95% site-localization confidence (Upper). ADPr amino acid residue distribution was assessed by EThcD and HCD fragmentation (Lower). Data are shown as mean ± SD of N = 4 technical replicates. (E) STRING network visualization of proteins exhibiting a significant decrease in ADP-ribosylation after RBN-2397 treatment in NCI-H1975 cells (node size and color: −Log10(P)). Default STRING clustering confidence was used (P > 0.4), and disconnected proteins were omitted from the network unless they were identified by FDR < 0.05 and FC ≥ 2. (F and G) Heat maps showing RT-qPCR analysis of mRNA and pre-mRNA levels in NCI-H1975 cells treated with RBN-2397 for the indicated periods. The data are represented as the mean Log2(FC) of N = 5 and N = 3 biological replicates, respectively.
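For readers unfamiliar with the four-parametric nonlinear model referenced in the viability legends, a minimal sketch of such a fit is shown below. The synthetic data points, parameter names, and starting guesses are illustrative assumptions; only the functional form (a standard four-parameter logistic dose-response curve) is implied by the legend.

```python
# Minimal sketch: four-parameter logistic (4PL) fit of a dose-response curve.
# Synthetic data and starting values are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-3, 1e-2, 1e-1, 1, 10, 100, 1000])       # nM, hypothetical
viab = np.array([1.00, 0.98, 0.90, 0.65, 0.35, 0.15, 0.10])  # fraction of DMSO control

params, _ = curve_fit(four_pl, conc, viab, p0=[0.1, 1.0, 1.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.2f} nM (Hill slope {hill:.2f})")
```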
Fig. 2. The PARP7 inhibition-dependent decline in cell viability is mediated by the degradation of FRA1. (A) Waterfall plot showing the robust z-transformed (MAD z-score) SI (sensitivity index) for the mean of each siRNA of N = 3 biological replicates. Red: < −1.5; blue: > 1.5; dark red: genes for which two out of three siRNAs showed a robust z-score ≤ −1.5. (B) Cell viability of NCI-H1975 cells 5 d after siRNA transfection and RBN-2397 treatment. The data are normalized to the siSCR + DMSO control and shown as the mean ± SD of N = 3 biological replicates. (C) STRING network visualizes the genetic codependencies between genes that exhibited a strong correlation in the DepMap dataset (Spearman ≥ 0.1; edge size: strength of Spearman's correlation). (D) Immunoblot of NCI-H1975 cells following 48 h PARP7 knockdown. Representative image of N = 3 biological replicates. (E and F) Immunoblot of NCI-H1975 cells treated with RBN-2397 for the indicated periods. Quantification of FRA1 and c-Jun immunoblots and FRA1 mRNA is shown as the mean ± SD of N = 3 biological replicates. (G) Immunoblot of NCI-H1975 cells treated first with RBN-2397 for 24 h and then with CHX (50 μg/mL) and RBN-2397 for the indicated periods (Left). Quantification of FRA1 immunoblots is shown as the mean ± SD of N = 3 biological replicates (Right).
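The robust (MAD-based) z-score used to rank screen hits in Fig. 2A can be computed as sketched below. The input values are hypothetical placeholders; the 1.4826 scaling constant is the usual factor that makes the MAD consistent with a normal standard deviation.

```python
# Sketch: robust z-score (median/MAD) for siRNA screen sensitivity indices.
# Input values are hypothetical; 1.4826 makes the MAD comparable to a std. dev.
import numpy as np

def robust_z(values):
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) * 1.4826
    return (values - med) / mad

si = [0.02, -0.01, 0.03, -0.45, 0.01, -0.52, 0.00]  # sensitivity index per siRNA
z = robust_z(si)
hits = [i for i, zi in enumerate(z) if zi <= -1.5]  # threshold used in Fig. 2A
print(np.round(z, 2), "candidate hits at indices:", hits)
```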
Fig. 3. ADP-ribosylation of FRA1 on C97 reduces its degradation by PSMC3 and the proteasome. (A) Immunoblot of NCI-H1975 cells transduced with lentiviral constructs for empty vector (EV), FRA1-WT, or C97A and expressed under a constitutive promoter. Representative image from a single experiment with N = 3 biological replicates. Quantification of FRA1 immunoblots is indicated as the average of N = 3. (B) Immunoblot following eAf1521-dependent pull-down (PD) of ectopically expressed FRA1-WT or -C97A in NCI-H1975 cells. Representative image from a single experiment with N = 3 biological replicates. Quantification of FRA1 PD immunoblots was performed by normalizing to the FRA1 input and is indicated as the average of N = 3. (C) Immunoblot of NCI-H1975 cells treated as in Fig. 2G (Upper). Quantification of FRA1 immunoblots is shown as the mean ± SD of N = 3 biological replicates (Lower). (D) Immunoblot of NCI-H1975 cells treated with RBN-2397 for the indicated periods and then for 4 h with 10 μM MG132. Representative image from a single experiment with N = 3 biological replicates. (E and F) Immunoblots of NCI-H1975 cells following the knockdown of USP21 or PSMC3 for 48 h, respectively. Representative images from a single experiment with N = 3 biological replicates. Quantification of FRA1 immunoblots is indicated as the average of N = 3. (G) Immunoblot of NCI-H1975 cells following the knockdown of PSMC3 for 48 h and treatment according to Fig. 2G. Representative image from a single experiment with N = 3 biological replicates (Upper). Quantification of FRA1 immunoblots is shown as the mean ± SD of N = 3 biological replicates (Lower). (H) Immunoblot of NCI-H1975 cells after MG132 treatment for 4 h and the knockdown of PSMC3 with two independent siRNA sequences for 48 h. Representative image from a single experiment with N = 3 biological replicates. Quantification of NRF2 and FRA1 immunoblots is indicated as the average of N = 3. (I) IP of PSMC3 from NCI-H1975 cells treated for 24 h with RBN-2397 and immunoblotting for FRA1. Representative image from a single experiment with N = 3 biological replicates. Quantification of FRA1 normalized to FRA1 input is indicated as the average of N = 3.
Fig. 4. FRA1 inhibits cellular immune signaling and apoptosis and promotes cell proliferation. (A and B) Volcano plots showing changes in gene expression after 48 h FRA1 knockdown or 6 h RBN-2397 treatment in NCI-H1975 cells. Biological quadruplicates (N = 4) for each condition were subjected to RNA sequencing. Significant changes are indicated as FDR < 0.05 and Log2(FC) ≥ ±1. (C) Overlap between up- and down-regulated genes after siFRA1 and RBN-2397 treatment of NCI-H1975 cells. (D) Overlap of up- and down-regulated genes in NCI-H1373 cells after RBN-2397 treatment with differentially expressed genes in NCI-H1975 cells. (E and F) Enrichment of core gene sets after FRA1 knockdown or RBN-2397 treatment in NCI-H1975 cells. Significance as FDR-corrected q-values. (H: Hallmarks; K: KEGG; W: WikiPathways). (G) Heat map showing RT-qPCR analysis of NCI-H1975 cells after RBN-2397 treatment for 6 and 48 h and the knockdown of PARP7 and FRA1 for 48 h. Proteins significantly up-regulated in whole-proteome LC-MS/MS analysis (from SI Appendix, Fig. S4A) are indicated by blue points. Data are normalized to DMSO or siSCR controls and shown as the mean of N = 3 biological replicates. (H) Bar graph showing early and late apoptotic cells following RBN-2397 treatment or PARP7 and FRA1 knockdown for 72 h in NCI-H1975 cells. Data are represented as the mean ± SD of N = 3 biological replicates. (I) Immunoblot after RBN-2397 treatment or PARP7 and FRA1 knockdown for 72 h in NCI-H1975 cells. Representative image from a single experiment with N = 3 biological replicates. Quantification is depicted as the mean of N = 3.
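The significance thresholds quoted in the volcano plots (FDR < 0.05 and |Log2(FC)| ≥ 1) translate directly into a simple filtering step. The sketch below assumes a generic differential-expression results table with hypothetical column names; it does not reproduce this study's exact pipeline.

```python
# Sketch: flag differentially expressed genes with the Fig. 4 thresholds.
# Column names ("gene", "log2fc", "fdr") are assumed placeholders.
import pandas as pd

de = pd.read_csv("de_results.csv")  # one row per gene, hypothetical file
up = de[(de["fdr"] < 0.05) & (de["log2fc"] >= 1)]
down = de[(de["fdr"] < 0.05) & (de["log2fc"] <= -1)]

print(f"{len(up)} up-regulated, {len(down)} down-regulated genes")
# Overlaps between conditions (e.g., siFRA1 vs. RBN-2397, as in Fig. 4C) can
# then be taken as set intersections of the flagged gene symbols:
# shared_up = set(up_sifra1["gene"]) & set(up_rbn["gene"])
```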
Fig. 6. Cellular RBN-2397 sensitivity is dependent on high PARP7 expression in FRA1-positive cancer cell lines. (A) RT-qPCR analysis (Upper) and immunoblot (Lower) are shown for the indicated cell lines and depicted as the mean ± SD of N = 2 biological replicates. Representative immunoblot from a single experiment with N = 3 biological replicates (SE: short exposure; LE: long exposure). (B) Cell viability of the indicated cell lines following six days of treatment with increasing concentrations of RBN-2397. Data are depicted as the mean ± SD of N = 2 biological replicates. Curves were fitted using a four-parametric nonlinear model. (C) Immunoblot following RBN-2397 treatment for 72 h in the indicated cell lines. Representative image from a single experiment with N = 3 biological replicates (SE: short exposure; LE: long exposure). (D) Immunoblot following RBN-2397 treatment for 72 h and PSMC3 knockdown for 48 h in the indicated cell lines. Representative image from a single experiment with N = 3 biological replicates. (E) Heat map showing RT-qPCR analysis of HCC827 and MDA-MB-231 cells treated as in Fig. 4G. Data are normalized to DMSO or siSCR controls and shown as the mean of N = 2 biological replicates. (F) Cell viability following 72 h of knockdown. Data are depicted as the mean ± SD of N = 4 biological replicates. (G) Schematic of the proposed mechanism of action of RBN-2397 in FRA1-driven lung and breast cancer cells. Under untreated conditions, FRA1 is ADP-ribosylated by PARP7 on C97 (BR: basic region; LZ: leucine zipper). Loss of FRA1 ADP-ribosylation upon PARP7 inhibition results in the PSMC3-dependent proteasomal processing of FRA1. The degradation of FRA1 increases the IRF1-dependent expression of RIG-I and MDA5, which in turn promotes the activation and nuclear translocation of IRF3. The activation of IRF3 allows for the upregulation of cytokine expression and promotes CASP8-dependent apoptosis.
|
v3-fos-license
|
2024-07-01T15:06:15.188Z
|
2024-06-01T00:00:00.000
|
270848875
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.crphar.2024.100193",
"pdf_hash": "43ed36368084ee320e0d2d31a7b2fd8743a42010",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45678",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "bf4f8a42b551c3cbeeed9abe9c17bea94bf2cdfe",
"year": 2024
}
|
pes2o/s2orc
|
The potential of miRNA-based approaches in glioblastoma: An update in current advances and future perspectives
Glioblastoma (GBM) is the most common malignant central nervous system tumor. The emerging field of epigenetics stands out as particularly promising. Notably, the discovery of microRNAs (miRNAs) has paved the way for advancements in diagnosing, treating, and prognosticating patients with brain tumors. We aim to provide an overview of the emergence of miRNAs in GBM and their potential role in the multifaceted management of this disease. We discuss the current state of the art regarding miRNAs and GBM. We performed a narrative review using the MEDLINE/PUBMED database to retrieve peer-reviewed articles related to the use of miRNA approaches for the treatment of GBMs. MiRNAs are intrinsic non-coding RNA molecules that regulate gene expression mainly through post-transcriptional mechanisms. The deregulation of some of these molecules is related to the pathogenesis of GBM. The inclusion of molecular characterization in the diagnosis of brain tumors and the advent of less-invasive diagnostic methods such as liquid biopsies highlight the potential of these molecules as biomarkers for guiding the management of brain tumors such as GBM. Importantly, more studies are needed to better examine the application of these novel molecules. The constantly changing characterization of and approach to the diagnosis and management of brain tumors broaden the possibilities for the inclusion of novel epigenetic molecules, such as miRNAs, for a better understanding of this disease.
Introduction
Gliomas stand out as the most common malignant primary central nervous system (CNS) tumors (Louis et al., 2021). Glioblastoma (GBM) is the most prevalent glioma and is classified as a grade 4 tumor according to the 5th edition of the World Health Organization (WHO) classification (WHOCNS5) (Louis et al., 2021). It accounts for 14.5% of all CNS tumors and 48.6% of all malignant CNS tumors (Grochans et al., 2022). Despite the availability of multimodal therapeutic approaches, GBM still exhibits a poor prognosis, with a five-year survival rate of 5.5% (Ostrom et al., 2017). Data show that, over the past 30 years, the median survival of GBM has not changed significantly, remaining at a low 2 years or less (Liu et al., 2013). Furthermore, the life expectancy of patients with GBM is approximately 1 year, and for patients exhibiting recurrence it is around 4 months (Zhao et al., 2017a). Although there has been significant progress in the comprehension of GBM biology, there is still a conceptual gap concerning the molecular mechanisms responsible for pathogenesis and the therapeutic options for treating this disease (Gonzalez-Gomez et al., 2011; Low et al., 2014). However, recent advancements in molecular pathology have unveiled compelling links between glioma development and various epigenetic phenomena involving histone modifications, deoxyribonucleic acid (DNA) methylation, chromatin remodeling, and dysregulation of ribonucleic acid (RNA) profiles (Phillips et al., 2020; Uddin et al., 2022). These advances, which have led to numerous novel therapeutic strategies, such as gene editing, epigenetic drugs, or microRNA (miRNA) modifications, have molded a path for reducing the pathological impact of this disease (Uddin et al., 2022).
Recently, non-coding RNAs (ncRNAs), such as miRNAs, have emerged as new effectors in the epigenetic field, capable of influencing gene transcription and translation without altering the DNA sequence, as traditionally seen in other epigenetic processes (Banelli et al., 2017). MiRNAs play crucial roles in regulating cell cycle checkpoints and tyrosine signaling pathways (Ames et al., 2017). Their significance extends to the regulation of cancer (Beylerli et al., 2022), neural development (Ma et al., 2023; Nowakowski et al., 2018; Ivey and Srivastava, 2015), and stem cell functions (Gangaraju and Lin, 2009). For example, miR-21 and miR-26 are overexpressed in GBM and act on the mRNAs of many genes related to P53, a well-known tumor suppressor and transcription factor directly related to cell cycle arrest (Chan et al., 2005). Consequently, the altered expression of these miRNAs can inhibit cell cycle arrest and cell death (Sati and Parhar, 2021). On the other hand, miRNAs can also regulate the retinoblastoma (RB) pathway. MiR-124 and miR-137 are downregulated in GBM, and restoring their normal expression levels increases cell cycle arrest at the G0/G1 phase (Silber et al., 2008; Godlewski et al., 2008). These effects are related principally to the regulation of cyclin-dependent kinase (CDK) signaling pathways. Furthermore, recent studies have identified a specific subset of cancer stem cells (CSCs) within solid tumors like GBM (Piper et al., 2021; Lathia et al., 2015). These CSCs can initiate tumor growth, drive malignant progression, and confer resistance to radiation and chemotherapy. Notably, GBM-derived CSCs share essential characteristics with neural stem cells (NSCs), such as self-renewal and multipotency, which may be influenced by miRNAs (Makowska et al., 2023; Khan et al., 2019). Importantly, the upregulation of miR-21 is the most pronounced within high-grade gliomas (HGGs) (Aloizou et al., 2020; Nieland et al., 2022; Belter et al., 2016). Conversely, a reduction in the expression levels of both miR-219 and miR-7 has been associated with an elevation in the expression of the epidermal growth factor receptor (EGFR), a receptor tyrosine kinase commonly observed to be overexpressed and activated in GBM (Ames et al., 2017). Here we aim to provide an overview of the current understanding of miRNAs in GBM development, with a focus on current advances in diagnosis and treatment as well as future perspectives.
Materials and methods
A comprehensive narrative review of the latest available literature regarding the current use of miRNAs in GBM was conducted in both the English and Spanish languages, with a focus on pathophysiology, diagnosis, prognosis, and treatment. The search was done by screening titles and abstracts of pertinent articles in the MEDLINE/PUBMED database. References were inspected to gather additional studies. Schematic illustrations were also included.
We also performed a scoping review regarding the role of miRNAs in liquid biopsies for GBM detection and how such diagnostic tools could significantly enhance therapeutic strategies for the clinical management of GBM patients. We reviewed all original studies indexed in the PUBMED and EMBASE databases published in English and Spanish. The search included data from 2008 to 2024. The screening guidelines encompassed studies with fundamental demographic data and follow-up information that were accessible via these databases. The databases were last consulted on May 21, 2024. Our search yielded 1,924 studies. The abstracts were reviewed and filtered by WJS, AFS, EGO, and NRA. Only original studies were included, and a total of 44 were finally incorporated into the review. Data from the articles in the review were extracted using an artificial intelligence (AI) platform (TextCortex [https://textcortex.com/pdf-ai-alternative]). Once the selected articles were obtained in PDF format, they were submitted to and processed by the AI to identify and retrieve specific miRNAs mentioned in relation to glioblastoma. The AI was instructed to extract details on the miRNAs' associations with glioblastoma, including the biological fluids in which they were found and their reported utilities.
Biogenesis of miRNA
MiRNAs constitute a class of intrinsic ncRNAs of approximately 18-22 nucleotides in length (Fig. 1), playing a crucial role in regulating gene expression through pre- and post-transcriptional mechanisms, particularly messenger RNA (mRNA) degradation (Chen et al., 2021; Bartel, 2004; Xiao et al., 2017). The biogenesis of miRNAs is illustrated in Fig. 2. These molecules modulate gene expression by interacting with the 3′-untranslated region (3′-UTR) of target mRNAs (O'Brien et al., 2018). Functioning through non-mutational mechanisms, miRNAs serve as significant epigenetic effectors (Banelli et al., 2017). Additionally, it is important to note the role of epigenetics in the biogenesis of miRNAs. This field, which refers to the study of variations in gene expression that occur without alterations to the DNA sequence (Farsetti et al., 2023), encompasses different reversible and heritable processes involving DNA methylation, histone modifications, and various RNA-mediated changes (Zhang et al., 2020). Epigenetic mechanisms such as DNA methylation and histone modifications influence the transcriptional control of miRNA expression. For example, for miR-127, methylation of CpG sites and deacetylation of histones contribute to its silencing in tumor cell lines (Chuang and Jones, 2007).
miRNAs as biomarkers in GBM
The significance of biomarkers primarily lies in their ability to guide specific tumor treatments and disease monitoring, which is primarily done with a tumor biopsy. However, in cases such as GBM, this is not always feasible, given the high risk of neurological decline from performing a new intervention if the tumor is located deep or near to or within an eloquent area (Freidlin and Korn, 2014). Traditional biomarkers such as methyl-guanine-methyl-transferase (MGMT) methylation, which is associated with better prognosis and increased sensitivity to alkylating agents such as temozolomide (TMZ), still pose uncertainties in comparison to other molecular markers. The persistence of low survival rates in GBM over time underscores the need to develop new prognostic biomarkers that could aid in clinical decision-making (Huang et al., 2018). Consequently, considering that miRNAs are present in most body fluids, they have been considered potential candidates to serve as biomarkers for various pathologies (Weber et al., 2010). For instance, in gliomas, miRNAs have been described as possible biomarkers that could be associated with the prognosis, prevention, or progression of the disease, as well as with the response to adjuvant treatments (Mucaj et al., 2015; Que et al., 2015).
miRNAs expression profiles in GBM
The use of bioinformatic methods (e.g., clustering) and miRNA expression profiling has been shown to produce a better classification of tumors in terms of histology and prognosis than the sole use of mRNA expression (Huang et al., 2018). The role of miRNAs in cancer biology, including GBM, has been widely explored (de Menezes et al., 2021). One of the most studied miRNAs is miR-21, which is increased in many cases of GBM and appears to act as an oncogene (Brower et al., 2014). Similarly, miR-let-7 is often overexpressed, and this overexpression has been related to a decrease in cellular invasion and migration rates (Kong et al., 2012). Different deregulated miRNAs related to GBM are summarized in Table 1 (de Menezes et al., 2021; Bendahou et al., 2020). Additionally, some of them have been identified as possible treatment targets: miR-9, miR-21, miR-7, miR-34a, miR-4492, miR-320a, miR-146b-5p, and miR-146b (de Menezes et al., 2021). On the other hand, different miRNAs have been related to the epithelial-mesenchymal transition (EMT) (Setlai et al., 2022a). The expression of several ligands binding to tyrosine kinase receptors is influenced by specific miRNAs. For instance, the phosphatase and tensin homolog (PTEN) gene, encoding a tumor suppressor protein that negatively regulates the PI3K/AKT signaling pathway and thus controls cellular proliferation, is negatively regulated by miR-17-5p, miR-23a-3p, and miR-26a-5p (Ghafouri-Fard et al., 2021; Mukherjee et al., 2009). Similarly, the RAS signaling pathway, which is associated with cancer development through the upregulation of oncogenic transcription, increasing cell motility, survival, growth, metabolism, and migration, is upregulated by miR-143-3p, miR-123-3p, and let5a-5p (Setlai et al., 2022b; Gimple and Wang, 2019). Even critical tumor suppressors, like the p53 gene, are regulated by miRNAs such as miR-10p-5p (Setlai et al., 2022b). Additionally, exosomal miRNAs contribute to the understanding of GBM, as some are released during disease progression (miR-21, miR-301, miR-301a) (Aili et al., 2021). These exosomes can release miRNAs to surrounding normal cells through endocytosis or lipid membrane fusion, disrupting the homeostasis of normal cells and promoting the proliferation and invasion of malignant cells (Aili et al., 2021). Compared to exosomes derived from normal brain tissue, exosomes derived from tumor cells exhibit significantly increased levels of these miRNAs.

Fig. 1. Illustration of the structure of a miRNA. The illustration depicts the biogenesis process of miRNA molecules from a pri-miRNA to a pre-miRNA and finally to a mature miRNA, represented as a duplex.
Fig. 2. Biogenesis of miRNAs.
In the nucleus, the genes that code for miRNAs are transcribed in the form of long precursors, giving rise to the so-called primary miRNAs (pri-miRNAs), whose length reaches hundreds of nucleotide pairs. This precursor is cut by the Drosha/DGCR8 ribonuclease complex into one or several hairpin-shaped RNA molecules, transforming it into pre-miRNAs of 60-70 nucleotides. Drosha is composed of two RNase III domains (RIIIA and RIIIB) and an N-terminal domain. The pre-miRNAs leave the nucleus for the cytoplasm with the help of Exportin 5 (a Ran-GTP-dependent binding protein), where the miRNA maturation process takes place. In the cytoplasm, the pre-miRNA is transported by the RLC complex (microRNA-induced silencing complex [miRISC] loading complex), where the RNase Dicer/TRBP acts. This complex produces the cleavage of the pre-miRNA, generating a duplex miRNA with a mature miRNA strand and its complement. The mature strand, together with AGO 1-4 and GW182, forms the miRISC, and the complementary strand is eliminated. MiRISC binds to an mRNA molecule (usually in the 3′ untranslated region) that has a sequence complementary to its miRNA component and cleaves the mRNA, leading to degradation of the mRNA or modification of its translation. Image created with www.biorender.com.
Many miRNAs influence tumor pathways in GBM, ultimately modifying the regulation of mRNAs in terms of their genetic expression (Chen et al., 2021). This is the case for the widely studied miR-21, which has been identified as an apoptotic regulator, as demonstrated in studies where knockdown of the molecule resulted in cell apoptosis via caspase activation (Chan et al., 2005). By targeting several proteins such as TAp63, Heterogeneous Nuclear Ribonucleoprotein K (HNRPK), and Programmed Cell Death Protein 4 (PDCD4), miR-21 achieves inhibition of apoptotic pathways, further contributing to tumor cell proliferation (Chen et al., 2021). Cell proliferation, on the other hand, has been linked to the direct action of miR-21 on the PTEN, SMARCA4, and ANP32A genes, among others (Kwak et al., 2011; Schramedei et al., 2011). Conversely, multiple miRNAs have been identified that target oncogenes and play tumor-suppressive roles, as is the case for miR-7 (downregulated in GBM), which targets PI3K and Raf-1 via the EGFR pathway (Liu et al., 2014), and miR-128, which was found to decrease glioma cell proliferation by targeting E2F3a (Zhang et al., 2009). Several studies have also highlighted the fundamental role of different subtypes of miRNAs in CNS tumor development (Zhang et al., 2012). One study evaluated the invasion potential of miR-221/222 using methods such as diffusion tensor imaging, transwell assay, wound healing, and mouse tumor xenograft assays. In this study, the knockdown of miR-221/222 correlated with decreased cell invasion by interfering with tissue inhibitor of metalloproteinases 3 (TIMP3) levels (Zhang et al., 2012). Additionally, miR-221/222 knockdown was shown to inhibit tumor growth by increasing TIMP3 expression.
miRNAs-based GBM classifications
While mRNA-based classifications for GBM exist, they have not gained widespread acceptance, primarily because miRNAs have demonstrated greater accuracy in classifying and diagnosing tumor samples compared to mRNAs and because they have provided more accurate and significant demographic and clinical information regarding prognosis. MiRNA cluster identification has allowed the typification of glioblastoma into five subclasses related to its tumor cell precursor, yielding a differentiation-related classification system: oligoneural, neural, astrocytic, neuromesenchymal, and radial glial precursor subtypes, each suggesting a relationship between a subclass and a distinct stage of neural differentiation (Kim et al., 2011). When comparing subtypes based solely on RNA expression, oligoneural precursors correspond to the proneural GBM subtype due to mutations in isocitrate dehydrogenase 1 (IDH1), mesenchymal neural precursors correspond to the mesenchymal GBM subtype due to mutations in NF1, and radial glial precursors may correspond to the classic GBM subtype due to high levels of EGFR. However, GBM classification becomes more intricate when considering the cell subtypes of each tumor, their mixed cellular states (as GBM stem cell subpopulations maintain transcriptomic heterogeneity), and even the neural differentiation stage at which the tumor cell developed (Huang et al., 2018).
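As an illustration of how expression-based subclasses like these are typically derived, the sketch below applies hierarchical clustering to a miRNA expression matrix. The file name, matrix orientation, and choice of five clusters are assumptions for illustration; they do not reproduce the cited study's exact pipeline.

```python
# Sketch: hierarchical clustering of tumor samples by miRNA expression.
# File layout and k = 5 are illustrative assumptions, not the cited pipeline.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

expr = pd.read_csv("mirna_expression.csv", index_col=0)  # rows: samples, cols: miRNAs
expr = (expr - expr.mean()) / expr.std()  # z-score each miRNA across samples

dist = pdist(expr.values, metric="correlation")  # 1 - Pearson correlation
tree = linkage(dist, method="average")
labels = fcluster(tree, t=5, criterion="maxclust")  # cut the tree into five subclasses

for k in range(1, 6):
    members = expr.index[labels == k]
    print(f"cluster {k}: {len(members)} samples")
```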
As aforementioned, these precursor-related subclasses are associated with demographic characteristics and prognosis, showing cluster associations with race, age, treatment response, and patient survival rates. As shown by Kim et al., when compared with astrocytic tumors, patients with neuromesenchymal glioblastomas exhibited a trend toward longer survival. Additionally, patients with oligoneural glioblastomas had a notably longer survival time compared to those with radial glial, neural, or astrocytic tumors. On average, oligoneural glioblastomas were diagnosed in younger patients, and racial differences were noted across the miRNA-based glioblastoma subclasses, with a higher percentage of non-Caucasian patients found in the neural and astrocytic subclasses compared to the radial glial subclass (Kim et al., 2011). These miRNA clusters could potentially serve as biomarkers for diagnosis, aiding in the further classification of these tumors and providing prognostic information.
A study done by Kim et al. on 121 selected miRNAs revealed highly varied expression closely related to patient survival or previously associated with neuronal development (Kim et al., 2011). Additionally, the presence and deregulation of miRNAs in blood or cerebrospinal fluid (CSF) could potentially serve as biomarkers, such as miR-21 (Zhou et al., 2018).
microRNAs: dynamic interaction of pro-oncogenic vs anti-oncogenic functions in GBM
MiRNAs play a significant role in the regulation of gene expression and are involved in modulating many cellular processes, including apoptosis, proliferation, invasion, angiogenesis, and chemoresistance in GBM (Chen et al., 2021). Hence, alterations in the expression and function of different miRNAs contribute to the complex molecular landscape of the disease. The level of an individual miRNA can change dynamically at various stages of tumor development. It is important to consider the miRNA profile in GBM because it indicates the stage of the disease and can inform prognosis and the selection of an appropriate therapy (Makowska et al., 2023). MiRNAs can act both as anti- and pro-oncogenic factors by down- or upregulating tumor-involved genes. Additionally, functional analysis of different GBM-specific miRNAs indicates which act as oncogenes or tumor suppressors and which are responsible for developing resistance to chemotherapy and radiotherapy, stimulating neo-angiogenesis and cell proliferation, and regulating the cell cycle and apoptosis (Makowska et al., 2023). According to their roles in tumorigenesis, they can be classified as tumor suppressors, tumor promoters, or both. Tumor suppressor miRNAs target oncogenes, meaning that their decreased expression promotes tumor progression because tumorigenesis is no longer inhibited. Generally, those that disrupt the activity of the histone methyltransferase EZH2 can be regarded as tumor suppressors (Paskeh et al., 2022). Notably, miR-let-7 is one such miRNA that not only inhibits EZH2 but also targets oncogenes like MYC and K-RAS, enhancing its tumor-suppressive properties (Chirshev et al., 2019). Well-studied tumor suppressor miRNAs include miR-7, miR-34, and miR-128. MiR-7 is downregulated in GBM, allowing the overexpression of different oncogenes through the EGFR pathway and thereby promoting proliferation, migration, invasion, and metastasis. Both miR-34 and miR-128 are downregulated, with the latter involved in inhibiting the self-renewal of glioma stem cells and attenuating cell proliferation, tumor growth, and angiogenesis. MiR-34, on the other hand, induces apoptosis and inhibits cell migration, proliferation, and angiogenesis (Chen et al., 2021). Aside from these tumor suppressors, onco-miRNAs are involved in the development of GBM by targeting the expression of tumor suppressor genes, thereby promoting oncogenesis.
Onco-miRNAs are upregulated, hence promoting GBM progression. The most important onco-miRNAs are miR-10b, miR-21, and miR-93. MiR-10b has been implicated in the development of HGGs by enhancing the invasive capabilities of the tumor. It has been well documented that a decrease in the expression of miR-10b results in reduced cell growth, invasion, and angiogenesis, as well as increased apoptosis, through mechanisms that involve targeting of the RhoC, uPAR, and HOXD10 genes. MiR-21, the most widely investigated miRNA, has been shown to influence cell invasion, metastasis, and resistance to chemotherapeutics (Chen et al., 2021). It has been identified as an apoptotic regulator with high expression in GBM cells, acting through intricate mechanisms that involve the HNRPK, TAp63, FASL, P53, TGF-B, and PDCD4 genes (Chan et al., 2005). Cell proliferation and chemoresistance are also enabled by miR-21 through the targeting of specific genes such as MMPs, Ras/Raf, ERK, RECK, and TIMP3 (Chen et al., 2021). Finally, there is evidence that miR-93 is also a critical target in GBM, found to be upregulated in the development of the disease and involved in proliferation, migration, and invasion by affecting cell cycle arrest and promoting angiogenesis through the targeting of integrin-β8 (Fang et al., 2011).
On the other hand, GSK-3β acts as a potent tumor suppressor of the Wnt/β-catenin axis through the inhibition of Wnt signaling by targeting β-catenin. Several studies indicate that regulatory miRNAs can also inhibit the Wnt axis by promoting GSK-3β activity in diverse groups of cancer cells. For example, the tumor suppressor miR-34a has been reported to be downregulated in patients with GBM, resulting in poor prognosis and a shorter survival rate (Rahmani et al., 2023).
Some studies have documented in vitro that let-7 acts as a tumor suppressor and inhibits the malignant behavior of glioma cells and stem-like cells, although many of its interaction mechanisms remain to be elucidated. Additionally, RAS protein levels and the RAS/MAPK cascade are regulated by various miRNAs through mechanisms that are not yet clear (Messina, 2024).
Each miRNA can modulate the expression of several genes, creating an extraordinarily complex regulatory network in which a single gene can, in turn, be modulated by several different miRNAs. These biomarkers work as an intricate system of modulation and feedback that can serve both as diagnostics and as potential therapeutics (Chen et al., 2021). It is therefore indispensable to understand miRNA biology in order to continue identifying the growing number of miRNAs and their corresponding targets, so as to develop novel molecular therapies and diagnostic methods for the better treatment of GBM.
How can microRNAs be important in future diagnosis and treatments?
Current diagnosis and treatment of GBM represent a challenge that requires an integrated approach combining histologic, molecular, and imaging information. Classification and grading of these tumors were once entirely based on morphological parameters such as pleomorphism, angiogenesis, presence of necrosis, and mitotic activity. These parameters carried important limitations, given tumoral heterogeneity at multiple levels, including genomic, morphological, cellular, clinical, and functional ones (Balana et al., 2022), as well as technical limitations such as sampling errors, both of which imply high variability in diagnosis and, therefore, in treatment.
With the arrival of molecular characterization of gliomas, grading became more specific, impacting patient prognosis, improving treatment planning, and reducing diagnostic variability, making molecular analysis crucial in the management of these entities. More recently, the WHOCNS5 has incorporated several molecular biomarkers (IDH1/2 mutation, 1p19q co-deletion, MGMT methylation, etc.) that have aided in the definition of both the grade and the histological subtypes of diffuse gliomas (Balana et al., 2022).
For example, the WHOCNS5 has classified diffuse gliomas into IDH-mutant and IDH-wildtype tumors, making identification and further molecular classification easier. IDH-mutant tumors include oligodendrogliomas (expressing a 1p/19q codeletion), IDH-mutant astrocytomas grade 2 and 3 (expressing P53 and ATRX mutations), and IDH-mutant astrocytomas grade 4 (expressing the CDKN2A/B mutation). IDH-wildtype gliomas, on the other hand, include IDH-wildtype astrocytomas grade 2 and 3 and GBM (expressing TERT or EGFR mutations, or gain of chromosome 7 and loss of chromosome 10). This directly impacts not only a better characterization and classification of tumors into different entities but also provides information on survival (Louis et al., 2021; Rubiano et al., 2023).
On the other hand, imaging, which was once considered the cornerstone of glioma diagnosis, has somewhat diminished in importance due to factors such as interobserver variability and heterogeneity in tumor presentation. Despite advancements in diagnostic radiology, imaging still falls short in detecting molecular and cellular changes, limiting its ability to accurately identify tumor types (Khristov et al., 2023). This technology is also hindered by its limited role in the evaluation of therapeutic response, showing limited utility in differentiating complete or partial response to therapy and stable or progressive disease (Shankar et al., 2017).
GBM's high heterogeneity is a hindrance to diagnosis and hence to adequate treatment targeting molecular therapeutic needs. Considering the limitations of current diagnostic methods for GBM (Skouras et al., 2023), there is an emphatic need to identify, in the context of a molecular era, additional molecular biomarkers that can aid in early diagnosis while avoiding invasive diagnostic strategies such as the current tissue biopsy approach, both to prevent complications and to properly classify patients early in the disease, providing adequate molecular characterization, prognosis, and oriented therapy (Saenz-Antonanzas et al., 2019). Given this, less-invasive methods are becoming increasingly attractive, such as liquid biopsy, which, although still under study, has provided a favorable and innovative panorama for the diagnosis of GBM.
When directing attention toward neoplastic diseases, biomarkers can be grossly classified into two classes: tumor-derived biomarkers and tumor-associated biomarkers, both of which have proven to serve in identifying disease presence and progression. The former type is directly related and traceable to the tumor, while associated biomarkers appear in response to the disease state of the body (Khristov et al., 2023). Body fluids, particularly blood and its components and CSF, being in close contact with the central and deep structures of the CNS, serve as a diffusion platform for the local transport of products derived from neoplasms, which ultimately represent the biomarkers mentioned above.
The use of miRNAs in liquid biopsies for GBM detection
Liquid biopsy, primarily through blood tests, involves the detection and quantification of tumoral content released into biofluids. Different circulating biomarkers have been proposed for GBM, in particular circulating DNA (ctDNA) and circulating cell-free tumor RNA (ctRNA), which includes mRNAs, lncRNAs, and mainly small non-coding RNAs (sncRNAs). SncRNAs in turn include miRNAs, small interfering RNAs (siRNAs), circular RNAs (circRNAs), small nuclear RNAs (snRNAs), and small nucleolar RNAs (snoRNAs). Among them, miRNAs have arisen as promising biomarkers for cancer diagnosis in the last decade, since they have unique characteristics that make them suitable for isolation. MiRNAs are remarkably stable in plasma and serum, given that they are resistant to RNase activity (Garcia and Toms, 2020), and they are the most abundant circulating free molecules in the blood. Detectable miRNA levels can also be observed in additional cell-free body fluids as well as in tissues. As miRNAs are directly derived from cells and serve as important regulatory molecules, altered miRNA expression patterns in biological fluid samples correlate with tumor presence, providing information on tumoral response to therapy, relapse of the disease, and progression. As has been proposed previously, altered miRNA expression patterns in biological fluid samples also correlate with tumor tissue samples, tumor volume, functional performance status, and even prognosis (Saenz-Antonanzas et al., 2019).
MiRNAs can be found either free within serum or CSF or enclosed within lipid membranes known as exosomes (Garcia and Toms, 2020), serving as regulatory molecules that affect signal transduction pathways involved in cellular proliferation and suppression by either promoting or suppressing apoptosis (Ahmed et al., 2021). Exosomes are membrane-enclosed extracellular vesicles (EVs) that are actively released by both healthy cells and cancer cells, carrying nucleic acids (mRNA, DNA, non-coding RNA), lipids, and proteins. The exosomes released by cancer cells can be extracted as non-invasive, circulatory biomarkers containing molecular characteristics of the original tumor and can be screened to detect these signatures (Makowska et al., 2023).
Liquid biopsies appear as an innovative and attractive diagnostic alternative that can also serve a follow-up role in identifying early recurrence. These diagnostic and prognostic potentials, in conjunction with the possibility of predicting an adequate or inadequate therapeutic response, have been studied and associated with specific miRNAs. Some of them have diagnostic value, such as miR-21, miR-128, and miR-342-3p (Lai et al., 2015), overlapping with prognostic value and drug resistance prediction abilities, as in the case of miR-21 (Huang et al., 2018; Sun et al., 2018; Kim et al., 2003). Radioresistance prediction, on the other hand, has been linked to other biomarkers such as miR-128 and miR-301 (Costa-Silva et al., 2015; Liu et al., 2016). This ability to work as biomarkers was also described by André-Grégoire et al., who demonstrated higher extracellular vesicle levels in GBM patients compared to healthy controls. Aside from this, specific sets of miRNAs have proven to have diagnostic utility, such as miR-320e, miR-223, miR-23a, and miR-21, which, when used as a combined '4-miRNA test', showed a diagnostic accuracy of 99.8%. This suggests that a miRNA signature may have the potential for near-perfect accuracy in distinguishing glioma patients (Morokoff et al., 2020). Tumors are also able to quickly evolve and modify their molecular profile to gain resistance to certain treatments, so having a reliable platform that allows real-time assessment of the changes occurring in the primary tumor is highly valuable (Shankar et al., 2017). Potential miRNAs with diagnostic and prognostic value in serum and CSF liquid biopsies are summarized in Fig. 4. Further work is still required to disentangle the molecular complexities of miRNAs, and the functional properties of these biomarkers need further investigation to establish adequate patterns and clusters with diagnostic potential (Ahmed et al., 2021).
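A combined multi-miRNA test like the '4-miRNA test' above is, in practice, a small classifier over measured miRNA levels. The sketch below shows one plausible way to build and evaluate such a panel with logistic regression; the feature names match the panel described, but the data file, model choice, and cross-validation setup are illustrative assumptions rather than the cited study's method.

```python
# Sketch: evaluating a 4-miRNA diagnostic panel with logistic regression.
# Data loading and model choice are illustrative, not the cited study's method.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

df = pd.read_csv("serum_mirna_levels.csv")  # hypothetical: one row per patient
features = ["miR-320e", "miR-223", "miR-23a", "miR-21"]
X, y = df[features], df["glioma"]  # y: 1 = glioma, 0 = healthy control

model = LogisticRegression(max_iter=1000)
probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated ROC AUC: {roc_auc_score(y, probs):.3f}")
```

Cross-validated performance, rather than a fit on the full dataset, is what a clinically meaningful accuracy claim for such a signature would rest on.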
All information regarding our scoping review is summarized in Table 2. Liquid biopsies in GBM hold significant potential for improving the diagnosis and management of the disease.
The miRNA genome is a treasure for GBM treatment
A profound understanding of diverse genetic mechanisms and their interactions is the future of diagnosing and treating GBM. This may involve utilizing diagnostic biomarkers present within the body and personalized delivery of drugs via nanoparticles. Such an approach can offer a less invasive and more precise alternative to surgery in some specific scenarios in the future. Once integrated with neuronal differentiation modeling and the intricate networks of miRNAs, the subsequent challenge is to identify specific epigenetic targets for GBM therapy and to advance strategies for novel drug discovery. The objective of miRNA-based glioma therapy is to halt tumor progression and trigger apoptosis in malignant cells, restoring normal cellular pathway functions. The efficacy of miRNA-based therapy is evaluated by assessing the glioma cell population or metabolism post-treatment using various assays (Jimenez-Morales et al., 2022). MiRNAs present a promising and innovative treatment avenue for GBM. However, their clinical implementation faces significant challenges, particularly related to the blood-brain barrier and miRNA stability in body fluids (Jimenez-Morales et al., 2022).
Combining a miRNA-21 inhibitor or a miRNA-7 mimic with TMZ shows great promise as a strategy to potentially overcome TMZ resistance mechanisms. Both the miRNA-21 inhibitor and the miRNA-7 mimic have been recognized as crucial regulatory elements associated with the four most significant cancer hallmarks related to therapy (Jimenez-Morales et al., 2022): 1) replicative immortality, 2) invasion and migration, 3) resistance to cell death, and 4) angiogenesis induction (Rupaimoole and Slack, 2017). For example, the following microRNAs have been found to act on cancer hallmarks through the following processes: 1) cell cycle arrest (miRNA-10b and miRNA-21), 2) metastasis inhibition (miRNA-10b and miRNA-21), 3) apoptosis recovery, and 4) angiogenesis inhibition (miRNA-21) (Jimenez-Morales et al., 2022). By using these miRNA-based approaches in conjunction with TMZ, there is a possibility of enhancing the effectiveness of GBM treatment and addressing the challenges posed by TMZ resistance mechanisms. The targeted regulation of these miRNAs holds the potential to improve outcomes and provide a novel approach to glioma therapy.
Future directions
The constantly evolving field of neuro-oncology has been integrating the molecular profiling of CNS tumors into clinical practice. Approaching these tumors from different molecular perspectives, especially highly morbid tumors such as GBM, is crucial for achieving better outcomes. The inclusion of miRNAs in the neuro-oncological management of CNS tumors shows great promise, as their role has been elucidated in recent studies (Anthiya et al., 2018; Beylerli et al., 2023). These molecules open new avenues for developing molecular biomarkers and novel treatments that could be integrated into clinical practice. Furthermore, combining the histologic, imaging, and molecular methods for this disease provides a more complete and comprehensive way of approaching CNS tumors. Expanding the potential applications of molecular tools such as miRNAs with the use of less-invasive diagnostic techniques such as liquid biopsies could improve the individualization of diagnosis, management, and prognosis for patients with aggressive tumors. The need for continuous research into this highly morbid disease demands sustained efforts toward new and novel treatments.
Finally, there is a pressing need to provide physicians with accurate tools. As mentioned before, many miRNAs work together and overlap in different mechanisms of action. Consequently, the development of signatures or clusters may help to establish new, rapid, and accurate diagnostic and prognostic tools for GBM. Also, the detection of and correlation between tumoral and serum or CSF miRNAs is still debatable and needs further investigation.
Critical view
The manuscript presents a comprehensive review focusing on the current state of the art regarding microRNAs (miRNAs) and non-invasive techniques for miRNA detection in glioblastomas (GBMs), with a specific emphasis on liquid biopsies in cerebrospinal fluid and serum. This topic holds significant relevance in the context of current advancements in molecular diagnostics and treatment strategies aimed at enhancing targeted therapies for GBM in clinical practice.
What distinguishes our review from the existing literature is its concentrated focus on the utilization of miRNAs as biomarkers in liquid biopsies for GBM detection. While previous studies have explored various molecular diagnostic approaches for GBM, our manuscript places particular emphasis on understanding the molecular aspects of miRNAs and the potential of miRNAs in liquid biopsies as a less invasive means of diagnosis, management, and prognosis for GBMs. The significance of our work lies in its contribution to the evolving field of neuro-oncology, where molecular profiling of central nervous system (CNS) tumors is becoming increasingly integrated into clinical practice. Furthermore, the inclusion of miRNAs in the management of CNS tumors shows great promise, as their roles have been elucidated in recent studies. Expanding the applications of molecular tools such as miRNAs, particularly through less invasive techniques like liquid biopsies, has the potential to enhance the individualization of patient care, ranging from diagnosis to prognosis and treatment selection for aggressive tumors like GBM.
In summary, our manuscript offers a unique perspective on the current state of the art regarding miRNAs in GBM and the role of liquid biopsies in this disease, contributing to the advancement of molecular diagnostics and personalized medicine in neuro-oncology. We believe that this review fills a critical gap in the literature and has the potential to significantly benefit current knowledge and the future clinical management of GBM patients.
Conclusions
MiRNAs have been shown to be a potential tool in the diagnosis, treatment, and prognosis of GBM. New strategies for rapid and accurate detection, such as liquid biopsies, may offer a minimally invasive way to provide sequential information before and after treatment, improving the diagnostic and prognostic assessment of these tumors. MiRNAs may act as signatures or clusters, and further investigation to develop new diagnostic markers is needed. GBM remains a fatal and heterogeneous tumor that requires intense research to improve survival; miRNAs appear promising and remain a remarkable research topic.
AI disclosure
During the preparation of this work, the author(s) used TextCortex in order to improve Table 2 of the scoping review and provide useful information regarding liquid biopsies in GBM, given the large amount of varied information in the literature. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 3.
Fig. 3. MiRNA mechanisms in the pathogenesis of GBM. miRNAs play a fundamental role in GBM through the upregulation or downregulation of essential cellular processes, resulting in cell immortality, uncontrolled cell proliferation, immune evasion, and brain invasion. The figure depicts examples of miRNAs known to be involved in these processes, with arrows indicating upregulation and downregulation. Image created with www.biorender.com.
Fig. 4.
Fig. 4. Potential miRNAs with diagnostic and prognostic value in serum and CSF liquid biopsies for GBM. miRNAs are listed according to their potential role in CSF or serum biopsies. Image created with www.biorender.com.
Table 1
Expression of miRNAs involved in the molecular pathways of glioblastomas.
Table 2
Characteristics of studies regarding liquid biopsies in GBM.
Overall, liquid biopsies provide a non-invasive, comprehensive approach to managing GBM (Table 2), offering insights into the disease that can improve patient outcomes through tailored interventions.
Relationship between BMI and alcohol consumption levels in decision making
Background Decision-making deficits in obesity and alcohol use disorder (AUD) may contribute to the choice of immediate rewards despite their long-term deleterious consequences. Methods Gambling task functional MRI in the Human Connectome Project (HCP) dataset was used to investigate neural activation differences associated with reward or punishment (a key component of decision-making behavior) in 418 individuals with obesity (high BMI) and without obesity (lean BMI) and either at high (HR) or low (LR) risk of AUD based on their alcohol drinking levels. Results An interaction between BMI and alcohol drinking was seen in regions of the default mode network (DMN) and those implicated in self-related processing, memory, and salience attribution. ObesityHR relative to obesityLR also recruited the DMN along with primary motor regions and regions implicated in inattention, negative perception, and uncertain choices, which might facilitate impulsive choices in obesityHR. Furthermore, obesityHR compared to leanHR/leanLR also demonstrated heightened activation in the DMN and regions implicated in uncertain decisions. Conclusions These results suggest that BMI is a variable independent of alcohol drinking levels in the neural processing of gambling tasks. Moreover, leanLR relative to leanHR showed increased activation in motor regions [precentral and superior frontal gyrus], suggestive of worse executive function in leanHR from excessive alcohol use. Delay discounting measures failed to distinguish between obesity and high alcohol drinking levels, which, as for the gambling task results, suggests independent negative effects of obesity and chronic alcohol drinking on decision-making. These findings highlight distinct associations of obesity and high-risk alcohol drinking with two key constituents of decision-making behavior.
BACKGROUND
Heavy drinking is associated with a greater waist-hip ratio in midlife even when taking other influences into account such as having overweight parents, maternal smoking in pregnancy, and physical inactivity [1,2]. Further, regular and/or heavy episodic drinking in young adults increases the risk of being overweight or obese [3]. On the other hand, some cross-sectional studies have shown an inverse relationship between moderate alcohol consumption and high waist circumference [4] and the prevalence of metabolic syndrome [5]. A systematic review of large cross-sectional and long-term prospective cohort studies found no conclusive evidence for a positive association between alcohol consumption and weight gain [6]. Moderate to hazardous levels of alcohol consumption have been linked with lower BMI in females due to decreased carbohydrate intake from other sources (for example sucrose) [7]. Reduced energy intake from food or non-alcoholic beverages in heavy alcohol drinkers (both males and females) has been reported through the National Health and Nutrition Examination Survey (NHANES) by various groups [8][9][10]. However, there are inconsistent reports on the effect of alcohol as a major energy source contributing to the BMI of drinkers. Colditz et al. reported an inverse association between alcohol consumption and BMI, particularly in women, which could be related to alcohol calories being less efficiently utilized [7]. In contrast, higher total energy was associated with higher BMI in male heavy drinkers as compared to those consuming lower quantities of alcohol on days when drinking occurred [10]. Furthermore, some epidemiological studies have reported that energy intake from alcohol beverage type and drinking pattern (i.e., high intensity/volume, high frequency) contribute to total energy intake and are associated with excess body weight amongst young adults [3,11,12]. Higher consumption of energy-dense alcoholic beverages was associated with lower diet quality scores in males and females [9]. One of the major adverse effects of higher calorie intake among drinkers is the lower nutrient densities of protein, fat, carbohydrate, and some minerals and vitamins [13].
The metabolic imbalance due to obesity is associated with chronic low-grade inflammation due to elevated circulating proinflammatory cytokines. This chronic inflammation extends beyond the adipose tissues to the central nervous system (CNS). Ingestion of a high saturated fat diet increases the expression of inflammatory cytokines in the hypothalamus, which presumably are regulated by microglia [14,15]. The susceptibility of the CNS to inflammation following high-fat diets was revealed by a rodent study that observed gliosis and inflammation in the hypothalamus within 3 days of high-fat diet exposure [16]. Cognitive impairment and brain dysfunction have been reported with obesity-triggered chronic neuroinflammation. Specifically, a preclinical study revealed activation of the IKK/NF-kB pathway (with constitutive activity in the hypothalamus) resulting in excessive release of inflammatory cytokines such as TNF-α and IL-1β during obesity, which reduced neurogenesis, led to cognitive deterioration and degeneration of hypothalamic stem cells [17]. Accumulating evidence therefore suggests that CNS and cognitive function are deleteriously affected by obesity [18,19].
Overlaps in the pathways that lead to excessive eating (leading to obesity) and alcohol dependence have been studied. Both obesity and alcohol use disorders (AUD) have been linked to the brain's reward system [20]. Overconsumption can trigger a gradual increase in the reward threshold, requiring more and more palatable high-fat food or alcohol to satisfy cravings [21]. Evidence suggests an imbalance in three neural systems during the development of AUD and obesity (i) a system that promotes habitual behaviors in response to salient rewards, (ii) an interoceptive system that evaluates internal states and affects responses to uncertain risks and rewards, and (iii) an inhibitory control and decision-making system [22]. Decision-making is often assessed using the Iowa Gambling Task (IGT), which requires inhibition of impulsive responses by factoring in uncertainty, reward, and punishment. Interpretation of IGT performance is challenging since several cognitive constructs are assessed simultaneously, including memory, reward sensitivity, and inhibitory control. Nonetheless, decision-making behaviors have been measured with high ecological validity [23,24] and impairments in decision-making have been repeatedly demonstrated in addictions and eating disorders [25][26][27].
Neuropsychological studies support the hypothesis of food/alcohol addiction-related alterations in inhibitory control, emotion regulation, and overall executive function, for which a core cognitive trait is decision-making [28]. Individuals with obesity prefer immediate rewards despite negative long-term consequences relative to lean BMI controls [29]. Furthermore, when assessed by the IGT, individuals with obesity and AUD present significant decision-making impairments in overall task performance [26,27,30,31]. Moreover, individuals with comorbid gambling disorder and AUD showed an additive effect in choosing greater immediate rewards, reflecting worse decision-making deficits relative to those with only one condition [32]. Similarly, there is overlap in neurocognitive disruption between obesity and gambling disorder; in gamblers, obesity is associated with decision-making and sustained attention impairments, along with more significant monetary losses from gambling [33].
The published literature suggests that both individuals with obesity and AUD suffer from decision-making deficits; here, we expand this inquiry to investigate differences in neural activation associated with reward or punishment during the gambling task (a key component of decision-making behavior) in individuals with and without obesity (lean) who are either at high or low risk of AUD. We posited that the group with high BMI and at high AUD risk (obesityHR) would show greater activation to rewards in brain regions critical for inhibitory control, uncertainty, and memory function, compared to the obesity low-AUD-risk (obesityLR) group, reflecting greater reward sensitivity. It is also expected that individuals with lean BMI and low AUD risk (leanLR) would exhibit less reliance on immediate monetary rewards than the high-risk AUD groups (obesityHR and leanHR). Therefore, the results should help us understand the effect of BMI and alcohol drinking on decision-making for low and high reinforcing rewards.
Design and participants
For the present study, we obtained permission from the Human Connectome Project (HCP) to use Open and Restricted Access data from the S1200 (final) release of the Young Adult HCP (ages 22-35). Participants reported no significant history of neurological disorder, cardiovascular disease, or Mendelian genetic disease and did not present any MRI contraindications. General HCP information can be found in Van Essen et al. [34]. Participants were recruited in Missouri and Minnesota. All participants gave informed consent, and all aspects of the protocol were approved by the Washington University School of Medicine Institutional Review Board.
Categorization of participants into groups
From the list of obesity and lean participants for whom the gambling task fMRI data were available, we included 418 subjects [109 with obesity and 309 lean, categorized based on SSAGA_BMICat in the HCP dataset]. Subjects were sub-categorized based on their risk status for AUD. Accordingly, the high-risk (HR) group comprised both binge (BD) and heavy drinkers (HD), while the low-risk (LR) group included individuals who drank fewer than 4 drinks on a single day and on less than one day per week in the past 12 months. Furthermore, subjects who met DSM-IV criteria for alcohol dependence or abuse were excluded from the LR category. The resulting HR groups were [(ObesityHR, n = 24; 66% males); (LeanHR, n = 86; 63% males)] and the LR groups were [(ObesityLR, n = 85; 35% males); (LeanLR, n = 223; 30% males)]. More details on subject selection criteria are given in Supplementary Fig. S1. Participant characteristics are presented in Table 1. Consistent with the design's intentions, the two groups differed substantially in BMI (Table 1). Pearson's Chi-squared test with Yates' continuity correction was conducted to test whether the proportion of HR and LR individuals differed between the obesity and lean groups.
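The sub-grouping rule above is simple enough to express in a few lines. Below is a minimal Python sketch assuming a pandas DataFrame; the column names (is_binge_drinker, max_drinks_per_day, etc.) are hypothetical placeholders rather than the actual HCP/SSAGA variable names, and the BMI >= 30 kg/m^2 obesity cutoff is an assumption.

```python
import pandas as pd

# Two illustrative rows; all column names are placeholders, not real HCP fields.
df = pd.DataFrame({
    "bmi": [32.5, 23.1],
    "is_binge_drinker": [True, False],
    "is_heavy_drinker": [False, False],
    "max_drinks_per_day": [6, 2],       # max drinks on a single day, past year
    "drinking_days_per_week": [2.0, 0.5],
    "dsm4_alcohol_dx": [False, False],  # DSM-IV dependence/abuse diagnosis
})

def aud_risk_group(row):
    """HR = binge or heavy drinker; LR = <4 drinks on any day and <1 drinking
    day/week in the past 12 months, with no DSM-IV dependence/abuse."""
    if row["is_binge_drinker"] or row["is_heavy_drinker"]:
        return "HR"
    if (row["max_drinks_per_day"] < 4
            and row["drinking_days_per_week"] < 1
            and not row["dsm4_alcohol_dx"]):
        return "LR"
    return "excluded"

def bmi_group(bmi):
    """Dichotomize BMI; the 30 kg/m^2 obesity cutoff is assumed."""
    return "obesity" if bmi >= 30 else "lean"

df["subgroup"] = df.apply(lambda r: f"{bmi_group(r['bmi'])}{aud_risk_group(r)}",
                          axis=1)
print(df[["bmi", "subgroup"]])  # -> obesityHR, leanLR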
Gambling task for fMRI
To measure decision-making, we used the HCP's fMRI gambling task (GT), developed by Delgado and colleagues [35], as it taps into the relevant cognitive systems [36]. The reward-related BOLD signal was measured during a card-guessing gambling task played for monetary reward, as previously described [37,38]. Briefly, participants were required to guess the number (range 1-9) on a mystery card, which would determine whether they won or lost money. The instructions were to press one of two buttons on the response box after guessing whether the number on the mystery card was more or less than five. Participants were given feedback by revealing the card number they chose and a cue informing them whether the trial was a monetary reward, a loss, or neutral (no reward/loss; for the number 5). The task was presented in blocks of eight trials that were either mostly reward (six reward trials pseudo-randomly interleaved with neutral and/or loss trials) or mostly loss (six loss trials interleaved with neutral and/or reward trials). There were two mostly reward and two mostly loss blocks for each of the two runs, interleaved with four fixation blocks (15 s each). Although the participants gambled for potential monetary reward, all participants were rewarded with a standard amount of money during the task [37,38].
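To make the block structure concrete, here is a toy reconstruction in Python. It generates one run's sequence under the description above; trial orders are randomized here for illustration, whereas the actual task used fixed pseudo-random sequences, and the exact ordering of task and fixation blocks is an assumption.

```python
import random

def make_block(kind, rng):
    """Eight trials: six of the dominant outcome ('reward' or 'loss')
    pseudo-randomly interleaved with two neutral/opposite trials."""
    other = "loss" if kind == "reward" else "reward"
    trials = [kind] * 6 + [rng.choice(["neutral", other]) for _ in range(2)]
    rng.shuffle(trials)
    return trials

def make_run(rng):
    """One run: 2 mostly-reward and 2 mostly-loss blocks; four 15 s fixation
    blocks are interleaved (placement simplified here)."""
    run = []
    for kind in ["reward", "loss", "reward", "loss"]:
        run.append(("task", make_block(kind, rng)))
        run.append(("fixation", 15))
    return run

rng = random.Random(0)
for block in make_run(rng):
    print(block)
```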
Delay discounting task
Immediate reward preference, or the devaluing of delayed rewards, was assessed in the HCP dataset using an adjusting-amount monetary choice task. In this paradigm, each trial asks participants to indicate whether they would rather receive a smaller immediate reward (e.g., $100 today) or a larger delayed reward (e.g., $200 in 1 month). Briefly, participants made 5 choices for each of the two delayed amounts ($200 and $40,000) at each of six delay time points: 1 month, 6 months, 1 year, 3 years, 5 years, and 10 years. For both delayed amounts, the delay choices were presented in a fixed order of time combinations: (i) today vs. 6 months; (ii) today vs. 3 years; (iii) today vs. 1 month; (iv) today vs. 5 years; (v) today vs. 10 years; (vi) today vs. 1 year. The reward amounts were titrated based on participants' choices until points of indifference (the value of a "sixth" choice) were determined by an increment or decrement from the immediate value of the fifth choice; that is, the point at which a person is equally likely to choose a smaller reward sooner (e.g., $100) versus a larger reward later (e.g., $200 in 3 years) [39]. The variable used to measure how steeply participants discounted delayed rewards was the area under the curve (AUC), a valid and reliable index of immediate reward preference [40]. In this study, we used the average of the AUC variables for the $200 and $40,000 delayed reward conditions.
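The AUC index has a standard computation (Myerson et al., 2001): delays and indifference values are normalized, and the area under the resulting curve is taken, so a value of 1 indicates no discounting and values near 0 indicate steep discounting. Below is a minimal sketch with made-up indifference points for the $200 condition; the HCP titration procedure itself is not reproduced.

```python
import numpy as np

delays_months = np.array([1, 6, 12, 36, 60, 120])     # 1 month ... 10 years
indifference = np.array([190, 160, 130, 90, 60, 40])  # hypothetical values ($)
delayed_amount = 200.0

# Normalize: x = delay / max delay, y = subjective value / delayed amount,
# anchoring the curve at (0, 1) since an undelayed amount keeps full value.
x = np.concatenate([[0.0], delays_months / delays_months.max()])
y = np.concatenate([[1.0], indifference / delayed_amount])

auc = np.trapz(y, x)  # trapezoidal area: 1 = no discounting, ~0 = steep
print(f"AUC = {auc:.3f}")
```

Per the study's description, the final measure would then be the average of the AUCs computed separately for the $200 and $40,000 conditions.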
Imaging data were analyzed with Statistical Parametric Mapping (SPM12, Wellcome Department of Imaging Neuroscience, University College London, UK). Standard image preprocessing was performed. Images of each subject were first realigned (motion corrected). A mean functional image volume was constructed for each subject per run from the realigned image volumes. These mean images were co-registered with the high-resolution structural MPRAGE image and then segmented for normalization with affine registration followed by nonlinear transformation. The normalization parameters determined for the structural volume were then applied to the corresponding functional image volumes for each subject. Finally, the images were smoothed with a Gaussian kernel of 6 × 6 × 6 mm at full width at half maximum.
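For readers who script such pipelines, the same chain can be expressed with Nipype's SPM12 interfaces, a common way to drive SPM from Python. This is only a hedged sketch: the filenames are placeholders, SPM's output prefixes (r-, w-) are assumed, and the batch options are simplified relative to whatever the authors actually ran.

```python
from nipype.interfaces import spm

# Realign (motion-correct) the run and produce a mean EPI image.
realign = spm.Realign(in_files="sub01_run1_bold.nii", register_to_mean=True)

# Coregister the structural MPRAGE to the mean functional image.
coreg = spm.Coregister(target="meansub01_run1_bold.nii",
                       source="sub01_mprage.nii")

# Normalize12 estimates warps on the structural volume (unified segmentation
# under the hood in SPM12) and applies them to the realigned EPI volumes.
normalize = spm.Normalize12(image_to_align="sub01_mprage.nii",
                            apply_to_files=["rsub01_run1_bold.nii"],
                            jobtype="estwrite")

# Smooth with a 6 x 6 x 6 mm FWHM Gaussian kernel.
smooth = spm.Smooth(in_files="wrsub01_run1_bold.nii", fwhm=[6, 6, 6])

# In practice these would be chained in a nipype Workflow so each step
# consumes the previous step's outputs: realign -> coregister -> normalize
# (estimate on MPRAGE, apply to EPI) -> smooth.
```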
Imaging data modeling
We modeled the BOLD signals to identify regional brain responses to win block versus neutral, loss block versus neutral, and win block versus loss. A block-design statistical model was constructed for each subject using a general linear model (GLM), with one boxcar each for win and loss blocks convolved with a canonical hemodynamic response function (HRF). Realignment parameters in all 6 dimensions were entered in the model as covariates. The GLM estimated the component of variance that each of the regressors could explain. In the first-level analysis, we constructed for individual subjects statistical contrasts of win block versus neutral, loss block versus neutral, and win block versus loss block to evaluate brain regions that responded to wins and losses and that responded differently to wins and losses. The contrast images (difference in β) of the first-level analysis were then used for the second-level group statistics.
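The first-level model described above amounts to convolving condition boxcars with a canonical HRF and fitting a GLM per voxel. The following self-contained numpy/scipy sketch illustrates the idea on synthetic data; the TR, block onsets, and the double-gamma HRF parameters are illustrative assumptions, not the HCP acquisition parameters or SPM defaults.

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 1.0, 200
t = np.arange(0, 32, TR)

# Canonical double-gamma HRF (SPM-style shape: peak ~6 s, undershoot ~16 s).
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

def boxcar(onsets, duration, n_scans, tr):
    """1 during each block, 0 elsewhere, sampled on the TR grid."""
    x = np.zeros(n_scans)
    for onset in onsets:
        x[int(onset / tr): int((onset + duration) / tr)] = 1.0
    return x

win = boxcar(onsets=[10, 100], duration=28, n_scans=n_scans, tr=TR)
loss = boxcar(onsets=[50, 150], duration=28, n_scans=n_scans, tr=TR)

# Convolve with the HRF, truncate to scan length, add an intercept column.
X = np.column_stack([
    np.convolve(win, hrf)[:n_scans],
    np.convolve(loss, hrf)[:n_scans],
    np.ones(n_scans),
])

# OLS fit of a synthetic voxel time series, then a win > loss contrast.
rng = np.random.default_rng(0)
y = X @ np.array([1.5, 0.5, 100.0]) + rng.normal(0, 1, n_scans)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("betas:", beta, "win > loss contrast:", beta[0] - beta[1])
```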
As we observed that in our dataset the percentage of males was higher in the obesityHR group while the percentage of females was higher in the leanLR group, we assessed the effect of gender on brain activation during the gambling task by comparing males and females from the entire sample using a two-sample t-test. For group and sub-group analyses, we used a full-factorial general linear model with the independent, between-group factors of interest being BMI (groups: obesity and lean) and alcohol drinking (groups: HR and LR), with four levels (obesityHR, obesityLR, leanHR, and leanLR), including age and sex as control covariates in SPM12. Multiple sub-group comparisons were made using a standard double-threshold method: we first chose a cluster-forming voxel threshold of p < 0.025 with k > 84 (a minimum of 84 neighboring voxels), and then applied a threshold of p < 0.05 to correct for family-wise error (FWE) across the p-values of the surviving clusters [42]. Effectively, this combined voxel- and cluster-level statistic reflects the probability that a cluster of a given size, consisting only of suprathreshold voxels, would occur by chance in data of the given smoothness. The surviving clusters were then used to form ROIs around the voxel with peak intensity in each cluster for further comparisons. The MarsBaR tool in SPM12 was used to extract peak activation differences following significance thresholding; these were entered into an SPSS data matrix to assess the differential sub-group activations.
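The cluster-forming step can be illustrated with connected-component labeling. The sketch below thresholds a synthetic smooth statistic map, labels 3D clusters, and keeps those larger than k = 84 voxels; the FWE correction across surviving clusters and SPM's smoothness estimation are deliberately omitted, and the z ≈ 1.96 cutoff used as a stand-in for one-sided p < 0.025 is an assumption.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic "statistic map": smoothed noise standing in for a voxelwise z map.
stat = ndimage.gaussian_filter(rng.standard_normal((40, 48, 40)), sigma=3)
stat /= stat.std()

supra = stat > 1.96                        # cluster-forming threshold
labels, n_clusters = ndimage.label(supra)  # 3D connected components
sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))

k = 84
surviving = [lab for lab, size in zip(range(1, n_clusters + 1), sizes)
             if size > k]
print(f"{len(surviving)} of {n_clusters} clusters exceed k = {k} voxels")

# Peak voxel per surviving cluster (largest statistic), e.g. as an ROI center.
for lab in surviving:
    masked = np.where(labels == lab, stat, -np.inf)
    peak = np.unravel_index(np.argmax(masked), stat.shape)
    print(f"cluster {lab}: size {int(sizes[lab - 1])}, peak at {peak}")
```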
Statistical analysis
We found < 1.0% missing data for all variables of interest. Participants with complete data were retained for further analysis. Levene's test was applied to assess the equality of variances across the groups. Since we observed unequal variance in weight and BMI between the groups, we used Welch's t-test to examine between-group differences in participant characteristics. The mean ranks of the ddisc_AUC measures between the groups were not equal, so Mann-Whitney non-parametric tests were used to determine the differences in ddisc_AUC measures between the groups and sub-groups. Mann-Whitney tests were also used to determine the difference in the number of alcoholic drinks between the sub-groups. We conducted a two-way analysis of variance to investigate the effects of BMI and alcohol drinking on ddisc_AUC measures. To address potential confounders, we included age and sex as covariates (see Table 1). A threshold of p < 0.05 was considered significant. All analyses were done using SPSS software. In Table 1, * denotes a significant difference (p < 0.05) between the obesity and lean groups; # between obesityHR and leanLR; @ between obesityLR and leanLR. BMI body mass index (kg/m²), SD standard deviation.
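As a concrete illustration of the tests named above, the following scipy snippet runs Levene's test, Welch's t-test, and a Mann-Whitney U on made-up group data; the numbers are placeholders, not HCP values, and the two-way ANOVA with covariates would typically be fitted separately (e.g., with statsmodels).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bmi_obesity = rng.normal(33, 3.5, 109)  # illustrative values, not HCP data
bmi_lean = rng.normal(22, 1.5, 309)

# Levene's test for equality of variances; Welch's t-test (equal_var=False)
# is appropriate when the variances differ, as reported for weight and BMI.
lev = stats.levene(bmi_obesity, bmi_lean)
welch = stats.ttest_ind(bmi_obesity, bmi_lean, equal_var=False)
print(f"Levene p = {lev.pvalue:.3g}, Welch t = {welch.statistic:.2f}, "
      f"p = {welch.pvalue:.3g}")

# Mann-Whitney U for the non-normally distributed discounting AUC measure.
auc_hr = rng.beta(2, 3, 110)  # hypothetical ddisc_AUC values in [0, 1]
auc_lr = rng.beta(3, 3, 308)
mw = stats.mannwhitneyu(auc_hr, auc_lr, alternative="two-sided")
print(f"Mann-Whitney U = {mw.statistic:.0f}, p = {mw.pvalue:.3g}")
```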
fMRI BOLD activations to win and loss
We examined regional responses to the win versus loss contrast in a full factorial model for sub-group comparisons. We observed main effects of BMI during the win > loss contrast, with significant cluster activations in the right postcentral gyrus (PoG), superior parietal lobule (SPL), and precentral gyrus (PrG) (Table 2). The main effects of alcohol drinking included clusters located in the left superior temporal gyrus (STG), middle temporal gyrus (MTG), and parietal operculum (PO) (Table 2). Analysis of activation differences between sub-groups for the win-loss contrast showed, for obesityHR relative to obesityLR, greater activations in the right PCC, PCu, middle cingulate gyrus (MCgG), the supplementary motor cortex (SMC), left STG, posterior insula (PIns), cuneus (Cu), bilateral cerebellum, and cerebellar vermal lobules VIII-X (Fig. 1A&B). The obesityHR relative to leanHR comparison revealed greater activation in clusters in the right PoG, PrG, and left SPL, PIns, STG, and lingual gyrus (LiG) (Fig. 2A&C). The obesityHR relative to leanLR comparison revealed greater activation in the right and left cerebellum, MCgG, caudate (Cau), PoG, SPL, SMG, STG, PIns, and MTG (Fig. 2B&D). There were no regions in which leanHR showed greater activation than obesityHR.
The leanHR relative to obesityLR comparison showed greater activation in the left inferior occipital gyrus (IOG), LiG, and calcarine cortex (CalC) (Fig. 3A&C). The leanLR relative to leanHR comparison showed greater activation in the right PrG, medial PrG (mPrG), and left superior frontal gyrus (SFG) (Fig. 3B&D). The locations of the sub-group comparisons are summarized in Supplementary Fig. S3, with cluster details in Table 2.
Gender comparison revealed significantly higher frontal activation (bilateral SFG, and left MFG, MPrG, MPoG) in males compared to females in the whole dataset in the win > loss contrast (see Supplementary Fig. S4). There were no regions where females showed greater activation than males. We did not observe significant associations between anxiety and depression scores and brain activation signals for any of the sub-groups.
Delay discounting behavioral measures
We observed significantly lower ddisc_AUC values in the obesity and HR groups as compared to the lean and LR groups (p = 0.001; 0.04), indicating greater discounting of delayed rewards (i.e., a greater tendency to choose smaller rewards now, as opposed to larger rewards later) (Supplementary Fig. S5; Table 1). Differences in delay discounting were also observed between sub-groups: obesityHR showed significantly lower values (p = 0.001) compared to leanLR; similarly, obesityLR had lower values than leanLR (p = 0.01) (Supplementary Fig. S5; Table 1).
DISCUSSION
In this study, 22% of subjects with obesity and 28% of lean subjects were at high risk of AUD; while drinking intensity was significantly higher in obesityHR compared to leanHR, their frequency of consumption did not differ. High-intensity drinkers (regardless of frequency) reportedly have higher BMI, which is most likely associated with their increased intake of calories from foods and drinks [43,44]. The main dietary macronutrients that serve as sources of energy are fat (38 kJ/g), carbohydrates and protein (each 17 kJ/g), and, to a lesser extent, alcohol (ethanol) (29 kJ/g). Alcohol is more energy-dense than carbohydrates and proteins, and calories from consumed alcohol are additive to those from other dietary sources, which can result in a positive energy balance and weight gain [45]. However, from the HCP data we cannot determine whether and how much calories from drinking contributed to an individual's weight, since the dataset does not provide sufficient details on daily calorie intake from food and alcohol or on physical activity.
Our fMRI results showed an interaction between BMI and alcohol drinking in the PCu and PrG, which are part of the default mode network (DMN) and implicated in self-related processing, memory, and salience attribution [46][47][48]. The PrG is typically deactivated during task-based activation and is anti-correlated with brain networks associated with executive functioning [49][50][51]. The angular gyrus (AG) was also associated with BMI and alcohol drinking. The AG is a part of the inferior parietal lobule that mediates automatic "bottom-up" attentional resources, and its increased activation is strongly related to high memory performance [52]. The observed activation of angular and parietal regions in the left hemisphere most likely reflects the processing of the memory and uncertainty components encountered during the gambling task [53].
We observed heightened activation of the DMN (Cu/PCu, PCC), the primary motor cortex (SMC, MCgG), regions that aid decision-making during uncertain choices (PIns), regions implicated in attentional deficits (cerebellar vermal lobules VIII-X), and negative perception (STG) in participants with high BMI and high risk for AUD relative to their low-risk counterpart group. Recently, greater BOLD activation in DMN regions, including the ventromedial prefrontal cortex (vmPFC), PCC, and right PrG, was reported in subjects with obesity while performing the N-back task [54]. Similarly, DMN regions (PCC and precuneus) were shown to have greater activation during drug-cue exposure in cocaine [55], alcohol [56], nicotine [57][58][59][60], and cannabis use disorders [61][62][63]. Thus, in line with this reasoning, we interpret the activation pattern in the high-risk groups to reflect their inability to maintain attention and focus, which in turn may facilitate impulsive choices. Furthermore, the greater activity observed in the parietal lobule and cerebellum might pertain to the higher uncertainty associated with choices, which results in negative perception of the outcome and hence loss during the task [64]. The increased STG activity in obesityHR individuals is understandable, as the loss involved in the task elicits negative emotions [65]. Therefore, the neural activation pattern in the obesityHR group during the gambling task corroborates findings from previous studies on decision-making deficits in obesity [66][67][68][69] and AUD [67,70,71], in which individuals prefer short-term disadvantageous rewards (despite negative long-term consequences) over advantageous long-term ones.
Increased activation in the DMN and in regions implicated in uncertain decisions in obesityHR as compared to the leanHR and leanLR groups is consistent with prior findings of increased DMN activation in obesity compared to lean individuals, which was interpreted to reflect increased attention to internal states like appetite or gut signals [72]. The common themes here relate to deficits in attention and memory and the increased uncertainty attributed to the mental processes that underlie decision-making. Since subjects with both high BMI and chronic alcohol consumption recruited brain regions associated with enhanced sensitivity to reward in the gambling task compared to the lean BMI groups at both high and low risk of AUD (HR and LR), we carried out further between-group comparisons with the aim of exploring whether this is an effect of high BMI, of excessive alcohol consumption, or a combined effect of these addictive drives. We observed that individuals who were lean and at high risk of AUD, in comparison to the obesityLR group, recruited occipital regions related to increased visual attention. There has been growing debate on the nonlinear effect of alcohol drinking frequency on BMI [73]. Moreover, alcohol and carbohydrates might compete for the same neuronal receptors, leading to the suppressed intake of one nutrient in favor of the other [74]. Alcohol drinking frequency was similar in the obesityHR (4-7 days/week: 24%; 1-3 days/week: 76%) and leanHR (4-7 days/week: 21.3%; 1-3 days/week: 78.7%) groups. Thus the BOLD activation differences in these groups suggest that BMI is an independent variable in the neural processing of the gambling task by these groups of individuals. Further, we also observed increased activation in the motor [mPrG, PrG, and PFC (SFG)] regions of leanLR individuals compared to leanHR. The PFC is mainly concerned with executive control, and metabolic activity in this region has been demonstrated to correlate negatively with BMI and alcoholism [75,76]. The PFC also has a critical role in controlling/inhibiting negative impulsive behavior [77]. Dopamine plays a significant role in cost-benefit decision-making preferences [78]. Chronic alcohol intake is associated with pronounced alterations in dopaminergic neurotransmission [79], consequently compromising the function of the PFC, which receives these dopaminergic inputs. Similarly, obesity has been associated with reduced dopaminergic signaling and impaired PFC activity [80]. Thus, for the leanHR participants, alcohol use might have resulted in worse executive and inhibition control than in the leanLR individuals. Though a priori we would have expected that impairments in PFC would have been even more severe in obesityHR than in leanHR, this was not the case. Instead, obesityHR compared to leanHR had greater activation in sensory regions, whereas there were no regions for which leanHR had greater activation than obesityHR.

Fig. 1. Shows the sub-group differences in regional responses on full factorial analysis of the contrast (win > loss) between obesityHR/obesityLR with age and sex as covariates. The corresponding BOLD image (A) shows the regional activation while the box plot (B) depicts the difference in extracted beta estimates from the activated clusters between the groups. The initial clustering threshold was chosen as p = 0.025, with k > 84; final pFWE < 0.000. All clusters with cluster p < 0.05 family-wise error (FWE) correction for multiple comparisons are shown in Table 2. Here * signifies p < 0.05 between the groups.

Fig. 2. Shows the sub-group differences in regional responses on full factorial analysis of the contrast (win > loss) between obesityHR/leanHR and obesityHR/leanLR with age and sex as covariates. The corresponding BOLD images (A) & (B) show the regional activation while box plots (C) & (D) depict the difference in extracted beta estimates from the activated clusters between the groups. The initial clustering threshold was chosen as p = 0.025, with k > 84; final pFWE < 0.000. All clusters with cluster p < 0.05 family-wise error (FWE) correction for multiple comparisons are shown in Table 2. Here *p < 0.05 between the groups.
We also compared the delay discounting task measures between these groups, which complements the gambling task by assessing preference for small immediate rewards versus large delayed rewards, another key component of decision-making. In agreement with prior findings, we observed significant behavioral differences between the obesity and lean groups. The obesity group showed stronger discounting of future monetary rewards than the lean group. This may relate to the preference of individuals with obesity for highly rewarding unhealthy foods, despite their long-term detrimental effects, as compared to lean individuals. We also observed that the delay discounting measure differed between HR and LR in both obesity and lean individuals. Although these differences were also apparent between the sub-groups, with obesityHR and obesityLR having lower measures compared to the leanLR group, the interaction between BMI and alcohol drinking was not significant. The delay discounting measure did not distinguish between obesity and high alcohol drinking levels, which, as for the gambling task fMRI results, suggests that obesity and chronic alcohol drinking have independent negative effects on decision-making.
There are certain limitations to the present study. Firstly, the HCP data lack a measure of reward anticipation, which is another key dimension of decision-making behavior. Secondly, we used only the gambling task fMRI; functional connectivity studies using rs-fMRI might provide better information on how intrinsic network function supports decision-making behavior. Thirdly, this explorative study, which relies solely on BMI as a measure of obesity, needs to be extended with precise adiposity measures, other anthropometrics, or measures of metabolic functioning. Moreover, a more detailed analysis of the type of alcohol consumed would give more insight into these findings, considering the differential impact of alcohol types on weight changes reported across studies. The fourth limitation is that participants were predominantly of European ancestry, and individuals from other ethnicities may carry a higher risk of obesity and a higher burden of its deleterious consequences [81]. Thus the limited ethnic breakdown of participants in the HCP dataset limits the generalizability of our results. Finally, our sub-groups differed in sex composition, with a higher percentage of males in the high-risk AUD groups relative to the other groups. While we controlled for sex (as well as age), we cannot completely rule out potential sex differences in activation responses and distinct interactions between sex, BMI, and alcohol drinking, which should be investigated in future studies with larger samples.

Fig. 3. Shows the sub-group differences in regional responses on full factorial analysis of the contrast (win > loss) between leanHR/obesityLR and leanHR/leanLR with age and sex as covariates. The corresponding BOLD images (A) & (B) show the regional activation while box plots (C) & (D) depict the difference in extracted beta estimates from the groups' activated clusters. The initial clustering threshold was chosen as p = 0.025, with k > 84; final pFWE < 0.000. All clusters with cluster p < 0.05 family-wise error (FWE) correction for multiple comparisons are shown in Table 2. Here *p < 0.05 between the groups.
CONCLUSION
The current study documents differences in the neural activation patterns during the gambling task between obesity and lean participants at high and low risk of AUD. The findings demonstrate a significant impact of BMI and alcohol consumption, and of the interaction of the two, on interoceptive regions including the posterior DMN and parietal operculum during the gambling task. However, we found significant heterogeneity in the discounting measures within and across groups. Moreover, delay discounting was seen to independently predict BMI and alcohol drinking. Together, these findings highlight distinct associations of obesity and high-risk alcohol drinking with two key constituents of decision-making behavior.